Dataset columns: id (string, length 3-9); source (string, 1 class); version (string, 1 class); text (string, length 1.54k-298k); added (date, 1993-11-25 05:05:38 to 2024-09-20 15:30:25); created (date, 1-01-01 00:00:00 to 2024-07-31 00:00:00); metadata (dict)
235684148
pes2o/s2orc
v3-fos-license
Staphylococcus aureus induces neutrophil extracellular traps (NETs) and neutralizes their bactericidal potential. Introduction: The main mechanisms used by neutrophils to clear pathogens include a) phagocytosis, b) release of antimicrobial peptides, c) production of antimicrobial reactive oxygen and nitrogen species, and d) production of neutrophil extracellular traps (NETs) [1]. NETs are fragile structures composed of a branching network of extracellular DNA filaments decorated with cytoskeletal proteins and proteases, mainly involved in host defense during bacterial, fungal and viral infections [2]. NETs not only contribute to pathogen elimination but in parallel can cause damage to bystander cells. In sepsis, a systemic acute infective disease with high morbidity and mortality, NETs may promote patient survival [3], although as the disease progresses, they may accumulate in organs and cause detrimental tissue damage [4,5]. Platelets and NETs cooperate to promote intravascular coagulation during sepsis in mice [6], and platelet-neutrophil interactions induce the release of intravascular NETs that in turn ensnare bacteria from the bloodstream and cause liver damage [7]. Moreover, as revealed by intravital imaging, during a bloodstream infection with methicillin-resistant S. aureus, neutrophils infiltrate the liver and release NETs into the vasculature. The NETs remain anchored to the vascular wall via von Willebrand factor and produce profound hepatic injury [8] (Fig. 1). (Fig. 1. Schematic representation of NET involvement in infectious and noninfectious diseases. NET formation induced by staphylococcal bacteremia and sepsis can cause tissue damage and enhance bacterial permeation of deeper structures in the body. NETosis also has a role in noninfectious diseases such as autoimmunity (systemic lupus erythematosus and rheumatoid arthritis), atherosclerosis, vasculitis, and cancer.) In the face of this potentially harmful arsenal of weapons associated with NETs, microorganisms are endowed with a variety of virulence factors that effectively counteract the action of these host structures. Staphylococcus aureus is the etiological agent of diseases ranging from mild infections to severe diseases such as sepsis, pneumonia, endocarditis, and medical device-associated infections [19]. Among the variety of virulence factors used to combat host defenses [20,21], S. aureus has evolved pathogenetic activities to escape NETs or induce NET formation to facilitate bacterial performance in the colonized host (Fig. 2). In this mini-review, we discuss the mechanisms of NET induction and formation by S. aureus, as well as the S. aureus virulence factors and the role they may have in counteracting the bactericidal activities of NETs. NET formation: NET formation, or NETosis, can occur by two mechanisms. Lytic or suicidal mechanism: Stimulation of neutrophils with phorbol myristate acetate (PMA), autoantibodies, and cholesterol crystals results in the activation of NADPH oxidase, via PKC and RAF-MEK-ERK signaling pathways, and the consequent generation of reactive oxygen species (ROS). This activates peptidyl arginine deiminase 4 (PAD4), which induces the conversion of arginine to citrulline (citrullination) in histones. Through citrullination, PAD4 converts positively charged arginine side chains of histones into uncharged citrulline side chains, causing chromatin decondensation.
Hydrogen peroxide generated downstream of NADPH oxidase is in turn consumed by MPO to produce hypochlorous acid and other oxidants, and the generation of oxidants liberates neutrophil elastase from azurophilic granules, allowing it to translocate to the nucleus where it promotes the further unfolding of chromatin and nuclear membrane disruption. After nuclear membrane disintegration, chromatin is released into the cytosol where it associates with granular and cytosolic proteins [22,23]. NE also cleaves gasdermin D (GSDMD) in the cytosol to its active form (GSDMD-NT), which forms pores in the plasma membrane and granular membranes. Finally, NETs are released into the extracellular space and neutrophils die [24] (Fig. 3). Nonlytic or vital NET release mechanism: Nonlytic NET formation is a rapid process induced by the recognition of stimuli through complement receptors and does not depend on NADPH oxidase activation. Vital NET formation is induced by S. aureus through both complement receptors and TLR-2 ligands, or by Escherichia coli directly via TLR-4 or indirectly via TLR-4-activated platelets. As reported for lytic NET formation, during nonlytic NET formation PAD4 and elastase are activated and translocate to the nucleus, where they promote chromatin decondensation. The unfolded chromatin is released into the cytosol, becomes decorated with cytosolic proteins and is finally expelled via vesicular export into the extracellular environment [25]. Importantly, after the release of NETs, this pathway preserves the integrity of the neutrophils' plasma membranes, and the anucleated neutrophils, named cytoplasts, remain alive and retain the capacity to migrate and phagocytose [26,27] (Fig. 4). NET composition, structure, and bactericidal function: The composition of NETs is critical for their pathological impact. Due to its structure and charge, DNA represents the structural unit around which the other components of NETs are assembled. The DNA backbone in NETs is coated with at least 20-30 different proteins, including nuclear proteins (histones), granule proteins (NE (neutrophil elastase), MPO (myeloperoxidase), lactoferrin, cathepsin G, proteinase-3) and cytosolic proteins (S100 calcium-binding proteins A8, A9, A12, as well as actin, α-actinin and calprotectin), which are attached to DNA by electrostatic forces [28][29][30]. The mechanisms by which NETs counteract and eventually kill microbial invaders remain controversial. Menegazzi et al showed that when captured by NETs, microorganisms such as S. aureus and Candida albicans are trapped but not killed by these structures [31]. On the other hand, several reports state that NETs efficiently kill bacteria. According to Halverson et al, DNA possesses a rapid bactericidal activity due to its ability to sequester surface-bound cations, disrupt membrane integrity and lyse bacterial cells [32]. Cationic antimicrobial peptides such as cathelicidin LL-37 can protect neutrophil-derived DNA from bacterial nuclease degradation [33]. The antimicrobial function of NETs has also been attributed to NET-bound proteins including histones [34,35], the zinc-chelating protein calprotectin [30], and the granular serine protease cathepsin G [36]. Moreover, MPO associated with NETs exhibits a bactericidal activity needed to kill the pathogen in the presence of hydrogen peroxide [37]. The molecular details by which the different components of NETs interfere with the viability of the microorganisms remain to be elucidated.
S. aureus and its arsenal of virulence factors: The explanation for the high pathogenetic potential of S. aureus lies in the ability of the bacterium to express a large number of virulence factors and the capacity to colonize and infect host tissues and organs. (Fig. 3. Lytic NET formation. Different stimuli activate neutrophils via the activation of NADPH oxidase and induce consequent ROS formation. Then, PAD4 is activated and citrullinates histones in the nucleus, causing chromatin decondensation. At the same time, myeloperoxidase and elastase translocate from the cytosol to the nucleus, where they contribute to further unfolding of chromatin. Elastase also activates gasdermin D, which forms pores in the nuclear and plasma membranes. Consequently, chromatin is released into the cytosol and mixed with cytosolic proteins, forming NETs. After the secretion of NETs, neutrophils die.) The virulence factors are mostly surface-associated or secreted proteinaceous products. A significant number of cell wall-anchored (CWA) proteins act as receptors for extracellular matrix components (fibronectin, fibrinogen, collagen) and play additional roles in biofilm formation [38]. Biofilms are multicellular microbial communities formed on either biological or inorganic surfaces that are encased within a self-produced matrix. Fibronectin-binding proteins FnBPA and FnBPB and clumping factor A (ClfA) are the most relevant adhesins that bind to fibrinogen. Furthermore, as a result of their fibronectin-binding activity, FnBPA and FnBPB mediate S. aureus colonization of fibronectin-rich tissues and fibronectin-mediated host cell invasion. Protein A (SpA), another important CWA protein, binds to the Fc domain of IgG in an incorrect orientation, which results in the protection of staphylococci from opsonophagocytosis and killing. SpA also binds to the Fab region of surface IgM located on B lymphocytes, triggering the proliferation and apoptotic collapse of adaptive immune responses [39][40][41] (Fig. 2). S. aureus expresses more than 60 surface-exposed lipoproteins (Lpp), which are involved in a number of metabolic processes such as nutrient uptake (ion, sugar, amino acid and oligopeptide transporters), enzymes and foldases [42]. S. aureus has also evolved a series of secreted proteins/peptides that functionally interfere with complement C3 and C5 convertase activities and reduce the chemotactic activity of neutrophils. For example, the extracellular adherence protein Eap specifically inhibits both the classical and lectin pathways, disrupting the formation of the C4bC2 proconvertase. This results in the inhibition of C3b formation and a consequent reduction of S. aureus phagocytosis and killing by neutrophils. Moreover, S. aureus secretes a peptide named CHIPS (Chemotaxis Inhibitory Protein of S. aureus) that binds to the formyl peptide and C5a receptors, and this reduces neutrophil activation and migration to the site of infection [21]. To further contribute to pathogenesis, S. aureus secretes a vast group of cytotoxins, among them Hla, leukocidins and phenol-soluble modulins (PSMs). After binding to the host receptor ADAM-10, the Hla monomer oligomerizes to form heptameric pores in host cell membranes, causing the lysis and death of many cell types. Leukocidins are composed of two monomers, termed S- and F-subunits. The S-subunit recognizes a specific receptor on the plasma membrane and then dimerizes with the F-subunit.
This is followed by oligomerization of three additional dimers to form a complete octameric pore in the plasma membrane of leukocytes. PSMs are cytotoxins belonging to a family of amphipathic peptides (20-25 amino acid residues) that have a variety of roles in S. aureus pathogenesis such as cell lysis, immune modulation and biofilm formation [43] (Fig. 2). S. aureus induces NET formation by specific, secreted virulence factors: Nonlytic NET formation is caused primarily by intact cells of S. aureus (Fig. 2). NET formation is also promoted by released staphylococcal products. PVL (Panton-Valentine Leukocidin), but not the structurally similar leukotoxin gamma hemolysin CB (HlgCB), was reported as the dominant inducer of lytic NET formation [25,44]. There have been recent insights concerning the mechanism of PVL-promoted NET formation [25]. The mechanism starts with PVL binding to its specific receptor, which in turn leads to its endocytosis by neutrophils. In parallel, cytosolic intracellular Ca2+ is rapidly mobilized from the endoplasmic reticulum and a direct interaction of PVL with mitochondrial membranes occurs. The increase of intracellular calcium triggers the activation of small-conductance potassium (SK) channels, and PVL binding to neutrophil mitochondria induces an increase of reactive oxygen species (ROS) and triggers the enzymatic activity of MPO. Finally, PVL promotes PAD4 activation and the consequent formation of citrullinated histone 3. Importantly, PVL is also a potent cytotoxic factor and can induce rapid death in human and rabbit neutrophils [44]. We can hypothesize that the differential effects of PVL action in the studies reported above (cell lysis and NETosis in one case and cell death in the second) may be explained, in part, by the experimental conditions used. Specifically, under a critical threshold of PVL concentration, NET formation and release might favorably prepare the host to resist infection. Aside from PVL, other staphylococcal molecules contribute to NET formation. For example, leukotoxin LukGH (also named LukAB) promotes the release of NETs which, in turn, ensnare but do not kill S. aureus cells. It has been proposed that the ability of LukAB to promote the formation of NETs contributes to the inflammatory response and host defense against S. aureus infection [45]. Notably, LukAB released by staphylococcal biofilm contributes to the neutralization of NETs (see below) [46]. Moreover, phenol-soluble modulin α (PSMα) induces rapid formation of NETs through a ROS-independent pathway [47]. Finally, staphylococcal SpA has been shown to be involved in NET formation [48]. NET formation by S. aureus is facilitated by the cathelicidin LL-37 released by epithelial cells and phagocytes upon infection [49] (Fig. 4). It must be kept in mind that the biological activity of these staphylococcal virulence factors has been determined in vitro and under environmental conditions of pH and ionic strength that may differ from in vivo operational conditions. Moreover, it is not known whether each of the above factors is expressed in the bloodstream and host tissues in amounts sufficient to induce NET formation. Therefore, to validate the supposed role of these NET inducers, more studies with animal models and the use of specific deletion mutants will be needed. NET neutralization by S. aureus. Degradation of NETs by staphylococcal nuclease: Several Gram-positive bacteria, including S. aureus,
express 5′-nucleotidases (5′-NTs), enzymes that catalyze the hydrolysis of nucleoside monophosphates to produce nucleosides and phosphate [50]. In a study performed to examine the potential role of S. aureus nuclease in NET degradation and virulence in a murine respiratory tract infection model, Berends et al showed that an isogenic nuclease-deficient mutant lacked the ability to degrade NETs compared with the parental strain and consequently appeared to be more susceptible to extracellular killing by activated neutrophils. Conversely, nuclease expression by S. aureus enhanced the escape of bacteria from NETs in an in vivo mouse model of S. aureus respiratory tract infection [51]. In a more detailed study, Thammavongsa and colleagues found that S. aureus escapes host defenses by converting DNA in NETs to deoxyadenosine (dAdo) through the concerted action of two enzymes, nuclease and adenosine synthase (AdsA). dAdo in turn kills macrophages, preventing their infiltration into S. aureus-induced abscesses and thereby reducing their antimicrobial action [52]. Moreover, data produced by Herzog et al demonstrate that the high nuclease activity of S. aureus isolates correlates with long-term persistence and survival within the airways of cystic fibrosis patients due to protection against NET-mediated killing [53] (Fig. 4). In summary, S. aureus nuclease exhibits a critical role in NET degradation in vivo. However, due to the presence of a number of DNase inhibitors in serum (for example, C1q of the complement system), it remains to be determined whether S. aureus nuclease keeps its activity intact in body fluids. Eap protein and its effect on the bactericidal activity of NETs: The extracellular adherence protein Eap blocks NET formation and activities. By using atomic force microscopy, evidence has been provided that Eap can bind and aggregate linearized DNA. Consistent with this, Eap interferes with the formation of NETs, suggesting that it may protect bacteria from being trapped by structures such as microthrombi (Fig. 1) [54]. Notably, Eap and its homologues Ehp1 and Ehp2 potently inhibit the neutrophil serine proteases (NSPs) elastase (NE), proteinase 3, and cathepsin G [55]. Thus, Eap proteins could potentially block the enzymatic activities of NET proteases. However, considering that NET-bound NSPs could be inactivated by high concentrations of NSP inhibitors such as α1-proteinase inhibitor in serum [56], it is unclear whether the serine-protease inhibitory activity of Eap proteins plays an effective role in blocking the anti-bacterial activity of NETs (Fig. 4). FnBPB confers resistance to the bactericidal activity of NETs: As reported by several authors, histones are expressed and extruded in NETs in abundant amounts, estimated at 2.5 µg/10⁶ neutrophils, such that histones comprise more than two thirds of the total protein content within the NET structure. In a recent study it was discovered that fibronectin-binding protein B (FnBPB) is the main histone receptor and that histone H3 displays the highest affinity. Importantly, an FnBPB-deletion mutant bound less H3 and was more susceptible to histone bactericidal activity, whereas a mutant overexpressing FnBPB bound more H3 and was more resistant to killing by histones. This information raised the question of whether the inhibition by FnBPB of histone-mediated bacterial killing is biologically significant.
As a matter of fact, in a bactericidal assay promoted by NETs it was shown that FnBPB protected staphylococci from killing by NETs, demonstrating that FnBPB-mediated resistance is important when histones are present in a biologically relevant milieu [57] (Fig. 4). The histone-neutralizing activity of FnBPB is reminiscent of the behavior of the M1 protein, a classical Streptococcus pyogenes surface virulence factor, which also protects bacteria against released extracellular histones in NETs [58] (Fig. 4). S. aureus biofilm induces NETosis and blocks the antimicrobial activity of NETs: When they encounter neutrophils, S. aureus biofilms release the leukocidins PVL and HlgAB and induce the production of NETs and cytoplasts. The generated anuclear neutrophils, although still capable of permeating the biofilm structure and phagocytosing bacteria, were not effective at clearing the biofilms. Likewise, the induced NETs were not sufficient for clearing S. aureus biofilms. The inefficiency of these structures is attributed to the leukocidin LukAB, a toxin which promotes S. aureus survival during phagocytosis [46]. The persistence of biofilm bacteria trapped in NETs is also facilitated by S. aureus nuclease-mediated degradation of DNA in NETs, resulting in the dispersal of bacteria and persistence of the chronic infection [59]. Conclusions: Virulence factors of S. aureus and the role they play in the formation and destruction of NETs were examined here. Up to now, at least three staphylococcal factors have been identified which interfere with the antimicrobial activity of NETs: a) a nuclease that degrades DNA in NETs; b) the Eap protein, which forms complexes with linear DNA and, possibly, stabilizes the DNA structure and protects the molecule from enzymatic attack; and c) FnBPB, a staphylococcal surface protein that neutralizes the antimicrobial activity of histones (Fig. 2). Due to the complexity of the structure of NETs, other staphylococcal factors may target constituents of NETs. Scl-1, a streptococcal collagen-like protein in M1T1 group A Streptococcus, interferes with MPO activity and mediates bacterial survival in NETs [60]. Additionally, the M1 protein allows the survival of S. pyogenes in phagocytic extracellular traps through LL-37 inhibition [61]. Thus, we speculate that unidentified surface or secreted factors of S. aureus can further neutralize the function of enzymes, antimicrobial peptides, or ion-chelating agents of NETs. S. aureus cells can also induce the formation of NETs (Fig. 2) [44]. Specific S. aureus factors such as SpA [62], PVL [25], LukAB [45] and PSMα [49] directly elicit NET formation. The induction of NETosis is promoted by the α-enolase and pneumolysin of Streptococcus pneumoniae [49,63], by gingipains of Porphyromonas gingivalis [64], and by hydrogen peroxide produced by Streptococcus sanguinis [65]. Therefore, NET formation induced by bacterial species is a common event. How can we solve the apparent paradox that virulence factors produced to protect bacteria from host defense mechanisms also promote the formation of anti-bacterial structures? As reported above, NETs may either induce bacterial killing or mediate tissue damage during both acute and chronic inflammation [9,66,67]. Thus, it is possible that under specific pathological circumstances, S. aureus (and other bacterial species) benefit more from inducing the formation of NETs and using these structures to damage host tissues than from blocking the antibacterial activity of NETs.
This strategy could allow bacteria to have better access to metabolic resources, favor the colonization of deeper tissues and ultimately ensure safer, optimal survival in the host. A better understanding of the biochemical details of these alternative strategies and a clearer definition of the role played by new staphylococcal factors in NET formation and destruction in vivo are key for the development of novel therapeutic approaches to control and combat this formidable pathogen. Author statement: P.S. and G.P. conceptualized and wrote the manuscript. P.S. and G.P. supervised the overall conceptualization and draft writing. The authors have read and agreed to the published version of the manuscript. Declaration of Competing Interest: The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
2021-07-01T05:13:51.723Z
2021-06-06T00:00:00.000
{ "year": 2021, "sha1": "5476c25e9cf49468fb6f1224dda922f89075da5c", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.csbj.2021.06.012", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5476c25e9cf49468fb6f1224dda922f89075da5c", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
212427399
pes2o/s2orc
v3-fos-license
Preparation, Physico-chemical Characterization and Photocatalytic Properties of Se Doped TiO2 Nanoparticles SIMONA CAVALU*, SIMONA VICAS, TRAIAN COSTEA, LUMINITA FRITEA, DANA COPOLOVICI, VASILE LASLO University of Oradea, Faculty of Medicine and Pharmacy, 10 1 Decembrie Sq., 410081 Oradea, Romania University of Oradea, Faculty of Environmental Protection, 26 Gen. Magheru Str., 410048, Oradea, Romania University of Oradea, Industrial Engineering Doctoral School, 1 University Str., 410087, Oradea, Romania Aurel Vlaicu University of Arad, Faculty of Food Engineering, Tourism and Environmental Protection, Institute of Research, Innovation and Development in Technical and Natural Sciences, 2 Elena Dragoi Str., 310330, Arad, Romania Nanosized titania has been used intensively in recent decades due to its special chemical and physical properties compared with the bulk material. Many possible applications, such as gas sensing, electronic devices, photovoltaic cells, heterogeneous catalysis, photocatalysis and lithium batteries [1][2][3][4][5], have been investigated. Until now, TiO2 nanostructures such as nanotubes, nanorods, nanowires and nanosheets have been fabricated by various techniques, including sol-gel, hydrothermal and microwave-solvothermal methods [6,7]. Moreover, titanium dioxide-associated photocatalysis under ultraviolet (UV) irradiation was investigated as a strategy for developing bioactivity and antibacterial properties on biomaterials [7]. Titanium dioxide is considered very close to an ideal semiconductor for photocatalysis because of its high stability, low cost and safety toward both humans and the environment. For the first time, Matsunaga et al. [8] reported the photocatalytic activity of TiO2 as an effective microbiocide, photokilling Lactobacillus acidophilus, Saccharomyces cerevisiae and Escherichia coli. However, phototoxicity caused by photocatalytic activity may differ between anatase and rutile nanoparticles [9]. On the other hand, environmental applications, including photocatalytic treatment of wastewater, pesticide degradation and water splitting by TiO2 to produce hydrogen, are extensively reported in research studies [1,10]. Anatase TiO2 is considered to be the active photocatalytic component based on charge carrier dynamics, chemical properties and the activity of photocatalytic degradation of organic compounds. Previous research studies demonstrated that heat treatment is crucial in the synthesis of particles, being the most important factor regarding morphology, crystallinity and porosity modifications. For example, Wang et al. [11] investigated the relationship between the phase transformation and photocatalytic activity of nanosized anatase powder. They pointed out that, once the rutile TiO2 phase formed separately, the photocatalytic activity began to decrease rapidly. In order to enhance the photocatalytic properties of TiO2, especially in the visible light range, ion doping was considered to be an efficient approach [12]. Thus, many elements, such as silver, iron, nitrogen, sulphur, carbon and boron, have been doped into TiO2, aiming to decrease the band gap energy and enhance the photocatalytic activity of TiO2. An interesting approach, reported by several groups, is doping selenium into anatase TiO2 by various methods, which narrows the band gap energy and effectively extends absorption to the visible light range [13][14][15].
The aim of our work was to prepare and characterize selenium-doped titania nanoparticles with improved photocatalytic performance, with potential application in the degradation of organic dyes and pollutants in wastewater. Experimental part. Preparation of Se-doped TiO2 nanoparticles: The hydrothermal reaction method was applied for the production of Se-doped TiO2 nanoparticles, according to Zavala et al. [16] and Liu et al. [17]. Briefly, 7 g of TiO2 powder (Sigma-Aldrich, CAS No. 13463-67-7) was dissolved in 70 mL of 10 N NaOH under continuous stirring at 30 °C for 2 hours. The solution was introduced into an autoclave at 140 °C for 24 hours. The supernatant was removed and the TiO2 particles were washed with 0.1 N HCl until pH = 7 was reached, followed by centrifugation at 6000 r/min for 5 minutes. Finally, the particles were washed three times with ultrapure water and dried at 80 °C for 18 hours. The calcination of the TiO2 particles was performed in standard furnaces (Nabertherm GmbH) at 800 °C for 2 hours. The resulting particles were allowed to react with a mixture of NaHSeO3 (Alfa Aesar, 96%), 10,000 ppm, and lactose monohydrate (VWR Chemicals) in a 1:3 molar ratio. The final mixture was heated at 100 °C until the characteristic red color was achieved, as an indicator of Se nanoparticle formation [18]. The resulting particles were washed three times with deionized water, filtered and dried at room temperature. Structural and morphological characterization of Se-doped TiO2 nanoparticles: The TiO2 particles obtained by calcination at 800 °C and the Se-doped TiO2 particles obtained by hydrothermal reaction were characterized by dynamic light scattering (DLS), Fourier transform infrared spectroscopy (ATR-FTIR), X-ray diffraction (XRD) and scanning electron microscopy (SEM). DLS was applied to the selenium nano-colloidal sol using a ZEN 3690 instrument (Malvern Instruments) in order to determine the average particle size, size distribution and zeta potential. For the X-ray diffraction analyses of the samples, we used an X-ray diffractometer (MiniFlex 600, Rigaku, Japan). The following conditions were used for the analyses: 40 kV, 15 mA, CuKα monochromatic radiation, an SC-70 detector and an automatic sample changer (ASC-6). The scan range was 5°-60°, with a step width of 0.02° and a scan speed of 5°/min, at room temperature. The PDXL2 software, Version 2.4.2.0, containing the powder diffraction analysis package (PDXL Comprehensive Analysis), was used to analyze and calculate the lattice strain and crystallite size with the Halder-Wagner method. The powder samples were investigated by FTIR in the range 400-4000 cm⁻¹, using a Spectrum BXII spectrophotometer (PerkinElmer) equipped with a MIRacle ATR accessory (ZnSe crystal), at a scanning speed of 32 cm⁻¹ and a spectral width of 2.0 cm⁻¹. Morphological details of the TiO2 nanoparticles and Se-doped TiO2 nanoparticles were investigated using a Leo 438VP scanning electron microscope, with variable vacuum capability (maintained at a low value). Photocatalytic activity test: The test developed in this study, further referred to as the 'standard test', was a dye degradation test. The photocatalytic activity was evaluated in aqueous solution, by using methylene blue as a model chemical, in a batch photoreactor, under UV light irradiation with a 400 W Kr lamp (Osram). The illumination power of the lamp was mainly in the UV-A region. The photochemical reactor was a beaker containing the suspension of nanoparticles and methylene blue, which was placed in a continuously ventilated chamber.
0.9 g of photocatalyst was mixed with 180 mL of methylene blue solution (8 mg/L) under continuous stirring. The distance between the lamp and the reactor was 20 cm. After each scheduled time interval (30 minutes), 4 mL of solution was collected for spectral analysis and monitored using a UV-Vis spectrophotometer (Shimadzu UV-VIS 1700 Pharma Spec). Results and discussions: Immediately after the thermal reaction, the colloidal sols of the TiO2 particles and the Se-doped TiO2 particles, respectively, were analyzed by dynamic light scattering (DLS) in order to evaluate the particle size distribution and apparent zeta potential, the data being presented in Fig. 1. One can observe that the TiO2 particles present two different components: the first one, with low concentration and a maximum of the size distribution at about 20 nm, and the second one, with high concentration and a size distribution ranging from 40 to 250 nm (maximum at 100 nm). After the doping procedure and thermal treatment at 800 °C, the size distribution reveals the formation of nanoparticles with a wide range of diameters, from 20 nm to 500 nm (maximum at 180 nm). Three repeated measurements were performed for each sample. The corresponding zeta potential was -19 mV for TiO2 and -25 mV for Se/TiO2 particles. These values indicate good stability, especially for the Se/TiO2 nanoparticles. It is well known that the criterion for nanoparticle stability is a zeta potential higher than +20 mV or lower than -20 mV [19]. It was previously reported that the spectral region from 1000 to 400 cm⁻¹ is ascribed to the Ti-O stretching and Ti-O-Ti bending vibrational modes in anatase [20]. So, we can assign the band at 950 cm⁻¹ to Ti-O-Ti stretching vibrations in the TiO2 particles, while the broad band at 670 cm⁻¹ is assigned to the bending vibration. After the doping procedure, the stretching vibration of the Se-O bonds is observed as a distinct band at 870 cm⁻¹ and the bending vibration is visible at 570 cm⁻¹, which corresponds to the vibrational features of commercial selenium nanoparticle powder [21]. At the same time, the intensity of the Ti-O-Ti stretching vibrations is drastically reduced, concomitant with a shift to a lower wavenumber, 930 cm⁻¹, as shown in Fig. 2(b). The morphological features of the TiO2 and Se-doped TiO2 particles obtained after calcination at 800 °C are displayed in Fig. 3. One can observe that the TiO2 particles present a rod-shaped structure with a diameter of about 150 nm and variable length, agglomerated and randomly distributed (Fig. 3a). Upon the hydrothermal reaction and Se-doping of the TiO2 particles, the formation of TiO2 nanowires was noticed, with diameters of about 80 nm, together with spherical Se nanoparticles with diameters of about 200 nm, agglomerated and surrounding the TiO2 nanowires. We have previously demonstrated the formation of spherical Se nanoparticles in a hydrothermal reaction, with diameters ranging from 55 nm to 290 nm, depending on the reducing agent and reaction time [19]. In the present case, the spherical Se nanoparticles were produced in situ, as a consequence of the doping process followed by annealing of the TiO2 particles. At the same time, the morphological change of the titania particles from nanorods to nanowires occurred as a consequence of annealing, which is not surprising. The literature mentions the formation of crystallized titania nanotubes/nanorods and their transformation into nanowires depending on the synthesis conditions in the autoclave [22].
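The crystallite sizes and lattice strain quoted in the XRD analysis below were obtained with the Halder-Wagner method implemented in PDXL2. As a rough illustration only, a minimal Python sketch of that type of analysis is given here, assuming the Bragg angles and instrument-corrected integral breadths of the reflections have already been extracted; the peak list in the example is a hypothetical placeholder, not the measured data.

```python
# Halder-Wagner crystallite-size / strain estimate from XRD peak widths.
# (beta*/d*)^2 = (1/D) * (beta*/d*^2) + (eps/2)^2, with beta* = beta*cos(theta)/lambda
# and d* = 2*sin(theta)/lambda. Peak positions/breadths below are illustrative only.
import numpy as np

WAVELENGTH = 1.5406e-10  # Cu K-alpha wavelength, metres

def halder_wagner(two_theta_deg, beta_rad):
    theta = np.radians(np.asarray(two_theta_deg) / 2.0)
    beta = np.asarray(beta_rad)
    beta_star = beta * np.cos(theta) / WAVELENGTH   # reciprocal integral breadth (1/m)
    d_star = 2.0 * np.sin(theta) / WAVELENGTH       # reciprocal lattice spacing (1/m)
    x = beta_star / d_star**2
    y = (beta_star / d_star)**2
    slope, intercept = np.polyfit(x, y, 1)          # linear Halder-Wagner plot
    size_nm = 1.0 / slope * 1e9                     # volume-weighted crystallite size
    strain = 2.0 * np.sqrt(intercept) if intercept > 0 else 0.0
    return size_nm, strain

# hypothetical anatase reflections: two-theta (degrees), integral breadth (radians)
size, strain = halder_wagner([25.3, 37.8, 48.0, 53.9],
                             [0.0040, 0.0048, 0.0052, 0.0055])
print(f"crystallite size ~ {size:.0f} nm, lattice strain ~ {strain:.4f}")
```

In practice the slope and intercept would be obtained from the full list of fitted reflections, exactly as PDXL2 does internally.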
In order to identify the crystalline structure of the TiO2 and Se-doped TiO2 particles, XRD patterns were recorded and are presented in Figure 4. The XRD pattern of the TiO2 starting material (Fig. 4a) shows the typical crystalline pattern of anatase (tetragonal), with two sharp peaks at 24.99° and 47.75°, in good agreement with the standard diffraction data, while a mixture of anatase and rutile (tetragonal) was obtained after the annealing process (Fig. 4b) [23]. The estimated average crystallite sizes were 535(6) and 143(2) for the starting material and the annealed sample, respectively. After Se-doping of the TiO2 particles, the XRD pattern changed, as shown in Figure 4c, and the crystallite size was 230(14). The crystallite size and lattice strain normally influence the X-ray diffraction profiles. The results of the photocatalytic activity test (standard test) are presented in Figure 5, showing the typical time-dependent UV-Vis spectra of the methylene blue solution in the photochemical reaction with TiO2 and Se-doped TiO2. It can be seen that the intensity of the characteristic absorption peak of the methylene blue solution decreases with time in both cases, demonstrating that TiO2 and Se-doped TiO2 nanoparticles can effectively decompose methylene blue under visible light irradiation. With Se doping, the photocatalytic capability is significantly improved, as can be seen by comparing the absorption spectra in Fig. 5(a) and (b). The photoinduced electrons and holes play an important role in the photocatalytic reaction, by undergoing the following steps: 1) the electrons are captured by adsorbed O2 to produce superoxide radical anions (O2−); 2) the holes can oxidize OH− to produce •OH radicals; 3) both can further react with methylene blue, which is decomposed to CO2 and H2O. In a recent paper, Xie et al. explained the detailed reaction mechanism in a similar situation and concluded that the improved photocatalytic activity of Se-doped TiO2 under visible light can first be understood by the band gap decreasing into the visible range with increasing Se doping concentration, which might increase the absorption efficiency of visible light [15]. On the other hand, it is known that the photocatalytic activity of the TiO2 catalyst is highly dependent on crystal size and crystallinity [23,24]. Some other papers pointed out that the visible-light photocatalytic activity decreased at a higher calcination temperature (900 °C). As a matter of fact, the highest photoactivity observed under these conditions may be due to the synergistic effects of higher surface area, lower crystal size and higher dopant content [24]. Conclusions: Anatase TiO2 is considered to be the active photocatalytic component based on charge carrier dynamics, chemical properties and the activity of photocatalytic degradation of organic compounds. In our work, we demonstrated that selenium-doped titania nanoparticles may improve the photocatalytic performance of TiO2 nanoparticles, with possible application in the degradation of organic dyes and pollutants in wastewater. The fabrication and structural characterization of TiO2 and Se-doped TiO2 nanoparticles were presented in our study, including FTIR spectroscopy, XRD patterns and SEM morphological details. We assume that the high photocatalytic activity of the Se-doped TiO2 nanoparticles is due to the synergistic effects of higher surface area, lower crystal size and higher dopant content.
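The decrease of the methylene blue absorption peak described above is commonly summarized as a degradation percentage or a pseudo-first-order rate constant; the paper reports only the spectra, so the following sketch is purely illustrative, with hypothetical absorbance readings standing in for the measured values.

```python
# Pseudo-first-order fit of methylene blue photodegradation, ln(A0/A) = k*t.
# Sampling times follow the 30-minute schedule of the standard test; the
# absorbance values are hypothetical placeholders, not the measured data.
import numpy as np

t_min = np.array([0, 30, 60, 90, 120, 150])                   # sampling times (min)
absorbance = np.array([1.00, 0.78, 0.60, 0.47, 0.36, 0.28])   # A at the MB peak

y = np.log(absorbance[0] / absorbance)
k, _ = np.polyfit(t_min, y, 1)                                # rate constant (1/min)
degradation_pct = (1 - absorbance[-1] / absorbance[0]) * 100
print(f"k ~ {k:.4f} min^-1, degradation after {t_min[-1]} min ~ {degradation_pct:.0f}%")
```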
2020-02-13T09:22:26.225Z
2020-02-07T00:00:00.000
{ "year": 2020, "sha1": "7e7110eb8f6d65070787d77e0314bc61d8aed6ba", "oa_license": "CCBY", "oa_url": "https://revistadechimie.ro/pdf/3%20CAVALU%201%2020.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "9e205387e4b6fce86f51768239b0c46d046ddaa1", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Materials Science" ] }
122553899
pes2o/s2orc
v3-fos-license
Association between Green Tea Consumption and Risk of Stroke in Middle-Aged and Older Korean Men: The Health Examinees (HEXA) Study. Green tea consumption is known to have varying effects on health and disease. The aim of this study was to investigate the association between green tea consumption and risk of stroke in Korean adult men. Data were obtained from the Health Examinees (HEXA) Study, which included 50,439 subjects aged 40 years and older. Information regarding dietary intake was collected from semi-quantitative food frequency questionnaires consisting of 106 items. Green tea consumption was categorized as none, <1 cup/d, 1 to <3 cups/d, and ≥3 cups/d. Binary logistic regression models were used to estimate odds ratios (OR) and 95% confidence intervals (CIs) to examine a possible association between green tea consumption and risk of stroke by controlling for potential confounders. Subgroup analyses by age, body mass index, hypertension, diabetes mellitus, smoking status, and alcohol consumption were also performed. Compared with green tea non-drinkers, individuals who consumed 1 to <3 cups/d or ≥3 cups/d of green tea had multivariable-adjusted ORs (CIs) for stroke of 0.75 (0.59~0.97) and 0.62 (0.39~0.98), respectively, after adjusting for age and various confounders. In the subgroup analyses, an inverse association between green tea consumption and risk of stroke was identified among younger, non-hypertensive, and non-diabetic men. Higher consumption of green tea was inversely associated with stroke risk in middle-aged and older Korean men. INTRODUCTION: In 2017, cerebrovascular disease was the third leading cause of death in Korea, after cancer and heart disease (1). Ten years ago, cerebrovascular disease was the second leading cause of death. Despite a decrease in its incidence, cerebrovascular disease remains a major health concern due to its serious complications and reduced quality of life (2). Generally, stroke is accompanied by severe physical disabilities and other health problems in addition to disease burden (3). Risk factors for stroke include age, family history, race, gender, transient ischemic attack history, a history of cardiac arrest, hypertension, smoking, drinking, diabetes, arterial disease, atrial fibrillation, heart disease, sickle cell disease, hyperlipidemia, poor diet, lack of physical activity, obesity, drug use, stress, and geographic location (4,5). Tea is one of the most popular natural beverages worldwide. There are various kinds of teas which have different fermentation processes, including non-fermented green tea, semi-fermented oolong tea, and fully fermented black tea (6). Green tea currently accounts for 20% of the world's tea consumption market and is mainly consumed in Asian countries, including Korea, China, and Japan (6). According to health statistics issued by the Ministry of Health and Welfare and the Korea Centers for Disease Control and Prevention in 2016, average green tea consumption per week is 0.66 cups, with 0.49 cups consumed by women and 0.83 cups consumed by men (7). The relationship between green tea consumption and risk of stroke has previously been studied in Asia. An inverse association between green tea consumption and risk of stroke has previously been identified in a Japanese cohort study (8), a case-control study in Korea (9), and a cross-sectional study conducted in China (10).
In addition, several epidemiological studies have reported that green tea has favorable effects on cardiovascular and ischemic-related diseases (11), blood pressure (12,13), liver cancer (14), and colorectal cancer (15). In meta-analyses published in 2009 (16) and 2016 (11), consumption of >1 cup of green tea/d was associated with a lower risk of stroke compared to consumption of <1 cup of green tea/d. However, neither meta-analysis included data from Korea. Therefore, the purpose of this study was to investigate a possible association between green tea consumption and stroke prevalence in Korean adult men by using data available from the large, population-based Health Examinees (HEXA) baseline Study. Study population: HEXA, a large, population-based prospective cohort study, was conducted in Korea from 2004 through 2013 by the Korean Centers for Disease Control and Prevention. Details regarding the methods of this study have previously been described (17). The main objective of this large-scale genomic cohort study was to identify general epidemiological characteristics of major chronic diseases in Korea. A total of 173,357 subjects aged 40 years or older were enrolled, recruited at major hospitals (n=38) and local health examination centers located in eight regions in Korea. Strict standardized study criteria were employed, and socio-demographic characteristics, medical and family history, medication usage, lifestyle factors, diet, and physical activity data were collected from interviews conducted by well-trained research staff. At baseline, 59,294 men were recruited into the HEXA Study. Among these participants, those without information regarding a stroke diagnosis (n=258) and those with a family history of stroke (n=7,987) were excluded. After excluding participants who did not report information regarding green tea consumption (n=610), data from 50,439 men were analyzed. Written informed consent was obtained from all participants prior to the study start, and the study protocol was approved by the Institutional Review Board of Ewha Womans University in Seoul, Korea. Data collection: Information about the general demographic characteristics, lifestyle factors, and diagnosis of stroke for the cohort examined was obtained from a structured questionnaire. However, stroke was evaluated only based on answers obtained for the question "Have you ever received a diagnosis of stroke by a doctor?". Consequently, the type of stroke (e.g. ischemic vs. hemorrhagic) could not be distinguished and separately analyzed. Categorization of body mass index (BMI) and definitions of hypertension and diabetes mellitus were determined as previously described (18). Obesity was defined as a BMI ≥25 kg/m² according to the guidelines of the Steering Committee of the Regional Office for the Western Pacific Region of the World Health Organization (19). Marital status was categorized as married or single (with the latter including unmarried, divorced, separated, cohabiting, and widowed individuals). Education level was classified as middle school or less, high school graduate, and college or above. Household income was categorized as <200, 200 to 400, and >400 (10,000 won/month). Smoking status was categorized as non-smoker, ex-smoker, and current smoker, and alcohol drinking status was categorized as non-drinker, ex-drinker, and current drinker. Regular exercise was evaluated based on responses to the question "Do you regularly exercise enough to the point you are sweating?".
Chronic diseases, including hypertension, diabetes, and hyperlipidemia, were defined based on specific criteria of each disease or according to diagnosis by a doctor. Hypertensive subjects were defined as having a systolic blood pressure ≥140 mmHg or a diastolic blood pressure ≥90 mmHg (20), diabetes was defined as a fasting blood glucose level ≥126 mg/dL, and hyperlipidemia was defined as a total cholesterol level ≥240 (21). The dietary intake and green tea consumption of participants were estimated based on a semi-quantitative food frequency questionnaire that consisted of 106 food items. This questionnaire was developed by the Korea Centers for Disease Control and Prevention and has previously been assessed for its reliability and validity for the Korean population (22). Participants were queried by trained interviewers regarding their usual intake amount of foods (including green tea) over the previous year. In the questionnaire, green tea consumption was categorized according to frequency (none, once a month, 2 to 3 times a month, 1 to 2 times a week, 3 to 4 times a week, 5 to 6 times a week, once a day, twice a day, or 3 times a day) and average intake per serving (1/2 cup, 1 cup, and 2 cups). However, for the present analysis, we adjusted these categories to consider both intake and frequency as follows: none, <1 cup a day, between 1 and <3 cups a day, and ≥3 cups a day. Statistical analysis All statistical analyses were performed with SAS software (version 9.4; SAS Institute Inc., Cary, NC, USA). Statistical significance was set at P<0.05. General characteristics of subjects were compared using the Chi-square test for categorical variables and general linear regression for continuous variables. Based on previously published literature, the following potential confounding factors were analyzed: age, gender, education level, alcohol consumption, regular exercise, BMI, smoking habits, carotid or other artery disease, transient ischemic attacks, hypertension, hyperlipidemia, ethnicity, heart failure, drug abuse, and poor diet (4,5,8,16,23,24). Pearson correlation coefficient and variance inflation factor were used to investigate multicollinearity between stroke confounding factors. The final model adjusted for age, education level, alcohol consumption, smoking habits, regular exercise, BMI, diagnosis of hypertension, diabetes, hyperlipidemia, intake of vegetables and fruits, red meat intake, coffee consumption, and total energy consumed. Carotid or other artery disease, transient ischemic attacks, ethnicity, heart failure, and drug abuse were not considered to be confounding factors due to lack of information about these factors in present data. Logistic regression models were used to assess a possible association between green tea consumption and stroke prevalence. Odds ratios (OR) and 95% confidence intervals (CIs) for each category of green tea consumption were calculated using dichotomous logistic regression, and trends were assessed in further analysis. To check additional interactions, stratification analyses according to age, BMI, hypertension, diabetes, smoking status, and alcohol consumption were performed. RESULTS General characteristics of the participants according to green tea consumption are presented in Table 1 (none, n =19,801; <1 cup/d, n=21,350; 1 to <3 cups/d, n=7,064; ≥3 cups/d, n=2,224). The prevalence of stroke in the study population was 1.54% (data not shown). 
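As an illustration of the modelling step described in the Statistical analysis section above, the sketch below shows how category-wise odds ratios and 95% CIs can be obtained from a dichotomous logistic regression. The original analysis was run in SAS 9.4 on the HEXA data; the Python/statsmodels code, the variable names, and the toy data frame here are hypothetical stand-ins, not the actual analysis.

```python
# Illustrative logistic-regression step: odds ratios and 95% CIs for stroke by
# green tea category, adjusted for covariates. Column names and data are
# hypothetical placeholders; the published analysis used SAS 9.4 on HEXA data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# toy data frame with the same general layout as the analysis data set
rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "stroke": rng.binomial(1, 0.015, n),                      # 0/1 outcome
    "tea_cat": rng.choice(["none", "<1", "1-<3", ">=3"], n),  # exposure category
    "age": rng.integers(40, 80, n),
    "bmi": rng.normal(24, 3, n),
    "smoker": rng.choice(["never", "ex", "current"], n),
})

model = smf.logit(
    "stroke ~ C(tea_cat, Treatment(reference='none')) + age + bmi + C(smoker)",
    data=df,
).fit(disp=False)

# exponentiate coefficients to obtain ORs with 95% confidence intervals
params = model.params
ci = model.conf_int()
summary = pd.DataFrame({
    "OR": np.exp(params),
    "CI_low": np.exp(ci[0]),
    "CI_high": np.exp(ci[1]),
})
print(summary.filter(like="tea_cat", axis=0))
```

In the real analysis the formula would additionally include education, alcohol consumption, regular exercise, the chronic-disease indicators, the dietary covariates and total energy intake listed above.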
The mean intake of green tea was 0.52 cups/d; 4.4% of men drank ≥3 cups/d, and 39.2% of men did not drink any green tea (data not shown). All of the variables examined (e.g., age group, marital status, education level, household income, BMI, alcohol consumption, smoking status, regular exercise, diabetes, hyperlipidemia, intake of vegetables and fruits, red meat intake, and coffee consumption), except hypertension, showed significant differences based on green tea consumption. Compared with those who did not drink green tea, participants who reported an intake of ≥3 cups of green tea/d were younger in age, had a higher education level and income, and were more likely to be non-smokers. Those who reported higher frequencies of green tea consumption also tended to undertake more exercise and to have a higher intake of energy, vegetables, fruit, red meat, and coffee. When stratified analyses were performed on the association between stroke prevalence and green tea consumption (Table 3), an inverse association between stroke prevalence and green tea consumption was observed according to age (<65 y), history of hypertension, and history of diabetes. [Table footnotes: ORs were adjusted for age, education, alcohol consumption, smoking, regular exercise, BMI, hypertension, diabetes mellitus, hyperlipidemia, vegetable and fruit intake, red meat intake, coffee consumption, and total energy intake; in each stratified model, the stratification variable was omitted from its own adjustment set.] Moreover, an inverse association between green tea consumption and stroke prevalence was predominantly observed among participants younger than 65 years (OR, 0.48). DISCUSSION: In the present study, an association between the prevalence of stroke and green tea consumption in Korean men over 40 years of age was examined using data from the HEXA baseline Study. After controlling for potential confounding factors, the prevalence of stroke was found to be inversely associated with consumption of green tea. Moreover, a clear dose-response relationship was identified between prevalence of stroke and green tea consumption. Black tea is very popular in Europe and North America, while green tea or oolong tea remains popular in Asian countries.
Studies examining the relationship between green tea intake and stroke have mainly been conducted in Asia, and the results from most of these studies are consistent with the present findings. For example, in a cohort study of 6,358 people aged 40 to 89 years in Japan, the group that consumed ≥5 cups of green tea per day had a 59% reduced risk of stroke compared with those who consumed up to 1 cup per week. However, while the risk of stroke was reduced by 73% in men, no significant reduction in stroke risk was observed in women (23). In another cohort study of 82,369 Japanese participants, those who consumed ≥4 cups of green tea per day had a 20% reduced risk of stroke compared with those who did not drink green tea (8). In addition, a case-control study of 940 patients who had experienced a hemorrhagic stroke and 940 age-matched controls from the same community in Korea showed a 39% reduced risk of stroke in those who consumed 7 to <14 cups of green tea/week compared with those who did not drink green tea (9). In cross-sectional studies, it was also observed that increased green tea consumption was associated with a decrease in prevalence of stroke in a Chinese population (10). In a meta-analysis published in 2009, which included six observational studies, ≥3 cups of green tea per day was found to prevent development of strokes (16). Furthermore, a meta-analysis of nine studies published in 2016 showed that an intake of 1 to 3 cups of green tea per day was inversely associated with risk of myocardial infarction and stroke (11). The mechanism(s) responsible for the effect of green tea consumption on stroke prevention remain unclear. Polyphenols have been reported to have a positive effect on a variety of chronic diseases. Catechin is a flavan-3-ol compound and is a major candidate for mediating the beneficial effects of green tea for coronary heart disease (25,26). The four major types of catechins are (-)-epigallocatechin gallate (EGCG), (-)-epigallocatechin (EGC), (-)-epicatechin gallate (ECG), and (-)-epicatechin (EC) (25,27). Among these green tea catechins, EGCG is the most abundant and has been shown to exert positive effects on hypertension, metabolic syndrome, thrombosis, and cardiovascular diseases (12,28,29). It has been hypothesized that the catechin molecules in green tea may prevent vascular diseases and stroke based on the antioxidant, anti-inflammatory, anti-proliferative, and antithrombotic effects of catechins (25,(30)(31)(32). In animal studies, consumption of green tea at an early age prevented the onset of stroke. The authors suggested that green tea also has potential to block onset of hypertension and stroke in the elderly (33). Despite the positive effects, green tea can also cause insomnia, headaches, vomiting, and tachycardia due to the presence of polyphenols, such as caffeine and catechin. The tannin component of green tea may also cause anemia by interfering with absorption of iron (34). However, these polyphenol components may vary depending on the type of green tea and the method by which it is brewed. This heterogeneity may explain the differences in results between studies. Thus, further investigations at the molecular level of anti-stroke mechanisms involving green tea are warranted. In the present study, the association between green tea consumption and stroke prevalence differed according to gender. A significant association was observed among men but not women (data not shown). This is consistent with results obtained in a previous Japanese study (23). 
In the present study, it was not possible to identify a clear mechanism for the observed gender-related differences in stroke risk. Gender differences are important risk factors for stroke, as well as for other chronic diseases, for multiple reasons. Numerous studies have found that reproductive factors, including hormone replacement therapy use, oral contraceptive use, menopause, and pregnancy, also increase the risk of stroke (35)(36)(37). Specific risk factors include excess androgen secretion and decreased estrogen during the menopause transition period, as well as abdominal obesity and levels of triglycerides, total cholesterol, low density lipoprotein cholesterol, and fasting blood glucose (38,39). A cohort study conducted in the United States reported an increased risk of ischemic stroke in men receiving testosterone therapy (40). Other factors, such as the presence of chronic diseases (e.g., diabetes, atherosclerosis, and dyslipidemia), physical activity, absorption rate, bioavailability of green tea components, eating habits, and lifestyle differences between genders may also influence this association. Further research is needed to investigate the different risk factors affecting the association between green tea and stroke prevalence; these studies are warranted to explore gender-related differences in stroke risk. In the present subgroup analyses, the association between green tea consumption and decreased stroke prevalence was more pronounced in participants younger than 65 years of age and in those who had never been diagnosed with diabetes mellitus or hypertension. While the mechanistic details of these results remain unclear, we propose several possibilities. As men get older, levels of testosterone decrease. McInnes et al. (41) reported that mice with impaired testosterone function in adipose tissue have a greater tendency to develop insulin resistance compared with control mice. Similarly, in humans, low levels of testosterone have been associated with type 2 diabetes (42). Oral consumption of green tea extract significantly improved serum testosterone levels in albino rats with acrylamide-induced testicular damage (43). Based on these results, green tea intake may affect testosterone levels. There is a lack of evidence explaining the differing association between green tea consumption and stroke by age. It has been proposed that genetic factors, including genetic predisposition, are more prevalent for younger stroke patients (44). Although clinical risk factors were increased in older patients, other modifiable factors such as dyslipidemia, smoking, physical activity, and diet must be considered (45). The prevalence of diabetes, hypertension, and heart disease differed between young and old stroke patients, which may have affected the risk of stroke (46). The rate of calcium and xanthohumol absorption decreases with age (47,48). Although these are not major components of green tea, the absorption of other bioactive components in green tea may also be decreased by age. Future research into the mechanisms of bioactive green tea components is warranted. A study conducted in Japan reported that administration of the antihypertensive drug nadolol in combination with green tea leads to lower plasma nadolol concentrations and a reduced antihypertensive effect (49). This supports the observation of our present study that hypertensive patients who were taking medication did not exhibit any beneficial effect of green tea on the risk of stroke.
Furthermore, age, hypertension, and diabetes mellitus are well-known risk factors for stroke. In the present study, the participants who consumed ≥3 cups of green tea per day were the youngest. Therefore, it is plausible that young, non-diabetic, and non-hypertensive individuals simply have fewer risk factors for stroke, and that lifestyle and multiple unknown factors may explain these data; additional mechanistic studies are needed.
This study has several limitations. First, it was difficult to investigate a cause-and-effect relationship between green tea consumption and stroke risk because of the cross-sectional study design. To compensate, we adjusted for several confounding factors and stratified the data. Second, information about the types of ischemic and hemorrhagic strokes experienced by the cohort was not collected; consequently, the possibility that the results are diluted cannot be excluded. Third, stroke status was based on a self-reported answer about diagnosis by a doctor. Lastly, due to a lack of information in the original study, other stroke risk factors (e.g., history of prior stroke, transient ischemic attack, myocardial infarction, carotid disease, peripheral artery disease, atrial fibrillation, heart failure, and drug abuse involving cocaine, amphetamines, and/or heroin), which have been identified by the American Heart Association and American Stroke Association, could not be controlled for. The major strength of this study is the large number of stroke patients examined, which provided high statistical power for the analysis. Moreover, to our knowledge, this is the first large-scale study to investigate a possible association between stroke prevalence and green tea consumption in Korea. Although coffee consumption has recently increased markedly in the Korean population, many Koreans still enjoy drinking green tea; therefore, this study provides useful health information that may be of interest to the public. In conclusion, the data from this cross-sectional study indicate that green tea consumption is inversely associated with stroke prevalence in Korean adult men. These results are consistent with previous epidemiological studies, suggesting that green tea intake is beneficial for stroke prevention. Additional long-term cohort studies are warranted to confirm this result.
RESEARCH ARTICLE Study on Theoretical Models of Regional Humanity Lung Cancer Hazards Assessment
Chuan Zhang 1,2, Xing Gao 1,2*
Purpose: To establish the concept of theoretical models for lung cancer hazard assessment and to evaluate the degree of lung cancer risk in Beijing, thereby providing a basis and technical support for regional population lung cancer hazard assessment. Materials and Methods: ISO standards were used to carry out a stratified analysis covering the entire population, the whole life cycle, disease processes, and socioeconomic management. The associated risk factors were evaluated as the first-class indicators of lung cancer hazard risk assessment. Study design: Using the above materials, the indicators were assigned weight coefficients to build a theoretical model of lung cancer risk assessment. Regional data for Beijing were entered into the theoretical model to calculate the parameters of each indicator and to evaluate the degree of local lung cancer risk. Results: Adopting the concept of lung cancer hazard assessment and theoretical models for regional populations, we established a lung cancer hazard risk assessment system comprising 2 first-level indicators, 8 second-level indicators, and 18 third-level indicators. All indicators were assigned weight coefficients and used as information sources. The lung cancer hazard score for Beijing was 84.4. Conclusions: Comprehensively and systematically building a theoretical model of lung cancer risk assessment for regional populations is feasible; it allows the degree of lung cancer risk in Beijing to be evaluated and provides technical support and a scientific basis for preventive interventions.
Introduction
Lung cancer has become a major global public health problem that is harmful to human health (Stewart et al., 2014) and is caused by multiple risk factors, including social and environmental factors, unhealthy behaviors, and lifestyle. Smoking and long-term exposure to air pollution are the main factors increasing the risk of lung cancer. Past research has included specific, service-oriented, and management-oriented studies of particular settings and fields, but it has often lacked comprehensive and systematic risk assessment studies and theoretical model studies; such studies have therefore been unable to clarify the severity of the risk or to carry out a quantitative assessment. Outside a few countries, the effect of lung cancer prevention and treatment is not obvious, especially in developing countries, where the risk of lung cancer is increasing. The aim of this work is to establish theoretical models of lung cancer hazard assessment, to evaluate the degree of lung cancer risk in Beijing, and to provide a basis and technical support for regional population lung cancer hazard assessment and for the scientific understanding of lung cancer hazards, so as to reduce risks and protect human health.
Reference standards
Risk assessment was carried out according to the IEC/ISO 31010:2009 risk assessment criteria (ISO, 2009). Public health risk assessment followed the WHO guidance on reducing risk and emergency preparedness (Cluster et al., 2006). Lung cancer risk assessment followed the WHO "Cancer Control: Knowledge into Action" series (WHO, 2005-2008). The theoretical model of lung cancer was therefore composed of the three sets of criteria above, which supplied the indicator system framework and the rating criteria.
Objective The primary perspective is that of health care, which included majority of statistical information including [2005][2006][2007][2008][2009][2010][2011][2012][2013] year population mortality among Beijing,atrisk populations, incidence or prevalence of lung cancer and healthy population. The lung pathological observation information including squamous cell carcinoma, adenocarcinoma and small cell lung cancer, pathological stage (stage 0-IV). The society factors included socioeconomic, environmental, behavioral and biological aspects. Material This analysis is based on data from " 2014, 2008and 2003WHO World Cancer Report", "2013IARC air pollution and cancer research reports", "2005 Chinese cancer registry annual report", "Report on 2009-2013 health and population health status in Beijing", "2012 Sunshine Wall: lung cancer prevention and action in Beijing", etc. A subset of these was used to construct the original risk model. Methods Standard Method: Based on ISO risk assessment standards, WHO public health risk assessment criteria and lung cancer risk assessment methods, we proposed to define the risk of lung cancer, the theory and framework of indicators, including: the hazards, vulnerability, prevention and control capabilities. Hazards are divided into risk factors and severity. Review: Global, European Organization for Economic Cooperation (OECD), China and Beijing lung cancer related information and reports were analyzed for classification to identify cancer risk factors, clinical features, histopathological observation, the causal relationship and hazard risk control points. AHP: The lung cancer risk factors and their severity for whole population were stratified analysis, seeking to risk factors level and various stages of disease ranging from development to death, which analyzed interaction between these and attributable risk. Study Design Using the above materials, establishing lung cancer hazard risk assessment indicator system, the whole indicators were given the weight coefficients, building a lung cancer risk assessment theoretical models.Take the regional data in Beijing into the theoretical model to calculate the parameters of each indicator and the result risk of Beijing, evaluating the degree of lung cancer risk of Beijing. Lung Cancer Hazard Assessment and theoretical models Risk referred to the effect of something being risk factors coming to damage or unexpected events, namely the probability of an event occurring. The probability of the public health risk factors was increasing depending on cause of disease or injury, we often told it health risk factor. The serve of the public health risk is determined by three factors, including hazard, vulnerability and prevention capability. Expressed by equation, the risk of hazard was equal to vulnerability multiplied by prevention capability. The more serious hazard would be, the greater vulnerability was increasing, the weaker prevention and control capacity would have, the greater the risk of disease would be. 
The public health risk assessment model is:
Public health risk (R) = Hazard (H) × Vulnerability (V) / Prevention capability (C)
where risk is represented by R, hazard by H, vulnerability by V, and prevention capability by C. The corresponding lung cancer risk assessment model is:
Lung cancer risk (LCR) = Lung cancer hazard (LCH) × Lung cancer vulnerability (LCV) / Lung cancer prevention capability (LCC)
Thus, the regional lung cancer risk assessment formula was established on the basis of the above factors:
R_y = (c_{y,h} × c_{y,v}) / c_{y,c}
where R_y is the regional score of lung cancer risk, c_{y,h} the regional score of lung cancer hazard, c_{y,v} the regional score of lung cancer vulnerability, and c_{y,c} the regional score of lung cancer prevention capability. When the influence of vulnerability and prevention capability is negligible (their scores are neutral), the hazard score and the risk score coincide; in that case hazards are risks, and the assessment can be called a hazard risk assessment. Lung cancer is related to social, environmental, behavioral, and genetic factors that interact with one another over the long term, affecting lung cancer gene mutations and abnormal cell proliferation across the whole population and the entire life cycle. The hazard score formula is:
c_h = Σ_{i=1}^{N_i} w_{i,h} × c_{i,h}
where c_h is the score of the lung cancer hazard indicators, c_{i,h} the score of the i-th fourth-level indicator, w_{i,h} the corresponding weight coefficient, and N_i the number of lung cancer hazard indicators. According to the indicator system and assessment criteria, when evaluating the lung cancer hazards of a regional population, all indicators are scored and assigned weight coefficients; the regional population lung cancer hazard risk assessment formula is then used to calculate the weighted scores and evaluate the regional population hazard risk.
Indicators system for the theoretical model
Lung cancer hazards and the risk factors affecting population health were determined on the basis of lung cancer data and information from global sources, the OECD, China, and Beijing (Colditz et al., 2000; Spitz et al., 2007; Spitz et al., 2008; Cassidy et al., 2008). The lung cancer hazard assessment index system was established according to the ISO risk assessment standard as the first-class indicator (Table 1: the lung cancer hazard assessment indicator system framework and rating criteria). The first-level indicator: based on the WHO public health risk assessment guidelines, the hazards of lung cancer were divided into risk factors and severity indicators. The second-level indicators: the risk factors were divided into social determinants of health, environmental risk factors, behavioral and lifestyle risk factors, and others; the severity was divided into epidemiology, clinical stage, histopathological observation, carcinogenic mechanisms, and recoverability, which together form the second-level indicators. The third-level indicators: the social determinants of health were divided into income level, education level, population growth, degree of urbanization, degree of industrialization, total energy, and vehicle ownership; the environmental risk factors were divided into air pollution and fog haze; the behavioral risk factors were divided into smoking, lack of exercise, and mental stress. Epidemiological indicators were further divided into morbidity and mortality. Clinical features were represented by clinical stage. Histopathological observation was classified by pathology. Pathogenesis was classified according to the type of risk factor. Recoverability was divided into reversible and irreversible.
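As a concrete illustration of the two formulas above (the weighted hazard sum and the regional risk ratio), the following is a minimal sketch in Python; the function names and numbers are ours and purely illustrative, not the authors' implementation or the Beijing data.

```python
# Minimal sketch under the assumed reading R = (H * V) / C and
# c_h = sum_i w_{i,h} * c_{i,h}. All names and numbers are illustrative.

def hazard_score(indicator_scores, weights):
    """Weighted sum of indicator scores: c_h = sum_i w_{i,h} * c_{i,h}."""
    if len(indicator_scores) != len(weights):
        raise ValueError("one weight per indicator is required")
    return sum(w * c for w, c in zip(weights, indicator_scores))

def regional_risk(hazard, vulnerability, capability):
    """R_y = (c_{y,h} * c_{y,v}) / c_{y,c}; stronger capability lowers risk."""
    if capability <= 0:
        raise ValueError("prevention capability must be positive")
    return hazard * vulnerability / capability

# Hypothetical indicator scores (0-100) and weights summing to 1.
h = hazard_score([80, 90, 85, 70], [0.10, 0.05, 0.35, 0.50])
print(regional_risk(h, vulnerability=1.0, capability=1.0))  # 77.25
```

With the vulnerability and capability terms set to a neutral value of 1, the risk score reduces to the hazard score, which corresponds to the hazard-only assessment used in the remainder of the paper.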
The third indicators of hazards of lung cancer risk assessment were made up these indicators together, and thus further subdivided into fourth indicators. Determinants of risk factors Social determinants of health Social determinants of health can be further divided into the economy (WHO, 2008), culture (WHO, 2010), education, population, urbanization (Cohen et al., 2004;Pruss-Ustun., 2006), industrialization, energy and vehicle ownership (Garshick et al., 2008) indicators. These could directly affect people's social status and fairness and accessibility of health care resources, but also indirectly affect human health including development and progression of lung cancer. In the generally, the total GDP, per capita GDP were used to measure the level of economic development. While, use the illiteracy rate and education measured level of education. Total population and population density were measured to describe demographic conditions. The rate of urbanization and urban zoning were measured to assess degree of urbanization. Used the Huffman coefficient measured the degree of industrialization. Use the number of motor vehicles to measure vehicle pollution situation. Measure the total energy production with the extent of environmental pollution, which reflected a country or region social determinants of hazards level of lung cancer. Environmental risk factors IARC have reported (Straif et al., 2013) that air pollution, fog haze would directly or indirectly increase human lung cancer morbidity or mortality which contributed to human lung cancer death rate about 8% (WHO, 2009). The total suspended particulate air pollution (TSP), respirable particulate matter (PM10) and fine particulate matter (PM2.5), which contained benzopyrene, hexavalent chromium, arsenic and its compounds, mutagens, teratogens and other harmful substances, so that did increase number of incidence lung cancer significantly (Pope et al., 2002;Yang et al., 2005;Cui et al., 2009). Currently, using the average concentration of PM2.5/per year reflected environmental risk factors (Pongpiachan et al., 2013). When PM2.5 air concentrations increased each additional 10μg/m3, the incidence of lung cancer would increase 1.29-fold (Hystad et al., 2013). Since 2013, fog haze was frequently in China and PM2.5 average concentration was high over 150μg/ m3, greatly harmed the health of population, in particular caused lung cancer. Behavioral risk factors Smoking (Zeegers et al., 2000;WHO, 2004;Wang et al., 2008;Hwang et al., 2013), lacking of physical activity (Sun et al., 2012) and mental stress (WHO, 2010) were all the main risk factors of lung cancer increased incidence hazards. Where smoking is the primary risk factor for lung cancer, the active and passive smoking rate were available to assess the target population which contributed to lung cancer death rate about 71% (WHO, 2009). Lacking of physical activity (WHO, 2009) and available psychological pressure rates were used lacking of physical activity rate and psychological disorders as population indicators. Severity determinants Epidemiological characteristics: The incidence, mortality and rank order of causes of death were epidemiological indicators to evaluate the severity of lung cancer, based on WHO chronic non-communicable diseases evaluation methods as (1/100,000) a unit, compared with no matter what countries incidence and death situation comprehensively and systematically. 
Clinical Features According to the United States "Non-Small Cell Lung Cancer, NSCL" and Ministry of health of the people's republic of China "primary lung cancer clinical diagnostic and treatment guideline", clinical features can be represented by the clinical stage. They were carried out in accordance with TNM stage, T for the primary tumor, N for regional lymph nodes, M for distant metastasis, therefore lung cancer can be divided into invisible lung, 0, Ⅰ, Ⅱ, Ⅲ and Ⅳ stage. Histopathological observation According to WHO the degree of differentiation and morphological characteristics of various types of lung cancer, which were divided into two categories, including small cell lung cancer (SCLC) and non-small cell lung cancer (NSCLC), the latter consisted of squamous cell carcinoma, adenocarcinoma and large cell carcinoma. Early (IA) (Ahmed et al., 2013) five-year survival rate can reach 70% for non-small cell lung cancer. However, the overall five-year survival rate was only 5% for small cell lung cancer, the average 5-year survival in patients with advanced was less than 1% (Minna et al., 2008). Carcinogenic mechanisms Lung squamous cell carcinoma (LSCC) was a kind of malignant epithelial tumors derived from bronchial epithelial, it may show keratosis and/or intercellular bridge features. Spindle cell carcinoma was the most common and accounted for 40%-50% of primary lung cancer, of which more than 90% occurred in smokers. Lung adenocarcinoma (LAC) related to air pollution, more likely to occur in women and non-smokers. Its incidence was lower than squamous cell carcinoma and Recoverability lung cancer diagnosis based on WHO criteria, 0 and Ⅰstage though surgical resection would be completely cured. Otherwise, the other types of lung cancer were considered unrecoverable. Weight coefficients for theoretical model According to domestic and foreign Meta-analysis of the results of lung cancer-related literature, identified by RR values, OR value as quantitative indicators of lung cancer risk factors. While expert analysis and Delphi method as qualitative indicators determined. Above all determined first, second, third and fourth indicators respectively and weight coefficients. The hazards were first indicator and set to score 100 based on ISO, whose weighting factor was 1. The first indicator: according to WHO public health risks, the lung cancer risk factors and severity were the assigned by 50 points, each weighted coefficient of 0.5. The secondary indicators: the contribution of risk factors for lung cancer morbidity and mortality: the social risk factors were 20%, the environmental risk factors were 10%, behavior risk factors were 70%. The social risk factors was assigned to 10 points. The environment risk factors were assigned to 5 points. Behavioral risk factors were assigned to 35 points. The contribution of severity for lung cancer at different levels of morbidity and mortality included epidemiology, clinical features, pathology, pathogenesis and reversible reaction from different aspects, respectively assigned to 10 points. The third indicators: data and information sources were collected and divided relevant to three indicators according to WHO, OECD and literature to identify criteria to determine the levels. 
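To make the point allocation just described concrete (hazard = 100 points, split into risk factors 50 = social 10 + environmental 5 + behavioral 35, and severity 50 = five sub-indicators at 10 points each), a small sketch follows; the attainment fractions are hypothetical and do not reproduce the graded Beijing data.

```python
# Point allocation as described in the text; the grading below is made up.
MAX_POINTS = {
    "social determinants": 10, "environmental": 5, "behavioral": 35,
    "epidemiology": 10, "clinical stage": 10, "histopathology": 10,
    "carcinogenic mechanism": 10, "recoverability": 10,
}

def total_hazard(attainment):
    """attainment maps each indicator to a graded fraction in [0, 1]."""
    return sum(MAX_POINTS[name] * attainment.get(name, 0.0)
               for name in MAX_POINTS)

example = {name: 0.8 for name in MAX_POINTS}   # hypothetical grading
print(total_hazard(example))                    # 80.0 out of 100
```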
(Table 1: The lung cancer hazard assessments indicators system framework and Study on Theoretical Models of Regional Humanity Lung Cancer Hazards Assessment rating criteria) Evaluation the hazard of lung cancer risk in Beijing Use the regional population of lung cancer hazard assessments theoretical model formula and combined with data sources in Beijing and divided into the different levels to calculate score of lung cancer risk. It shows that the hazards of lung cancer risk population in Beijing was 84.4375, lung cancer had a significant risk of harm. Discussion The purpose and significance of regional population lung cancer risk assessment: Over the years, human society has always unswervingly stick to study of lung cancer prevention, diagnosis, treatment, prevention and control, the majority were from their respective disciplines and administrative advance, however were rarely assessed the long-term interactions between the socio-economic, environmental pollution and fog haze weather, bad behavior and lifestyle, genetic factors, especially lung cancer. The purpose and significance of regional population lung cancer risk assessment is to identify the risk size and risk point of the area.Then build a comprehensive system of assessment methods to provide a comprehensive and scientific effectively recognized the hazards of discipline and find the prevention and control measures to provide theoretical support for all government. The theoretical basis of regional population lung cancer risk assessment: The lung cancer risk assessment of regional population base on risk, public risk and the special risk of cancer. Lung cancer risk refers to the probability of human exposure to harmful risk factors to cause lung cancer,it is the possibility of lung cancer casused by the multiple health risk factors of long-term interaction inhealthy population, high risk population exposed,.The serve of risk is determined by three factors, including hazard, vulnerability and prevention capability. Expressed by equation, the risk of hazard was equal to vulnerability multiplied by prevention capability. The more serious hazard would be, the greater vulnerability was increasing, the weaker prevention and control capacity would have, the greater the risk of disease would be. In the case of vulnerability and prevention capacity approached to zero, the hazards and risks would be equal, the hazards of risk assessment to establish a theoretical model of lung cancer involved the whole population, the whole life cycle and whole process of disease contacting different risk factors to surveillance and evaluation. Then build a comprehensive system of assessment methods to provide a comprehensive and scientific effectively recognized the hazards of discipline and find the prevention and control measures to provide theoretical support for all government. The key points of lung caner hazard theoretical model: Firstly, the theoretical model evaluation method is scientific and effective.This study based on international standards and combined with practical and theoretical models to propose the risk factors theoretical model of lung cancer hazards. According to severity of disease and risk factors, classify all indicators by hierarchical stepwise selection. Evaluated scores obtained from regional health statistics and health information reports, socioeconomic annual report, social development index report, international health information statistics and major disease reports. 
This method can be carried out the risk population with cancer hazard analysis in different countries and cities to find out similarities and differences and the reasons for the difference. To find out the key hazards of lung cancer risk priority control points, provide effective control strategies for scientific basis. Secondly,establish scientific indicators and assessment criteria. Based on the evaluation system, each indicator was summated to calculate the score of regional population lung cancer hazards. The higher the score was, the higher their risk of lung cancer hazards were. Its value from 0 to 100 which 100 was the highest risk and zero was risk-free. Thirdly, To set up a software regarding the lung cancer hazard risk assessment of regional population according to theoretical model, effectively analyze the difference in data of different regions, improve efficiency, and reach a basis for risk prediction. Evaluating the regional population lung cancer hazard assessment for Beijing at first time: As the capital of China, Beijing has growing morbidity and mortality concerning malignant tumor year by year, and lung cancer ranked first. According to the theoretical moble of lung cancer hazard, collect, process and grade the indicators of Beijing area, calculate the risk scores by using the formula, and then conclude that the hazard of lung cancer in Beijing area is in high risk level. Therefore, the improvement of social concern and attention is needed, provide theoretical support for the government to coordinate, control, and make policies and measures of prevention. Prospect: WHO report showed that all kinds of causes resulted in different income groups, health inequality between countries, which included the economic income, rights, status, uneven distribution of products and services, medical health care, culture, levels of education, place of work and life, living conditions, community environment, urban and rural environment, the opportunity to enjoy a variety of life. All these were present injustice. So that resulted in smoking, drinking, lacking of physical activity and unhealthy life behavior were usually obvious. Thus, social determinants are the most elementary and important risk factors of choronic non-communicable diseases, but it have not been known and valued by the government administration and masses. If people want to reduce the harm that diseases bring human to the minimum, the elementary measures are needed, the government should enact relevant policy, system and laws, improve the economy, education and culture level of the masses, reduce the difference between person and person, reduce the unfair and unjust phenomenon. To achieve that everyone is equal, everyone enjoys health, everyone enjoys the primary health care, improve the overall health level of population, and create a better future Summary: This article first putted forward the concept of regional population lung cancer risk assessment, including theoretical model, indicators system, evaluation standard and determined factors. Indicators system included 2 first indicators, 8 secondary indicators and 18 third indicators, as well as 18 forth indicators corresponding to the third. The whole indicators were given the weight coefficients and used as information sources. Based on the contribution degree of lung cancer risk, total score was 100 points. 
Using the regional-population lung cancer hazard assessment theory and the associated technical method, the lung cancer hazard score calculated for Beijing was 84.4375, which corresponds to a high level of risk.
Comparing Generalized Median Voter Schemes According to their Manipulability We propose a simple criterion to compare generalized median voter schemes according to their manipulability. We identify three necessary and su¢ cient conditions for the comparability of two generalized median voter schemes in terms of their vulnerability to manipulation. The three conditions are stated using the two associated families of monotonic …xed ballots and depend very much on the power each agent has to unilaterally change the outcomes of the two generalized median voter schemes. We perform a speci…c analysis of all median voter schemes, the anonymous subfamily of generalized median voter schemes. Keywords: Generalized Median Voting Schemes; Strategy-proofness; Anonymity. Journal of Economic Literature Classi…cation Numbers: C78; D78. We are grateful to Gris Ayllón, David Cantala, Bernardo Moreno and Fernando Tohmé for comments and suggestions. Arribillaga acknowledges …nancial support from the Universidad Nacional de San Luis, through grant 319502, and from the Consejo Nacional de Investigaciones Cientí…cas y Técnicas (CONICET), through grant PIP 112-200801-00655. Massó acknowledges …nancial support from the Spanish Ministry of Economy and Competitiveness, through the Severo Ochoa Programme for Centres of Excellence in R&D (SEV-2011-0075) and through grant ECO2008-0475FEDER (Grupo Consolidado-C), and from the Generalitat de Catalunya, through the prize “ICREA Academia”for excellence in research and grant SGR2009-419. yInstituto de Matemática Aplicada San Luis (UNSL-CONICET). Ejército de los Andes 950, 5700 San Luis, Argentina. E-mail: rarribi@unsl.edu.ar zUniversitat Autònoma de Barcelona and Barcelona GSE. Departament d’Economía i Història Econòmica. Edi…ci B, UAB. 08193, Bellaterra (Barcelona), Spain. E-mail: jordi.masso@uab.es Introduction Consider a set of agents that has to collectively choose an alternative. Each agent has a preference relation on the set of alternatives. We would like the chosen alternative to depend on the preference pro…le (a list of preference relations, one for each agent). But preference relations are private information and, to be used to choose the alternative, they have to be revealed by the agents. A social choice function collects individual preference relations and selects an alternative for each declared preference pro…le. Hence, a social choice function induces a game form that generates, at every preference pro…le, a strategic problem to each agent. An agent manipulates a social choice function if there exist a preference pro…le and a di¤erent preference relation for the agent such that, if submitted, the social choice function selects a strictly better alternative according to the preference relation of the agent of the original preference pro…le. A social choice function is strategy-proof if no agent can manipulate it. That is, the game form induced by a strategy-proof social choice function has the property that, at every preference pro…le, to declare the true preference relation is a weakly dominant strategy for all agents. Hence, each agent has an optimal strategy (to truth-tell) independently of the agent's beliefs about the other agents'declared preference relations. This absence of any informational hypothesis about the others' preference relations is one of the main reasons of why strategy-proofness is an extremely desirable property of social choice functions. 
However, the Gibbard-Satterthwaite Theorem establishes that nontrivial strategyproof social choice functions do not exist on universal domains. Strategy-proofness is a strong requirement since a social choice function is not longer strategy-proof as soon as there exist a preference pro…le and an agent that can manipulate the social choice function by submitting another preference relation that if submitted, the social choice function selects another alternative that is strictly preferred by the agent. Nevertheless, there are many social choice problems where the structure of the set of alternatives restricts the set of conceivable preference relations, and hence the set of strategies available to agents. For instance, when the set of alternatives has a natural order, in which all agents agree upon. The localization of a public facility, the temperature of a room, the platform of political parties in the left-right spectrum, or the income tax rate are all examples of such structure that imposes natural restrictions on agents'preference relations. Black (1948) was the …rst to argue that in those cases agents'preference relations have to be single-peaked (relative to the unanimous order on the set of alternatives). A preference relation is single-peaked if there exists a top alternative that is strictly preferred to all other alternatives and at each of the two sides of the top alternative the preference relation is monotonic, increasing in the left and decreasing in the right. A social choice function operating only on a restricted domain of preference pro-…les may become strategy-proof. The elimination of preference pro…les restricts the normal form game induced by the social choice function, and strategies (i.e., preference relations) that were not dominant may become dominant. Consider any social choice problem where the set of alternatives can be identi…ed with the interval [a; b] of real numbers and where single-peaked preference relations are de…ned on [a; b]. For this set up Moulin (1980) characterizes all strategy-proof and tops-only social choice functions on the domain of single-peaked preference relations as the class of all generalized median voter schemes. 1 In addition, Moulin (1980) also characterizes the subclass of median voter schemes as the set of all strategy-proof, tops-only and anonymous social choice functions on the domain of single-peaked preference relations; and this is indeed a large class of social choice functions. A median voter scheme can be identi…ed with a vector x = (x 1 ; :::; x n+1 ) of n + 1 numbers in [a; b]; where n is the cardinality of the set of agents N and x 1 ::: x n+1 . Then, for each preference pro…le, the median voter scheme identi…ed with x selects the alternative that is the median among the n top alternatives of the agents and the n + 1 …xed numbers x 1 ; :::; x n+1 : Since 2n + 1 is an odd number, this median always exists and belongs to [a; b]: Observe that median voter schemes are tops-only and anonymous by de…nition. They are strategy-proof on the domain of single-peaked preference relations because, given a preference pro…le, each agent can only change the chosen alternative by moving his declared top away from his true top; thus, no agent can manipulate a median voter scheme at any preference pro…le. A median voter scheme distributes the power to in ‡uence the outcome among agents according to its associated vector x in an anonymous way. Generalized median voter schemes constitute non-anonymous extensions of median voter schemes. 
A generalized median voter scheme can be identi…ed with a set of …xed ballots fp S g S N on [a; b], one for each subset of agents S. Then, for each preference pro…le, the generalized median voter scheme identi…ed with fp S g S N selects the alternative that is the smallest one with the following two properties: (i) there is a subset of agents S whose top alternatives are smaller or equal to and (ii) the …xed ballot p S associated to S is also smaller or equal to . Generalized median voter schemes are strategy-proof on the domain of singlepeaked preference pro…les, but manipulable on the universal domain. There are several papers that have identi…ed, in our or similar settings, maximal domains under which social choice functions are strategy-proof but, as soon as the domain is enlarged with a preference outside the domain, the social choice function becomes manipulable. Barberà, Massó and Neme (1998), Barberà, Sonnenschein and Zhou (1992), Berga and Serizawa (2000), Bochet and Storcken (2009), Ching and Serizawa (1998), Hatsumi, Berga, and Serizawa (2014), Kalai and Müller (1977), and Serizawa (1995) are some examples of these papers. Our contribution on this paper builds upon this literature and has the objective of giving a criterion to compare generalized median voter schemes according to their manipulability. We want to emphasize the fact that the manipulability of a social choice function does not indicate the degree of its lack of strategy-proofness. There may be only one instance at which the social choice function is manipulable or there may be many such instances. The mechanism design literature that has focused on strategy-proofness has not distinguished between these two situations; it has declared both social choice functions as being not strategy-proof, period! 2 Our criterion to compare two social choice functions takes the point of view of individual agents. We say that an agent is able to manipulate a social choice function at a preference relation (the true one) if there exist a list of preference relations, one for each one of the other agents, and another preference relation for the agent (the strategic one) such that if submitted, the agent obtains a strictly better alternative according to the true preference relation. Consider two generalized median voter schemes, f and g; that can operate on the universal domain of preference pro…les. Assume that for each agent the set of preference relations under which the agent is able to manipulate f is contained in the set of preference relations under which the agent is able to manipulate g: Then, from the point of view of all agents, g is more manipulable than f: Hence, we think that f is unambiguously a better generalized median voter scheme than g according to the strategic incentives induced to the agents. Often, it may be reasonable to think that agents'preferences are single-peaked, but if the designer foresees that agents may have also non single-peaked preferences, then f may be a better choice than g if strategic incentives are relevant and important for the designer. Before presenting our general result in Theorem 2, we focus on median voter schemes, the subclass of anonymous generalized median voter schemes. In Theorem 1 we provide two necessary and su¢ cient conditions for the comparability of two median voter schemes in terms of their manipulability. 
Let f and g be two (nonconstant) median voter schemes and let x = (x 1 ; :::; x n+1 ) and y = (y 1 ; :::; y n+1 ) be their associated vectors of …xed ballots, x to f and y to g, where x 1 ::: x n+1 and y 1 ::: y n+1. Then, g is at least as manipulable as f if and only if [x 1 ; x n+1 ] [y 1 ; y n+1 ] and [x 2 ; x n ] [y 2 ; y n ]: Using this characterization we are able to establish simple comparability tests for the subclass of unanimous and e¢ cient median voter schemes. Using the partial order "to be equally manipulable as"obtained in Theorem 1 we show that the set of equivalence classes of median voter schemes has a complete lattice structure with the partial order "to be as manipulable as"; the supremum is the equivalence class containing all median voter schemes with x 1 = x 2 = a and x n = x n+1 = b, 3 and the in…mum is the equivalence class with all constant median voter schemes; i.e., for all k = 1; :::; n + 1, In Theorem 2 we provide three necessary and su¢ cient conditions for the comparability of two generalized median voter schemes in terms of their manipulability. The three conditions are stated using the two associated families of monotonic …xed ballots and depend very much on the power each agent has to unilaterally change the outcome of the two generalized median voter schemes (i.e., the intervals of alternatives where agents are non-dummies). Obviously, Theorem 2 is more general than Theorem 1. However, our analysis can be sharper and deeper on the subclass of anonymous generalized median voter schemes. In addition, Theorem 1 can be seen as a …rst step to better understand the general characterization of Theorem 2. Before …nishing this Introduction we want to relate our comparability notion with two notions recently used in centralized matching markets. Pathak and Sönmez (2013) propose two di¤erent notions to compare, in terms of their manipulability, speci…c matching mechanisms in school choice problems. The two notions are related in the sense that one is stronger than the other, and both are based on the inclusion of preference pro…les at which there exists a manipulation. In contrast, our notion is based on the inclusion of preference relations at which an agent is able to manipulate. In applications, preference pro…les are not common knowledge while each agent knows his preference relation (and he may only know that). To use a more manipulable generalized median voter scheme means that each agent has to worry about his potential capacity to manipulate in a larger set. Again, the use of the inclusion of preference relations as a basic criterion to compare generalized median voter schemes according to their manipulability do not require any informational hypothesis. Thus, we …nd it more appealing. Moreover, we show that if two generalized median voter schemes are comparable according to Pathak and Sönmez's weaker notion, then they are also comparable according to our notion. Furthermore, Example 1 shows that our notion is indeed much weaker than Pathak and Sönmez's weaker notion (and hence, also weaker than their stronger one). The paper is organized as follows. Section 2 contains preliminary notation and de…nitions. Section 3 describes the family of anonymous generalized median voter schemes and compares them according to their manipulability. Section 4 extends the analysis to all generalized median voter schemes. 
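Returning to the comparability condition of Theorem 1 stated above, a direct computational reading for sorted vectors of fixed ballots might look as follows; the function name is ours and the example values are illustrative.

```python
# Theorem 1 test: f_y is at least as manipulable as f_x iff
# [x_1, x_{n+1}] ⊆ [y_1, y_{n+1}] and [x_2, x_n] ⊆ [y_2, y_n].

def at_least_as_manipulable(y, x):
    """True if the scheme with ballots y is at least as manipulable as x."""
    x, y = sorted(x), sorted(y)
    outer = y[0] <= x[0] and x[-1] <= y[-1]   # [x_1, x_{n+1}] ⊆ [y_1, y_{n+1}]
    inner = y[1] <= x[1] and x[-2] <= y[-2]   # [x_2, x_n]     ⊆ [y_2, y_n]
    return outer and inner

# n = 3 agents, four fixed ballots on [0, 1]: the "pure median" scheme with
# ballots (0, 0, 1, 1) is at least as manipulable as one with interior ballots.
print(at_least_as_manipulable([0.0, 0.0, 1.0, 1.0], [0.2, 0.4, 0.6, 0.8]))  # True
```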
Section 5 contains a …nal remark about the use of median voter schemes on the universal domain of preferences and the comparison of Pathak and Sönmez's criteria with ours. Sections 6 and 7 contain two appendices that collect all omitted proofs. Preliminaries Agents are the elements of a …nite set N = f1; :::; ng: The set of alternatives is the interval of real numbers [a; b] R. We assume that n 2 and a < b. Generic agents will be denoted by i and j and generic alternatives by and . Subsets of agents will be represented by S and T: The (weak) preference of each agent i 2 N on the set of alternatives [a; b] is a complete, re ‡exive, and transitive binary relation (a complete preorder) R i on [a; b]. As usual, let P i and I i denote the strict and indi¤erence preference relations induced by R i , respectively; namely, for all ; 2 [a; b]; P i if and only if : R i , and I i if and only if R i and R i . The top of R i is the set of alternatives that are weakly preferred to any other alternative. We will restrict our attention to preferences with a unique top which will be denoted by Let U be the set of preferences with a unique top on [a; b]. A preference pro…le R = (R 1 ; :::; R n ) 2 U n is a n-tuple of preferences. To emphasize the role of agent i or subset of agents S, a preference pro…le R will be represented by (R i ; R i ) or (R S ; R S ), respectively. A subset b U n U n of preference pro…les (or the set b U itself) will be called a domain. A social choice function is a function f : b U n ! [a; b] selecting an alternative for each preference pro…le in the domain b U n . The range of a social choice function is denoted by r f . That is, j there exists R = (R 1 ; :::; R n ) 2 b U n s.t. f (R 1 ; :::; R n ) = g: Social choice functions require each agent to report a preference on a domain b U. A social choice function is strategy-proof on b U if it is always in the best interest of agents to reveal their preferences truthfully. Formally, a social choice function In the sequel we will say that a social choice function To compare social choice functions according to their manipulability, our reference set of preferences will be the full set U. function operates on all preference pro…les on U n , because all of them are reasonable. However, for many applications, a linear order structure on the set of alternatives naturally induces a domain restriction in which for each preference R i in the domain not only there exists a unique top but also that at each of the sides of the top of R i the preference is monotonic. A well-known domain restriction is the set of single-peaked preferences on an interval of real numbers. We will denote the domain of all single-peaked preferences on [a; b] by SP U. Moulin (1980) characterizes the family of strategy-proof and tops-only social choice functions on the domain of single-peaked preferences. This family contains many nondictatorial social choice functions. All of them are extensions of the median voter. Following Moulin (1980), and before presenting the general result, we …rst compare in Section 3, the anonymous subclass according to their manipulability on the full domain of preferences U. In Section 4 we will give a general result to compare according to their manipulability all strategy-proof and tops-only social choice functions on SP n when they operate on the domain U n . Median Voter Schemes Assume …rst that n is odd and let f : U n ! 
[a; b] be the social choice function that selects, for each preference pro…le R = (R 1 ; :::; R n ) 2 U n , the median among the top alternatives of the n agents; namely, f (R) = medf (R 1 ); :::; (R n )g. 4 This social choice function is anonymous, e¢ cient, tops-only, and strategy-proof on SP. Add now, to the n agents'top alternatives, n + 1 …xed ballots: n+1 2 ballots at alternative a and n+1 2 ballots at alternative b. Then, the median among the n top alternatives, and the median among the n top alternatives and the n + 1 …xed ballots coincide since the n+1 2 ballots at a and the n+1 2 ballots at b cancel each other; namely, for all R = (R 1 ; :::; R n ) 2 U n ; f (R) = medf (R 1 ); :::; (R n ); a; :::; a | {z } 4 Given a set of real numbers fx 1 ; :::; x K g, where K is odd, de…ne its median as medfx 1 ; :::; where y is such that #f1 k K j x k yg K 2 and #f1 k K j x k yg K 2 . Since K is odd the median is unique and belongs to the set fx 1 ; :::; x K g. To proceed, and instead of adding n + 1 …xed ballots at the extremes of the interval, we can add, regardless of whether n is odd or even, n + 1 …xed ballots at any of the alternatives in [a; b]. Then, a social choice function f : U n ! [a; b] is a median voter scheme if there exist n + 1 …xed ballots (x 1 ; :::; x n+1 ) 2 [a; b] n+1 such that for all R 2 U n , f (R) = medf (R 1 ); :::; (R n ); x 1 ; :::; x n+1 g: Hence, each median voter scheme can be identi…ed with its vector x = (x 1 ; :::; x n+1 ) 2 [a; b] n+1 of …xed ballots. Moulin (1980) shows that the class of all tops-only, anonymous and strategy-proof social choice functions on the domain of single-peaked preferences coincides with all median voter schemes. can not manipulate f: It is less immediate to see that the set of all median voter schemes (one for each vector of n + 1 …xed ballots) coincides with the class of all tops-only, anonymous and strategy-proof social choice functions on the domain of single-peaked preferences. The key point in the proof is to identify, given a tops-only, anonymous and strategy-proof social choice function f : SP n ! [a; b]; the vector x = (x 1 ; :::; x n+1 ) 2 [a; b] n+1 of …xed ballots. To identify each x k with 1 k n + 1; consider any preference pro…le R 2 SP n with the property that #fi 2 N j (R i ) = ag = n k + 1 and #fi 2 N j (R i ) = bg = k 1 and de…ne x k = f (R): The proof concludes by checking that indeed f satis…es (2) with this vector x = (x 1 ; :::; x n+1 ) 2 [a; b] n+1 of identi…ed …xed ballots. To see that in the statement of Proposition 1 tops-onlyness does not follow from strategy-proofness and anonymity consider the social choice function f : SP n ! [a; b] where for all R 2 SP n , Notice that f is strategy-proof and anonymous but it is not tops-only. It also violates e¢ ciency, unanimity, and ontoness. We …nish this subsection with a useful remark stating that median voter schemes are monotonic. Remark 1 Let f : U n ! [a; b] be a median voter scheme and let R; R 0 2 U n be such that Main result with anonymity Median voter schemes are strategy-proof on the domain SP n of single-peaked preferences. However, when they operate on the larger domain U n they may become manipulable. Then, all median voter schemes are equivalent from the classical manipulability point of view. In this subsection we give a simple test to compare two median voter schemes according to their manipulability. 
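Before turning to the comparison test, here is a minimal computational sketch of a median voter scheme as defined above, together with a brute-force check on a small grid that, with symmetric single-peaked utilities u_i(a) = -|a - peak_i|, no agent gains by misreporting the peak. The grid, ballots, and utilities are our own illustrative choices; a profitable misreport can of course arise once non-single-peaked preferences are allowed, which is the point of the comparison below.

```python
# f_x(R) = med{top_1, ..., top_n, x_1, ..., x_{n+1}} and a brute-force check
# that, with single-peaked utilities u_i(a) = -|a - peak_i| on a small grid,
# no agent can gain by misreporting the peak.

from itertools import product
import statistics

ALTS = range(5)                 # discretized alternatives 0, 1, 2, 3, 4
BALLOTS = [0, 2, 4, 4]          # n + 1 = 4 fixed ballots for n = 3 agents

def median_voter_scheme(tops, ballots=BALLOTS):
    return statistics.median(list(tops) + list(ballots))

def profitable_misreport_exists():
    for peaks in product(ALTS, repeat=3):            # true peaks
        truthful = median_voter_scheme(peaks)
        for i in range(3):
            for lie in ALTS:                         # misreported peak
                reported = list(peaks)
                reported[i] = lie
                outcome = median_voter_scheme(reported)
                if abs(outcome - peaks[i]) < abs(truthful - peaks[i]):
                    return True
    return False

print(median_voter_scheme([1, 3, 4]))   # 3, the median of {1, 3, 4, 0, 2, 4, 4}
print(profitable_misreport_exists())    # False for these single-peaked utilities
```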
Given a vector x = (x 1 ; :: we will denote by f x its associated median voter scheme on U n ; namely, for all R 2 U n , f x (R) = medf (R 1 ); :::; (R n ); x 1 ; :::; x n+1 g: Given x = (x 1 ; :::; x n+1 )2[a; b] n+1 , we will assume that x 1 ::: x n+1 : This can be done without loss of generality because the social choice function associated to any reordering of the components of x coincides with f x : Obviously, the rang of f x is [x 1 ; x n+1 ], i.e., r f x = [x 1 ; x n+1 ]. Any constant social choice function, f (R) = for all R 2 U n ; can be described as a median voter scheme by setting, for all 1 k n + 1; x k = : We denote it by f : Trivially, any constant social choice function f is strategy proof on U n . Then, for any 2 [a; b] and any social choice function g : U n ! [a; b] we have that g is at least as manipulable as f (i.e., g % f ). Furthermore, all non-constant median voter schemes are manipulable on U n : Hence, any non-constant median voter scheme f x is more manipulable than f (i.e., f x f ). Theorem 1 below gives an easy and operative way of comparing non-constant median voter schemes according to their manipulability. The formal proof of Theorem 1 is left for the next subsection but we now give some intuition about it. Whether or not agent i can manipulate f x at R i roughly depends on the set of alternatives that may be selected by f x for some subpro…le R i ; given R i (this set is called the set of options left open by R i ): How R i compares pairs of alternatives that will never be selected by f x once R i is submitted is unrelated with the ability of i to manipulate f x . Moreover, given f x , the set of options left open by R i depends only on x 1 ; x 2 ; x n ; and x n+1 ; and it does in a very particular way: the closer x 1 and x 2 are to a and x n and x n+1 to b, the larger the options left open by R i will be and hence, i will be able to manipulate f x easily. And …nally, R i has to be single-peaked on the set of options left open by itself, because otherwise there would exists R i such that i is able to induce a further away alternative from his top (R i ) by declaring another preference R 0 i : Proof of Theorem 1 In the proof of Theorem 1 the following option set will play a fundamental role. Before proving Theorem 1 we state three useful lemmata, whose proofs are in Appendix 1. Lemma 1 Let f x : U n ! [a; b] be a median voter scheme associated with x = (x 1 ; :: Lemma 2 Let f x : U n ! [a; b] be a median voter scheme associated with x = (x 1 ; :: Lemma 3 Let f x : U n ! [a; b] and f y : U n ! [a; b] be two median voter schemes associated with x = (x 1 ; :::; x n+1 )2[a; b] n+1 and y = (y 1 ; ::: Lemma 1 plays a key role in the proof of Theorem 1. 
To understand it notice that it roughly says that whether or not agent i can manipulate f x at R i depends on the fact that R i should only be like single-peaked on the set of alternatives that may be selected by f x for some subpro…le R i ; given R i : The comparison, in terms of R i , of pairs of alternatives that will never be selected once R i is submitted, is irrelevant in terms of agent i's power to manipulate f x : To illustrate that, consider the case where n = 3; : Lemma 1 says that R i should be single-peaked on this interval and that the preference away from (R i ) towards the direction of a+b 3 has to be monotonically decreasing until alternative a+b 3 and that all alternatives further away have to be worse than a+b 3 but they can be freely ordered among themselves; and symmetrically from (R i ) towards the direction of 2(a+b) 3 . Figure 1 illustrates a preference that is single-peaked on It also shows that this set may be signi…cantly larger than the set of single-peaked preferences. Figure 1 Proof of Theorem 1 First, we will prove that if [x 1 ; x n+1 ] [y 1 ; y n+1 ] and [x 2 ; x n ] [y 2 ; y n ]; then f y is at least as manipulable as f where 2 R f y . Thus; by Lemma 1, R i 2 M f y i : Therefore, f y is at least as manipulable as f x : To prove the other implication assume that f y is at least as manipulable as f x : To obtain a contradiction assume that [ x n ] * [y 2 ; y n ]: We will divide the proof between two cases. In particular, suppose that x 1 < y 1 ; the proof for the case y n+1 < x n+1 proceeds similarly and therefore it is omitted. We will divide the proof between two cases again, depending on whether Since f x is not constant and x 1 < y 1 , x 1 < minfy 1 ; x n+1 g: Let ; ; 2 [a; b] be such that x 1 < < < < minfy 1 ; x n+1 g and let R i 2 U be such that: and o y (R i ) = [y 1 ; y n ]: Hence, and since ; ; In particular, suppose that x 2 < y 2 ; the proof for the case y n < x n proceeds similarly and therefore it is omitted. Let ; 2 [a; b] be such that x 2 < < < x 2 +y 2 2 < y 2 and let R i 2 U be such that: Hence, and since ; ; . For further reference, let M V S denote the set of all median voting schemes from U n to [a; b]: An immediate consequence of Theorem 1 is that if median voter scheme f is at least as manipulable as median voter scheme g, then the range of g is contained in the range of f: The improvement in terms of the strategy-proofness of median voter schemes necessarily requires the corresponding reduction of their ranges since smaller ranges reduce agents'power to manipulate. The corollary below, that follows from Theorem 1 and the fact that for all f x 2 M V S; r f x = [x 1 ; x n+1 ], states this observation formally. Consider a problem where the range of the social choice has to be …xed a priori to be a subinterval Unanimity According to Proposition 1 in Moulin (1980), a median voter scheme f x : SP n ! [a; b] is e¢ cient (on the single-peaked domain) if and only if x 1 = a and x n+1 = b; namely, f x can be described as the median of the n top alternatives submitted by the agents and only n 1 …xed ballots since x 1 = a and x n+1 = b cancel each other in (2). But this subclass of median voter schemes is appealing because it coincides with the class of all unanimous median voter schemes (M V S [a;b] using the notation introduced in the previous subsection). 5 Corollary bellow shows that Theorem 1 has clear implications on how unanimous and non-unanimous median voter schemes can be ordered according to their manipulability. 
In particular, given a unanimous median voter scheme there is always a non-unanimous median voter scheme that is less manipulable. Moreover, if a unanimous median voter scheme and a nonunanimous median voter scheme are comparable according to their manipulability, then the former is more manipulable than the later. a) The statement follows immediately from Theorem 1. b) We distinguish between two cases. Case 1 : Assume y 2 < y n and let ; ; 2 [a; b] be such that y 2 < < < < y n : Consider x = ( ; ; :::; ; ) 2 [a; b] n+1 : Then, [x 2 ; x n ] = f g [y 2 ; y n ]: By Theorem 1, f y is at least as manipulable as f x and since [y 2 ; y n ] * [x 2 ; x n ], f x is not at least as manipulable as f y : Hence, f y is more manipulable than f x and f x is neither constant nor unanimous since a < x 1 < x n+1 < b: Case 2: Assume y 2 = y n . Furthermore, suppose that a < y 2 ; the proof when y n < b proceeds symmetrically and therefore it is omitted. Let 2 (a; y 2 ) and consider x = ( ; y 2 ; :::; y 2 ; b) 2 [a; b] n+1 : Then, [x 2 ; x n ] = fy 2 g. By Theorem 1, f y is at least as manipulable as f x and, since [y 1 ; y n+1 ] = [a; b] * [x 1 ; x n+1 ], f x is not at least as manipulable as f y : Hence, f y is more manipulable than f x . Furthermore, and since a < x 1 = :: By Theorem 1, f x is not at least as manipulable as f y : Furthermore, as f x and f y are comparable, f y f x must hold. We conclude this subsection with a corollary that identi…es the unanimous median voter schemes that do not admit a less manipulable unanimous median voter scheme. The statement also follows immediately from Theorem 1. Corollary 4 Let f y be a unanimous median voter scheme such that y 2 = y n : Then, there does not exist an unanimous median voting scheme g such that f y g. E¢ ciency A median voter scheme f x : U n ! [a; b] (operating on the full domain of preferences) is e¢ cient if and only if x 1 = a, x n+1 = b and x k 2 fa; bg for all 2 k n. 6 This is because on the larger domain, if a median voter scheme f x has an interior …xed ballot x k 2 (a; b) it is always possible to …nd a preference pro…le R with f x (R) = x k such that there exists an alternative y that is unanimously strictly preferred by all agents; namely, yP i f x (R) for all i 2 N: Moreover, all e¢ cient median voter schemes are unanimous. We now present simple criteria that are useful to compare e¢ cient median voter schemes with other unanimous median voter schemes according to their manipulability. But before, we need a bit of additional notation. There exists a non-e¢ cient and unanimous f x 2 M V S such that f k f x . 6 Hence, an e¢ cient median voter scheme f x : U n ! [a; b] has the property that for all (R 1 ; :::; R n ) 2 U n ; f x (R 1 ; :::; R n ) 2 f (R 1 ); :::; (R n )g: Miyagawa (1998) and Heo (2013) have studied this property under the name of peak-selection. Corollary 5 says the following. Statement a) states that any e¢ cient median voter scheme f = 2 ff 1 ; f n g belongs to the set of the most manipulable median voter schemes. Statement c) states that the two e¢ cient median voter schemes f 1 and f n are less manipulable than any other e¢ cient median voter scheme f = 2 ff 1 ; f n g: Statement d) states that any non-unanimous median voter scheme is less manipulable that any e¢ cient median voter scheme f = 2 ff 1 ; f n g: Statement e) states that given an e¢ cient median voter scheme f = 2 ff 1 ; f n g there is always a (non-e¢ cient) unanimous median voter scheme that is less manipulable. 
Moreover, Corollary 5 has the following two implications when n is odd.

Proof. Let y be the vector of fixed ballots associated to f_k. Since k ∉ {1, n}, y_1 = y_2 = a and y_n = y_{n+1} = b.

a) It follows from (4) and Theorem 1. b) It follows from a). c) Let z be the vector of fixed ballots associated to f_1; namely, z_1 = a and z_2 = ... = z_{n+1} = b. Hence, by (4) and Theorem 1, f_k is more manipulable than f_1. Using a similar argument, it also follows that f_k is more manipulable than f_n. d) Let f_x be a non-unanimous median voter scheme. Then, either a < x_1 or x_{n+1} < b. Hence, by (4) and Theorem 1, f_k is more manipulable than f_x. e) Consider any α ∈ (a, b) and define x = (a, α, ..., α, b, ..., b) ∈ [a, b]^{n+1}, where α appears k − 1 times. Then, f_x is unanimous but it is not efficient. By (4) and Theorem 1, f_k is more manipulable than f_x.

Corollary 6. Let f ∈ MVS be efficient and such that f ∈ {f_1, f_n}. a) Then, there exists a non-efficient and non-constant median voter scheme that is less manipulable than f. b) If f_x and f are comparable and f_x is non-efficient, then f is more manipulable than f_x.

Corollary 6 says the following. Statement a) states that there exists a non-efficient and non-constant median voter scheme that is less manipulable than f_1 (or f_n). Statement b) says that if the efficient median voter scheme f_1 (or f_n) and a non-efficient median voter scheme f are comparable according to their manipulability, then the former is more manipulable than the latter. Corollaries 5 and 6 make clear the well-known trade-off between strategy-proofness and efficiency.

[Figure: diagram relating sup x_1, sup x_n and the range of the scheme; the arrows and axis marks are not recoverable from the extraction.]

4 Comparing All Generalized Median Voter Schemes

Generalized Median Voter Schemes

Median voter schemes are anonymous: all agents have the same power to influence the outcome of a given median voter scheme f_x, although this power depends on the distribution of its associated fixed ballots x = (x_1, ..., x_{n+1}). Generalized median voter schemes admit the possibility that different agents may have different power to influence the outcome. This power is described by a monotonic family of fixed ballots, one for each coalition (subset) of agents. To develop a useful intuition for the class of all generalized median voter schemes, consider first the case n = 2. Given a monotonic family of fixed ballots {p_{{1,2}}, p_{{1}}, p_{{2}}, p_∅}, one for each coalition of agents, such that a ≤ p_{{1,2}} ≤ p_{{1}} ≤ p_{{2}} ≤ p_∅ ≤ b, we define the social choice function f. Observe that r_f = [p_{{1,2}}, p_∅]. We can interpret this function as a way of assigning to agents 1 and 2 the power to select the alternative in the subset r_f = [p_{{1,2}}, p_∅]. For instance, agent 1 can make sure that the outcome is at most p_{{1}} by voting below p_{{1}}, and at most t(R_1) by voting above p_{{1}}; and agent 1 is a dictator on [p_{{1}}, p_{{2}}] (i.e., f(R) = t(R_1) whenever t(R_1) ∈ [p_{{1}}, p_{{2}}]). It is easy to check that f can be rewritten as

f(R) = min{ p_∅, max{t(R_1), p_{{1}}}, max{t(R_2), p_{{2}}}, max{t(R_1), t(R_2), p_{{1,2}}} }.

To present the characterization of all strategy-proof and tops-only social choice functions on the domain of single-peaked preferences for all n ≥ 2, we say that a collection {p_S}_{S∈2^N} is a monotonic family of fixed ballots if (i) p_S ∈ [a, b] for all S ∈ 2^N and (ii) T ⊆ Q implies p_Q ≤ p_T. The characterization is the following. The social choice functions identified in Proposition 3 are called generalized median voter schemes. A simple way of interpreting them is as follows.
Each generalized median voting scheme (and its associated monotonic family of fixed ballots) can be understood as a particular way of distributing among coalitions the power to influence the social choice. To see that, take an arbitrary coalition S and its fixed ballot p_S. Then, coalition S can make sure that, by all of its members reporting a top alternative below p_S, the social choice will be at most p_S, independently of the reported top alternatives of the members of the complementary coalition. An alternative way of describing this distribution of power among coalitions is as follows. Fix a monotonic family of fixed ballots {p_S}_{S∈2^N} (i.e., a generalized median voter scheme) and take a vector of tops (t(R_1), ..., t(R_n)). Start at the left extreme a of the interval and push the outcome to the right until it reaches an alternative α for which the following two things happen simultaneously: (i) there exists a coalition of agents S such that all its members have reported a top alternative below or equal to α (i.e., t(R_i) ≤ α for all i ∈ S), and (ii) the fixed ballot p_S associated to S is also located below α (i.e., p_S ≤ α).

Median voter schemes are the anonymous subclass of generalized median voter schemes. Hence, the fixed ballots of any two coalitions with the same cardinality of any anonymous generalized median voter scheme are equal. From a monotonic family of fixed ballots {p_S}_{S∈2^N} associated to an anonymous generalized median voter scheme f : U^n → [a, b], we can identify the n + 1 ballots x_1 ≤ ... ≤ x_{n+1} needed to describe f as a median voter scheme as follows: for each 1 ≤ k ≤ n + 1, x_k = p_S for all S ∈ 2^N such that #S = n − k + 1. Moreover, the onto social choice function f : U^n → [a, b] where agent j ∈ N is the dictator (i.e., for all R ∈ U^n, f(R) = t(R_j)) can be described as a generalized median voter scheme by setting p_T = a for all T ⊆ N such that j ∈ T and p_S = b for all S ⊆ N such that j ∉ S. Then, for any R ∈ U^n, (i) max{t(R_j), p_{{j}}} = t(R_j); (ii) t(R_j) ≤ max_{i∈T}{t(R_i), p_T} for any T ⊆ N such that j ∈ T; and (iii) max_{i∈S}{t(R_i), p_S} = b for any S ⊆ N such that j ∉ S. Thus, min_{S'∈2^N} max_{i'∈S'}{t(R_{i'}), p_{S'}} = t(R_j). Given a monotonic family of fixed ballots p = {p_S}_{S⊆N}, let f_p denote the generalized median voter scheme associated to p.

Main result

Our main result provides a systematic way of comparing non-constant and non-dictatorial generalized median voter schemes according to their manipulability. It turns out that to perform this comparison it is crucial to identify, for each agent i ∈ N, the subintervals where i is a non-dummy agent; i.e., the subset of alternatives that are eventually chosen at some profile and at which agent i is able to change the chosen alternative by reporting a different preference relation. We define formally below the general notion of a non-dummy agent at an alternative in a social choice function. The lemma below characterizes non-dummyness at an alternative in a generalized median voter scheme f_p : U^n → [a, b] in terms of the monotonic family of fixed ballots p. This characterization will be useful in the sequel. We are now ready to state the main result of the paper. Specialized to anonymous schemes, its conditions reduce to the interval inclusions of Theorem 1; hence, Theorem 1 can be seen as a corollary of Theorem 2. We will say that an interval I_i = [c, d] with c < d is a non-dummy interval for i in f_p if I_i ⊆ ND_i^p. Whenever we refer to an interval as a non-dummy interval, we exclude the possibility that the interval contains only one alternative.
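Returning briefly to the min-max formula of Proposition 3, both it and the dictator construction just described can be checked mechanically; a small sketch (coalitions are encoded as frozensets, and all names are our own):

```python
from itertools import chain, combinations

def coalitions(agents):
    # all subsets of the agent set, including the empty coalition
    return [frozenset(s) for s in chain.from_iterable(
        combinations(agents, r) for r in range(len(agents) + 1))]

def gmvs(tops, p):
    """f_p(R) = min over S in 2^N of max({t(R_i) : i in S} union {p_S}),
    where `tops` maps agent -> reported top and `p` maps each coalition
    (frozenset) to its fixed ballot in a monotonic family."""
    agents = list(tops)
    return min(max([tops[i] for i in S] + [p[S]]) for S in coalitions(agents))

# Dictator j as a generalized median voter scheme: p_T = a whenever
# j is in T, and p_S = b otherwise. The min-max then returns t(R_j).
a, b, j = 0.0, 1.0, 2
tops = {1: 0.3, 2: 0.8, 3: 0.1}
p = {S: (a if j in S else b) for S in coalitions(list(tops))}
assert gmvs(tops, p) == tops[j]
```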
If i ∈ S with p_S < p_{S\{i}}, then [p_S, p_{S\{i}}] is a non-dummy interval for i in f_p, and we denote it by I_i^S. We write Ī_i^S when the median voter scheme used as reference is f_p̄ instead of f_p. We now state the three lemmata, whose proofs are in Appendix 2, that will be used in the proof of Theorem 2. To simplify notation, given p = {p_S}_{S⊆N} and R_i ∈ U, we write o_p(R_i) for the corresponding option set of agent i under f_p.

Lemma 5. Let f_p : U^n → [a, b] be a non-constant generalized median voter scheme. Then, f_p is not manipulable by i at R_i if and only if, on all non-dummy intervals for i in f_p, R_i satisfies single-peakedness restrictions analogous to those of Lemma 1.

Lemma 7. Let p = {p_S}_{S⊆N} and p̄ = {p̄_S}_{S⊆N} be two monotonic families of fixed ballots such that f_p and f_p̄ are not constant. Assume (7), (8), and (9) in Theorem 2 hold. Then, for any non-dummy interval I_i^S and for all α ∈ I_i^S, there exists a non-dummy interval Î_i for i in f_p̄ such that α ∈ Î_i.

Definition 8. Let p = {p_S}_{S⊆N} and p̄ = {p̄_S}_{S⊆N} be two monotonic families of fixed ballots. The generalized median voter scheme f_p̄ : U^n → [a, b] is at least as dictatorial for i as (or more dictatorial for i than) the generalized median voter scheme f_p if [p_{{i}}, p_{N\{i}}] ⊆ [p̄_{{i}}, p̄_{N\{i}}] (respectively, if the inclusion is strict).

The proposition below formalizes the trade-off between dictatorialness and manipulability.

Proposition 4. Let p = {p_S}_{S⊆N} and p̄ = {p̄_S}_{S⊆N} be two monotonic families of fixed ballots. Assume that f_p : U^n → [a, b] and f_p̄ : U^n → [a, b] are non-constant, non-dictatorial and comparable according to their manipulability. If f_p̄ is more dictatorial for i than f_p, then f_p is more manipulable than f_p̄.

Proof. Since f_p̄ is more dictatorial than f_p for i, [p_{{i}}, p_{N\{i}}] ⊊ [p̄_{{i}}, p̄_{N\{i}}] and p̄_{{i}} ≤ p̄_{N\{i}}. Therefore, p̄_{{i}} < p_{{i}} and p_{N\{i}} ≤ p̄_{N\{i}}, or p̄_{{i}} ≤ p_{{i}} and p_{N\{i}} < p̄_{N\{i}}. Assume that p̄_{{i}} < p_{{i}} and p_{N\{i}} ≤ p̄_{N\{i}} hold; the proof for the other case proceeds similarly and is therefore omitted. Since DT_i^p ≠ ∅ and p = {p_S}_{S⊆N} is monotonic, ND_i^p = [p_N, p_∅] holds by (6). Thus, by Theorem 2, f_p is more manipulable than f_p̄.

Final remarks

Before moving to the omitted proofs, we finish with two final remarks. The reader could ask about the meaning of applying a median voter scheme to the universal domain of preferences. One could argue that if preferences are unrestricted, it is as if there were no order on the set of alternatives. We share this point of view: under the universal domain of preferences, median voter schemes lose their appeal. However, they can still be understood as a particular process for defining a specific subclass of social choice functions. Each ordering on the set of alternatives and each median voter scheme relative to this ordering defines a social choice function on the universal domain of preferences. This procedure becomes meaningful only when the structure and characteristics of the set of alternatives induce a natural order on it. But then, if we want to design strategy-proof social choice functions on any domain that contains the set of single-peaked preferences (relative to this natural ordering), we have to look only inside the class of median voter schemes (this is a consequence of Moulin (1980)'s characterization); otherwise, the social choice function would be manipulable. Our approach is relevant if agents, in addition to single-peaked preferences, may have additional preferences. The key point is to understand that a median voter scheme does not necessarily become manipulable under this larger domain. This depends very much on the identity of the agent, the particular properties of the additional preferences and the median voter scheme under consideration.
Lemmata 1 and 5 in the proofs of Theorems 1 and 2 identify exactly the class of extra preferences that an agent may have while simultaneously preserving the strategy-proofness of the median voter scheme. And again, this class depends very much on the particular median voter scheme and, if the scheme is not anonymous, on the specific agent to whose domain these additional preferences have been added. Our main contribution is then to compare, in terms of their manipulability, some pairs of median voter schemes by using the set-wise inclusion criterion on the corresponding extra classes of admissible preferences.

The second remark relates our comparability notion to two alternative notions proposed by Pathak and Sönmez (2013) to compare two different matching mechanisms (in school choice problems) according to their manipulability. Following Pathak and Sönmez (2013), the profile R is vulnerable under the mechanism f if f is manipulable by some agent at R; i.e., there exist i ∈ N and R'_i ∈ U such that f(R'_i, R_{-i}) P_i f(R). First, and following their definitions in Section 1, a mechanism f is at least as manipulable as mechanism g according to Pathak and Sönmez (at least as PS-manipulable as, for short) if any profile that is vulnerable under g is also vulnerable under f; i.e., if there exist i ∈ N and R'_i ∈ U such that g(R'_i, R_{-i}) P_i g(R_i, R_{-i}), then there exist j ∈ N and R''_j ∈ U such that f(R''_j, R_{-j}) P_j f(R_j, R_{-j}). Second, and following their definitions in Section 3, a mechanism f is at least as strongly manipulable as mechanism g according to Pathak and Sönmez (at least as strongly PS-manipulable as, for short) if, at any profile at which g is vulnerable, f is also vulnerable by any agent who can manipulate g; i.e., if there exist i ∈ N and R'_i ∈ U such that g(R'_i, R_{-i}) P_i g(R), then there exists R''_i ∈ U such that f(R''_i, R_{-i}) P_i f(R).

Remark 2. If f is at least as strongly PS-manipulable as g, then f is at least as PS-manipulable as g.

Proposition 5 below shows that if a generalized median voter scheme f is at least as PS-manipulable as a generalized median voter scheme g, then f is at least as manipulable as g.

Proposition 5. Let f and g be two generalized median voter schemes and assume that f is at least as PS-manipulable as g. Then, f is at least as manipulable as g.

Proof. Assume that f is at least as PS-manipulable as g, and let R ∈ U^n be a profile under which g is vulnerable; that is, there exist i ∈ N and R'_i ∈ U such that g(R'_i, R_{-i}) P_i g(R_i, R_{-i}). Hence, R_i ∈ M_i^g. Since g is tops-only, we may assume that R_{-i} ∈ SP^{n-1}. By assumption, there exist j ∈ N and R''_j ∈ U such that

f(R''_j, R_{-j}) P_j f(R). (10)

If j ≠ i, (10) implies that j can manipulate the generalized median voter scheme f at a profile R where R_j is a single-peaked preference, a contradiction with either Lemma 1 or Lemma 5. Hence, j = i. But then, by (10), R_i ∈ M_i^f, which implies that f is at least as manipulable as g.

Example 1 below shows that the reverse implication does not hold; i.e., there exist two median voter schemes f and g such that f is at least as manipulable as g but f is not at least as PS-manipulable as g (and, by Remark 2, f is not at least as strongly PS-manipulable as g). Therefore, Example 1 shows that our notion of being "at least as manipulable as" is different from the two notions proposed by Pathak and Sönmez (2013).

Example 1. Let n = 3 and let f_x and f_y be two median voter schemes associated to x = (0, 1/2, 1/2, 1) and y = (0, 0, 1, 1), respectively.
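A quick computational check of the two schemes just defined (the text verifies these numbers next); the helper below is just the median-of-tops-and-ballots formula again:

```python
from statistics import median

def mvs(tops, ballots):
    # median voter scheme: median of the 3 reported tops and 4 fixed ballots
    return median(list(tops) + list(ballots))

x = [0, 0.5, 0.5, 1]
y = [0, 0, 1, 1]

# Example 1's profile: t(R_1) = t(R_2) = 1 and t(R_3) = 1/4
print(mvs([1, 1, 0.25], x))  # -> 0.5   f_x(R)
print(mvs([1, 1, 0.75], x))  # -> 0.75  f_x after agent 3 misreports 3/4
print(mvs([1, 1, 0.25], y))  # -> 1     f_y(R): agent 3 cannot move it
```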
By Theorem 1, and since [x_1, x_{n+1}] ⊆ [y_1, y_{n+1}] and [x_2, x_n] ⊆ [y_2, y_n], f_y is at least as manipulable as f_x. On the one hand, consider any profile R = (R_1, R_2, R_3) ∈ U^3 and any preference R'_3 ∈ U such that (i) t(R_i) = 1 for i = 1, 2, (ii) t(R_3) = 1/4 and 3/4 P_3 1/2, and (iii) t(R'_3) = 3/4. Therefore, f_x(R_1, R_2, R'_3) = 3/4 P_3 1/2 = f_x(R), and hence R is vulnerable under f_x. Moreover, f_y(R) = 1 and R is not vulnerable under f_y. Thus, f_y is not at least as PS-manipulable as f_x and hence, by Remark 2, f_y is not at least as strongly PS-manipulable as f_x. On the other hand, one can construct a profile R̂ that is vulnerable under f_y but not under f_x. Thus, f_x is not at least as PS-manipulable as f_y and hence, by Remark 2, f_x is not at least as strongly PS-manipulable as f_y. Therefore, f_x and f_y are not comparable according to the two notions proposed by Pathak and Sönmez (2013). Example 1 illustrates the fact that our comparability notion is based on the inclusion of the maximal domains of preferences under which each of the two generalized median voter schemes is strategy-proof. In this case, the maximal domain of preferences under which f_y is strategy-proof is the set of single-peaked preferences on [0, 1], while f_x admits a much larger maximal domain, the union of three sets of preferences of the kind characterized by Lemma 1.

Appendix 1

We will divide the proof into three different cases.

Case 1: Suppose α ∈ o_x(R_i) and there exists β ∈ o_x(R_i) such that α < β < t(R_i) and α P_i β; the other case, where t(R_i) < β < α and α P_i β, is similar and therefore omitted. Let R̂ ∈ U^n be such that t(R̂_j) = α for all j ∈ N. Since α ∈ o_x(R_i) and f_x is a median voter scheme, f_x(R_i, R̂_{-i}) = α. Similarly, let R̄ ∈ U^n be such that t(R̄_j) = β for all j ∈ N. By the definition of f_x, there must exist S ⊆ N\{i} and j' ∉ S such that (12) holds. Then, by (12) and the definition of f_x, (13) follows. Hence, by (11), f_x is manipulable by i at R_i with any R'_i with the property that t(R'_i) = f_x(R_i, R̄_{j'}, R̂_S, R_{-(S∪{i,j'})}).

Case 2: Suppose there exists β ∈ o_x(R_i) such that α < β < t(R_i) and α P_i β; the other case, where t(R_i) < β < α and α P_i β, proceeds similarly and is therefore omitted. Let R̄ ∈ U^n be such that t(R̄_j) = β for all j ∈ N, and let R̂ ∈ U^n be such that t(R̂_j) = α for all j ∈ N. If there exist S ⊆ N\{i} and j' ∉ S such that (14) holds, the proof proceeds as in Case 1. Hence, assume that there do not exist S ⊆ N\{i} and j' ∉ S satisfying (14). Let N\{i} = {j_1, ..., j_{n−1}}. Then, applying ¬(14) successively, first with S_3 = {j_1, j_2} and j' = j_3 ∉ S_3, and so on up to S_{n−1} = {j_1, ..., j_{n−3}} with j' = j_{n−2} ∉ S_{n−1} and S_n = {j_1, ..., j_{n−2}} with j' = j_{n−1} ∉ S_n, we obtain a chain of profiles whose f_x-outcomes are successively weakly preferred under R_i and which, since {j_1, ..., j_{n−2}} ∪ {i, j_{n−1}} = N, ends at f_x(R_i, R̄_{-i}) = β. Hence, as α P_i β, f_x is manipulable by i at R_i.

Case 3: Suppose α ∉ o_x(R_i) and there exists β ∈ o_x(R_i) such that α < β < t(R_i) and α P_i β; the other case, where t(R_i) < β < α and α P_i β, proceeds similarly and is therefore omitted. We will prove that this case is not possible. Consider the profile R̂ such that t(R̂_j) = α for all j ∈ N. By the definition of the option set (Lemma 2), f(R_i, R̂_{-i}) < α. Furthermore, since α ∉ r_{f_x}, this contradicts the initial hypothesis.

Consider the case t(R'_i) < t(R_i); the other case is similar and therefore omitted. We distinguish among three different cases.

We divide the proof into three cases.

Proof of Lemma 3. We divide the proof into five cases.
... if y_2 ≤ t(R_i) ≤ y_n.

We start with two preliminary notions and several remarks. First, a generalized median voter scheme f_p : U^n → [a, b] can alternatively be represented by a monotonic family of right fixed ballots p^r = {p^r_S}_{S∈2^N}, where (i) for all S ∈ 2^N, p^r_S ∈ [a, b]; (ii) S ⊆ T implies p^r_S ≤ p^r_T; (iii) for all S ∈ 2^N, p^r_S = p_{N\S}; and (iv) for all R ∈ U^n,

f_p(R) = max_{S∈2^N} min_{j∈S} {t(R_j), p^r_S} ≡ f_{p^r}(R).

Second, a non-dummy interval I_i is a maximal non-dummy interval for i if there is no non-dummy interval I'_i such that I_i ⊊ I'_i. Since the number of coalitions that contain a player is finite, any maximal non-dummy interval I_i can be written as the union of a family of intervals; namely, I_i = ∪_{k=1}^K I_i^{S_k}, where i ∈ S_k for all k = 1, ..., K. Before moving to the proof of the four lemmata used to prove Theorem 2, we state without proof the following facts.

Remark 3. Let f_p : U^n → [a, b] be a generalized median voter scheme and let R_i ∈ U.

Proof of Lemma 4. Let f_p : U^n → [a, b] be a generalized median voter scheme. We will denote f_p simply by f.

(⟹) Assume i is non-dummy at α in f. Then, there exist R ∈ U^n and R'_i ∈ U such that f(R_i, R_{-i}) = α and f(R'_i, R_{-i}) ≠ α. We distinguish between two cases. Observe that i ∈ S. First, we prove that p_S ≤ α. Suppose otherwise, that α < p_S; then max_{j∈S}{t(R_j), p_S} = p_S > α. By the definition of S and f, f(R_i, R_{-i}) > α, a contradiction with f(R_i, R_{-i}) = α. Now, we prove that α < p_{S\{i}}. Suppose otherwise. The proof proceeds symmetrically to Case 1, using the right phantom representation of f.

(⟸) Assume there exists S ⊆ N such that i ∈ S, p_S < p_{S\{i}} and p_S ≤ α ≤ p_{S\{i}}. We distinguish between two cases.

Case 1: Assume p_S ≤ α < p_{S\{i}}. Let R ∈ U^n be such that t(R_j) = α for all j ∈ S and t(R_j) = b for all j ∉ S. Then, f(R) = α. Let R'_i ∈ U be such that α < t(R'_i) < p_{S\{i}}. Hence, f(R'_i, R_{-i}) = t(R'_i) ≠ α. Thus, i is non-dummy at α in f.

Case 2: Assume α = p_{S\{i}}. Let R ∈ U^n be such that t(R_j) = p_S for all j ∈ S\{i}, t(R_i) = α and t(R_j) = b for all j ∉ S. Then, f(R) = α. Thus, i is non-dummy at α in f.

Proof of Lemma 5. We will denote f_p and o_p(R_i) simply by f and o(R_i), respectively.

(⟹) Assume f is not manipulable by i at R_i and let I_i^S = [p_S, p_{S\{i}}] be a non-dummy interval for i in f. We distinguish among four cases.

Case 1 (the proof of the symmetric case is similar, changing the roles of α and β). We will show that β R_i α. If β = t(R_i), the statement holds immediately. Assume β < t(R_i). Then α, β ∈ I_i^S. Hence, since α < β, p_S < p_{S\{i}}. Consider any R_{-i} ∈ U^{n−1} with the property that, for every j ∈ N\{i}, t(R_j) = β, and let R̄ ∈ U^n be the corresponding profile. We proceed by distinguishing between two subcases. Since {p_S | S ⊆ N} is finite, we apply the previous argument successively, starting with β_1 < β, and obtain R^1, R^2, ..., R^K, where (i) K ≤ 2^n, (ii) R^k_i = R_i for all k = 1, ..., K, (iii) α < f(R^k) < f(R^{k+1}) < β for all k = 1, ..., K−1, (iv) f(R^1) R_i α and f(R^k) R_i f(R^{k−1}) for all k = 1, ..., K, (v) f(R^k) ∈ {p_S | S ⊆ N}, and (vi) f(R^K) = β. Then, by transitivity of R_i, β R_i α.

Case 2: The proof proceeds as in Case 1, using the right phantom representation of f.

Case 3 (the symmetric case is handled using the right phantom representation of f). We will show that β R_i α. If β = t(R_i), the statement holds immediately. Assume β < t(R_i) and consider any profile R̄ ∈ U^n where, for every j ∈ N, t(R̄_j) = β; set β' = f(R̄). Consider any subprofile R̂_{-i} ∈ U^{n−1} where, for every j ∈ N\{i}, t(R̂_j) = β.
Since f is not manipulable by i at R_i, notice that β' ≤ t(R_i) and β' ∈ o(R_i) ∩ I_i^S. Therefore, by Case 1, β' R_i α. By transitivity of R_i, β R_i α.

Case 4 (for α < β; the proof is similar changing the role of α by β). We will show that this case is not possible. Consider any profile R' ∈ U^n such that t(R'_j) = β for all j ∈ N. We treat the first of the two possible configurations (the proof for the other is similar, using the right phantom representation of f). Set R' = (R'_i, R_{-i}). We distinguish among three cases. Since f is a generalized median voter scheme, t(R'_i) ≤ f(R'). Define S = {j ∈ N | t(R_j) ≤ α}. Then, i ∉ S and, because α = f(R), p_S ≤ α. Set S̄ = S ∪ {i}. Hence, S̄ = {j ∈ N | t(R'_j) ≤ α}. Suppose p_S̄ > α. Then, for all S' ⊆ S̄, max_{j∈S'}{t(R'_j), p_{S'}} ≥ p_{S'} ≥ p_S̄ > α, and for all S'' ⊄ S̄, max_{j∈S''}{t(R'_j), p_{S''}} > α, because if j ∉ S̄ then t(R'_j) > α. Thus, f(R') > α.

Proof of Claim A: We proceed by first distinguishing between Case A.1 and Case A.2; in turn, for each of them, the proof is divided into 5 subcases.
2015-03-06T19:42:58.000Z
2016-05-01T00:00:00.000
{ "year": 2016, "sha1": "41dd8444392350d6781b670ee6b5208dc9e05915", "oa_license": "CCBYNC", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.3982/TE1910", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "04da4be013de33ebef96394f3e941e3e08595422", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics", "Economics" ] }
10915754
pes2o/s2orc
v3-fos-license
Adherence of trials of operative intervention to the CONSORT statement

This paper by Gray et al highlighted the inadequacy in the reporting of non-pharmacological randomised controlled trials. This is consistent with previous work by our group,1,2 which has demonstrated the huge need for improvement with regard to the methodological reporting of surgical trials.

SURGICAL RESEARCH

Adherence of trials of operative intervention to the CONSORT statement extension for non-pharmacological treatments: a comparative before and after study

The randomised controlled trial (RCT) represents the gold standard method for determining an association between treatment and outcome.1 As important as the quality of the trial is the quality of its reporting; without transparent reporting, adequate appraisal of a trial's methodological quality and external validity is not possible. The Consolidated Standards of Reporting Trials (CONSORT) statement provides a minimum set of recommendations for the reporting of RCTs and is endorsed by many peer reviewed journals.2 Its endorsement is widely recognised to have improved the reporting quality of RCTs.3 However, since its introduction and update in 2001,4 numerous articles have identified ongoing reporting quality deficiencies in both the medical3,5-7 and surgical8-10 literature. Trials of operative intervention carry inherent methodological challenges,11 and so the CONSORT statement in this setting is not as useful as for trials of pharmacological intervention. Such challenges include accounting for variation in the recruiting and consenting practices of surgeons, difficulty in blinding patients and outcome assessors, the presence of confounding factors such as the surgeon's technical ability, differing anaesthetic technique and the need to standardise operative technique between surgeons who have different training backgrounds. As a result, the emphasis of reporting is different than for trials of pharmacological intervention. The reporting of certain methodological features gains increased importance while other features are entirely unique to surgical trials. The CONSORT statement for non-pharmacological treatment (CONSORT-NPT) provides criteria specific to such features, enabling assessment of the reporting quality of trials of operative (and other non-pharmacological) intervention while maintaining important reporting elements of the original CONSORT statement.12 Such items specific to the CONSORT-NPT checklist include the way in which interventions are standardised between surgeons, the assessment of a surgeon's adherence to such standardisation, the volume of participants treated per surgeon and the surgeon's experience with the intervention. To date there has been no published analysis of the adherence of trials of operative intervention to the CONSORT-NPT statement or assessment of whether the CONSORT-NPT extension has led to an improvement in the reporting quality of such trials. This study analyses the adherence of trials of operative intervention to the CONSORT-NPT statement at time periods before and after its publication, in order to determine whether the CONSORT-NPT extension has contributed to an improvement in the reporting standard of trials of operative intervention.

Search strategy

Electronic searches of MEDLINE® and Embase™ were performed by a librarian at the John Radcliffe Hospital, Oxford.
The following journals were searched: British Journal of Surgery, Archives of Surgery, European Journal of Cardio-Thoracic Surgery, European Urology and Journal of Bone and Joint Surgery (British volume). These publications were chosen to represent high impact factor journals providing information from a range of surgical specialties. The Annals of Surgery was excluded as a preliminary search revealed that it had published relatively few RCTs for the time periods studied. All included journals (with the exception of the European Journal of Cardio-Thoracic Surgery) have endorsed the CONSORT statement in their instructions to authors since 2007. No journal had mentioned the CONSORT-NPT statement as of February 2012, and the European Journal of Cardio-Thoracic Surgery had not mentioned the CONSORT or CONSORT-NPT statements. The search was limited to two time periods: January to December 2004 (sufficiently long after the 2001 CONSORT statement revision4 that trialists should have been aware of its publication) and January to December 2010 (similarly with regard to the publication of the CONSORT-NPT statement in 2008).12 The search history for an example journal can be seen in Figure 1. The search was repeated for each journal at each time point to ensure that all published RCTs within the search limits were retrieved. The NHS Evidence advanced search software (http://www.library.nhs.uk/) was used to perform electronic searches.

Inclusion/exclusion criteria

RCTs were included if they reported the comparison of at least one non-pharmacological intervention. This included trials of surgical technique, surgical access, technology and instrument design as well as anaesthetic interventions. RCTs comparing solely pharmacological interventions were excluded. All journals included in this study publish their manuscripts in English. Studies were included in chronological order until a sample size of ten trials per journal per time point had been reached. This limit was set based on labour constraints rather than performing a sample size calculation, which, following personal communication with the study's lead statistician (DA), was felt to be unnecessary given that a defined hypothesis was not being tested. Publications of retrospective, observational, experimental and animal studies were excluded, as were trials reporting follow-up data of a previously published trial. Journals that endorse the CONSORT statement were defined as those that reference the statement in their instructions to authors or those journals that are referenced on the relevant CONSORT webpage (http://www.consort-statement.org/about-consort/consort-endorsement/consort-endorsers--journals/). Journals were not excluded on the basis of their lack of endorsement of the CONSORT statement. All abstracts retrieved from database searching were reviewed for selection by two authors (AGW and RG). Studies in which it was not clear whether the inclusion criteria had been met were reviewed in full text, and discrepancies between the two authors were resolved by discussion with the remaining authors.

Data extraction

All publications were reviewed in full text by two authors (AGW and RG). From the CONSORT-NPT statement, a 30-point scoring system was devised (Table 1), giving equal weighting to CONSORT-NPT items and resulting in a score out of 30 for each trial.
Two further items not included in the CONSORT-NPT statement (method of anaesthesia and sources of funding) were added, as it was felt that these factors could be significant confounders in the design of trials of operative intervention such that their reporting was key. The reporting of the effect of clustering on the sample size calculation and the estimate of the effect size and its precision were each allocated their own point in the scoring system. Reporting of intention to treat was not included as an item in the scoring system, as previous studies have provided evidence to suggest that this method of analysis is frequently misreported by trialists.13 Any addition or retraction of items from the original CONSORT-NPT checklist was done following discussion with all listed authors, including a senior member of the CONSORT Group (DA). It has been noted that the CONSORT Group does not recommend the use of the checklist to provide a quality 'score'.14 However, this approach gives a useful summary of overall reporting standard when comparing time periods, to complement comparison of the reporting of specific items. The data extracted by each author were compared and any discrepancies resolved with discussion between the authors and re-review of the publication. Any remaining discrepancies were discussed with a third author (MS) until a consensus had been reached and a final score obtained. Assessors were not blinded to the time period or journal in which the RCT was published.

Outcome measures

The primary outcome measure was the difference in mean total modified CONSORT-NPT statement score between the 2004 and 2010 time periods. The secondary outcome measures were the percentage of adherence to each CONSORT-NPT item and the difference in mean total modified CONSORT-NPT statement score between those journals endorsing the CONSORT-NPT statement and those not doing so.

Statistical analysis

Normally distributed means were compared using Student's t-test and non-normally distributed data with the Mann-Whitney U test. Normality was determined by visualisation of histograms, and p-values of <0.05 were considered statistically significant.

Study flow, demographics and CONSORT-NPT endorsement

The study flow can be seen in Figure 2. A total of 191 RCTs were identified, with 81 and 110 published in 2004 and 2010 respectively. Of these, 55 publications were excluded because the maximum number of RCTs had already been reviewed for specific journals within specific time points. The demographic details for publications at each time point were similar (Table 2).

Comparison of mean CONSORT-NPT score

The mean CONSORT-NPT score was 15.2 (standard deviation [SD]: 3.8) for RCTs published in 2004 and 19.1 (SD: 4.1) for those published in 2010. The improvement in mean CONSORT-NPT score from 2004 to 2010 was 3.95 points (95% confidence interval: 3.61-4.29, p<0.001). There was considerable variation in the reporting of individual CONSORT-NPT items, with several items being underreported at both time points (Table 3). No single trial scored 100%.

Adherence to the CONSORT-NPT statement

Regarding methodological issues, for RCTs published in 2010 there was a significant increase in the reporting of surgeon's/centre's eligibility criteria, sample size calculation, method for random sequence generation and allocation concealment (Table 3). For the results section, a significantly higher percentage of RCTs published in 2010 included a flow diagram or at least sufficient information to determine participant flow.
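As described under 'Statistical analysis' above, mean scores were compared with Student's t-test when normally distributed and with the Mann-Whitney U test otherwise. A minimal sketch of that comparison (the score lists are illustrative, and the Shapiro-Wilk test stands in for the paper's visual histogram check):

```python
from scipy import stats

scores_2004 = [12, 15, 14, 17, 16, 13, 18, 15]  # illustrative scores out of 30
scores_2010 = [18, 21, 19, 22, 17, 20, 23, 19]

# The paper judged normality from histograms; Shapiro-Wilk is used here instead.
normal = all(stats.shapiro(s).pvalue > 0.05 for s in (scores_2004, scores_2010))

result = (stats.ttest_ind(scores_2004, scores_2010) if normal
          else stats.mannwhitneyu(scores_2004, scores_2010))
print(result.pvalue < 0.05)  # significant at the paper's p < 0.05 threshold
```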
There was also a significant improvement in the reporting of the number of participants treated per surgeon or centre, the study population's demographics and the experience of the surgeon or centre with the intervention technique. Finally, a significantly higher percentage of studies published in 2010 highlighted potential areas of bias. Methodological items reported by <50% of RCTs in 2010 include the eligibility criteria on which surgeons or centres were selected (27%), the type of anaesthesia used (40%), how interventions were standardised between surgeons (43%), the methods used to monitor surgeons' adherence to the intervention or comparator techniques (8%), the effect that clustering has on the sample size calculation (3%) and the method of blinding participants or outcome assessors (48%). For the reporting of results, the number of participants treated by each surgeon or centre (43%), the surgeon's/centre's experience (38%) and the confidence of the effect estimate (35%) were all poorly reported (Fig 3).

CONSORT-NPT score and journal practice

The mean CONSORT-NPT score (both time points) for those studies published in CONSORT endorsing journals (mean: 17.5, SD: 4.5) was higher than that for the European Journal of Cardio-Thoracic Surgery (mean: 15.6, SD: 4.0). However, this did not reach statistical significance (p=0.064). When this was analysed per time period, there was also no significant difference.

Discussion

Trials involving operative rather than pharmacological intervention bring inherent methodological challenges. Failure to overcome such challenges in the conduct of a trial is likely to lead to considerable bias, potentially invalidating the results and limiting their interpretation. Adherence to the CONSORT statement enables trial authors to maintain a transparent system of reporting so that the reader can draw considered conclusions from the trial findings. Previous studies have found that trials published in the surgical literature are lacking in their adherence to the CONSORT statement.8-10 Balasubramanian et al found that trials published in high impact surgical journals reported only 69% of CONSORT items.9 Similarly, studies analysing the reporting quality of publications in spinal15 and cardiothoracic16 surgery found that on average 65% and 66% of CONSORT items were reported respectively, while the figure for urological trials was lower still at 52%.8 All such studies acknowledge the difficulties in performing surgical trials. However, all determined CONSORT statement adherence rather than CONSORT-NPT adherence and included trials of pharmacological as well as operative intervention in their assessment. This makes them less specific for the analysis of operative trials and the reporting of the methodological features which make such trial design difficult. Furthermore, no previous study has analysed reporting adequacy in trials of operative intervention at two time points, such that until this time any change in reporting standards could not be quantified. It is recognised that in conducting this study the analysis is limited to four surgical specialties and that, although data extraction was not performed in a blinded fashion, consensus was reached between multiple authors when scoring trials. The high quality of the search strategy and well defined inclusion/exclusion criteria enabled the analysis solely of trials of surgical intervention at two time points separated by the introduction of the CONSORT-NPT statement.
Equal weighting was given to all items in the CONSORT-NPT statement to create an overall score, although some items may in fact assume greater importance than others. Nevertheless, presentation of the figures for the reporting of each item individually (Table 2) enables a clearer understanding of the items most frequently underreported. Finally, although the demographics of the included trials differed little between the two time periods studied (Table 2), it is recognised that improvements seen in the CONSORT-NPT score in the period studied could result from secular trends, in particular an improved awareness of an evidence-based approach in the surgical community or stricter ethical regulations imposed on trials, rather than publication of the CONSORT-NPT statement alone. Importantly, the significant improvement in reporting from 2004 to 2010 resulted from improved reporting of items such as sample size calculation, allocation concealment and participant flow (all items found in the original CONSORT statement). Comparison with prior estimates of reporting practice supports this improvement. For example, previous estimates for the number of trials in the surgical literature reporting sample size calculations were 20% (2003)16 and 44% (2008)15 compared with 80% (2010) here. Similarly, the percentage of trials reporting study flow was estimated at 51% (2000-2003)8 and 52% (2008),15 and documented as 80% (2010) here. This trend is repeated for reporting of participant blinding, random sequence generation and allocation concealment, among other items.

Conclusions

Although the reporting of CONSORT items improved significantly, there was little improvement in CONSORT-NPT specific items, all of which were reported in less than 50% of trials in 2010 (Fig 3). While journals give specific instructions to authors regarding the use of the CONSORT statement, no mention of the CONSORT-NPT statement was found in the journals included in this study. Peer reviewed journals' instructions to authors are likely to have played a large part in improving the awareness of reporting standards throughout surgical academia by insisting on adherence to the CONSORT guidance. The evidence presented here strongly suggests that journals publishing trials of operative intervention should pay equal attention to the CONSORT-NPT statement, with the aim being to improve both authors' and reviewers' awareness of this CONSORT extension. This will, in turn, help to improve the quality of reporting of methodological issues specific to such trials, enabling clearer interpretation of their outcomes.

Acknowledgement

Many thanks to Tatjana Petrinic, who created the search strategy.
2016-05-12T22:15:10.714Z
2014-01-01T00:00:00.000
{ "year": 2014, "sha1": "0edbbc4eae84b5197723b2ade6ffa5b21427b51b", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1308/003588414x13824511649418", "oa_status": "HYBRID", "pdf_src": "Grobid", "pdf_hash": "5e407a07cc08a99459ce7e32c2bea0706f8fb523", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
219168796
pes2o/s2orc
v3-fos-license
Study on the Elastic–Plastic Correlation of Low-Cycle Fatigue for Variable Asymmetric Loadings

The mean stress effect on fatigue life varies by material and loading conditions. Therefore, classical low-cycle fatigue (LCF) models based on mean stress correction show limits in asymmetric loading cases, in both accuracy and applicability. In this paper, the effect of strain ratio (R) on LCF life is analyzed and a strain ratio-based model is presented for asymmetric loading cases. Two correction factors are introduced to express the correlations between strain ratio and fatigue strength coefficient and between strain ratio and fatigue ductility coefficient. Verifications are conducted with four materials under different strain ratios: high-pressure tubing steel (HPTS), 2124-T851 aluminum alloy, epoxy resin and AZ61A magnesium alloy. Compared with currently widely used LCF models, the proposed model shows better life prediction accuracy and higher potential for implementation in symmetric and asymmetric loading cases for different materials. It is also found that the strain ratio-based correction is able to account for the damage of ratcheting strain, which the mean stress-based models cannot.

Introduction

Under high-amplitude cyclic loads, the fatigue process of materials is dominated by plastic strain, which results in a short fatigue life in loading cycles, so-called low-cycle fatigue (LCF). It is well known from the monotonic tensile curve that the stress grows slowly while the strain increases rapidly once the stress reaches the yield limit. Therefore, both experimental and analytical studies of LCF use the strain control method. When R ≠ −1, the non-zero mean stress caused by the asymmetric loading cycles affects the fatigue process considerably: tensile mean stresses are detrimental to fatigue life while compressive mean stresses are beneficial [1,2]. Engineering structures suffering fatigue failure usually bear asymmetric cyclic loading rather than symmetric loading. Therefore, the prediction of fatigue life requires an understanding of material behavior under non-zero mean stress/strain conditions. This is why the mean stress effect on the fatigue behavior of materials has been widely studied in the past few decades. As outputs, a large number of mean stress models have been proposed to correct the fatigue life curve. Early studies considered that mean strain has no effect on fatigue resistance unless it results in non-zero mean stress during cyclic loading. The authors in [3] believe that the mean strain has little influence on fatigue, but the mean stress has a significant effect. Under this consideration, the Goodman [4] mean stress correction is widely used in engineering applications. Soderberg [5] used the yield strength to replace the tensile limit in the Goodman mean stress correction to make the prediction conservative. Morrow [6] considered the mean stress as the main contribution to the elastic region of fatigue and used the fatigue strength coefficient σ'_f to replace the tensile limit in the Goodman mean stress correction. The Morrow model shows good predictions when the fatigue strength coefficient is close to the fracture strength; otherwise, its prediction is not convincing.
The Smith-Watson-Topper (SWT) model [7], based on the energy concept, defines the product of strain amplitude and maximum stress as the damage parameter, considering that the maximum stress dominates the influence of mean stress on fatigue life. Lv [8] introduced the Walker exponent into the SWT model to describe the sensitivity to mean stress. Zhang [9] took the mean stress relaxation into account in the SWT model, based on the Landgraf model, to improve the fatigue life prediction. Wang [10] proposed an alternative model based on an equivalent strain amplitude, which showed excellent correlation with fatigue life data of rubber under different strain ratios. The authors in [11] show that brittle materials are more sensitive to mean stress than ductile materials. The mean stress sensitivity is affected by cyclic softening, cyclic hardening and mean stress relaxation, so a concept of mean stress sensitivity is incorporated to modify the total strain energy density by introducing two mean stress correction factors. It provides better predictions for 8 materials, with lower mean error than the SWT, Ince-Glinka [12] and GDP (Generalized energy-based fatigue-creep Damage Parameter) [13] models. It can be seen that most work on LCF models has been done on the mean stress correction. However, the mean stress, affected by cyclic softening or hardening, changes with the loading cycles and will lead to errors in life prediction [14,15]. In high-cycle fatigue (HCF), the mean stress can be simply calculated from the stress ratio. However, in LCF the mean stress and the strain amplitude cannot be obtained at the same time, which leads to extra effort in life prediction in engineering applications. In addition, how the strain ratio affects the mean stress and the life prediction depends on the loading conditions, i.e., the strain ratio and the strain amplitude. It would be more convenient for LCF life prediction to use a directly controlled factor, such as the strain ratio, rather than an intermediate parameter, such as the mean stress. Wang [11] proposed a strain ratio-based LCF model, whose parameters require at least three groups of fatigue data at different strain ratios. Mean stress-based LCF models usually over-consider the effect of mean stress on fatigue life, especially for asymmetric loading cases with small strain amplitudes. Besides, the mean stress is hard to obtain in strain-controlled LCF experiments, which limits the applicability of such models. In this paper, the correlation between strain and stress in LCF is analyzed and a strain ratio-based modification of the LCF model is proposed for asymmetric loading conditions. Linear relationships between the fatigue strength coefficient, the fatigue ductility coefficient and the strain ratio are developed and employed in the model modification. The proposed LCF model is then verified and compared with other commonly used LCF models.

Elastic-Plastic Correlation of LCF

To recognize the relationship between stress and strain under different strain ratios, experimental data of 2124-T851 aluminum alloy from [16] are presented in Figure 1. The experiment was carried out by the strain-controlled method at a constant total strain rate of 0.004 s−1 at room temperature.
Since the mean stress and the stress amplitude are the two factors usually considered as the control parameters in mean stress-based fatigue models, two diagrams are reproduced: the mean stress versus the strain amplitude (Figure 1a) and the stress amplitude versus the strain amplitude (Figure 1b), under four levels of strain ratio. For R = 0.5, the mean stress decreases with increasing strain amplitude at quite a significant rate in the low strain amplitude range from 0.005 to 0.01. As the strain amplitude further increases, the decreasing rate of the mean stress reduces, and the mean stress reaches a stable level above zero when the strain amplitude is greater than 0.01. The same patterns can be found in the cases R = 0.06 and R = −0.06, and the overall mean stress decreases with increasing strain ratio. For R = −1, in particular, the mean stress remains zero and does not vary with the strain amplitude. The mean stress relaxation rate is the main cause of the variation of the mean stress. When the plastic strain rises, the mean stress relaxation rate grows and leads to a sharp drop in mean stress. After that, the relaxation rate of the mean stress remains at a relatively high level and the mean stress keeps decreasing as the strain amplitude increases. As we can see, in the small strain range the mean stress varies a lot for different strain ratios; this effect of strain ratio on the mean stress is not significant in the large strain range. The stress amplitude increases with increasing strain amplitude, but it decreases as the strain ratio increases from −1 to 0.5. So, under small strain amplitudes, the effect of strain ratio on the fatigue life of the material is reflected in the mean stress and the elastic strain control of the fatigue process. For large strain amplitudes, the plastic strain takes over and embodies the impact of the strain ratio.

Figure 2 shows a comparison of hysteresis loops at different strain ratios. The total strain energy per cycle can be divided into the plastic strain energy ΔWp and the elastic tensile strain energy ΔWe. For a fully reversed loading case (R = −1), the hysteresis loop is symmetric about the coordinate origin, and the mean stress is zero. In the strain ratio R = 0.5 case, there is a non-zero mean stress σm during the cyclic loading process. As the mean stress varies, the elastic strain energy ΔWe changes by a great deal, so the elastic strain energy increases with increasing strain ratio at a specific strain amplitude. It is also known that, under a specific strain amplitude, the stress amplitude decreases with increasing strain ratio. This means the plastic strain energy ΔWp decreases as the strain ratio rises, and this phenomenon becomes more obvious as the strain amplitude increases. As a result, the mean stress and the cyclic strain softening affect the fatigue life of the material by changing the total strain energy, which explains the influence mechanism of strain ratio on fatigue life. For small strain amplitudes, there is a strong effect of strain ratio on the mean stress and a weak effect on the cyclic strain softening, so the total strain energy depends on the mean stress. For large strain amplitudes, the mean stress is relatively insensitive to the strain ratio, but the cyclic softening effect dominates. That is to say, the strain ratio indeed affects the fatigue process, and the effect of strain ratio manifests through different quantities according to the strain amplitude.
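To make this energy bookkeeping concrete, the per-cycle quantities sketched in Figure 2 follow the usual LCF conventions (the expressions below are standard definitions rather than equations taken from the paper):

```latex
\Delta W = \Delta W_e + \Delta W_p, \qquad
\Delta W_e = \frac{\sigma_{\max}^{2}}{2E}, \qquad
\Delta W_p = \oint_{\text{loop}} \sigma \, \mathrm{d}\varepsilon
```

Since σ_max = σ_a + σ_m, the tensile elastic term grows with the mean stress, while ΔWp is the hysteresis-loop area, which shrinks as cyclic softening lowers the stress amplitude at a given strain amplitude.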
Model Description

The Manson-Coffin model, given as Equation (1), is suitable for life predictions under fully reversed loading cases:

Δε/2 = (σ'_f/E)(2N_f)^b + ε'_f (2N_f)^c, (1)

where σ'_f is the fatigue strength coefficient, b the fatigue strength exponent, ε'_f the fatigue ductility coefficient, and c the fatigue ductility exponent. In the asymmetric loading condition, the effect of strain ratio is generally considered through an equivalent stress, as in the Goodman model, which is expressed as

σ_a/σ_eq + σ_m/σ_u = 1, (2)

where σ_a is the stress amplitude, σ_m the mean stress, σ_u the tensile limit of the material, and σ_eq the equivalent stress amplitude. Equation (2) can also be expressed as

σ_eq = σ_a / (1 − σ_m/σ_u). (3)

The Manson-Coffin model is then modified into

Δε/2 = (σ'_f/E)(1 − σ_m/σ_u)(2N_f)^b + ε'_f (2N_f)^c. (4)

Replacing the tensile limit σ_u by the fatigue strength coefficient σ'_f [6], the equivalent stress and the fatigue model become

σ_eq = σ_a / (1 − σ_m/σ'_f), (5)

Δε/2 = ((σ'_f − σ_m)/E)(2N_f)^b + ε'_f (2N_f)^c. (6)

Kwofie [17] considered that, at a specific stress amplitude, a decrease in fatigue life due to mean stress must be accompanied by a corresponding decrease in the fatigue strength coefficient σ'_f. The influence is related to the ratio of the mean stress to the tensile limit. The fatigue life decreases with stress in a non-linear manner, and the fatigue crack growth increases with the cycle number, which may be described by an exponential function:

σ_eq = σ_a exp(α σ_m/σ_u), (7)

where α is a material constant. Walker [18] also proposed an equivalent stress model based on the mean stress:

σ_eq = σ_max^(1−γ) σ_a^γ, (8)

where γ is a material constant. The Walker model turns into the SWT model when γ = 0.5, as in Equation (9):

σ_eq = (σ_max σ_a)^(1/2). (9)

The model used in LCF life prediction can be expressed as Equation (10). As we can see, the equivalent stress is the common corrector used in fatigue model modifications. For the Goodman model and the Morrow model, the equivalent stress depends only on the mean stress; the Kwofie model and the Walker model use the mean stress σm and the stress amplitude σa together to calculate the equivalent stress. A dimensionless ratio σa/σeq is employed to compare these modifications, which indirectly represents the relationship between fatigue life and mean stress: at a given stress amplitude σa, a higher equivalent stress σeq results in a lower fatigue life. Figure 3 therefore reflects how the corrections to fatigue life change with the mean stress for the different fatigue models. An obvious trend can be found: σa/σeq decreases with increasing mean stress σm. When the mean stress is small, the σa/σeq values of all models are close to each other, and the gap between models grows with increasing mean stress. The Goodman model has the lowest σa/σeq and will lead to the lowest predicted fatigue life. The predicted lives of the Walker and Kwofie models depend on the parameters α and γ. Using the Walker model and the Kwofie model can improve the life prediction, but this depends on the determination of α and γ, which could be uncertain in practice. Lv [8] considered that the effect of mean stress on fatigue life varies between materials and can be represented by the mean stress sensitivity. For the SWT model, the mean stress sensitivity is a constant of 0.5, and deviations will lead to errors in the life prediction. The material constant γ of the Walker model can express the mean stress sensitivity as well, so the SWT model can be modified into Equation (11). Other than the mean stress, Wang [10] used an equivalent strain amplitude to incorporate the effect of strain ratio into the life prediction. The equivalent strain amplitude is defined as in Equation (12), where β and n are material constants.
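The σa/σeq comparison plotted in Figure 3 can be reproduced directly from the closed forms above. A minimal sketch with illustrative parameter values (σ_u, σ'_f, α and γ below are placeholders, not values fitted in the paper):

```python
import numpy as np

sigma_u, sigma_f = 500.0, 800.0   # tensile limit, fatigue strength coeff. [MPa]
alpha, gamma = 1.0, 0.5           # Kwofie and Walker material constants
sigma_a = 200.0                   # fixed stress amplitude [MPa]

def amplitude_ratio(sigma_m):
    """sigma_a / sigma_eq for the classical corrections; at fixed amplitude,
    a smaller ratio means a larger equivalent stress and a shorter life."""
    sigma_max = sigma_a + sigma_m
    return {
        "Goodman": 1.0 - sigma_m / sigma_u,
        "Morrow": 1.0 - sigma_m / sigma_f,
        "Kwofie": float(np.exp(-alpha * sigma_m / sigma_u)),
        "Walker": (sigma_a / sigma_max) ** (1.0 - gamma),
    }

for sm in (0.0, 50.0, 100.0, 150.0):   # mean stress sweep [MPa]
    print(sm, amplitude_ratio(sm))
```

All four ratios equal 1 at zero mean stress and fall as σm grows, with the Goodman curve lowest, matching the qualitative description of Figure 3.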
Model Comparison

Fatigue data of high-pressure tubing steel (HPTS) [19] under strain ratios of −2 and 0.5 are used to compare the abovementioned fatigue models, as shown in Figure 4. The effect of mean stress on fatigue life depends on its direction. For a tension-compression case (R < 0), the compressive stress has a beneficial effect by extending the fatigue life, while the tensile stress bears the main responsibility for the fatigue damage accumulation. Under this consideration, a lower strain ratio means a larger compressive stress, which leads to a longer fatigue life, and vice versa. The Manson-Coffin model does not consider the mean stress effect, so its life prediction is higher than the experimental data for R = 0.5 due to the existence of tensile mean stress. Aside from the Manson-Coffin model, the other models show a common feature: the predictions are smaller than the experimental results in the lower strain ranges. This is because the mean stress corrections in these models are over-weighted when the mean stress rises rapidly with decreasing strain amplitude. In particular, for mean stress-based models, the deviation of the prediction is strongly affected by the correction intensity of the mean stress, as shown in Figure 3. This is why the Goodman model has the lowest predictions and the Morrow model gives relatively high results. The Lv model is a transformation of the SWT model, introducing the material constant γ from the Walker model to consider the sensitivity to mean stress. When γ > 0.5, the predicted life of the Lv model is larger than that of the SWT model. Therefore, the prediction accuracy of the Lv model is worse than that of the SWT model for R = −2, and vice versa for R = 0.5. This means that fitting different γ values for different strain ratios is essential for the Lv model to provide accurate predictions. However, the material constant γ is not defined as a variable in the Walker model, which means γ has a fixed value for a certain material no matter how the loading condition changes, i.e., for different strain ratios. As a result, the Lv model, which inherits γ from the Walker model, cannot fit all strain ratios. In other words, an Lv model with a higher γ value is suitable for the low strain ratio case, while one with a smaller γ value predicts the high strain ratio case better. Using the equivalent strain amplitude, Wang's model gives the best predictions in asymmetric loading cases. The mean stress-based models mainly focus on the elastic strain, and their predictions usually over-weight the effect of mean stress. The Walker model and the Lv model weaken this over-consideration by introducing the sensitivity coefficient of mean stress, but they sacrifice feasibility. In addition, the mean stress relaxation, cyclic softening and cyclic hardening affect the mean stress and influence the prediction accuracy. Thanks to the strain ratio-based modification, the Wang model can avoid the interference from stress relaxation and cyclic softening/hardening and offers acceptable predictions under asymmetric loading cases. It can be concluded that a strain ratio-based fatigue model is more appropriate for the strain-controlled LCF case.

Modification

To predict fatigue life, the Manson-Coffin model needs four parameters: the fatigue strength coefficient σ'_f, the fatigue strength exponent b, the fatigue ductility coefficient ε'_f, and the fatigue ductility exponent c.
Parameters b and c represent the slopes of the elastic and plastic shares in the fatigue process; σ'_f and ε'_f represent the intercepts of the elastic and plastic shares on the coordinate axis. Ong [20] analyzed the fatigue behaviors of 49 metals using the Manson-Coffin model and proved that b and c are related to the tensile limit, the true fracture strength and the true fracture ductility from the monotonic tensile test. As a result, these two parameters are independent of the strain ratio. The Manson-Coffin model can be transformed into Equation (13), where 2σ'_f/E and 2ε'_f are the elastic strain range and the plastic strain range of the hysteresis loop for one-cycle failure, respectively. The total strain energy required for failure is a fixed quantity [21]. A change of strain ratio leads to a change in the proportions of elastic strain energy and plastic strain energy for one-cycle failure. Accordingly, the strain ratio will affect the values of σ'_f and ε'_f. However, most current fatigue models only focus on σ'_f, and the lack of consideration for ε'_f may have adverse effects on the life prediction, especially for high-amplitude strain. According to Equation (6), the modifications are mainly made on the fatigue strength coefficient using the mean stress, as in Equation (14), where f(σm) is a function of the mean stress. For strain-controlled LCF, the mean stress of the material is caused by the asymmetric strain, so Equation (14) can be transformed into Equation (15). The plastic strain is affected by the strain ratio, in the large strain amplitude range in particular, so the fatigue ductility coefficient needs to be corrected by the strain ratio as below:

Δε/2 = (f(R)/E)(2N_f)^b + g(R)(2N_f)^c, (16)

where f(R) is the corrected fatigue strength coefficient and g(R) is the corrected fatigue ductility coefficient. To study the effect of strain ratio on f(R) and g(R), a correlation analysis is performed. Equations (17)-(19) are fatigue life expressions fitted to the HPTS data, in which Equation (17) is fitted based on the Manson-Coffin model, and Equations (18) and (19) are fitted using the proposed model (Equation (16)). The fitting curve is shown in Figure 5. It can be seen from Figure 5 that the prediction curve fits the experimental data well and that the slopes of the elastic and plastic lines do not change with the strain ratio. As shown in Figure 6, the corrected fatigue strength coefficient f(R) and the corrected fatigue ductility coefficient g(R) are both linearly correlated with the strain ratio:

f(R) = σ'_f + A(1 + R) and g(R) = ε'_f + B(1 + R),

where A and B are material constants, which are adjustable based on the sensitivity to the strain ratio. The fatigue life of the material changes rapidly if the absolute values of A and B vary. Then, Equation (16) can be transformed as follows:

Δε/2 = ((σ'_f + A(1 + R))/E)(2N_f)^b + (ε'_f + B(1 + R))(2N_f)^c.

For the fully reversed loading case (R = −1), the correction terms A(1 + R) and B(1 + R) are 0, and the model degenerates to the Manson-Coffin model.

Verification

Prior to checking the prediction accuracy, the basis of the proposed model modification, namely the linear functions relating f(R) and g(R) to the strain ratio, is examined. Fatigue data of different materials under various strain ratios are employed, including 2124-T851 from [22] and epoxy resin from [23]. The fatigue strength coefficient σ'_f, the fatigue strength exponent b, the fatigue ductility coefficient ε'_f, and the fatigue ductility exponent c are obtained using the R = −1 data. The fitting curves of 2124-T851 and epoxy resin are shown in Figure 7 and Figure 9; the linear relationships between f(R), g(R) and the strain ratio are quite clear even for different materials.
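A sketch of the resulting life prediction, assuming the linear corrections written above (f(R) = σ'_f + A(1+R), g(R) = ε'_f + B(1+R)); all constants are illustrative rather than fitted values, and the life is obtained by solving the strain-life equation numerically:

```python
from scipy.optimize import brentq

E, sf, b = 200e3, 900.0, -0.09   # modulus [MPa], sigma'_f [MPa], exponent b
ef, c = 0.6, -0.55               # epsilon'_f, exponent c
A, B = -40.0, -0.05              # assumed linear strain-ratio slopes

def predicted_life(strain_amp, R):
    """Solve strain_amp = f(R)/E * (2N)^b + g(R) * (2N)^c for cycles N."""
    fR = sf + A * (1.0 + R)      # corrected fatigue strength coefficient
    gR = ef + B * (1.0 + R)      # corrected fatigue ductility coefficient
    resid = lambda two_n: fR / E * two_n ** b + gR * two_n ** c - strain_amp
    return brentq(resid, 1.0, 1e12) / 2.0   # reversals 2N -> cycles N

print(predicted_life(0.01, -1.0))  # R = -1: degenerates to Manson-Coffin
print(predicted_life(0.01, 0.5))   # tensile-biased loading: shorter life
```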
It can be considered that $k_\sigma$ and $k_\varepsilon$ are constants for a certain material. By obtaining $k_\sigma$ and $k_\varepsilon$ from two groups of tests with different strain ratios, the proposed model can be applied at any other strain ratio without obtaining the mean stress, which is a great advantage in the application of life prediction for asymmetric loading cases. The proposed model is employed to predict the fatigue life of the HPTS, 2124-T851 and epoxy resin, and the results are shown in Figure 10. The relative errors of the predicted lives fall within the double scatter band, which is much better than most of the models mentioned in Figure 2. Discussions A dimensionless constant z is defined as $z = (N_p - N_f)/N_f$, where Np is the predicted life and Nf is the experimental life. Since (Np − Nf) is a measure of the discrepancy between predicted life and experimental life, the r.m.s. of z, $S_z = \sqrt{\tfrac{1}{k}\sum_{i=1}^{k} z_i^2}$, can represent the discrepancy level of the predictions, where k is the number of scattered points. The discrepancy level Sz of the predictions of the 8 models for the four sorts of materials is calculated and listed in Table 1. Apparently, the life prediction of the Manson-Coffin model shows the largest relative error since it does not consider the effect of strain ratio, while the other models show better performance no matter what correction criteria they follow. The effect of strain ratio on the mean stress relaxation rate of the T851 aluminum alloy is smaller compared with that of the other materials, for small loading cases in particular. This is the reason why the prediction discrepancy of the T851 aluminum alloy is smaller than that of HPTS and epoxy resin for all models. The Morrow model and the Walker model are more accurate for HPTS due to the smaller correction intensity in small strain cases, as shown in Figure 4. The SWT model defines a damage parameter of $\sigma_{\max}\varepsilon_a$ to correct the effect of mean stress on fatigue life, and it overestimates the effect in large mean stress cases [24]. This is why considerable discrepancies in the predictions are found for HPTS in large mean stress conditions. The Lv model shows a better prediction than the SWT model because it considers the sensitivity to mean stress. All models show large gaps for the epoxy resin, particularly the Goodman model and the Morrow model. One reason is that epoxy resin has anisotropic behavior in tension and compression, which affects the mean stress in the cyclic loading process; therefore, the mean stress cannot fully reflect the effect of strain ratio on damage. The other reason is that considerable ratcheting strain can be accumulated in fatigue tests due to the viscoelastic behavior of polymer materials, and a certain amount of irreversible strain energy will be converted into heat dissipation or cause damage that cannot be described by the mean stress. In other words, models based on mean stress are not capable of accurately predicting the fatigue life of materials with large ratcheting strain. The ratcheting strain is more obvious for nonmetallic materials than for metals, but temperature will intensify the ratcheting strain of metals. That is to say, the mean stress-based model may not be suitable for non-metallic materials and high-temperature metals. The Wang model performs well for non-metallic materials due to the correction by strain ratio. In other words, strain ratio-based correction is more suitable for asymmetric loading cases. This is why the proposed model adds correction terms to the fatigue strength coefficient and the fatigue ductility coefficient, which can express the influence of the strain ratio and the viscoelastic behavior of the material.
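The discrepancy measure takes only a few lines of code. The sketch below follows the relative-deviation form of z used above and the r.m.s. definition of Sz; the life data are purely hypothetical.

```python
# Sketch of the discrepancy measure used above: z_i is the relative deviation of
# predicted from experimental life, and Sz is its root mean square over k points.
# The life data below are purely hypothetical.
import math

def discrepancy_level(predicted, experimental):
    """Return Sz, the r.m.s. of z_i = (Np_i - Nf_i) / Nf_i."""
    z = [(np_i - nf_i) / nf_i for np_i, nf_i in zip(predicted, experimental)]
    return math.sqrt(sum(zi * zi for zi in z) / len(z))

Np = [12_000, 54_000, 210_000, 880_000]   # predicted lives (cycles)
Nf = [10_000, 60_000, 250_000, 700_000]   # experimental lives (cycles)
print(f"Sz = {discrepancy_level(Np, Nf):.3f}")
```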
Conclusion In this paper, the correlation between elastic and plastic strains in LCF was analyzed and a strain ratio-based fatigue model was proposed and examined using various materials. The main conclusions can be drawn as follows: (1) Two ways in which the strain ratio affects the LCF process are found, depending on the strain condition. For low strain cases, the effect of strain ratio on fatigue life relies on the mean stress; for high strain cases, the plastic strain energy controlled by the strain ratio dominates the fatigue life and the mean stress plays a small role. This is why the mean stress-based models struggle in small-amplitude loading cases but show acceptable accuracy in large-amplitude loading conditions; (2) The corrected fatigue strength coefficient f(R) and fatigue ductility coefficient g(R) show linear correlations with the strain ratio R, which can effectively represent the complex behavior of the material, including mean stress relaxation and ratcheting; (3) Temperature is not considered in the present study, although it may play a considerable part in the LCF of metallic and nonmetallic materials. Further work could be done in this direction to further understand the LCF mechanism.
The Governance Turn in the World Bank Discourse from a Normative IR Lens: Cosmopolitanism or Communitarianism? When structural adjustment programmes which had dominated the lending conditionality of the leading international financial institutions (the International Monetary Fund and the World Bank) in the 1980s failed to deliver the expected success stories, governance gained traction as a predictor of aid effectiveness. Development discourse and practice began incorporating governance indicators and defining a governance concept in line with the effort to reassess the role of the state in development. This paper examines whether the inclusion of governance in the development discourse of the World Bank in the 1990s reflects cosmopolitan or communitarian ethical norms. Normative international relations theory permits an assessment of the so-called governance turn in World Bank conditionality which interrogates the understanding of the state and of the international community which are put forth. Key World Bank publications from the 1990s are selected for content analysis. The first level of analysis interrogates whether the conceptualization of the state emerging from the documents reflects a communitarian or cosmopolitan approach. The second level of analysis focuses on the universalism-particularism tension in the cosmopolitan-communitarian debate. What emerges from the analysis is a hybrid of cosmopolitan and communitarian norms. Introduction The latter decades of the 20th century were crucial in the transformation of the discourse and practice of development aid. The domestic popularity of the neoliberal doctrine and the taste for far-reaching reforms found an echo in the rising popularity of structural adjustment programmes which came to dominate the conditionality of the international financial institutions (the International Monetary Fund and the World Bank) in the 1980s. A corollary of this trend was the governance turn in the 1990s when, in the effort to reassess the role of the state in development (and the lack of success of structural adjustment-driven reforms), governance was identified as a crucial predictor of aid effectiveness. The governance turn was noticeable both in the development discourse and in the practice of conditionality, with the explicit inclusion of governance criteria among conditions for lending. It is the aim of this paper to examine whether the inclusion of governance in the development discourse of the World Bank in the 1990s reflects cosmopolitan or communitarian ethical norms. The paper will first outline the theoretical framework by presenting the cosmopolitan-communitarian debate from normative international relations and its main points of contention, as well as underlining where it echoes the liberalism-communitarianism debate from political theory. The following section contextualizes the emergence of the governance turn in the World Bank discourse by assessing the pivotal World Bank publications of the 1990s which modified the mainstream discourse, as well as by connecting it to the previous decades. The third section analyses a selection of World Bank publications from the 1990s by employing a normative IR lens and assessing whether they fit a cosmopolitan or communitarian ethical sensibility. The conclusion offers some further thoughts on the ethical hybrid that emerges from the analysis.
The Cosmopolitan-Communitarian Debate Contemporary normative international relations theory has been defined for the past few decades by the cosmopolitan-communitarian debate. The 1970s revival of liberalism through influential works like John Rawls' A Theory of Justice (1999 [1971]), Robert Nozick's Anarchy, State, and Utopia (1999 [1974]) or Ronald Dworkin's Taking Rights Seriously (1978) redefined the focus of political philosophy. In international relations theory, Charles Beitz (1999 [1979]) adapted Rawls to a theory of international distributive justice and Thomas Pogge (1992, 2002) focused on how the existing global order actively harms the poor and why it should be reformed. The 1980s communitarian critique of liberalism did not offer a grand theory to contrast liberalism, but rather core counterarguments (primarily to the Rawlsian take). Authors like Alasdair MacIntyre (2007 [1981]) and Charles Taylor (1979) present some of the strongest voices in the communitarian camp. International theorist Michael Walzer (1983) develops the concept of complex equality on communitarian grounds, while David Miller (1995) offers a defense of the moral significance of nationalism. Far from being novel, the seeds of the debate had been planted beforehand. Walzer compares the communitarian critique of liberalism (in the article bearing the same name) to a recurring fashion: "transient but certain to return" (Walzer, 1990, p. 6). Hegel's critique of liberal individualism and Marx's critique of the theory of human rights are earlier examples of a communitarian-minded critique of liberalism (Morrice, 2000, p. 234). Cosmopolitanism itself can trace its modern legacy to Kant. While cosmopolitanism in IR and liberalism in political theory largely overlap (though both terms encompass a rich variety), the contribution at hand is concerned primarily with cosmopolitanism because it aims to employ a normative IR theoretical framework. Important for the analysis is the central cosmopolitan claim to a universal community. This universal community can be conceived as a moral one, a political one or an economic one (Kleingeld and Brown, 2019). Communitarianism, in contrast, disagrees with the possibility of there existing a sole community, instead proposing a multiplicity of communities whose interests are in tension. While the debate between the camps can be simplified to a disagreement about human communities, there are three distinctions which are generally underlined when dissecting the cosmopolitan-communitarian debate: the concept of the person, the moral relevance of states, and universalism versus particularism (Caney, 1992; Cochran, 2004; Morrice, 2000). The first point of contention is a descriptive one about the nature and essence of the person. This issue overlaps in political theory and its normative IR counterpart. Like liberalism, cosmopolitanism has an a priori, pre-social concept of the person, placing great value on individualism: "the ultimate units of concern are human beings, or persons" (Pogge, 1992, p. 48). An individual's identity and value, as well as moral subjectivity, are held to be independent of society. Nozick offers a strong rebuke to the idea that there could be a "social entity with a good that undergoes some sacrifice for its own good. There are only individual people, different individual people, with their own individual lives." (Nozick, 1974, pp.
32-33) Communitarianism criticizes the atomistic cosmopolitan description of human nature by offering a competing one comprising an embeddedness thesis, a social thesis and a cultural options thesis (Caney, 1992, p. 274). The embeddedness thesis refers to the argument that persons are embedded and constituted by communities: "I inherit from the past of my family, my city, my tribe, my nation, a variety of debts, inheritances, rightful expectations and obligations. These constitute the given of my life, my moral starting point." (MacIntyre, 1981, p. 220) The social thesis holds that a person achieves full moral agency only by living in society (Taylor, 1985, p. 191). Finally, the cultural options thesis draws attention to the exercise of autonomy (Caney, 1992, p. 280). The second notable cleavage in the cosmopolitan-communitarian debate is concerned with the role of the state. With its privileging of the individual, cosmopolitanism does not offer special moral agency or character to the state. While political cosmopolitanism can argue for a version of a world state (Kleingeld and Brown, 2019), it is not needed to achieve full moral agency, but rather as a recognition of equal moral agency of all individuals. The communitarian position holds that the state is "morally relevant because it is necessary to the development of the individual as a free person" (Cochran, 2004, p. 12). Thus, full moral agency is not attainable outside a community. The communitarian position is also closest to a traditional international relations one which privileges sovereignty and sees the international realm as being comprised of equal sovereign states. Communitarians offer equal moral standing to all states in the international realm. The third issue in the debate is the one which Cochran (2004, p. 50) finds insurmountable: "the dispute on the universal versus the particular stands." This tension revolves around the question of how to establish ethical standards across different societies. Cosmopolitans, as a result of their conception of the person as being freely chosen and pre-social, argue that there can be universal standards because individuals are morally equal. Communitarians regard communities as those being morally equal while the individuals embedded in them have differing ethical standards and moral groundings, circumscribed by space and time. This epistemological question of moral grounds colours the ability of normative international relations theory to make unimpeachable ethical judgements on concrete aspects of practical ethics in the international system. Importantly, the cosmopolitan-communitarian debate distinguishes itself from the liberalism-communitarianism one here by always having a global dimension in the issue of universalism versus particularism. To put it plainly, it is a question of establishing moral standards between states, not within states. It is on the basis of these three issues outlined above that questions of distributive justice can be interpreted differently in a cosmopolitan or communitarian framework. Best (2005) argues that international financial institutions, specifically the IMF, have adopted a moral language which combines aspects of cosmopolitanism and communitarianism. She calls this moral hybrid a "communitarian liberalism-that so far has demonstrated more of the weaknesses than the strengths of these two moral frameworks" (Best, 2005, p. 362). 
She arrives at this conclusion after analysing a series of moral tropes employed in IMF discourse, such as transparency, universal standards, ownership, solidarity, and discipline. To some extent, this paper seeks to test whether her assertion holds for the IMF's twin, the World Bank, by looking at the development discourse surrounding the introduction of the concept of governance in aid conditionality. Development Discourse: Tracing the Emergence of the Governance Turn Before proceeding to contextualize the establishment of the new aid paradigm in the development discourse of the 1990s, it is necessary to highlight a few key points about the role of the World Bank in shaping development discourse. First, World Bank discourse on the matter is highly influential. As Gavin and Rodrik note, "the Bank's strength lies is in its tremendous powers to spread and popularize ideas that it latches on to. Once the Bank gets hold of an idea, its financial clout ensures that the idea will gain wide currency." (Gavin and Rodrik, 1995, p. 333) As this influence is mainly exercised upon other development actors, changes in the Bank discourse and practice echo in the larger development field. Second, it is subject to fashions and fads. Ziai's analysis of flagship World Bank publications leads him to argue that, while the organization's thinking of the 70s imagined a central role for states in development, this tide had turned in the 80s with the popularity of neoliberalism which marginalized the state, which was then supplanted by a more nuanced consideration of the role of institutions at the beginning of the 21 st century (Ziai, 2016, pp. 135-136). These fashions largely mirror the mainstream in development theory, with critiques of conventional development existing in academia unsurprisingly not penetrating the Bank's discourse. Third, there appear to be factions within the institution engaged in an intellectual and ideological struggle of defining development (Ziai, 2016, p. 136). The discourse that emerges from the Bank in its multiplicity of publications is thus not uniform, though general trends can be detected. The paper at hand is concerned with such a trend in particular, specifically what it calls the governance turn, that is, the adoption of the concept of good governance in World Bank discourse and, later, conditionality (via the Country Policy and Institutional Assessment). While the 1980s undoubtedly belonged to neoliberalism, so much so that it has been dubbed the lost decade for development, it equally belonged to structural adjustment reforms. Both the IMF and the World Bank shifted their conditionality towards one promoting specific, market-friendly, policy reforms. Going into the 1990s, structural adjustment had a less than stellar record. When no quick miracles were delivered by structural adjustment, the development discourse shifted towards the new 'concept of the decade': governance. The 1989 report on Sub-Saharan Africa already signalled this shift, arguing that "Underlying the litany of Africa's development problems is a crisis of governance." (The World Bank, 1989, p. 60). The fault rested with the state still, except it was not because it impeded the functioning of the market, as a pure neoliberal explanation would claim, but because it was inefficient. 
As Ziai notes: "In a situation where the majority of African countries have been undergoing structural adjustment, the failure of these policies to improve lives or even spur economic growth can now be attributed to 'weak public sector management' (...) – instead of blaming the economic reforms themselves." (Ziai, 2016, p. 131) This shift was intimately linked with the increased concern for aid effectiveness in an age when funds were dwindling and poverty and underdevelopment were persisting. It was easy to point to examples of government mismanagement and abuse of aid money and to persisting levels of underdevelopment, be it in Somoza's Nicaragua or Mobutu's Zaire, so it is unsurprising that this line of argument gained steam (Hout, 2007, p. 135). In the development sector at large in the 1990s, "There is heightened awareness that the quality of a country's governance system is a key determinant of the ability to pursue sustainable economic and social development." (Santiso, 2001, p. 5) The Burnside and Dollar (1997) paper as well as the 1998 annual report Assessing Aid were crucial in cementing the narrative of aid effectiveness alongside that of governance. "Despite fierce criticism levelled at the methodology of the studies and the validity of their conclusions by academic researchers, the relationship between governance quality and aid effectiveness became almost a dogma in certain policy-making circles." (Hout, 2007, p. 135) A natural result of this concern for aid effectiveness was the shift to aid selectivity. Governance represented a key distinguishing aspect in determining who was deserving of aid, as recipient countries were selected based on their past performance in regard to policies and governance. This new aid paradigm thus included what is called ex post conditionality, alongside the ex ante conditionality imposed by structural adjustment programmes (Hout, 2007, p. 23). In effect, it established a system where good performers were further rewarded and bad performers were starved of resources. Another important aspect of the new aid paradigm is its reassessment of the state. The 1991 World Bank annual report highlighted how the relationship between the state and the economy influences development and considered this one of the most valuable lessons learned in the past decade (The World Bank, 1991, p. iii). It further argued that "Reform must look at institutions" (The World Bank, 1991, p. 10) as a means to increase the quality of governance. Annual reports like The State in a Changing World (1997) and Assessing Aid: What Works, What Doesn't, and Why (1998) put the relationship between state and development centre stage, a notable departure from the neoliberalism of the Washington Consensus which marginalized the state and regarded it as an obstacle in the functioning of the market. Many considered this move as a signal that a Post-Washington Consensus was emerging; however, "although the good governance agenda acknowledges the importance of the state in the development process, it would be a grave misconception to regard it as a complete break with neo-liberalism" (Abrahamsen, 2000, pp. 41-42). Rather than denying the premises upon which the development agenda of the 1980s was constructed, the new agenda added governance considerations on top of them, resulting in what could more accurately be called an augmented Washington Consensus. It was not only Bank publications and discourse which revolved around the new 'concept of the decade'.
The governance turn was also reflected in the Country Performance Rating being replaced in 1998 by the Country Policy and Institutional Assessment (CPIA) which now included governance and social policy criteria. The CPIA rating is used to decide allocation of the International Development Association (IDA) funding. There were six governance-related indicators, making up 30% of the CPIA criteria: "policies and institutions for environmental sustainability (indicator 10); property rights and rule-based governance (indicator 16); quality of budgetary and financial management (indicator 17); efficiency of revenue mobilisation (indicator 18); quality of public administration (indicator 19); transparency, accountability and corruption in the public sector (indicator 20)" (Hout, 2007, pp. 31-32). The rest of the CPIA criteria were made up of macroeconomic policies and structural policies which echoed the same development formula prescribed in the 1980s. Thus, we can consider that the governance turn emerged in the World Bank discourse already from the 1989 Sub-Saharan Africa report, partially as an explanation for the failure of structural adjustment reforms, partially as a strategy to salvage the neoliberal agenda by grafting governance concerns on top of it. World Bank influence acted as a multiplier to ensure other development actors took up the discourse and the concerns brought up, thus ushering in the age of aid selectivity and (new) aid conditionality. A Normative Analysis of World Bank Publications: Finding the Cosmopolitan-Communitarian Balance Perhaps most significant about the governance turn was that it brought a reassessment of the role of the state in development. To many, it signalled a shift to a Post-Washington Consensus which was not (as) market fundamentalist. As the role of states is a major point of contention in the cosmopolitan-communitarian debate, it is worthwhile to begin the analysis here. While the state did indeed get recast as a central actor in the new development discourse, it would be an exaggeration to say that its new role went against the previous neoliberal script. The Bank promoted an effective state, which was not "a direct provider of growth but (as) a partner, catalyst, and facilitator" (The World Bank, 1997, p. 1). This message was central in its 1997 World Development Report and it echoed in other papers and reports on governance published throughout the 1990s which argued for a "smaller state equipped with a professional, accountable bureaucracy that can provide an "enabling environment" for private sector-led growth, to discharge effectively core functions such as economic management, and to pursue sustained poverty reduction" (The World Bank, 1994, p. xvi). It must also be noted that the Bank recognized that effective institutional arrangements were subject to variation between countries on the basis of their culture and history (The World Bank, 1992, p. 7). Overall, the ideal state was still a minimal one, but an effective one. While such a conceptualization of a minimal state might fit in with liberalism, it does not necessarily follow that it fits in with a cosmopolitan view. Even Pogge's proposed institutional cosmopolitanism, though not going as far as imagining a global state, argues for a vertical dispersal of the sovereignty concentrated in the hands of the state (Pogge, 1992, p. 58).
Pogge (1992) criticizes the concentration of sovereignty and considers it an impediment to establishing international justice, while the Bank makes no special effort to address such concerns. When it addresses the importance of civil society, it does so as a means to support state functioning, not with a view to dispersing sovereignty downwards: "Stimulating debate in civil society about policy is an intangible way for development assistance to influence policy reform." (The World Bank, 1998, p. 57) More importantly, the Bank discourse on this lacks any reference to a global community, or to an upwards dispersal of sovereignty, which would be essential if we were to characterize its stance as cosmopolitan. Instead, it shapes its discourse to reflect recognition of equal state sovereignty. While this does come closer to the communitarian stance that maintains the state's moral relevance as the means of providing personal self-realization (Cochran, 2004, p. 12), imagining it as the only viable instrument, besides market forces, through which development can be assessed and spurred, it would be an exaggeration and an omission to call the Bank's approach a communitarian one. Though it does allow for some measure of maximalism in determining domestic policies, it does not fully respect what Walzer (1994) considers the minimalist claim to tribalism. In his theory on spheres of justice, Walzer (1994, p. 4) highlights the maximalist/minimalist dualism as a "feature of every morality. Philosophers most often describe it in terms of a (thin) set of universal principles adapted (thickly) to these or those historical circumstances." The World Bank fails the simplistic test of tribalism through the imposition of conditionality when this conditionality seeks to shape non-Western governments to reflect Western political and economic arrangements. Because tribalism, understood as commitment to your own community, can never disappear, it must always be accommodated, Walzer (1994, pp. 81-82) argues, defining a common argument in communitarian approaches. While corruption and mishandling of aid funds for personal gain are clear enough cases where a minimalist conception of justice can be used as condemnation, it is difficult to imagine that the public sector management reforms promoted by the Bank (such as public expenditure management or civil service reform) (The World Bank, 1992, p. 12) in its quest to instil good governance do not contravene the maximalist morality framework as imagined by Walzer (1994). After all, in a communitarian approach, it is the community itself which is to negotiate the rules for communal living and the public sector is an essential part of that. The World Bank's treatment of the role of the state in its publications cannot be comfortably placed on either side of the cosmopolitan-communitarian debate. Though privileging sovereignty may on its face appear communitarian, such privileging ends when it comes to reforms and aid conditionality. Corruption may be condemned from a minimalist morality, as it is recognizable and condemnable regardless of the community from which it is seen, but public sector management done in a way that contradicts World Bank recommendations is not, per se, a case of injustice or something to be condemned, much less penalized by withholding aid funds.
What this promotion of uniform reform schemes in certain sectors shows is a tendency towards universalism, as well as a privileging of certain governmental infrastructure (political, economic, administrative) over others. It is the universalism versus particularism division of the cosmopolitan-communitarian debate where the Bank takes its clearest stance in one of the camps. Before looking at the discourse, the imposition of the CPIA criteria is already very telling. This mechanism uses "a uniform model of what is assumed to work in development processes, irrespective of the context to which it is applied" (Hout, 2007, p. 44). Criticism of over-reliance on 'one size fits all' prescriptions is not new to Bretton Woods institutions. Though, as noted above, some lip service is paid to the cultural and historical variation of governance practices, the underlying message of the CPIA seems to be that democratic regimes are the only appropriate frameworks for good governance practices. It is certainly the case that such arrangements are rated highly via CPIA and then rewarded with aid funds. The Bank even introduced a so-called governance discount in 1998 which reduced IDA funds allocation to countries which scored poorly on the governance criteria in the CPIA (Hout, 2007, p. 32). Moreover, democracy and economic liberalism are conceptually linked as determiners of good governance (Abrahamsen, 2000, p. 51). As the Soviet Bloc had just dissolved and a democratic wave (together with a transition to capitalism) had swept the globe, it is understandable that democratic political arrangements were the ones considered appropriate for delivering good governance. However, there was some room for variation allowed: "This does not mean that Western-style democracy is the only solution. Experience from parts of East Asia suggests that where there is widespread trust in public institutions, effective ground-level deliberation, and respect for the rule of law, the conditions for responsive state intervention can be met." (The World Bank, 1997, p. 116) The State in a Changing World report is equally cautious about overstating the relationship between growth and democracy (The World Bank, 1997, p. 149). However, no degree of caution on the matter can disguise the fact that the Bank has a universalist tendency. The universalism-particularism tension rests on whether we can find a standard by which to make judgements "across plural conceptions of the good" (Cochran, 2004, p. 12). By introducing the good governance agenda in development, the World Bank did exactly that. It found a formulation of both ex ante and ex post standards by which to judge and reform aid recipient countries. Abrahamsen goes further and argues that the good governance agenda not only promotes a particular understanding of democratisation which legitimises specific interventions, it also delegitimises and marginalises "alternative representations of democracy and development" (Abrahamsen, 2000, p. 13). In its characteristic technocratic fashion, the Bank's good governance agenda was broken down into actionable aspects. The most ambitious portrayal appears in the Kaufmann et al. (1999) paper which emerged at the end of the decade. Their study confirms what the earlier Burnside and Dollar (1997) paper had argued: that there is a strong causal relationship between good governance and good development outcomes.
This study of data measuring subjective governance quality perceptions is organized around six clusters of indicators: Voice and Accountability, Political Instability and Violence, Government Effectiveness, Regulatory Burden, Rule of Law, and Graft (Kaufmann et al., 1999, p. 2). As each indicator is described in detail, the report would merit a separate analysis which goes beyond the scope of the present contribution. For the purpose of the paper at hand, it is sufficient to use it as illustration of the standardizing and universalizing approach adopted by the World Bank in regard to its governance agenda. This approach suggests a level of clarity and agreement on how to achieve good governance which is non-existent in the literature: "There are still no clear or settled ideas about how effective governance and democratic consolidation should be suitably defined, let alone how they could be supported from abroad." (Santiso, 2001, p. 6) While it might be easy to recognize bad governance, the variety of governance practices which might be characterized as good outside the World Bank's technocratic breakdown of the concept is significant. The methodology employed in the mentioned study also draws the obvious criticism that perception may not be factual and treating such qualitative data in a quantitative manner may lead to overreaching. That the Bank proceeds with an overabundance of confidence in the promotion of certain governance practices despite the difficulty in measuring governance and prescribing reform and that it even excludes countries whose record does not conform to its agreed upon illustration of good governance demonstrates a clear universalist tendency. This standardization is the mark of universalization which Best (2005) also identifies in the IMF discourse. By technocratizing the discourse, the moral underpinning is hidden under the veil of objectivity. In actuality, when setting standards, one cannot do without a standard setter and in the case of the IMF and the World Bank alike, both the standard maker and the standard example are Western democracies which places them in the position of moral arbiter while feigning a cosmopolitan veneer of universal standards derived from moral equality. There is no Rawlsian veil of ignorance from which the CPIA or the good governance criteria emerged. Though not addressed directly in the reports analysed, a few observations on the underlying conceptualization of the person can be read between its lines. The World Bank stance on distributive justice is the clearest pathway to revealing this stance. While the Bank recognizes a moral duty to help through its very mandate to end poverty (2021), irrespective of belonging to a community, aid flows, aid conditionality and aid selectivity show the opposite. A purely theoretical cosmopolitan stance would hold that all individuals are equal moral agents and thus equally deserving of aid if needed, which could not square with aid selectivity. A communitarian stance, on the other hand, would not shy away from privileging certain groups above others when it comes to prioritizing aid. However, it would not prioritize according to efficient use of aid, but according to one's belonging to the same community or tribe. The World Bank has thus made prioritizing aid à la communitarianism a practice, but without grounding it on communitarian ethical norms of selectivity. 
However, the communitarian concept of the person prevails in the Bank's stance through the very simple fact that the World Bank deals with governments in need, not people in need. Conclusion By focusing the analysis along the three tensions inherent to the cosmopolitan-communitarian debate, the paper at hand has concentrated mostly on assessing the conceptualization of the role of the state and the universalist versus particularist tendencies of the discourse. Best's (2005) argument that international financial institutions promote a hybrid cosmopolitan-communitarian moral language was confirmed to some extent, though the variation in methodology has produced a different-looking picture of the World Bank than what emerged for the IMF. Instead of focusing on tropes, this analysis focused on identifying illustrations of the three tensions in the cosmopolitan-communitarian debate. On the concept of the person, the reports selected say very little, but the World Bank's privileging of states over people (and of specific states over others through aid selectivity) shows a communitarian bias overall. Reassessing the role of the state in the governance turn showed a clear contradiction to cosmopolitan norms, but a selective respect for communitarianism. Far from promoting a moral arrangement in which the state has no special function in the fulfilment of moral duty, as cosmopolitanism would dictate, the governance agenda is obsessive about how the state should look and act so as to promote development. In areas of policies deemed of relevance to the reformist effort of the World Bank, foreign intervention and support was considered necessary. Thus, public sector management appears as a policy area rightly exposed to Bank scrutiny and reform, while distributive policies are largely left to the discretion of the states, as long as they maintain the government apparatus within acceptable minimalist limits. A technocratic approach to governance which breaks it down into actionable dimensions allows for such selectivity in the Bank's respect for tribalism. However, selective tribalism means no true respect for tribalism. Allegiance to the cosmopolitan camp is much clearer when assessing the tension between universalism and particularism. Just as structural adjustment reforms were deemed a cure-all in the previous decade, so now governance emerged as a vital predictor for sound development in the 1990s. Though lip service was paid to the cultural and historical variation, there was still a clear message that governance, specifically whether it was good or bad, could be measured and assessed by using the definitions and criteria promoted by the World Bank. Such criteria rested on an underlying prioritization of liberal democratic practices and demonstrated a type of confidence in the descriptive and prescriptive strength of their governance agenda which is not echoed in a research field which can first and foremost agree on how difficult it is to measure governance accurately. Such confidence is understandable from an organization which seeks to turn its theory into practice, but it is also bullishly ignorant of criticism which it cannot incorporate into its existing framework. Criticized as universalist and ignorant of contextualization, the Bank augmented the Washington Consensus by adding governance considerations, but not eliminating its core. Overall, the World Bank does not emerge as a fully cosmopolitan actor.
It falls short in its conceptualization of the role of the state, as well as its underlying conceptualization of the person. There is nothing to indicate respect for the agency of individuals in developing states. Rather, they are victims to be saved from poor governance and poverty or civil society to be galvanized into pressuring for Bank-approved reforms. Their representatives are similarly considered as objects of the reform effort. That there is a universalist bias in the development discourse surrounding governance is unsurprising for an international organization whose most frequent criticism is that it promotes 'one size fits all' policies. The analysis here can be further extended into tackling governance indicators as envisioned by the Bank on a case-by-case basis so as to obtain a more comprehensive picture of whether the universalist bias holds for each. Rather than combining cosmopolitanism and communitarianism, it appears that the World Bank fails at fulfilling cosmopolitan norms and retreats into respect for tribalism as an excuse.
Evolutionary Psychology The Springer Series in Evolutionary Psychology is the first series of volumes dedicated to this increasingly important discipline within psychology. The series will reflect the multidisciplinary nature of this field, encompassing evolutionary theory, biology, cognitive psychology, anthropology, economics, computer science, and paleoarchaeology. It will explore the underlying psychological mechanisms and information-processing systems housed in the brain, as well as the various triggers for their activation. Its scientific assumptions rest on the concept that evolution is the only known causal process capable of creating complex organic mechanisms such as are exhibited in human and animal life. Further, it seeks to show how information processing is adaptively influenced by input from the environment. Overall, the series will address the range of functionally specialized evolved mechanisms, mediated by contextual circumstances, that become combined and integrated to produce manifest behavior. The Series will address key areas of research within the field, including sexual behavior; conflict between the sexes; parenting; kinship; cooperation and altruism; aggression; warfare; and status and prestige. A premier resource in evolutionary psychology for researchers, academics and students, this series will provide the field continuing and comprehensive coverage of this high-profile area. The results Ginges et al. present confirm the predictive power of religious attendance vis-à-vis support for suicide terrorism. However, the authors' dismissal of religious beliefs themselves as an important factor in suicide terrorism is premature and unwarranted. The "religious-belief hypothesis" (p. 224), as tested by Ginges et al., has little to do with actual beliefs and is concerned only with level of devotion to religious beliefs, whatever those beliefs are. This is a misrepresentation of the religious-belief hypothesis. The importance of devotion (a term which is not explicitly defined in the article) is introduced along with a citation to Harris (2004), although Harris clearly articulates the argument that the relationship between religion and suicide terrorism is a result of specific religious beliefs. For example, Harris (p. 52, italics in original) argues that "Beliefs are principles of action: … they are processes by which our understanding (and misunderstanding) of the world is represented and made available to guide our behavior." The devotion to which Harris refers is to specific religious beliefs, namely a literal interpretation of the Koran (p. 45). Ginges et al. argue that belief in the afterlife and in martyrdom fall conceptually within the religious-belief hypothesis, yet they inexplicably rely only on religious devotion, as measured by prayer frequency, to test the religious-belief hypothesis. Granted, the null relationship between prayer frequency and support for suicide terrorism does not support the religious-belief hypothesis, but to conclude that this provides a "disconfirmation of the religious-belief hypothesis" (p. 230) is unjustified. Regardless of the level of devotion, one who believes there is a moral obligation to kill infidels is likely to have a different attitude toward suicide terrorism than one who does not hold these specific beliefs (even if both subscribe to the same religion).
This is just one example of a prediction that can be derived from the religious-belief hypothesis, and predictions like this should be empirically tested before concluding that the religious-belief hypothesis has been confirmed or disconfirmed. A proper test of the religious-belief hypothesis would assess the actual beliefs held by participants. Ginges et al. assessed beliefs in Study 2, but erroneously used these data as a measure of personal support for suicide attacks. Participants were asked what they believed Islam's position was regarding suicide terrorism (i.e., whether Islam forbids, allows, encourages, or requires suicide attacks), failing to acknowledge that Islam is a religion and, therefore, not independent of religious beliefs. Ginges et al. assumed that those who responded "requires" were themselves supporters of suicide attacks. They assumed that one's religious beliefs determined their personal support of suicide terrorism – the very hypothesis they argue was disconfirmed. If responses to this question had been used as a predictor variable, Ginges et al. could have asked participants whether they personally support martyrdom attacks (as was done in Study 1), and then examined the relationship between one's religious beliefs and support for suicide terrorism. This type of examination would serve as a justifiable confirmation or disconfirmation of the religious-belief hypothesis. Finally, there may be an important difference between willingness to engage in suicide terrorism and support for suicide terrorism. Ginges et al. are often clear that they are investigating the latter, but in concluding their article they write as if they have investigated the former. They conclude that "…the association between religion and suicide attacks is a function of collective religious activities that facilitate popular support for suicide attacks and parochial altruism more generally" (p. 230, italics added). The results they present do not address the association between religion and suicide attacks; instead, these results address the association between religion and support for suicide attacks. It is troubling that Ginges et al. make this terminological error in a sentence that concludes the article, as well as in the abstract: "Implications for understanding the role of religion in suicide attacks are discussed." (p. 224, italics added). In summary, although we appreciate Ginges et al.'s contribution, which provides evidence of the relationship between religious collective rituals and support for suicide terrorism, we do not agree with their assessment of the religious-belief hypothesis. One's specific religious beliefs may be related to support for, and willingness to engage in, suicide terrorism. The religious-belief hypothesis has not yet been disconfirmed, despite remarks to the contrary by Ginges et al. Prayer frequency is unrelated to support for suicide terrorism, but this does not address whether belief in the afterlife, or any other specific religious belief, is related to support for suicide terrorism. We hope that the unwarranted and unjustified conclusions reached by Ginges et al. do not discourage researchers from investigating the role that religious beliefs themselves may play in encouraging or supporting suicide terrorism.
Sphingomyelinase Activity of Trichomonas vaginalis Extract and Subfractions Trichomoniasis is one of the most common acute sexually transmitted curable diseases, and it is disseminated worldwide, generating more than 170 million cases annually. Trichomonas vaginalis is the parasite that causes trichomoniasis and has the ability to destroy cell monolayers of the vaginal mucosa in vitro. Sphingomyelinases (SMases) are enzymes that catalyze the hydrolysis of sphingomyelin into ceramide and phosphorylcholine. Ceramide appears to be a second messenger lipid in programmed apoptosis, cell differentiation, and cell proliferation. Sphingomyelinase is probably a major source of ceramide in cells. Signal transduction mediated by ceramide leads cells to produce cytokine-induced apoptosis during several inflammatory responses. SMases are also relevant toxins in several microorganisms. The main objective of this research is to identify SMase activity of T. vaginalis in the total extract (TE), P30, and S30 subfractions from broken trophozoites. It was found that these fractions of T. vaginalis have SMase activity, which comes principally from the P30 subfraction and is mainly of type C. The enzymatic activity of SMase increased linearly with time and is pH dependent, with two peaks at pH 5.5 and pH 7.5. The addition of manganese to the reaction mixture increased the SMase activity by a factor of 1.97. Introduction Trichomoniasis is caused by the unicellular flagellated protozoan parasite named Trichomonas vaginalis, which is one of the most prevalent sexually transmitted diseases. It has a worldwide distribution, and the WHO estimates that more than 170 million cases are reported each year [1]; of these, 18.5 million come from Latin America [2]. In Mexico, more than 125,000 new cases are reported annually [3]. T. vaginalis infects both genders. In men this infection is commonly asymptomatic; however, it may cause urethritis, prostatitis, cystitis, epididymitis, and infertility. In women the infection normally causes symptoms of vulvovaginitis and urethritis with vaginal discharge, irritation, dysuria, and abdominal pain. Vaginal secretion may also be yellow-green, itchy, frothy, and foul-smelling [4]. In pregnant women this disease has been related to premature rupture of amniotic membranes, premature birth, and low birth weight [5]. Patients with trichomoniasis are more susceptible to HIV seroconversion [6]. T. vaginalis is pyriform and grows in microaerophilic conditions when cultured. It has two main stages: flagellated and trophozoite [7]. To date, there is no knowledge of resistant cyst forms [4]. T. vaginalis does not have mitochondria; instead, it has hydrogenosomes, organelles with no DNA, formed by three chromatic granules [8]. Energy requirements are provided by the transformation of glucose to glycerol and succinate in the cytoplasm, followed by the subsequent conversion of malate to pyruvate, hydrogen, and acetate in the hydrogenosomes [9][10][11]. T. vaginalis has the ability to destroy monolayers of epithelial cells isolated from human vaginal mucosa in vitro by detaching them, lysing them [12][13][14], or by phagocytosis [15,16]. Engbring and Alderete [17] reported that T. vaginalis has a high specificity to bind only to mucosal epithelial cells of the genitourinary tract. This process is mediated by proteases found on the parasite's surface.
Some authors have identified and characterized several cysteine proteinases and adhesins that participate in the adhesion and cytotoxicity of the parasite to the vagina and ectocervix [18,19]. Although the pathogenic mechanisms of T. vaginalis are unknown, there are some factors related to its destructive effect, as well as to its ability to proliferate and damage host cells [20,21]. At this time, several parasite molecules have been identified as the cause of damage in cells and tissues of the host [20,21]. Several hydrolases have been described in T. vaginalis; those with low molecular weight may be released into the cell medium [20]. Some of these molecules participate in specific cell damage, including neuraminidase [22] [17,20,23], and phospholipases [21]. Also, an additional membrane-attack molecule has recently been detected in T. vaginalis and called lytic factor, which is able to destroy cells and nucleated erythrocytes as well as acting specifically on phosphatidylcholine, suggesting a phospholipase A2 activity. Vargas-Villarreal et al. [21] demonstrated direct and indirect activities dependent on hemolytic phospholipase A (A1 and A2) in subcellular extracts from T. vaginalis. These activities have been proposed as responsible for the hemolytic and cytolytic effects of T. vaginalis. The main objective of this research was to identify sphingomyelinase activity in the total extract, P30, and S30 subfractions of T. vaginalis. [35]. The strain of T. vaginalis is maintained in three tubes at a time. The best growth culture was inoculated at 5 × 10³ trophozoites/mL into three new tubes with fresh PEHPS [7,36,37]. Trophozoites used in the experiments were grown in suspension in spinner flasks [7,21,38]. Preparation of Subcellular Fractions. The subcellular fractions were prepared as described previously [38]. Briefly, the pellet containing trophozoites harvested from the spinner flasks was suspended in two volumes of Hank's balanced salt solution (BSS; 0.7 mM CaCl2, 5.5 mM glucose, 120 mM NaCl, 5.3 mM KCl, 1.7 mM MgSO4, 1 mM Trizma base, pH 7.5). The trophozoites were disrupted with an electric motor-driven Potter-Elvehjem Teflon-glass homogenizer (Bellco Glass Inc., NY, USA) [38] operated at 1,000 rpm; the resulting homogenate represents the total extract (TE) fraction. This fraction was separated into two parts; the first 3 mL of extract was divided into 0.5 mL aliquots and stored at −70 °C until required. The remaining TE was centrifuged at 30,000 ×g for 15 min at 4 °C. The resultant supernatant (S30) was stored until use. The pellet (P30) was resuspended in 1 volume of BSS, distributed in 200 µL aliquots, and stored at −70 °C. Before the initiation of each experiment, a sufficient number of TE, P30, and S30 aliquots were thawed at room temperature and diluted with BSS to adjust the protein concentration according to each experimental design. Determining Sphingomyelinase Activity. Sphingomyelinase (SMase) activity was determined by radioassay in soluble and particulate samples, as previously described by Vargas-Villarreal et al. [38]. Briefly, the substrate was prepared by mixing 1 mL of 100 mM Trizma base (pH 7.5) solution, 1 mM MgCl2, 0.2% Triton X-100, 4 mg sphingomyelin, and 2. This substrate preparation was divided into 0.5 mL aliquots and stored in vials at −70 °C until use. Ten µL of assay mixture and 10 µL of a 2X mixture containing several amounts of fractions from T.
vaginalis (TE, P30, or S30; 0-400 µg of total protein of each fraction) were deposited in 7 × 75 mm borosilicate tubes (Bellco Glass Inc., NY, USA). Tubes were vortexed for 30 s and incubated at 37 °C for 150 min in a moist chamber. After the incubation time, the reaction was stopped by adding 25 µL of 1 mg/mL sphingomyelin, 1 mg/mL phosphorylcholine, and 1 mg/mL choline (Sigma Chemical Co., St. Louis, MO, USA) in 5% trichloroacetic acid in n-butanol. Then the lipids from each sample containing nondigested sphingomyelin were separated from the SMase hydrolysis products by thin-layer chromatography (TLC) [38]. To identify the [14C]-sphingomyelin, [14C]-phosphorylcholine, and [14C]-choline spots, their respective relative migration coefficients (Rf) were compared with those of their corresponding nonradioactive standards (Sigma). The spots corresponding to [14C]-sphingomyelin, [14C]-phosphorylcholine, and [14C]-choline were visualized, scraped from the TLC silica gel plates, and placed into plastic vials containing 5 mL of scintillation liquid (BCS, Biodegradable Counting Scintillation fluid; Amersham). Radioactivity in each vial was determined with a 1600 Tri-Carb liquid scintillation spectrometer (Packard Instrument Company, Inc., Downers Grove, IL, USA). The instrument was adjusted to work with unquenched samples with 96% efficiency. One unit of SMase activity was defined as 1 pmol of [14C]-sphingomyelin hydrolyzed (equivalent to the number of picomoles of [14C]-phosphorylcholine released) in 1 hr of incubation. Specific activity was defined as the number of units of SMase activity per milligram of total Trichomonas protein for 1 hr of incubation at 36.5 °C (U SMase/mg/hr). The type of SMase activity was classified by its cleavage site. Trichomonas preparations (TE, P30, and S30; 400 µg/mL) were assayed at pH 7.5 using bidirectional thin-layer chromatography. When protease inhibitors were used, sphingomyelinase activity increased tenfold. The radioactive hydrolysis products were identified by comparing their final location with those of their respective standards [23]. All determinations were performed three times in triplicate and were presented as the mean ± 1 standard error. Specific activity of the SMase (SMase U/mg total protein/hr) was arbitrarily defined as 1 U = 1 DPM SMase. Effects of Inhibitors, Incubation Time, Dose of Proteins, pH, and Divalent Cations on Trichomonas SMase Activity. The effect of inhibitors on SMase activity was measured in TE, P30, or S30 fractions in the presence of the sodium salt of p-chloromercuribenzoate, a protease inhibitor, at a final concentration of 0.1 mM in all fractions [40,41]. The effect of incubation time was determined by incubating TE, P30, or S30 assay samples (each containing 400 µg of total protein) for 0-150 min. Dose-response curves were obtained using 0-400 µg of P30 total protein. Finally, the effect of pH was analyzed by adjusting the pH values (2-10) with appropriate concentrations of glycine-HCl (pH 2-2.5), sodium acetate (pH 3-6), or Trizma base (pH 7-10). The requirement for divalent cations was analyzed by adding 1 mM or 10 mM MgCl2, MnCl2, CoCl2, CaCl2, HgCl2, or ZnSO4, or 1 or 10 mM EDTA, in 10 µL to the P30 fraction. Activity was determined as described previously. Total Protein Quantification. The concentration of proteins in biological samples was calculated by the method of Lowry et al. [42]. Statistical Analysis. All the experiments were performed three times in triplicate (n = 9).
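As a concrete illustration of the unit and specific-activity definitions above, the minimal sketch below converts scintillation counts from the product spot into U SMase/mg/hr. The 96% counting efficiency follows the figure quoted for the spectrometer, while the effective specific radioactivity of the substrate pool, the count value, and the protein amount are assumed placeholders, not values taken from the study.

```python
# Illustration of the unit and specific-activity definitions above. The effective
# specific radioactivity of the substrate pool (dpm per pmol of sphingomyelin) is
# not given in the extracted text, so the value used here is an assumed placeholder,
# as are the count value and protein amount.

def smase_specific_activity(cpm_product_spot, counting_efficiency,
                            substrate_dpm_per_pmol, protein_mg, incubation_h):
    """Return specific activity in U/mg/hr, with 1 U = 1 pmol hydrolysed per hour."""
    dpm = cpm_product_spot / counting_efficiency       # counts -> disintegrations
    pmol_hydrolysed = dpm / substrate_dpm_per_pmol     # label released as product
    units = pmol_hydrolysed / incubation_h             # pmol hydrolysed per hour
    return units / protein_mg                          # per mg of total protein

# Hypothetical assay: 4,800 cpm in the [14C]-phosphorylcholine spot, 96% counting
# efficiency, 0.4 mg of total protein, 150 min (2.5 h) incubation.
activity = smase_specific_activity(cpm_product_spot=4800, counting_efficiency=0.96,
                                   substrate_dpm_per_pmol=2000.0, protein_mg=0.4,
                                   incubation_h=2.5)
print(f"Specific activity ~ {activity:.2f} U/mg/hr")
```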
Plots of incubation time and dose were analyzed by linear regression, and the results were compared by ANOVA for normally distributed data. Detection of Sphingomyelinase Activity in Total Extracts (TE), P30, and S30 Fractions of T. vaginalis. Trichomonas extracts have SMase activity and were able to hydrolyze [14C]-sphingomyelin. All fractions (TE, P30, and S30) showed this activity, but sphingomyelinase activity was higher in TE (2.57 U/mg/hr) and P30 (2.43 U/mg/hr), while S30 was less active (Figure 1). When protease inhibitors such as p-chloromercuribenzoate were used, sphingomyelinase activity increased in all fractions by a factor of 10 (Figure 1). Identification of the Type of Sphingomyelinase-C and an Unidentified ESase Activity Present in TE and P30 of T. vaginalis. When sphingomyelinase activity was determined in the P30 and TE fractions, virtually all of the radioactivity (96%) corresponded to [14C]-phosphorylcholine, and 4% to [14C]-choline (Figure 2). This confirmed that the sphingomyelinase activity of T. vaginalis is of type C. In addition, small but reproducible quantities of [14C]-choline were detected, indicating the presence of another esterase activity (ESase activity) in the TE and P30 fractions (Figure 2). Concentration- and Time-Dependent SMase-C Activity from P30. P30 shows a time-dependent SMase-C activity; a graphical representation of this activity can be seen in Figure 3. In the dose-response curve, the radioactivity in the spots corresponding to [14C]-phosphorylcholine steadily increased with increasing concentrations of the P30 fraction from 0 to 400 µg of total protein (Figure 4), with approximately proportional responses at low protein concentrations (less than 100 µg). Figure 5 shows that 400 µg of total protein of the membrane-associated P30 fraction incubated for 150 min at 37°C has two peaks of activity, one at pH 5.5 and the other at pH 7.5. The peak at pH 7.5 corresponds to the highest SMase specific activity and was 1.9 times higher than the acidic activity shown at pH 5.5. Figure 4: Dose dependence of membrane-associated SMase-C activity. Several total protein concentrations (0-400 µg) of P30 incubated for 150 min at pH 7.5 were tested, and the [14C]-phosphorylcholine released was measured. Symbols correspond to the mean ± SE of nine determinations from three independent experiments. Effect of Cations on SMase Activity of P30. Several cations were tested as described in Table 1; the mixtures were treated with EDTA, MgCl2, MnCl2, CoCl2, CaCl2, HgCl2, and ZnSO4. The cation producing the maximum stimulation was Mn2+, with activity 1.97 times higher than the control without cations, followed by Mg2+ and Co2+, with activities 70 and 84% higher than the control, respectively. With EDTA, activity fell to 0.13 times the control value. In contrast, CaCl2, HgCl2, and ZnSO4 inhibited SMase activity by 40 to 93% (see Table 1). Discussion The ability to synthesize toxic substances offers several organisms an advantage in fending off predators or capturing prey. These substances are commonly called poisons and are secreted by glands or buccal organs, and in some other cases through the skin [43]. Similarly, many microorganisms can produce substances of this type that act as pathogenetic factors favoring invasion of the host. These substances can cause serious disruption to the host's health [9,44].
Poisons are usually proteins; the best known are the lipases, phosphatases, hyaluronidases, phospholipases, and sphingomyelinases. Phospholipases and sphingomyelinases are the most studied poisons to date and are recognized to be involved in invasion processes, activation of second messengers, and cytopathogenic mechanisms in many species of microbes [45]. We have demonstrated the presence of phospholipases in Trichomonas [21], amoeba [46], and Giardia [47], but the presence of sphingomyelinases had not previously been described in T. vaginalis. This study was conducted in order to identify sphingomyelinase production in T. vaginalis and thereby build a base of knowledge on the physiopathology of this microorganism, which causes serious harm to those affected, including urethritis, vulvovaginitis, infertility, preterm childbirth, and predisposition to HIV infection [6]. T. vaginalis is a protozoan with high specificity, binding only to the epithelial cells of the mucosa of the urogenital tract. This process is mediated by proteases found on the parasite surface, which are decisive in the establishment of infection and participate in pathogenicity: once implanted in the vagina, the microorganism is able to obtain nutrients from bacteria and leukocytes in the vaginal or urethral cavity, and it is also capable of destroying host cells [17]. For this to happen, the invasion must first break the integrity of the host membranes, a process in which the sugar residues present on the surface of the parasite, in particular alpha-D-mannose and N-acetylglucosamine, participate [48]. This work suggests that the production of sphingomyelinase helps break down the membrane components of the host cell. The in vitro cytopathic effect of T. vaginalis on MDCK epithelial cells has been intensively studied; these parasite trophozoites produce severe damage to the cell monolayer within 30 minutes and a rapid decrease of the transepithelial resistance [13,14]. Several researchers have described virulence factors, proteinases, and adhesins, such as CP30, a 30 kDa proteinase required for parasite adhesion to the target cell [49]; a 65 kDa cysteine proteinase (CP65) and a 120 kDa protein inducible by high iron concentrations, called AP120, both produced by T. vaginalis, are also thought to have cytotoxic activity [50]. Although the pathophysiological mechanisms of T. vaginalis are not completely defined, contact-dependent virulence factors and several secreted factors are now recognized as important causes of the cell damage underlying patients' symptoms [20,21]. The GT-15 strain of T. vaginalis was selected because it is one of the strains producing the highest culture yields [7] and because hemolytic and cytolytic phospholipase A activities had been detected and quantified in it by direct and indirect assays in previous studies [38]. In this work, a mass culture of T. vaginalis was fractionated to identify the sphingomyelinase activity present in the total extract (TE) and the subcellular fractions P30 and S30 (Figure 2). The fractions were obtained by mechanical homogenization to preserve, as far as possible, the subcellular compartmentalization and to prevent protein denaturation caused by freeze-thaw cycles [51,52]. Bovine serum was not included in the assays to avoid the presence of undefined factors in the reaction mixtures that could interfere with sphingomyelinase activity [53].
The sphingomyelinase activity was determined using the assay previously developed for phospholipase activity and published by Vargas-Villarreal et al. [46], adapted for the detection of sphingomyelinase activity with modifications so that the reaction mixture did not exceed 60 µL. This modification grants the method two major advantages: (a) it is possible to analyze a greater number of samples simultaneously, as it requires less radioactivity than other methods [54], and (b) it saves a considerable amount of reagents. The results showed the presence of sphingomyelinase activity in all of the extracts of T. vaginalis, principally in the P30 fraction. As P30 is a particulate fraction [55], it is likely that this activity resides in the plasma membrane of the protozoan. To discriminate whether type C or type D sphingomyelinase activity is present in the TE and P30 fractions, [14C]-sphingomyelin was used as substrate: if, after incubation with P30 and TE, the radioactivity was located in the chromatographic spot corresponding to [14C]-phosphorylcholine but not in that of [14C]-choline, the sphingomyelinase would be of type C. Type C sphingomyelinase activity was indicated when both fractions showed remarkable activity in the spots corresponding to [14C]-phosphorylcholine, and it was confirmed when these fractions were subjected to bidirectional chromatography. These results confirm that the sphingomyelinase activity present in T. vaginalis is of type C. It was also shown that, of all the degradation products of [14C]-sphingomyelin, 4% was [14C]-choline; this corresponds to a type D SMase-like activity, probably caused by an unidentified esterase. Previous studies have shown that trichomonads have protease activity [56]. Since trichomonas extracts may contain proteases, a protease inhibitor was used to obtain an activity free of the inhibitory effects of these enzymes. An inhibitor of proteases that does not interfere with sphingomyelinase activity was used in this assay [40,41]. The outcome was an increase of more than 10-fold in SMase activity (Figure 1), showing that proteases were affecting sphingomyelinase activity in all fractions. The effect of pH on the sphingomyelinase activity of the P30 fraction presents two peaks, one at pH 5.5 and the other at pH 7.5, the latter being almost twice as high (Figure 5). This suggests that P30 contains at least two isoforms of sphingomyelinase. Previous studies state that a first sphingomyelinase activity acting at pH 5.5 has Zn as its preferred cofactor, while a second isoform acting at a basic pH of 7.5 requires Mg as cofactor. For now, we characterize the latter as a Mg-dependent alkaline sphingomyelinase. It was also demonstrated that other cofactors, such as manganese and cobalt, can stimulate sphingomyelinase activity at pH 7.5 by 71 to 97%, whereas EDTA, calcium, mercury, and zinc inhibit this activity by 39 to 87% (Table 1). Conclusions The main contribution of this work is the identification of SMase activity in the total extract, P30, and S30 fractions of T. vaginalis. This activity is principally of type C and resides mainly in the P30 subfraction. It showed two peaks of activity, at pH 5.5 and 7.5. The activity at pH 7.5 can be increased using cofactors, principally Mg. In the future, it will be necessary to investigate the type D SMase activity to determine the presence of an esterase in the extracts and subfractions of T. vaginalis, as well as to study the SMase fraction that is active at pH 5.5.
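As a practical footnote, the unit arithmetic defined in the Methods (1 U = 1 pmol of substrate hydrolyzed per hour; specific activity normalized per mg of protein) can be sketched in a few lines of code. All numbers below, including the factor converting DPM counts to picomoles, are hypothetical placeholders and are not data from this study.

```python
# Hypothetical sketch of the SMase unit arithmetic (not study data).

def smase_specific_activity(dpm_phosphorylcholine, dpm_per_pmol,
                            incubation_hr, protein_mg):
    """Specific activity in U/mg/hr, with 1 U = 1 pmol of
    [14C]-sphingomyelin hydrolyzed per hour of incubation."""
    pmol_released = dpm_phosphorylcholine / dpm_per_pmol  # counts -> pmol
    units = pmol_released / incubation_hr                 # pmol/hr = U
    return units / protein_mg                             # U per mg protein

# Example: 150 min (2.5 hr) incubation of a 0.4 mg P30 sample.
print(smase_specific_activity(1200.0, dpm_per_pmol=500.0,
                              incubation_hr=2.5, protein_mg=0.4))
```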
2016-05-15T07:47:52.361Z
2013-08-19T00:00:00.000
{ "year": 2013, "sha1": "a229aae225d706ccf99046b6ec76fa43e36474ca", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/bmri/2013/679365.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "fb3517867f300a6aea9d86c2b7ebbd4e850003fe", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
255409996
pes2o/s2orc
v3-fos-license
Airborne Emissions from Si/FeSi Production The management of airborne emissions from silicon and ferrosilicon production is, in many ways, similar to the management of airborne emissions from other metallurgical industries, but certain challenges are highly branch-specific, for example the dust types generated and the management of NOX emissions by furnace design and operation. A major difficulty in the mission to reduce emissions is that information about emission types and sources as well as abatement and measurement methods is often scarce, incomplete and scattered. The sheer diversity and complexity of the subject presents a hurdle, especially for new professionals in the field. This article focuses on the airborne emissions from Si and FeSi production, including greenhouse gases, nitrogen oxides, airborne particulate matter also known as dust, polyaromatic hydrocarbons and heavy metals. The aim is to summarize current knowledge in a state-of-the-art overview intended to introduce fresh industry engineers and academic researchers to the technological aspects relevant to the reduction of airborne emissions. INTRODUCTION Elemental silicon is often referred to as ''silicon metal'' although it is not a true metal but a semimetal (metalloid). ''High-silicon alloys'' typically denote silicon-containing alloys in which silicon dominates the behavior in the production furnace. This normally includes metallurgical grade silicon (MG-Si) with 96-99% purity and ferrosilicon (FeSi) with a silicon content of 65-90%. [1][2][3] Table I shows the world production of silicon alloys in recent years. In 2015, an estimated 68% of the global high-silicon alloy production was produced in China. 4 In Europe, emissions and waste associated with the production of silicon and ferrosilicon are regulated at both national and EU levels, and there are similar divisions between state/territorial and federal legislations in the USA, Canada and Australia. Some of the national environmental agencies offer guidelines and recommendations, published to complement and detail the legal requirements of this industrial branch. [6][7][8][9][10] Throughout this article, Norwegian legislation and practices are often cited because they are among the most stringent and comprehensive sets of rules. In some countries, emission data are made publicly available through annual publications in open databases. [11][12][13][14] The management of airborne emissions from silicon production is, in many ways, similar to the management of airborne emissions from other metallurgical industries, such as other smelters, foundries and electrochemical metal production. Nonetheless, certain challenges are specific to the silicon alloy-producing industry, for example, the specific dust types and the management of NOX emissions. A commonly encountered difficulty in the mission to reduce emissions is that information on emission types and sources as well as abatement and measurement methods is often scarce, incomplete and scattered. Much progress in emission abatement has been achieved in the industry itself over the last decades, but such work is rarely published. At best, it may be partially presented at industry-specific conferences or in confidential reports, producing little or no documentation available through database search engines.
When the industry cooperates with research institutions, journal articles are published within a vast range of different scientific fields, such as atmosphere and aerosol physics, chemistry, process metallurgy, occupational hygiene and environmental monitoring. The sheer diversity and complexity of the subject presents a hurdle, especially for new professionals in this field. The aim of this literature review is to summarize current knowledge on emission types and concentrations, as well as suitable measurement techniques developed in and relevant to the Si- and FeSi-producing industry. It is an attempt to create a state-of-the-art overview, which can introduce researchers, engineers and others to the relevant technological aspects. The focus of this article is the airborne emissions of particular significance to the silicon industry, including greenhouse gases (GHG), nitrogen oxides (NOX), particulate matter (PM), polyaromatic hydrocarbons (PAH) and heavy metals. Silicon Production and Emissions The submerged arc furnace (SAF) is the core process for silicon production. Figure 1 schematically illustrates the production process steps and emission sources. The primary raw material for silicon production is quartz. The reductants include coal, charcoal, wood chips and sometimes coke. In addition, iron pellets or sinter are included in the raw materials for ferrosilicon (FeSi). The raw materials and reductants are crushed and weighed before they are charged to the furnace. The high-temperature process continuously consumes the carbon-based electrodes. Both ferrosilicon (FeSi) and metallurgical grade silicon (MG-Si) are typically produced this way and the product is hereafter simply referred to as the ''silicon alloy''. While the overall carbothermic reduction reaction of quartz in the furnace may be expressed as:

SiO2 + 2C → Si + 2CO(g)    (1)

the furnace is often described as a reactor consisting of two zones; an inner (lower) hot zone and an outer (upper) colder zone. In the hot zone, liquid Si and SiO gas are produced at temperatures around 2000°C through various sub-reactions, giving different stoichiometric versions of the overall reaction, for example:

SiO2 + Si → 2SiO(g) and SiO(g) + SiC → 2Si + CO(g)    (2)

In the outer zone, SiO ascending from the inner (lower) hot zone reacts with carbon materials according to:

SiO + 2C → SiC + CO    (3)

and condenses, depending on temperature, according to either:

2SiO(g) → Si + SiO2    (4)

or

3SiO(g) + CO(g) → 2SiO2 + SiC    (5)

Furnace operation and raw material properties will determine the Si yield, i.e. how much SiO gas leaves the furnace. This will, in turn, affect the composition of the furnace off-gas. The raw silicon alloy produced in the furnace hot zone is tapped from the furnace and refined in a slag process before it is cast in molds for cooling. The solidified product is also crushed, sized, weighed and packed at the plant before it is sent to the customer. 1,[15][16][17] The most commonly considered emission types, emission points and their origin, as discussed in this article, are listed in Table II. While most gases are generated in the furnace itself, dust is generated in almost every step of the silicon production process. The transport and handling of raw materials, reductants and products at ambient conditions generates PM through mechanical impact. Hot processes on the other hand, such as tapping, refining and casting, are sources of thermally generated fumes.

Table I. World production of silicon alloys; data in kilotons of silicon content 4,5

Nation     2012   2013   2014   2015
Bhutan a     61     61     72     72
Brazil      225    230    154    150
Canada       55     35     52     52
China      5050   5100   5500   5500
France      174    170    130    130
Iceland      75     80     75     75
India a      70     70     86     86
Norway      339    175    332    330
Russia      733    700    700
Most of the processes are equipped with ventilation systems and the collected off-gases are typically transported through a system of heat exchangers and filters before escaping through the chimney. The furnace dust collected in the filters is commercially termed ''micro-silica'' and is used in a variety of applications, such as concrete filling. ''Fugitive'' or ''diffuse'' emissions are emissions which do not pass through a stack, chimney, vent, duct or other functionally equivalent opening. Typical fugitive emissions in the metallurgical industry are gases and PM leaking into the working atmosphere from closed or encapsulated processes, where the hoods are not capable of capturing 100% of the emissions. Major Greenhouse Gases (GHG) In the production of silicon alloys, the carbonaceous reductants are usually coal and coke, but biocarbon (charcoal and wood chips) may also be used. The carbothermic reduction of the quartz to Si will create CO gas through the overall oxide reduction reaction (1). The CO gas will oxidise to CO2 at the furnace charge top in an open or semi-closed furnace. Methane (CH4) and volatile hydrocarbons are also generated in the combustion of the carbonaceous materials and electrodes (pre-baked for MG-Si production and Søderberg-type for FeSi production). Iron-bearing raw materials added as oxides in FeSi production are reduced to metallic iron through the CO gas and the volatile hydrocarbons in the furnace, hence generating CO2. In the top part of the furnace, the Boudouard reaction may also contribute to CO2 emissions:

2CO(g) → C(s) + CO2(g)

The extent of GHG emission is highly dependent on:
1. Type of alloy: Reduction of quartz requires more energy (i.e. carbon and electricity) than iron oxides, so the higher the Si content, the higher the GHG emissions.
2. Carbonaceous material mix: The levels of fixed carbon and volatile matter depend on the choice of carbon materials, which in turn affect GHG emissions. In national emission inventories, only emissions from fossil carbon are accounted for and therefore the use of charcoal lowers the reported specific CO2 emissions.
3. Furnace operation: Furnace operation and charging method strongly influence the emissions, in particular of CH4 and NOX. More even charging will generally reduce emissions as compared to batch-wise charging.
The tapped alloy will contain some of the added carbon, mainly in the form of carbides, as the solubility of carbon is generally low compared to other ferroalloys (typically in the order of 0.005-0.02% at 1400-1600°C, depending on the alloy Si content). Typical GHG emissions from this industry have been assessed and documented by a number of authors. [18][19][20] The 2006 IPCC Guidelines for National Greenhouse Gas Inventories 21 and Lindstad et al. 22 present a summary of pre-2006 work in terms of general, operation-based and reductant materials-based emission factors. These emission factors are still in use for national GHG inventories. Typically, most production emissions are reported on the basis of raw material type/use and production tonnage. Then, control measurements are carried out to verify the calculated numbers.
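As a rough illustration of the raw-material-based reporting described above, a carbon mass balance converts the fossil carbon charged to the furnace into CO2, after subtracting the small fraction retained in the alloy. The coal and electrode-paste carbon contents below echo the figures quoted in this article; the consumption numbers and the coke value are hypothetical placeholders.

```python
# Carbon mass-balance sketch for CO2 reporting (illustrative numbers).

M_CO2_PER_C = 44.0 / 12.0  # kg CO2 per kg C, assuming full oxidation

def co2_tonnes(consumption_t, carbon_fraction, c_in_alloy_t=0.0):
    """CO2 (tonnes) from carbon inputs minus carbon kept in the alloy."""
    carbon_in = sum(m * f for m, f in zip(consumption_t, carbon_fraction))
    return (carbon_in - c_in_alloy_t) * M_CO2_PER_C

# Per tonne of Si (hypothetical): coal, coke, electrode paste.
print(co2_tonnes(consumption_t=[1.2, 0.3, 0.1],
                 carbon_fraction=[0.81, 0.85, 0.94]))  # ~ t CO2 / t Si
```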
Coke and coals are the main contributors to the GHG emissions, but the carbon-based electrodes and electrode paste will also contribute substantially. Typical compositions are shown in Table III, which is a summary of data from Lindstad et al. 22 Recent personal communications with the Norwegian industry indicate that the total %C in coal and electrode paste may be approximately 81% and 94%, respectively. 23 Based on the above summarised raw material- and production-based emissions, generic (average) total emission factors for different high-Si alloys are summarised in Table IV. Note that CH4 emissions based on semi-closed furnaces with sprinkle charging and off-gas temperatures >750°C are used as default values in these factors. All hydrocarbon emissions are highly dependent on both alloy type and operation, which in turn leads to high variations and uncertainties in reported data. As noted by Lindstad et al., 24 the discrepancy between reported CH4 emission data and values calculated from standard emission factors is of the order of a factor of 10: the reported values are lower than those calculated using emission factors for a given production tonnage. With such large divergences of data, there is undoubtedly room for improvement. Nitrogen Oxides, NOx Nitrogen oxides (NO and NO2; often referred to as NOX) are important emissions due to their role in the atmospheric reactions creating fine particles and ozone smog. NOX emissions also contribute to a suite of year-round environmental problems, including acid rain, eutrophication (stimulated growth of algae and bacteria) and bronchial suffering. Figure 2 illustrates the temperature dynamics of the three main NOX formation mechanisms compared to typical processes and operation temperatures for silicon alloy production. The fuel and thermal formation mechanisms are the dominant mechanisms in electric arc furnaces producing ferrosilicon and silicon. Fuel NOX is formed by oxidation of the nitrogen components present in the solid fuel, while thermal NOX is formed by direct oxidation of nitrogen (from the air) at temperatures above 1400°C. Such temperatures are frequently observed in the furnace hood. [25][26][27] Combustion of gaseous SiO above the charge surface and in the tap-hole may locally increase the temperature. The amount of SiO(g) released from the charge will therefore also influence NOX formation, while any SiO-reducing measures also seem to reduce NOX emissions. The Norwegian Ferroalloy Association (FFF), SINTEF and the Norwegian University of Science and Technology (NTNU) have collaborated on NOX-reducing strategies for over 20 years and the investments have proven successful. 27,29,30 While some of this work is available through international journals and conference publications, a significant part of the results and achievements remain unpublished. For this paper, we have had the opportunity to read and evaluate some unpublished work in conjunction with the published papers, and we have, with permission from the authors and industrial partners, chosen to include brief summaries of some of the major, unpublished findings and conclusions in this field. Research initiatives have focused both on waste gas dynamics in general 31 and NOX emissions in particular (see also Table V; Fig. 4). 32 Efforts to understand the NOX formation have shown that furnace design and furnace operating procedures, such as stoking and charging, heavily influence NOX emissions. [33][34][35][36][37][38][39] Reported NOX emission values vary greatly, with typical values ranging from 500 to 1500 ton per site and year. 12
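To see why the thermal mechanism above is so sensitive to local hot zones, one can scale a Zeldovich-type rate with temperature. The activation temperature of roughly 38,000 K for the rate-limiting N2 + O step is a textbook value; the sketch below is an order-of-magnitude illustration, not a furnace model.

```python
# Relative thermal-NO formation rate vs. temperature (Zeldovich-type
# Arrhenius scaling; illustrative only).
import math

T_ACT = 38370.0  # K, rate-limiting N2 + O -> NO + N step

def relative_rate(T, T_ref=1673.0):
    """Rate at T relative to T_ref (1673 K = 1400 degrees C)."""
    return math.exp(T_ACT * (1.0 / T_ref - 1.0 / T))

for T in (1673.0, 1873.0, 2073.0):
    print(f"{T:.0f} K: {relative_rate(T):6.1f}x")
```

A 400 K rise above the 1400°C threshold increases the formation rate by nearly two orders of magnitude in this scaling, which is consistent with the emphasis on suppressing SiO combustion hot spots.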
The NOX production is inversely proportional to the silicon yield (low Si yield, high SiO losses), at least up to a certain level of silica fume formation. NOX also forms during tapping, when an oxygen lance is used to open up the tapping channel to increase the metal flow out of the furnace. 26,40,41 The general correlation between SiO and NOX emission is illustrated in Fig. 3. The NOX formation is also correlated to the moisture content of the furnace gas. 42,43 Moisture is introduced through the raw materials and, hence, will vary throughout the materials charging cycle. It is well documented that the injection of water vapor into a combustion engine increases the heat capacity (CP) of the off-gas, so that the temperature cannot exceed the limit for thermal NOX production. [44][45][46] Although not validated, it is likely that this effect, in part at least, explains the observations made in high silicon alloy production. 27,42,43 The effect of the furnace hood design and the inlet for false air on the NOX emission has been thoroughly studied and modeled by Kamfjord 26 and others, 47-49 but most of this work is not published. The main conclusions from the unpublished reports are that the amount of air and its flow path throughout the hood determine when, where and whether oxygen and nitrogen are mixed for a sufficiently long time in a sufficiently hot zone to produce NOX. Optimization of furnace hood designs is very complex and the trial-and-error approach is both time- and cost-consuming. Therefore, modeling capabilities are extremely valuable. A modeling concept for predicting turbulent flows, heat transfer, combustion and NOX formation in the furnace hood of a typical submerged arc furnace where silicon or ferrosilicon is produced has been developed. Currently, it is not accurate enough to calculate the true NOX emissions, but it can predict whether NOX increases or decreases when changes are made in the design or process operations. [50][51][52][53][54][55] Primary strategies for NOX reduction include modifications to the furnace operation, process management and/or the SAF system itself. [56][57][58] For silicon alloy production, this means: 40,59
- Reducing the combustion temperature through active cooling of the primary flame zone.
- Avoiding the ''blowing'' of SiO-rich gas up through the charge surface.
- Frequent stoking and semi-continuous charging. Grådahl et al. 25 found that it was possible to reduce NOX emissions from poorly operated furnaces by 50% if best practices were implemented.
- Recycling the flue gas to reduce excess air above the charge.
Secondary strategies include chemical reduction treatments for the flue gas from the furnace, such as selective catalytic or non-catalytic reduction with ammonia (NH3) or urea (CO(NH2)2), 60 as used in steel production. 61 To date, there is no literature on the use of secondary methods for silicon alloy production, and the effect of such chemical treatments on the silica fume quality is therefore unknown. SOX, Dioxin, and Other Gases A great number of gases may be present in the furnace flue gases, some of which are regularly measured while some are more occasionally detected and documented. Examples of such gases include sulfur oxides (SOX) and other compounds such as H2S and various volatile organic compounds (VOC). 1,62 SOX emissions are often mentioned as a type of gaseous emission which occurs in the silicon alloy industry, but very few authors seem to have specifically studied these emissions.
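One likely reason is that reported SO2 levels usually come directly from a sulfur material balance over the raw materials, as elaborated below. A minimal sketch of that bookkeeping, with hypothetical sulfur fractions and consumption figures, assuming all sulfur leaves the stack as SO2:

```python
# Sulfur material-balance sketch for SO2 reporting (illustrative).

M_SO2_PER_S = 64.0 / 32.0  # kg SO2 per kg S

def so2_tonnes(reductant_t, sulfur_fraction):
    s_in = sum(m * f for m, f in zip(reductant_t, sulfur_fraction))
    return s_in * M_SO2_PER_S

# Hypothetical annual consumption (tonnes) of coal and coke.
print(so2_tonnes([40000.0, 12000.0], [0.006, 0.009]))  # tonnes SO2/yr
```

With these placeholder inputs the result lands in the hundreds of tonnes per year, the same order of magnitude as the NOX emissions quoted above.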
The origin of SO2 gas is the sulfur content of the raw materials, primarily reductants, and the reported emission levels are typically calculated based on material balances. While abatement methods for post-filter cleaning of SO2 are available, the current installation rate is primarily inhibited by investment costs. 59 Grådahl et al. 25 showed the correlation of SO2 and CO gas emissions with certain furnace events called ''avalanches'' (collapse of charge burden near the electrodes), the occurrence of which could be reduced by use of semi-continuous charging procedures. The reported SO2 emissions from silicon alloy production are typically of the same order of magnitude as the NOX emissions. 1,12 Table V illustrates typical values of NOX and SOX off-gas concentrations, varying with furnace and product type. The values presented in the Table are averaged means of several measurements on different furnaces performed irregularly over some 20 years (1995-2016). Only a small fraction of the data has been previously published. 25,27 The measurement campaigns were carried out by SINTEF, NTNU and FFF in Norway. The results were compiled by S. Grådahl at SINTEF for this article, and the data are published with permission from FFF. Dioxins are a class of persistent organic pollutants (POPs) which are highly toxic to human health. Like polycyclic aromatic hydrocarbons (PAHs), dioxins may be both gaseous and particle-bound, depending on temperature. The generation of dioxins in combustion and metallurgical processes is, in a general sense, quite well established. 63,64,25 The destruction of dioxins and organic compounds, such as furans and PAHs, at high temperatures allows for efficient reduction or even elimination in modern, semi-closed SAFs. Furnace design and operation are key and can be optimized for close control of the flue gas temperature, see Fig. 4. Tveit et al. 59 suggest that the use of a heat exchange system (where off-gases are effectively cooled, post-furnace, in a steam boiler) will allow for higher off-gas temperatures and therefore have the same reducing effect on this type of emissions. Polycyclic Aromatic Hydrocarbons, PAH Polycyclic aromatic hydrocarbons (PAH) consist of organic structures having two or more fused aromatic (benzene) rings. Anthropogenic PAHs are typically formed by incomplete combustion of organic materials like oil, wood, or garbage. The lighter compounds, with few aromatic rings, are gaseous at room temperature whereas the larger molecular compounds are liquid or solid and commonly adsorbed on particles, for example, soot. PAHs belong to the Persistent Organic Pollutants (POP), a group of airborne emissions which are particularly resistant to degradation. Some of the PAH compounds are linked to various forms of cancer and the US Environmental Protection Agency (EPA) has identified 16 priority PAHs, based on their potential to induce adverse environmental and health effects. The main sources of PAH in silicon alloy production are the combustion of reductants in the furnace and the baking of electrodes. Typical PAH and NOX emissions for different furnace operations are listed/plotted in Fig. 4. Reported PAH values from Norwegian plants range from 10 to 70 kg per site and year. PAH emissions from industrial sites are estimated by use of emission factors. [65][66][67][68][69][70] PAH formation is linked to soot formation, which in turn is influenced by furnace design and operation and varies throughout the charging cycle. 59
As PAHs are destroyed at high temperatures, emissions can be significantly reduced by increased off-gas temperatures, as illustrated in Fig. 4 by Grådahl et al. 25 The reference case (A) represents a traditional open furnace with batch charging. The second case (B) is a semi-closed furnace, with feeding tubes through which the raw materials were fed semi-continuously (every minute). Case (C) represents the semi-closed furnace with average off-gas temperatures raised from 635°C to 812°C. The increased temperature leads to the destruction of PAH and dioxin but may also increase the formation of thermal NOX. Heavy Metals Heavy metals enter the production process as trace elements in the raw materials and electrodes and are redistributed to metal, slag, fume and gas. The concentrations depend on the alloy composition and the process temperatures. At temperatures of 1600°C or higher, certain metals such as Zn, Pb, Cd, Na, Mn and Fe go into the gas phase and may escape as metal vapor. When the off-gas temperature drops, the metal vapors are condensed and therefore often collected with the dust. Myrhaug and Tveit 71,72 showed that a boiling point model can be used to predict the redistribution of an element in the furnace, as shown in Fig. 5. Naess et al. 73 showed that the model is also applicable to the refining ladle, with some modifications due to the oxidation of elements, as shown in Fig. 6. The Norwegian legislation for heavy metal emissions appears to be one of the most rigorous in the world, requiring emission control of 11 trace elements for silicon alloy production facilities. These trace elements are: As, Cd, Co, Cr, Cu, Hg, Mo, Ni, Pb, Se and Zn. The European Union, the USA and other partners to the United Nations Environment Program (UNEP) have put special emphasis on lead, cadmium and mercury. 74 The reported emissions of trace elements are often based on material balances and may vary greatly between plants, but an example is shown in Table VI. 59 Mercury constitutes a special case among the airborne heavy metal emissions as international legislation has long been stringent with respect to this metal. 75 In silicon alloy plants, the particulate control devices (e.g., fabric filter or wet scrubber) capture the particle-bound mercury. The more volatile elemental mercury is emitted to the atmosphere if no further gas treatment is applied. Hg and Cd levels in the off-gas may be reduced by the use of bag filters with an adsorbent injection (such as activated carbon or lignite coke). 8,76,77 Emission estimates to air through the filter systems must cover both gaseous and particle-bound heavy metals, but a major challenge for the estimation is the low concentrations of these elements in the material flows. Mercury typically has detection limits (DL) given in units of parts per billion (ppb) whereas the other heavy metals have DL of the order of magnitude of parts per million (ppm). Hence, very significant measurement uncertainties are introduced and it is often impossible to ''close'' the material balance for individual elements. The total uncertainty for elements such as Co, Hg and Mo is often around or above 100%. These uncertainties may be lowered by continuous, on-line measurements after the filter systems, but such measurements are often practically challenging. Additionally, large uncertainties are also related to sampling and representability. 73,78
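The boiling-point model mentioned above can be caricatured in a few lines: elements whose boiling points lie below the local process temperature are expected to report largely to the gas phase and, later, to the collected dust. The boiling points below are standard handbook values; a real model also weighs vapor pressures and oxidation, so near-threshold elements such as Pb partition only partially.

```python
# Crude boiling-point screening for element redistribution (handbook
# boiling points in degrees C; illustrative, not the published model).

BOILING_C = {"Zn": 907, "Pb": 1749, "Cd": 767, "Na": 883,
             "Mn": 2061, "Fe": 2862, "Cu": 2562}

def likely_volatile(process_temp_c):
    """Elements expected to vaporize substantially at this temperature."""
    return sorted(el for el, bp in BOILING_C.items() if bp < process_temp_c)

print(likely_volatile(1600))  # -> ['Cd', 'Na', 'Zn']
```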
PARTICULATE EMISSIONS Airborne PM is an important constituent in the diffuse emissions escaping the plants and may affect not only the air quality inside the plant but also that of the local, urban communities as well as the environment at large. The PM may be harmful if inhaled, and exposure to high levels of particles has been linked to cancer, pneumonia, chronic obstructive pulmonary disease (COPD) and other respiratory and cardiovascular syndromes. [79][80][81][82][83][84][85][86] Almost all processes involved in silicon alloy production produce PM in some form. In this article, the terms particulate matter and dust are used as synonyms and primarily used for solid particles. The term aerosol includes both liquid and solid particles and the term fume relates to thermally generated aerosols. The furnace generates most of the PM, through combustion of escaping SiO gas from the furnace hot zone to SiO2 above the charge surface. A typical metal yield of between 80% and 90% means that up to 10-20% of incoming Si-units in the furnace escape as fumed silica. Modern ventilation and filter systems have enabled efficient collection of this type of dust and it even constitutes a profitable by-product (microsilica). A typical Norwegian PM emission limit is approximately 30 mg/Nm3. The characteristics of microsilica have been described in the literature as agglomerates of amorphous silica spheres. 71,87,88 The PM in the silicon alloy industry includes both fine (FP) and ultrafine particles (UFP), i.e. particles with aerodynamic diameters of <2.5 µm and <0.1 µm, respectively. UFPs represent a rather special case of particulate matter as the large surface area implies higher reactivity and different physico-chemical properties than the larger particles. [89][90][91] Current administrative norms as well as other limits are established in mass concentrations, but UFPs make little contribution to the mass concentration. [92][93][94] Measurements of dust and NOX above the furnace charge and in the off-gas show strong correlations. The SiO ''combustion'' is a highly exothermic reaction which produces high-temperature zones locally. In these high-temperature zones, thermal NOX production is promoted (see the NOX section). 26,95 It is clear from Table IV that the major sources of PM, both inside the plant and escaping from the plant, are those which involve the liquid alloy. Tapping, refining, casting and other operations where high-temperature liquid alloy is in contact with air produce a silica fume which has many similarities to the microsilica. 26,73,78,92,93,96,97 Figure 7 shows an SEM picture and an ELPI particle size distribution for thermally generated fume particles from ferrosilicon tapping. Naess et al. 96,98 studied the process by which this type of silica dust forms, and concluded that the main dust formation mechanism is the active oxidation of the liquid silicon alloy, while a small fraction (<1%) of the dust particles would form by splashing (droplet expulsion). 97 The active oxidation was found to occur in two steps in which the silicon would first react with oxygen to form SiO gas, which would then oxidize further to SiO2. 97,[99][100][101] The kinetics of this process is governed by oxygen access to the alloy surface, and therefore highly dependent on the dynamics of the alloy surface exposed to the air. 102,103 Depending on the gas flow rate, a refining ladle for MG-Si generates 0.8-1.7 kg SiO2 per ton Si.
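The ladle figure above lends itself to a quick scaling exercise; the annual production tonnage below is hypothetical.

```python
# Back-of-envelope refining-fume estimate from the 0.8-1.7 kg SiO2
# per tonne Si range quoted above (hypothetical plant output).

def ladle_fume_tonnes(si_tonnes, kg_per_tonne=(0.8, 1.7)):
    lo, hi = kg_per_tonne
    return si_tonnes * lo / 1000.0, si_tonnes * hi / 1000.0

lo, hi = ladle_fume_tonnes(30000.0)  # 30 kt MG-Si per year
print(f"{lo:.0f}-{hi:.0f} tonnes of refining fume per year")
```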
The dust from the handling and transport of solid materials, such as the product and the raw materials, is fundamentally different from the dust generated by the active oxidation. It is typically coarser, and the physical and chemical properties depend on the material from which it was generated. Raw material handling and transport can, for example, produce airborne crystalline alpha-quartz, which is a health hazard in its own right. No literature on the generation, collection and reduction of the mechanically generated PM in high-Si alloy smelters has been found. EMISSION MEASUREMENT TECHNIQUES Off-gas monitoring in MG-Si and FeSi production presents a couple of specific challenges compared to emission measurements in other industries. The gas temperature in proximity of the furnace is very high, as illustrated in Fig. 2, and this constitutes a major difficulty, as described below. Another difficulty is the high PM concentrations before the filter. The wear on instruments installed in particle-laden gas streams is considerable, and material deposits on the instruments risk completely offsetting the results obtained in such conditions. In addition, data handling and the interpretation of results is made difficult by the varying conditions caused by the industrial operation. The IPPC BREF documents 10 offer some general guidelines for emissions monitoring. Timing considerations, such as averaging time and sampling/data collection frequency, are of prime importance and depend heavily on the processes. Hence, process understanding is essential. Figure 8 gives an overview of the variety of measurement methods available for airborne emissions. A bordering field of science is that of occupational hygiene, a topic which is outside the scope of this article and will not be covered here. Hence, only measurement methods using stationary devices will be described. The concentration of the detected pollutant is read as a function of time with in situ, direct-reading or in-line instruments. They operate in real time and are often equipped with data logging. Indirect instruments are samplers which collect the pollutant over a certain time interval with subsequent laboratory analysis. This is sometimes referred to as ex situ analysis. Active or extractive sampling refers to the use of a pump to draw the polluted air into the instrument whereas passive methods operate without alteration of the air flow. 10,104,105 In addition to in situ and extractive measurements, material balances (process- or site-specific mass flow calculations) are often carried out to estimate the emissions of, for example, heavy metals and CO2. To report the correct emissions of the different components, representative flow measurement in the off-gas channel is an essential complement to correct concentration measurements. Different measuring principles are used, like pitot tubes, annubars, orifice plates, ultrasonic flowmeters and thermal mass flow meters. Extractive measurement techniques applied to off-gas ducts and pipes often call for isokinetic sampling and/or dilution, which can be extremely challenging in terms of practical operation. Both procedures will also, inevitably, introduce additional error sources and increase uncertainty, especially under the non-ideal conditions of industrial operations. [105][106][107] Gas Measurements Round-the-clock gas measurements are desirable, but may be difficult to achieve in high-temperature dusty gas streams.
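Staying for a moment with the flow-measurement principles listed above, the pitot tube is the simplest: a dynamic-pressure reading and the gas density give the duct velocity via Bernoulli's relation. The pressure, density and duct area below are hypothetical.

```python
# Pitot-tube duct flow sketch (Bernoulli; illustrative numbers).
import math

def duct_velocity(dp_pa, rho_kg_m3):
    """v = sqrt(2 * dp / rho), in m/s."""
    return math.sqrt(2.0 * dp_pa / rho_kg_m3)

def volumetric_flow(dp_pa, rho_kg_m3, area_m2):
    return duct_velocity(dp_pa, rho_kg_m3) * area_m2

# Hot off-gas: low density (~0.6 kg/m3 at roughly 300 degrees C), 150 Pa.
print(f"{volumetric_flow(150.0, 0.6, 3.0):.0f} m3/s")
```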
Most melting plants have no chimney after the baghouse filter and, therefore, all the gas measurements have to be done in the duct before the filter. This is an extremely harsh environment and continuous measurements are therefore very challenging. Optical sensors are the first choice, typically recommended for most continuous industrial measurement applications, but their use is often limited by high gas temperatures. Instruments based on extractive principles will automatically decrease the gas temperature (as the gas is drawn out of the duct), which enables more straightforward detection of gas species. The high dust load in the furnace off-gas will, however, be very challenging for most of the commercially available devices. Nonetheless, these types of instruments are frequently used in the industry today, albeit not often for continuous measurements. The second-best option is systematic measurements for some hours at a time. This is the most common industrial approach for gas component measurements in off-gas from silicon alloy production. In situ Measurements In situ gas analyzers measure the gas directly inside a duct or across an open path (0-500 m) with very short response times. The measuring principle is usually some form of optical spectroscopy, often ultraviolet (UV) or infrared (IR). In smelters, the instruments are selected based on their ability to operate in hot, dirty and dusty gas conditions. The tunable diode laser (TDL) has become standard instrumentation for NO measurements at most Norwegian plants, 25,108 and can be used in off-gas ducts prior to the baghouse filter, despite the high temperatures and high dust concentrations. It can detect many different gas species including NO, H2O, NH3, HCl, and HF in a gaseous mixture if coupled with laser absorption spectrometry (TDLAS). The advantage of TDLAS over other techniques for concentration measurement is its ability to achieve very low detection limits and very short response time. Extractive Gas Sampling Devices Extractive techniques are acceptable for quantification of non-reactive gas species, such as NO, CO and CO2, as they may be allowed to cool down before detection. Species like HCl, SO2 or H2O, however, have to be kept at a constant, high temperature, which may be achieved by an electrically heated sample hose. Fourier transform infrared (FTIR) devices have been tested and proven useful in ferroalloy industries in spite of the harsh conditions. 42,43 Several gas species may be detected simultaneously, as well as solid or liquid aerosols, but components with a symmetrical electron binding cannot be assessed. Grådahl et al. 109 showed that an FTIR (with in-house analysis software) can be used over an open path, such as a slag-pit, in ferroalloy smelters. For gas components with a symmetrical electron binding (such as H2, N2 and O2), mass spectrometry (MS), gas chromatography (GC) or Raman spectroscopy may be used, but these methods tend to have slow response times. Promisingly, Kjos et al. 110 demonstrated the industrial relevance of a combined GC-MS instrument in flue gas from aluminum electrolysis and were able to detect very low (sub-ppb) concentrations. PM Measurements While many different measurement techniques offer the ability to characterize and quantify the airborne particulate matter, very few of the commercially available instruments are tested in industrial melting plants or validated for the specific types of PM encountered there.
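Before turning to particles in detail, note that the absorption-based gas analyzers above (TDL/TDLAS, FTIR) all rest on the Beer-Lambert relation: the number density follows from the measured transmittance, the absorption cross-section of the probed line and the optical path length. The line parameters below are placeholders, not instrument specifications.

```python
# Beer-Lambert sketch behind absorption-based gas analyzers.
import math

def number_density(transmittance, sigma_cm2, path_cm):
    """N = -ln(I/I0) / (sigma * L), in molecules per cm3."""
    return -math.log(transmittance) / (sigma_cm2 * path_cm)

# Hypothetical line: sigma = 3e-19 cm2, 2 m path, 2% absorption.
n = number_density(0.98, 3e-19, 200.0)
print(f"{n:.2e} molecules/cm3")
```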
PM characteristics, such as optical properties, sizes, shapes, density, etc., may heavily influence the measurements, and site-specific calibrations are typically necessary to ensure reasonable accuracy. Airborne particulate matter is often classified by the aerodynamic diameter (Dp) of the particles, but the terminology is far from unambiguous. In occupational hygiene and medicine, exposure terminology is based on how the aerosols may penetrate the body through the respiratory system. It is then common to distinguish between the inhalable, thoracic and respirable fractions, where the term ''respirable'' indicates that the aerosol may penetrate the body all the way down to the alveolar region of the lungs. 105,107 Another common and more technical terminology for airborne particulate matter is based on the so-called PM standards. For example, PM10 refers to all aerosols with Dp < 10 µm and PM2.5 refers to the concentration (in total mass per unit volume of air) of particles with aerodynamic diameter <2.5 µm. PM2.5 is a subset of PM10 and is sometimes referred to as fine particles. 104,107,111 Yet other terminologies exist and are used in parallel with the aforementioned ones in the literature. For instance, Preining 112 defines the terms fine, ultrafine (UFP) and nanosized (NP) as particles with Dp < 750 nm, Dp < 100 nm and Dp < 25 nm, respectively. In situ PM Measurements Passive instruments are typically long-range instruments for measuring across a gas stream in a duct. These are typically laser-based instruments and other optical sensors. Such instruments have been used in the silicon alloy industry to assess fuming rates, 78,97 and to continuously monitor PM emissions as well as the workplace atmosphere. 113 For roof measurements, Grådahl et al. 113 recommend the use of directional anemometers (able to detect not only wind speed but also direction) in combination with long-range, open-path devices. Unpublished reports, [114][115][116] however, emphasise a need for site-specific calibration, which can be performed using gravimetric filters. A number of filter cassettes are then mounted on wires along the laser line-of-sight, and the mass readings of the lasers are compared and adjusted to the dust mass collected by the filters as a means for calibration. Several reports indicate that, without such calibration, the optical instruments are not reliable for concentration measurements but may still be useful for relative measurements for improved process understanding and control. Extractive PM Measurements Extractive measurements on hot, particle-laden or otherwise dirty off-gases typically require dilution of the gas stream before it enters the sensitive instrument. Dilution may, however, be challenging in terms of practical operation and also introduces additional error sources with significant implications for data treatment and calibration. The purpose of the dilution is typically two-fold: first, to cool the gas stream to temperatures that can be handled by the instrument; and second, to dilute the particle concentration to a level which can be detected and/or quantified by the instrument and/or avoid condensation or clogging inside the instrument. The cooling in itself introduces error sources such as potential condensation of gases into aerosols and deposition of substances onto surfaces inside the instrument. 104,105,117
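The dilution bookkeeping itself is simple; it is the uncertainty in the dilution ratio that propagates directly into the reported concentration. A sketch with hypothetical flows and an instrument reading:

```python
# Dilution correction for extractive PM sampling (illustrative).

def dilution_ratio(sample_lpm, dilution_air_lpm):
    return (sample_lpm + dilution_air_lpm) / sample_lpm

def duct_concentration(measured_mg_m3, dr):
    """Back-calculate the duct concentration from the diluted reading."""
    return measured_mg_m3 * dr

dr = dilution_ratio(sample_lpm=1.0, dilution_air_lpm=49.0)
print(dr, duct_concentration(4.2, dr), "mg/m3")  # DR = 50
```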
A number of extractive methods for PM quantification have been tested in the silicon alloy industry, including gravimetric filters, optical devices, mobility sizers and impactors. Gravimetric filters offer a robust, cheap and simple way to assess PM weight concentration. 104,113 A standard optical particle counter (OPC) and a condensation particle counter (CPC) appear to be less useful in silicon plants, as they are designed for PM concentrations far lower than those encountered. 94,113 Mobility particle sizers 117,118 and electrical low-pressure impactors (ELPI) [92][93][94] appear better suited for PM measurements in silicon plants, although these instruments are larger, heavier and more cumbersome to operate. Data interpretation is also more challenging, especially for the ELPI. [119][120][121] CONCLUSION In this literature review, current knowledge developed in, and relevant to, the Si- and FeSi-producing industry has been summarized. The article is primarily based on information available in the open literature, but some previously unpublished reports, of utmost relevance to the topics, have also been summarized and included. It contains state-of-the-art overviews for gaseous and particle-bound airborne emissions. Relevant technological aspects for the control and reduction of GHG, NOX, PAH, heavy metals and PM are introduced. A number of research areas that need prioritized consideration have been identified:
- Emissions of GHGs other than CO2, such as hydrocarbons. For methane, the discrepancy between the very limited reported emission data and emissions calculated by standard emission factors is of the order of a factor of 10.
- The use of chemical NOX reduction treatments for SAF off-gases and the potential effect of such treatments on the silica fume quality.
- The mechanical generation of dust from handling and transport of raw materials as well as solid products, which has not been studied. Effective methods for the collection and reduction of such dust are needed.
Most gaseous emissions are reported and monitored by use of emission factors. The overall GHG emissions from FeSi and MG-Si production are reasonably well understood and quantified, with the exception of hydrocarbons. The extent of GHG emissions is highly dependent on the carbon and electricity consumption (which in turn depends on the type of Si/FeSi alloy), the carbonaceous material mix, charging methods and furnace operation. The furnace design, flue gas management and furnace operating procedures, such as stoking and charging, heavily influence NOX emissions. Measurements show strong correlations between PM and NOX formation above the furnace charge. Localized temperature control can only be achieved by limiting the extent of silica fume production through SiO(g) combustion. Close flue gas temperature control is extremely important for several reasons. One reason is the delicate trade-off between PAH and NOX management. PAHs are destroyed at high temperatures, and PAH emissions can be significantly reduced when off-gas temperatures are kept above 800°C. PAH and heavy metals are simultaneously present in gaseous and particulate forms, and their distribution is highly temperature-dependent. The particle-bound compounds are often collected in the particulate control devices (e.g., fabric filter or wet scrubber). The more volatile compounds, however, will risk being emitted to the atmosphere if no further gas treatment is applied, for example, by the use of bag filters with adsorbent injection to remove Hg and Cd.
It is particularly important to consider uncertainty parameters arising from every step of the monitoring process, yet their estimation is often less than trivial. Accuracy and sample representability often limit the trustworthiness of the obtained data. Material balances are, for example, sensitive to representability issues, and heavy metals assessment is highly uncertain due to the detection limits of currently available analysis methods. For flue gas measurements, averaging time and frequency are of prime importance, and such timing requirements always depend heavily on the processes at hand. Solid process understanding is therefore essential if useful data are to be produced. Round-the-clock gas measurements are desirable, but may be difficult to achieve in high-temperature dusty gas streams. Smelter flue gas ducts present an extremely harsh environment where sampling is very challenging and the available instruments must be selected based on their ability to operate under such conditions. Dilution and isokinetic sampling requirements may present additional difficulties and typically increase uncertainty values. PM measurement principles often remain to be validated for the specific types of dust encountered in Si and FeSi smelters. Hence, site-specific calibrations are recommended to ensure reasonable accuracy. ACKNOWLEDGEMENTS Funding was provided by Norges Forskningsråd (NO) (Grant No. 237738). This article was enabled through funding from the Research Council of Norway through the center for research-driven innovation (SFI) Metal Production. The authors wish to thank Dr. Edin Myrhaug and Dr. Nils Eivind Kamfjord at Elkem AS for comments and discussions. OPEN ACCESS This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
2023-01-04T15:23:36.446Z
2016-10-25T00:00:00.000
{ "year": 2016, "sha1": "9208118a10be0946841067ce32e3d5c81bfaf936", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s11837-016-2149-x.pdf", "oa_status": "HYBRID", "pdf_src": "SpringerNature", "pdf_hash": "9208118a10be0946841067ce32e3d5c81bfaf936", "s2fieldsofstudy": [ "Environmental Science", "Engineering" ], "extfieldsofstudy": [] }
196609974
pes2o/s2orc
v3-fos-license
ADMET studies (toxicological) of plastic by-products in causing breast cancer Plastic pollution is one of the major threats to the world today. Plastic pollution refers to the accumulation of plastic and its products, causing severe adverse environmental effects not only on humans but also on animals and plants.1 Plastics are among the major components of land and water pollution.2 Although known for these adverse effects, plastic has become an integral part of human life. It was noted earlier that industrial effluents hamper normal physiological responses,3,4 mimicking steroids, retinoids, thyroid hormones and lipophilic hormones.5–9 Introduction Plastic pollution is one of the major threats to the world today. Plastic pollution refers to the accumulation of plastic and its products, causing severe adverse environmental effects not only on humans but also on animals and plants. 1 Plastics are among the major components of land and water pollution. 2 Although known for these adverse effects, plastic has become an integral part of human life. It was noted earlier that industrial effluents hamper normal physiological responses, 3,4 mimicking steroids, retinoids, thyroid hormones and lipophilic hormones. [5][6][7][8][9] Bisphenol A (BPA) is used in the preparation of the polycarbonate plastics seen widely in consumer products. 10 It was first synthesized in 1891 11 and is usually known as an estrogenic agent, 12 both in vivo and in vitro, 10 binding to the same receptor as the female hormone does. Vandenberg et al. 13 stated in 2009 that humans come into contact with BPA primarily when they consume food and water from the materials used for containers and packages. This endocrine disruptor enters the body by ''leaching'', which occurs when the polymer breaks down to release BPA monomers. 14,15 BPA binds and activates the estrogen receptors ER α and ER β, but with a much lower affinity than estradiol 16 (Figure 1). Physiologically, BPA can alter the ovarian cycle, interfere with embryonic development, and play a role in many physiological and biological changes in women, 10,17-20 besides causing a variety of cancers, 21,22 including breast cancer. Chemically, BPA, (CH3)2C(C6H4OH)2 (Figure 2), is a colorless compound belonging to the diphenylmethane derivatives and bisphenols, with two hydroxyphenyl groups. It is soluble in organic solvents. The objective of the present article is to assess and understand the ADMET/toxicological properties of Bisphenol A and its metabolite, 4-methyl-2,4-bis(4-hydroxyphenyl)pent-1-ene (MBP), after they enter the human body. Result and discussion The ADMET descriptors give an idea of the absorption, distribution, metabolism, and excretion-toxicity in the pharmacokinetics of a given compound.
The following properties were studied for both compounds, taking the standard parameters as constant: ADMET Absorption, which predicts Human Intestinal Absorption (HIA) after oral administration; ADMET Aqueous Solubility, which predicts the solubility of each compound in water at 25°C; ADMET Blood Brain Barrier, which predicts the ratio of the compound's concentrations on both sides of the blood-brain membrane after oral administration; ADMET Plasma Protein Binding, which predicts whether or not a compound is likely to be highly bound to carrier proteins in the blood; ADMET CYP2D6 Binding, which predicts cytochrome P450 2D6 enzyme inhibition; and ADMET Hepatotoxicity,23 which predicts dose-dependent human hepatotoxicity. Table 1 shows the ADMET results for MBP and BPA, Figure 3 represents the ADMET plot, and Figure 4 demonstrates the graphical, comparative ADMET results.

Blood brain barrier
The results state that MBP has a higher penetration level than Bisphenol A.

ADMET BBB penetration level
The obtained results state that the penetration levels of both compounds fall into the high-penetration category, with a level of 1.

ADMET absorption
The level obtained for both compounds is 0, indicating that they can be absorbed easily by the human intestinal system.

ADMET solubility/level
The compounds show a moderate to good level of solubility.

ADMET hepatotoxicity
The hepatotoxicity model predicts potential organ toxicity for a wide range of structurally diverse compounds. The generated values state that the compounds are likely to cause liver toxicity, with a score of 0.7.

ADMET CYP2D6
The generated values show that the compounds are non-inhibitors and unlikely to inhibit the CYP2D6 enzyme, with scores of 0.346 and 0.247, respectively.

Conclusion
The present article elucidates the toxic effects of plastic and its byproducts. In order to reduce their hazardous effects on humans, it is of utmost importance to reduce exposure to them. BPA and MBP have already been shown to alter and induce notable changes in breast cancer and in the mammary glands. Their binding affinities with ERα and ERβ were also studied and analyzed by Barker and Chandsawangbhuwana. In conclusion, I strongly feel that there should be more awareness among people on this aspect. Further, people should be advised to minimize eating canned food and food from polycarbonate plastic containers, and to choose fresh foods.
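To recap the descriptor readings above in an executable form, here is a minimal sketch that turns the reported scores into qualitative calls. The cutoff values and level semantics are illustrative assumptions (loosely following common Discovery Studio ADMET conventions), not thresholds stated in this article.

```python
# Illustrative ADMET interpretation sketch; thresholds are assumptions,
# only the raw scores below are taken from the article's results.

ADMET_SCORES = {
    "BPA": {"hepatotoxicity": 0.7, "cyp2d6": 0.346, "absorption_level": 0, "bbb_level": 1},
    "MBP": {"hepatotoxicity": 0.7, "cyp2d6": 0.247, "absorption_level": 0, "bbb_level": 1},
}

def interpret(scores):
    """Map raw descriptor scores to qualitative ADMET calls."""
    return {
        # probability >= 0.5 read as 'likely hepatotoxic' (assumed cutoff)
        "hepatotoxic": scores["hepatotoxicity"] >= 0.5,
        # probability >= 0.5 read as 'CYP2D6 inhibitor' (assumed cutoff)
        "cyp2d6_inhibitor": scores["cyp2d6"] >= 0.5,
        # level 0 denotes good human intestinal absorption
        "good_absorption": scores["absorption_level"] == 0,
        # level 1 denotes high blood-brain barrier penetration
        "high_bbb_penetration": scores["bbb_level"] == 1,
    }

for compound, scores in ADMET_SCORES.items():
    print(compound, interpret(scores))
```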
Prediction of improved antimalarial chemotherapy of artesunate-mefloquine in combination with mefloquine sensitive and resistant Plasmodium falciparum malaria

Background: Declining susceptibility of Plasmodium falciparum to mefloquine has been reported in South-East Asia. Revisiting mefloquine pharmacokinetics-pharmacodynamics (PK/PD) could assist in finding new appropriate dosage regimens in combination with artesunate as a three-day course treatment. Objective: This study aimed to investigate promising alternative artesunate-mefloquine combination regimens that are effective for the treatment of patients with mefloquine-sensitive and resistant P. falciparum malaria. Methods: Data collected during 2008-2009 from 124 patients with uncomplicated P. falciparum malaria were included in the analysis, 90 and 34 patients with sensitive and recrudescent responses, respectively. All patients were treated with a three-day combination of artesunate-mefloquine. Population PK/PD models were developed and validated against clinically observed data. Simulations of the clinical efficacy of alternative mefloquine regimens were performed based on mefloquine sensitivity, patients' adherence and parasite biomass. Results: The developed PK/PD models described the clinically observed data well. For mefloquine-resistant P. falciparum, a three-day standard regimen of artesunate-mefloquine is suitable (>50% efficacy) only when the level of parasite sensitivity is below 1.5-fold of the cut-off level (IC50 < 36 nM). For mefloquine-sensitive parasites with IC50 < 23.19 nM (0.96-fold), all regimens provided satisfactory efficacy. In isolates with an IC50 of 24 nM, regimen-I is recommended. Curative treatment criteria for mefloquine and artesunate were C336h (>408 ng.mL-1) or Cmax/IC50 (>130.1 g.m/M), and Cmax/IC50 (>381.2 g.m/M), respectively. Conclusions: Clinical use of a three-day standard artesunate-mefloquine is suitable only when the IC50 of P. falciparum isolates is lower than 36 nM; otherwise, other ACT regimens should replace it. For mefloquine-sensitive parasites, a dose reduction is recommended when the IC50 is lower than 23.19 nM.

Introduction

Malaria-related mortality has been tremendously reduced during the last two decades since the introduction of artemisinin-based combination therapy (ACT) [1]. Artesunate-mefloquine is one of the commonly used ACTs for the first-line treatment of uncomplicated Plasmodium falciparum in Southeast Asia and Africa [2-18]. In South-East Asia, its clinical efficacy has continuously declined, with failure rates of 10-30% [1]. The highest failure rate was reported in the Thai-Myanmar border areas (42-day cure rate of 72.58%) in 2010 [18]. This high failure rate was attributed to pharmacokinetic factors (inadequate drug concentrations) and reduced susceptibility or resistance of P. falciparum to either mefloquine or artesunate, or both. Inadequate blood concentrations of mefloquine were shown to influence treatment response more significantly than those of artesunate. Notably, the current three-day course regimen may not provide adequate drug concentrations against resistant parasites. Revisiting the pharmacokinetic and pharmacodynamic relationship between the drug concentration-time profile and the clinico-parasitological response following this combination therapy may offer effective alternative dosage regimens for both mefloquine-sensitive and resistant P. falciparum.
Pharmacokinetic-pharmacodynamic (PK/PD) modelling has been successfully applied to predict appropriate dosage regimens of various antimalarial drugs for clinical use [19-23], exemplified by SJ733 (an oral ATP4 inhibitor) [23]. The current study aimed to investigate promising alternative regimens of artesunate-mefloquine with improved efficacy to cope with multidrug-resistant P. falciparum using a PK/PD model approach.

Materials and methods

The flow-chart of the study framework is shown in Fig 1.

Data source and study population

Data from the previously published article on the clinical efficacy of the three-day artesunate-mefloquine combination on the Thai-Myanmar borders during 2008-2009 were used for pharmacokinetic-pharmacodynamic (PK/PD) analysis [18]. All patients were diagnosed with uncomplicated P. falciparum malaria. In brief, 124 patients (aged 16-50 years) were included in the study, 90 and 34 patients with sensitive and recrudescent responses, respectively. All received 200 mg of artesunate and 750 mg of mefloquine on day 1, followed by 200 mg of artesunate and 500 mg of mefloquine on day 2, followed by 30 mg of primaquine on day 3.

Population PK/PD modeling

The PK/PD models for mefloquine and artesunate/dihydroartemisinin (the active metabolite of artesunate) were constructed using nonlinear mixed-effects modeling (MonolixSuite software, version 2021R1, Antony, France; Lixoft SAS, 2021). Pharmacokinetic parameters were estimated using the built-in stochastic approximation expectation-maximization algorithm. Various compartment models with different-order absorption and elimination were tested for fit against the drug concentration-time data. Pharmacokinetic parameters were normally distributed when transformed to log-scale. The pharmacodynamic model was evaluated using an Emax (turnover rate) model with production inhibition. The model was corrected for the fraction of unbound drug (fu) in plasma and tissue and was tested for sigmoidicity. Pharmacodynamic parameters were tested following non-transformed and log-normal transformation. The pharmacodynamic equation is shown below:

d(parasite)/dt = Kin × [1 − C/(C + IC50)] − Kout × parasite

where parasite is the number of parasites; C is the drug plasma concentration-time profile; IC50 is the half-maximal inhibitory concentration that inhibits parasite growth by 50%; Kin is the production rate of the indirect turnover model with full inhibition of production; and Kout is the degradation rate of the parasite. The residual variability and the types of error models were evaluated using proportional, constant, and combined error models with power law. Predefined criteria for model selection were: (i) the decrease in minimum objective function value (OFV), Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC), and Corrected Bayesian Information Criterion (BICc); and (ii) the percentage of root mean square errors (RSE%) and graphical goodness of fit (GOF), including the observed versus predicted concentrations, scatter plots of residuals, and the visual predictive check (VPC). The significance level for the inclusion of covariates (age, sex, bodyweight, and mefloquine level before treatment) in the model was set at α = 0.05. Plots of the model-based individual predictions (IPRED) and population predictions (PRED) versus observed concentrations (GOF) were used for model evaluation. The VPC compared the observational data with simulated data (1,000 patients) at the 10th, 50th, and 90th percentiles.
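To make the turnover model above concrete, the following is a minimal simulation sketch: it integrates the pharmacodynamic equation with a simple one-compartment, zero-order-absorption concentration profile. Every numerical value (peak concentration, half-life, unbound fraction, Kin, Kout) is an illustrative assumption, not a fitted estimate from this study.

```python
import numpy as np

# Illustrative parameters only; these are NOT the study's fitted estimates.
CMAX = 4000.0                   # nM, peak total plasma concentration
TK0 = 6.0                       # h, duration of zero-order absorption
KE = np.log(2) / (14 * 24)      # 1/h, assumed ~14-day elimination half-life
FU = 0.02                       # assumed unbound fraction in plasma
IC50 = 36.0                     # nM, the cut-off sensitivity level in the text
KIN = 5e8                       # parasites/h, zero-order production rate
KOUT = 0.05                     # 1/h, parasite degradation rate

def concentration(t):
    """One-compartment PK: linear rise over TK0, then first-order decay."""
    rise = np.minimum(t, TK0) / TK0 * CMAX
    return rise * np.where(t > TK0, np.exp(-KE * (t - TK0)), 1.0)

def simulate(hours=42 * 24, dt=0.5):
    """Euler integration of d(parasite)/dt = KIN*(1 - Cu/(Cu+IC50)) - KOUT*P."""
    t = np.arange(0.0, hours + dt, dt)
    p = np.empty_like(t)
    p[0] = KIN / KOUT            # drug-free steady-state baseline
    cu = FU * concentration(t)   # unbound concentration, nM
    for i in range(1, len(t)):
        inhibition = cu[i - 1] / (cu[i - 1] + IC50)   # Emax term, Imax = 1
        dpdt = KIN * (1.0 - inhibition) - KOUT * p[i - 1]
        p[i] = max(p[i - 1] + dpdt * dt, 0.0)
    return t, p

t, parasites = simulate()
print(f"baseline: {parasites[0]:.2e}, nadir: {parasites.min():.2e}")
```

With these toy values the parasite burden falls while unbound drug exceeds the IC50 and recovers as the drug washes out, which is the qualitative recrudescence behaviour the study's simulations explore.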
PK/PD model validation

IC50 values (the concentration that inhibits parasite growth by 50%), reflecting the parasites' susceptibility to mefloquine, from a clinical study in the same areas [24] were used for model validation (IC50 of 12.9, 19.1 and 30.4 nM for isolates from Ranong, Kanchanaburi and Ratchaburi, and Tak provinces, respectively). These IC50 values were used to report the drug's efficacy in those areas (Ranong, Kanchanaburi and Ratchaburi, and Tak provinces). The evaluation of the drug's efficacy is described in the next section. The pharmacodynamic endpoint was the 42-day cure rate.

Monte Carlo simulations

The final PK/PD models (1,000 virtual patients with 10 simulations) were used to simulate optimal dosages of mefloquine in combination regimens that provide high clinical efficacy, using Monte Carlo simulations (Simulix version 2021R1, Antony, France; Lixoft SAS, 2021). The simulated dose regimens (oral administration) for mefloquine-resistant P. falciparum included: (i) 750 mg on day 1 (day 0), followed by 500 mg on day 2; (ii) 500 mg once daily for 42 days; (iii) 500 mg every 72 hours for 42 days; (iv) 500 mg every 96 hours for 42 days; and (v) 250 mg every 12 hours for 42 days. The simulated regimens for mefloquine-sensitive strains were: (i) 750 mg on day 1 and 500 mg on day 2; (ii) 500 mg on day 1 and 250 mg on day 2; (iii) 750 mg on day 1; (iv) 500 mg on day 1; and (v) 250 mg on days 1, 2, and 3. The simulated regimens were refined by trial and error until the cure rate approached 100%.

Effect of patients' adherence

Due to the long course of the proposed regimens (42 days), the effect of patients' adherence to medication (i.e., 100%, 70%, 50%, and 30%) on treatment efficacy was evaluated.

Toxicity approach

Because of the permanent neurological deficits reported following standard treatment, the relative risk of the plasma/blood concentration exceeding the threshold for disruption of calcium homeostasis and ER function was calculated for the five regimens.

Establishment of the criteria for curative treatment

Receiver Operating Characteristic (ROC) curves were used to assess the accuracy of the following cut-off parameters for curative outcome: the area under the curve during the first 7 days (AUC0-7days), the trough plasma/whole-blood concentration at 336 h (Ctrough,336h), and the peak concentration to IC50 ratio (Cmax/IC50). Note that since the mechanism of action of mefloquine is on the blood stage (it is a blood schizontocide), the curative parameter Ctrough,336h (14 days) covers the blood stage. A binomial proportion (Wilson/Brown) method was performed at a statistical significance level of 0.05 (GraphPad Prism version 9.20 for Windows, GraphPad Software, La Jolla, California, USA). Sensitivity, specificity, accuracy, negative predictive value (NPV), positive predictive value (PPV), positive likelihood ratio (LR+), negative likelihood ratio (LR-), and diagnostic odds ratio were calculated for internal validity.

PK/PD models

A one-compartment model with zero-order absorption and linear elimination best characterized (best fit) the population pharmacokinetic properties of mefloquine. A transit-compartment model with one compartment and linear elimination best characterized the pharmacokinetic properties of artesunate and dihydroartemisinin. The pharmacokinetic model of artesunate/dihydroartemisinin was in accordance with that previously reported (a transit-compartment model followed by one-compartment disposition) [26].
Since both artesunate and dihydroartemisinin are almost immediately eliminated from the systemic circulation, their contribution to parasite elimination occurs only during the first two days of treatment. Therefore, neither artesunate nor dihydroartemisinin could be fitted to parasite clearance throughout the 42 days. With its short half-life, primaquine is also unlikely to play an important role in parasite elimination, particularly of the blood stage. Only mefloquine was therefore used for PK/PD modeling and simulation, as it plays a role in parasite elimination over the whole 42-day follow-up. The final PK/PD model of mefloquine was well characterized by the Emax (turnover rate) model without sigmoidicity when corrected with fu. The inclusion of body weight and sex did not improve model quality. All parameters in all models showed low to moderate variation (%RSE). The analyses of OFV, AIC, BIC, BICc and GOF are summarized in the S1 File. The final population parameters estimated for mefloquine (resistant and sensitive), artesunate and dihydroartemisinin are shown in Tables 1 and 2, respectively.

Model validation

The developed PK/PD models, using the IC50 values of 12.9, 19.1 and 30.4 nM, adequately predicted the reported clinical efficacy of mefloquine [19]. The efficacy predicted based on the IC50 values of 23.19 nM (sensitive) and 63.84 nM (resistant) reported for the current study was 74% and 13.8%, respectively [18].

PK/PD simulations for prediction of the clinical efficacy of mefloquine

Mefloquine-resistant P. falciparum. Without the effect of patients' adherence (i.e., at 100%), the efficacy of mefloquine for regimen-I (the standard regimen) ranged from 1 to 57.3% (Fig 2), with IC50 ranging from 1- to 5-fold of the cut-off (24-110 nM). All other regimens provided better treatment efficacy than regimen-I (p ≤ 0.001). Regimen-V was the most effective (70.6-98.4% cure) (Fig 2). The number-needed-to-treat (NNT) and relative risk (RR) for regimen-V were 1.44-2.43 and 0.04-0.58, respectively. With a 1- to 2-fold increase in IC50 (24-72 nM), all proposed regimens provided moderate efficacy (>50%). With a 3- to 5-fold increase in IC50, the efficacy of regimen-III and IV dropped dramatically to around 30 and 15%, respectively. The efficacy of regimen-II and V remained high (>70%). The simulated Cmax ratios of regimen-II, III, IV and V compared with regimen-I were 5.65, 1.82, 1.44, and 5.55-fold, respectively. Since regimen-II and V provided a Cmax over 5-fold that of the standard regimen, patients may be at risk of toxicities. Regimen-II and V were therefore inappropriate for clinical use, and the effect of patients' adherence was evaluated only for regimen-III and IV. Treatment efficacy, RR, and NNT for all regimens at 100% adherence are presented in Figs 2-4 (efficacy, RR, and NNT, respectively). With 70% adherence, regimen-III and IV provided improved efficacy compared with regimen-I for isolates with different levels of sensitivity to mefloquine (p < 0.001 for mefloquine resistance; S1 File). The NNT and RR were 3.21-43.68 and 0.12-0.81, respectively. When the IC50 was 1- to 1.5-fold of the cut-off value, the efficacy of regimen-III and IV was moderate (50-70%) and high (>70%), respectively. When the IC50 was increased 2-fold, regimen-IV provided low efficacy (43.9%). When the IC50 was increased 5-fold, the efficacy of regimen-III and IV was 13.3 and 7.2%, respectively (S1 File). With 50% adherence, both regimen-III and IV still provided superior efficacy to regimen-I (α = 0.001).
Only a 1-fold increase in IC50 still resulted in moderate efficacy (>50%) for both regimen-III and IV. When the IC50 was increased 1.5-fold, the treatment efficacy of regimen-III was still over 50% (NNT = 5.05, p < 0.001). With a 2- to 3-fold increase in IC50, the efficacy was 15-50% for both regimens. With a 5-fold increase in IC50, the efficacy of both regimens was lower than 10%. With 30% adherence, the efficacy of regimen-III appeared to be slightly higher than regimen-I (S1 File). However, no significant difference was found for a 1-fold (p = 0.2), 1.5-fold (p = 0.09), or 2-fold (p = 0.02) increase in IC50. The efficacy of regimen-IV at all sensitivity levels except 5-fold was lower than regimen-I. No significant difference was found between regimen-I and IV when the IC50 was increased 3-fold (p = 0.08) or 5-fold (p = 0.24). The efficacy and RR of each regimen at 100 and 30% adherence are shown in Figs 5 and 6. The predicted mefloquine plasma/blood concentration profiles for each regimen are shown in the S1 File.

Effect of parasite biomass

Five different levels of parasite biomass [i.e., 30,000 (group-I), 20,000 (group-II), 15,000 (group-III), 10,000 (group-IV), and 5,000 (group-V)] with three different levels of parasite sensitivity to mefloquine were simulated. Overall, treatment efficacy appeared to be inversely related to parasite biomass. All scenarios in all groups, except for the IC50 of 63.83 nM in group-II (p = 0.1), showed a significant difference in efficacy compared with group-I (p < 0.001) (Fig 7), with odds ratios of 0.02-0.58 (Fig 8). For all parasite sensitivity levels, group-V with an IC50 of 30.89 nM was the most effective scenario (75.30% cure). In contrast, group-I with an IC50 of 63.84 nM was the least effective (1.3% cure).

Mefloquine-sensitive P. falciparum

Three levels of mefloquine sensitivity based on clinical IC50 values, plus two varied IC50 values, were simulated assuming 100% patients' adherence. Overall, the efficacy of all regimens was inferior to regimen-I for all scenarios (p = 0.001). However, the efficacy of all regimens except regimen-IV at an IC50 of 23.19 nM (49.7%) was still higher than 50% (moderate efficacy). At the lower IC50, the efficacies were 77.1, 65.0, 61.6, 53.7 and 64.6%. Notably, treatment efficacy decreased dramatically to around 60% when the IC50 was increased to 23.19 nM (73.9, 60.2, 56.7, 49.7 and 60.6%). Comparisons of the clinical efficacy of regimen-I and the other regimens for all IC50 values are provided in the S1 File. A comparison of treatment efficacy between the 0.25-fold and 0.96-fold IC50 for the different regimens is shown in Fig 9. The predicted mefloquine plasma/blood concentration profiles for each regimen are shown in the S1 File.

Discussion

The emergence and spread of mefloquine resistance have led to a decrease in the clinical efficacy of the artesunate-mefloquine combination. For mefloquine-resistant P. falciparum, the three-day artesunate-mefloquine regimen (regimen-I) should be replaced by other regimens when the IC50 of mefloquine is higher than 36 nM (1.5-fold of the cut-off level; <50% efficacy). It is clear that regimen-II and V provided the best efficacy for P. falciparum across different sensitivities, making them the best options for mefloquine-resistant P. falciparum. Nonetheless, these proposed regimens resulted in a 5-fold increase in Cmax compared with regimen-I, so their clinical use may result in an increased risk of mefloquine toxicity.
In such cases, regimen-III or IV would be a preferable choice for mefloquine-resistant parasites with an IC50 lower than 48 nM, particularly when supervised medication or directly observed therapy (DOT) is applicable. Notably, patients' adherence significantly affected the treatment efficacy of the long-course (42-day) regimens: treatment efficacy for parasite strains with an IC50 below 48 nM dropped dramatically to lower than 50% when adherence was only 30%. Interestingly, the initial parasite biomass significantly affected the clinical efficacy of mefloquine in the standard regimen (I); the higher the initial parasite biomass, the higher the treatment failure rate. The current standard regimen is thus only suitable when the initial parasite biomass is lower than 5,000/μL (IC50: 30.89-32.71 nM). For mefloquine-sensitive parasites, the clinical efficacy of all proposed regimens was relatively low compared with regimen-I (moderate efficacy). Ideally, regimen-IV, with the lowest total mefloquine dose (500 mg), would be a preferable choice considering the amount of drug administered. This regimen, however, provided the lowest efficacy of all, and its efficacy decreased to 50% for parasite strains with an IC50 of 23.19 nM (Fig 5). Regimen-II, III and V had comparable mefloquine doses as well as comparable clinical efficacy. In cases where patients' adherence to medication is of great concern, the single-dose regimen-III (on day 1) is a preferable choice. Because of mefloquine-induced neurotoxicity, the low-dose regimens, i.e., regimen-II and V, are also preferable choices. Remarkably, the efficacy of all proposed regimens for parasite strains with an IC50 of 23.19 nM decreased to 60%; in such cases, regimen-I is more appropriate. Similarly to the resistant strains, the initial parasite biomass significantly influenced the efficacy of mefloquine against the sensitive isolates. When the IC50 of mefloquine was between 13.10 and 19.39 nM, regimen-I was still effective with an initial parasite biomass lower than 10,000/μL. In cases where the initial parasite density was over 10,000/μL, clinical use of this regimen may not be effective (<50% efficacy). Besides the problem of mefloquine-resistant P. falciparum in the Greater Mekong Subregion (GMS), the emergence and spread of resistance to artemisinin-based therapy is of great concern in all malaria-endemic areas. The combination of two partner drugs with different mechanisms of action (triple artemisinin-based combination therapy, or TACT) is an optional choice for malaria treatment [27,28]; this approach has been successfully applied in HIV as well as tuberculosis [27]. The first clinical trial of TACT showed that it could tackle artemisinin resistance with high efficacy, safety and tolerability [28]. TACT is, therefore, a promising approach for malaria therapy. Little clinical covariate information was collected in the previous study [18]; therefore, the effects of covariate parameters on the models are limited. Since the highest mefloquine resistance level is reported in Thailand [1], the proposed curative criteria based on clinical data from Thailand could be applied to the treatment of uncomplicated P. falciparum malaria in other endemic areas. The efficacy reported in this study [18] was 78.5%, while the efficacy in other Southeast Asian countries (Cambodia, Myanmar, and Laos) [4-12] and African countries (Mali and Senegal) [13-17] ranged from 96 to 100%.
However, clinical application based on the results of the current study may be limited, as the PK/PD models did not include the effect of artesunate and dihydroartemisinin on parasite clearance. Furthermore, the impact of initial parasite biomass on the clinical efficacy of the regimen-I artesunate-mefloquine combination may be underestimated (low efficacy): an increase in initial parasite biomass has been reported to result in delayed parasite clearance [29], so the simulated 42-day follow-up may not be long enough to capture this effect. Moreover, external validation of the curative treatment criteria using separate data sets was not performed. Further large clinical trials of the three-day artesunate-mefloquine combination with different levels of initial parasite biomass are required. Also, only nausea and vomiting were reported in patients; no neurological deficits were reported [18]. The accumulation of the drug to levels exceeding 50 μM following standard treatment in some patients, however, would disrupt neuronal calcium homeostasis and ER function [30]. Results of this simulation showed no significant difference in the predicted brain concentration profiles between patients with and without adverse events following standard treatment (p = 0.2734). Nevertheless, the RR of mefloquine concentrations exceeding the threshold for disruption of neuronal calcium homeostasis was reported for each regimen, compared with standard treatment, for a risk-benefit assessment (all regimens carry a risk of disrupting neuronal calcium homeostasis).

Conclusions

In conclusion, clinical use of the three-day artesunate-mefloquine combination (regimen-I) is suitable only when the IC50 of the P. falciparum isolates is lower than 36 nM. For resistant strains (IC50 up to 48 nM), the two proposed regimens (III and IV) are preferable under DOT or supervised medication. Otherwise, other ACT regimens should be substituted, as the clinical efficacy would decrease dramatically if patients' adherence to medication is poor. For mefloquine-sensitive parasites, all regimens should provide satisfactory efficacy if the IC50 is lower than 23.19 nM. With decreased parasite sensitivity (IC50 close to 24 nM), the three-day artesunate-mefloquine regimen is recommended, since the efficacy of all proposed regimens would be reduced to lower than 60%.
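Looking back at the curative-criteria analysis in the Methods, the diagnostic quantities quoted there (sensitivity, specificity, PPV, NPV, likelihood ratios, diagnostic odds ratio) all derive from a 2x2 confusion table at a chosen cut-off. A minimal sketch follows, using the C336h > 408 ng/mL criterion from the abstract as the threshold; the patient data are invented for illustration only.

```python
# Hedged sketch of cut-off evaluation: toy data, real formulae.

def cutoff_metrics(values, cured, threshold):
    """2x2 confusion-table metrics for 'value > threshold' predicting cure."""
    tp = sum(v > threshold and c for v, c in zip(values, cured))
    fp = sum(v > threshold and not c for v, c in zip(values, cured))
    fn = sum(v <= threshold and c for v, c in zip(values, cured))
    tn = sum(v <= threshold and not c for v, c in zip(values, cured))
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "lr_plus": sens / (1 - spec),
        "lr_minus": (1 - sens) / spec,
        "diagnostic_or": (tp * tn) / (fp * fn),
    }

# Toy example: trough concentrations at 336 h (ng/mL) and 42-day cure outcomes
c336 = [520, 430, 395, 610, 300, 455, 380, 700]
cure = [True, False, True, True, False, True, False, True]
print(cutoff_metrics(c336, cure, threshold=408))
```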
Two Phase Evaluation for Selecting Machine Translation Services

An increasing number of machine translation services are now available. Unfortunately, none of them can provide adequate translation quality for all input sources. This forces the user to select from among the services according to his needs; however, this manual selection is tedious and time consuming. Our solution, proposed here, is an automatic mechanism that can select the most appropriate machine translation service. Although evaluation methods are available, such as BLEU, NIST, WER, etc., their evaluation results are not unanimous across translation sources. We propose a two-phase architecture for selecting translation services. The first phase uses data-driven classification to select the most appropriate evaluation method for each translation source; the second phase selects the most appropriate machine translation result using the selected evaluation method. We describe the architecture, detail the algorithm, and construct a prototype. Tests show that the proposal yields better translation quality than employing just one machine translation service.

Introduction

Due to online access and instant availability, machine translation (MT) services are becoming more popular. One example is the online Google translation service. These MT services, in most cases, do not provide perfect accuracy or fluency. When multiple MT services are available, the user is confused about which service is more accurate for the task at hand. Manual service selection is tedious and error prone. Thus, it is necessary to create a mechanism that can select the most appropriate MT service. Many functionally equivalent MT services have become available. Language Grid (Ishida, 2011) is a service-oriented intelligence platform for language services. It provides many language translation services by wrapping non-networked language resources and software. With standard interfaces, functionally equivalent translation services are formalized and made available for both end-users and community translation developers. The types of language services include machine translation services, dictionary services, parallel text services, and morphological analyzer services. Moreover, composite translation services can be generated based on these types of language services. For example, Language Grid provides a multi-hop composite service that combines a machine translation service and in-domain dictionary services, so as to provide a higher-quality MT service for a desired domain. Many composite MT services can be created by generating different combinations (Bramantoro et al., 2010). Given this multiplicity of available translation services, it is difficult for the user to select the MT service that best suits the current task. Several evaluation methods can be used to evaluate translation results automatically, such as BLEU (Papineni et al., 2002), NIST (Doddington, 2002), WER (Nieen et al., 2000), etc. However, their evaluation results are not unanimous (Och, 2003; Cer et al., 2010). Moreover, the efficiency of the evaluation method affects the selection of the MT service. It must be noted that these evaluation methods have incompatible metrics, and their results can have different distribution ranges. The purpose of this paper is to provide MT service selection for end-users and community translation developers of Language Grid (Ishida, 2011).
Community translation developers, as well as end-users, require assistance in selecting the proper translation according to translation quality, as well as other properties such as time. The selection of a service according to general quality of service (QoS) properties, such as time, cost, etc., has been well researched (Tian et al., 2004; Serhani et al., 2005). Thus, our research focuses on how to use evaluation methods to calculate and rank MT services according to translation quality, a domain-specific QoS property. As an example of Japanese-English translation, consider two candidate services, Google Translate and J-Server, and three candidate evaluation methods, BLEU, NIST, and WER (see Figure 1). The evaluation results and translation results of two source sentences are given below. For the first sentence, Google gets higher evaluation results than J-Server, so it will be selected by BLEU or NIST, while WER will select J-Server; the evaluation results of these evaluation methods do not agree with each other. For the second sentence, WER generates the same evaluation result for Google and J-Server, BLEU generates only slightly different evaluation results, while NIST indicates a disparity between them. Here, we face the problem of how to make use of these evaluation methods to generate an evaluation result for machine translation service selection. For a given source sentence, the results of multiple evaluation methods can conflict, so it is better to select a proper evaluation method for each source sentence rather than using the same evaluation method continuously. For a certain translation source, if more than one evaluation method is appropriate, selecting any one of them is acceptable. In MT service selection, multiple translation evaluation methods are available; thus, to achieve the goal of selecting a proper MT service, two main issues should be considered. (1) How to make use of multiple evaluation methods? We provide a two-phase architecture for service selection, which is an extension of the Web service broker for service selection. In the first phase, a proper evaluation method is selected; in the second phase, based on the selected evaluation method, the most appropriate machine translation service is selected. (2) How to realize service selection? To achieve this goal, we introduce a ranking algorithm. It dynamically selects the appropriate evaluation method for each input translation source, using data-driven classification. The machine translation service with the highest evaluation result according to the selected evaluation method is chosen.

Translation Service Selection Architecture

We extend the broker for Web service selection (see Figure 2) to create a two-phase architecture for MT service selection. For selecting Web services according to QoS properties, a Web service broker is a flexible and trustworthy architecture for managing QoS properties for providers and users of Web services (Tian et al., 2004; Serhani et al., 2005). It receives requests from, and responds to, the Web service requestor. Meanwhile, it registers services from Web service providers, and verifies and certifies the properties claimed by Web services. A broker is itself a Web service that can be published to and found in a Web service registry; this makes it readily available to both end-users and new service developers. The broker architecture usually has a built-in evaluation method for each QoS property.
However, for MT service selection, it is not appropriate to adopt just one evaluation method to evaluate all translation services. Instead, the most appropriate evaluation method should be selected for each input translation source. Thus, we propose a two-phase architecture for MT service selection. First, we present an overview of our two-phase architecture (see Figure 3). The broker for MT services receives a request with a translation source from, and returns a response (the translation result) to, the MT service requestor. The extension is to register both MT services and evaluation methods for selection; in addition to MT service selection, the broker first has to select the evaluation method. Several considerations are presented below before more detail is given.

• Wrapping existing evaluation-method software into services: Due to ongoing research into MT evaluation, new evaluation methods will emerge and their software will be published. To provide an open-ended interface for integrating additional evaluation methods, it is useful to provide a self-describing Web service interface, wrapping existing evaluation-method software into flexible online services (Eck et al., 2006). This is easily realized by the service wrapper function provided by the Language Grid platform.

• Regarding Language Grid as service provider: The Language Grid service-oriented platform successfully solves various service issues, such as creation, registration, and management. Thanks to the service description profiles of Language Grid, an MT services category and an evaluation methods category can be registered conveniently. Thus, Language Grid is an excellent provider of MT services and evaluation methods.

• Using data-driven classification to select the evaluation method: Classification is necessary to realize evaluation method selection according to the translation source, and data-driven classification is suitable for this task. First, no prior experience is available for such classification. Second, because the attributes of sentences vary dynamically, the data-driven approach is more extensible. Data-driven classification builds a classification function dynamically from a training set, which can be collected from human selection cases. To realize data-driven classification, we adopt the decision tree approach. First, it offers quick training and classification, which is very user-friendly. Second, it is easy to transform a decision tree into decision rules, which well supports manual verification. The C4.5 algorithm (Quinlan, 1993), one of the most frequently used decision-tree algorithms, is used for evaluation method selection. It has several merits, including handling missing values, tolerating noise, and categorizing continuous attributes. It should be noted that we view C4.5 as a 'black box' for the classification task; its original functionality is preserved.

The two-phase architecture of machine translation service selection (see Figure 3) is based on the above considerations. The broker for MT service selection divides its functions into the evaluation method selection phase and the MT service selection phase. Evaluation methods are handled in the former phase, whose output is the appropriate evaluation method; MT services are handled in the second phase, in which the result of a translation service is selected.
The main components of the broker include the Attribute Collector, Data-driven Classification, Evaluation Methods Category, MT Services Category, MT Service Executor, Evaluation Method Executor, and Ranker (see Figure 3). We describe the processes of the two phases below.

• Evaluation method selection phase: A translation source from the MT service requestor is analyzed by the attribute collector component, and the analyzed attributes are sent to the data-driven classification component. According to the attributes of the translation source, an evaluation method is selected by the data-driven classification from the methods in the evaluation methods category component.

• MT service selection phase: The translation source is sent to the MT service executor component, which invokes the MT services from the MT services category component. The translation results are sent to the evaluation method executor component, which invokes the evaluation method selected in the earlier phase. The evaluation results of the translations are sent to the ranker component, and the best translation result is sent to the MT service requestor.

Deployment

We realized a prototype that implements the above component functions, as detailed below.

1) Evaluation method selection phase:

• Evaluation Methods Category: A simple MySQL database holding the stored service name, URL, operation names and types, parameter names and types, and preset parameter values. Three evaluation methods (BLEU, NIST, and WER) from the Stanford Phrasal evaluation project (Cer et al., 2010) are wrapped into services by the Language Grid platform.

• Attribute Collector: Two simple attributes are collected: the length of the translation source, and the source and target languages. The length of the translation source is calculated as the number of words in it.

• Data-driven Classification: J48, a Java implementation of the C4.5 algorithm from the Weka data mining tool, is used for classification. Its input is the attribute-value pairs output by the attribute collector component, and its output is the name of an evaluation method, from which the details of the evaluation method can be retrieved from the evaluation methods category component.

2) MT service selection phase:

• MT Services Category: Similar to the evaluation methods category component. Three services (Google, J-Server, and Translution) from the Language Grid platform are registered.

• MT Service Executor and Evaluation Method Executor: These are implemented based on a JAX-RPC (http://java.net/projects/jax-rpc/) service client, which makes it easy to invoke a Web service according to its namespace, operation name and type, and parameter name and type.

• Ranker: A ranking algorithm designed and implemented in Java. The input of this algorithm is the evaluation results of the selected evaluation method; the output is the selected translation result with the highest QoS value of translation quality. The details of this algorithm are explained in the following section.

The above two-phase architecture and component deployment make it convenient to realize the proposed broker for MT service selection.

Translation Selection Algorithm

Next, we illustrate the strategy for selecting the most appropriate MT service. To explain the selection algorithm in detail, a formal description is given as follows.
Selection

For a translation user, n translation services S = {s1, s2, ..., sn} are available, along with m evaluation methods E = {e1, e2, ..., em}. For each requested translation source r, a proper evaluation method ek is to be selected. According to this selected evaluation method ek, a QoS value of translation quality qos(ek, si) is generated for each service si. Ranking these QoS values determines the translation service s_select to be selected. Here, qji, an evaluation result, is the result of applying the j-th evaluation method ej to the translation result of the i-th MT service si. However, these evaluation results are likely to conflict with each other, since they are generated by different evaluation methods, so we need to select an evaluation method before we can select the service. We use a decision tree to select the target evaluation method ek. For the requested translation source r, the attribute collector collects c attribute values via the set of functions F = {f1, f2, ..., fc}. If the decision tree is not trained, the decision rules are not generated. First, a training set is required: a set of translation sources, each labeled with a proper evaluation method. The attributes of these translation sources are analyzed and used for training. Once the decision tree is trained, it easily generates the decision rules. Each decision rule can be described as follows:

if θt_low ≤ ft(r) < θt_up for all t (1 ≤ t ≤ c), then select ek

Here, θt_low and θt_up are the lower and upper boundaries of the t-th collected attribute value ft(r) (1 ≤ t ≤ c). When attributes are collected from the requested translation source, each decision rule is tested until the one yielding the target evaluation method ek is satisfied; ek is then sent to the next phase for execution. After the appropriate evaluation method ek is selected, the translation quality of a service si is qos(ek, si) = qki. The translation quality values of all the MT services can then be ranked, and the target service selected as:

s_select = argmax over si in S of qos(ek, si)

Thus, the algorithm first selects an evaluation method ek. Then, based on the evaluation values, it obtains a QoS value (translation quality) for each service si. Finally, the QoS values are ranked and the target MT service s_select is selected. In addition, we design a normalization of the evaluation results qki, which may be in different metrics due to the different evaluation methods. The normalized result q'ki is a relative value comparing the average translation quality over all MT services with the average over the MT services excluding si. If the evaluation results are positive, the normalized translation quality is qos'(ek, si) = q'ki; otherwise, qos'(ek, si) = 1/q'ki. Obtaining such a unitary measure is required for community translation developers to aggregate translation quality with other QoS properties such as time and cost.

Algorithm

We describe the algorithm that works in the broker for MT service selection (see Algorithm 1). It comprises two-phase execution. In the first phase, if no decision rules exist, we need to train the decision tree and generate decision rules. Next, we calculate the attributes {f1(r), f2(r), ..., fc(r)} of the requested translation source r with the attribute collector functions, and the attribute values are checked against the decision rules. If decision rules exist, we can select a target evaluation method, which completes the first phase (a code sketch covering both phases follows below).
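The sketch below mirrors the two-phase procedure formalized above and continued next: phase 1 applies decision rules over the collected attribute (here, translation-length) to pick an evaluation method, and phase 2 ranks the candidate services by that method's scores. The rule thresholds, the fallback method, and the callable signatures are illustrative assumptions, not the actual Language Grid or prototype interfaces.

```python
from typing import Callable, Dict, List, Tuple

# Phase 1: decision rules mapping a translation-length range to an
# evaluation method. Only "translation-length > 20 -> BLEU" is quoted in
# the paper's worked example; the other entries are assumptions.
DECISION_RULES: List[Tuple[float, float, str]] = [
    (0, 20, "NIST"),             # assumed rule for shorter sources
    (20, float("inf"), "BLEU"),  # rule quoted in the worked example
]

def select_evaluation_method(source: str) -> str:
    """Phase 1: pick an evaluation method from the source's attributes."""
    length = len(source.split())  # simplified word-count attribute collector
    for low, up, method in DECISION_RULES:
        if low < length <= up:
            return method
    return "WER"  # fallback when no rule fires (assumption)

def select_translation(source: str,
                       services: Dict[str, Callable[[str], str]],
                       evaluators: Dict[str, Callable[[str], float]]) -> str:
    """Phase 2: translate with every service, score each result with the
    selected evaluation method, and return the highest-scoring result."""
    method = select_evaluation_method(source)
    score = evaluators[method]
    results = {name: translate(source) for name, translate in services.items()}
    best = max(results, key=lambda name: score(results[name]))
    return results[best]
```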
In the second phase, the algorithm invokes the MT services S for translation results, evaluates the translation results with the selected evaluation method, and obtains the evaluation scores q(ek, si). It is then easy to rank them and find the target result s_select. One more issue needs to be mentioned here: training the J48 decision tree. We need human-generated translation selection data for training. To prepare each training datum, we prepare several MT service results, manually rank them, evaluate them with the multiple evaluation methods, and choose the evaluation method that best matches the manually-generated ranking. With the target evaluation method, we calculate the attributes, train J48 with these attribute-data pairs, and generate decision rules from the trained J48. We use the example in Figure 1 to explain the MT-Service-Select algorithm. The input evaluation methods are BLEU, NIST, and WER; the input MT services are Google and J-Server. The two Japanese sentences are the requested translation sources. The attribute collector has one function, which counts the translation-length, the number of words in the translation source. In the first phase, it is assumed that the decision rules exist (see Section 4.1.). The translation-length of the first sentence is 21. Each decision rule is then checked, and the last decision rule, translation-length > 20 → BLEU, matches; thus, the BLEU evaluation method is selected for the first sentence. The translation-length of the second sentence is 14, so the NIST evaluation method is selected for it. Thus, for the first sentence, BLEU is sent to the next phase, while for the second sentence, NIST is sent to the next phase. In the second phase, for the first sentence, the MT services Google and J-Server are executed, and the service results are generated. The selected evaluation method BLEU is then executed to evaluate the translation results; the scores are 9.21 for Google and 0.15 for J-Server. The results are compared, and the maximum is selected. Thus, for the first sentence, the translation result of Google is selected. For the second sentence, the translation result of J-Server is selected as per NIST. In sum, our algorithm selects Google for the first sentence and J-Server for the second sentence.

Experiment

The experiment analyzed the increase in translation quality and the efficiency of service selection offered by our proposal.

Preparation

The prototype was tested on three Japanese-English parallel text corpora: an NTT Communication Science Lab corpus (NTT), a medical corpus (Medical), and the Tanaka corpus (Tanaka). From the 3,715 NTT pairs, 2,001 Medical pairs, and 150,127 Tanaka pairs, we sampled 100 sentence pairs from each, separately. The tested request data thus consisted of 300 sentences, which we randomly divided into six groups, each with 50 pairs. We trained the J48 decision tree using 60 additional pairs sampled in a similar manner. The training sets were selected through manual assessment of the MT service results. Only translation-length was gathered by the attribute collector; this length impacts evaluation method selection according to Och (Och, 2003). Finally, decision rules mapping translation-length ranges to evaluation methods (e.g., translation-length > 20 → BLEU) were generated.

• Parallel texts are used as translation sources. One sentence of a parallel text pair is used as the translation source, and the other is used as the standard reference for evaluation.
Evaluation methods such as BLEU, NIST and WER can generate more accurate evaluation results from the standard reference, so the evaluation result is not affected by reference quality.

• Human assessment following the manual method from the DARPA TIDES projects at the University of Pennsylvania was used as the quality standard. It yields five-level scores for fluency and adequacy: {5: All, 4: Most, 3: Much, 2: Little, 1: None}. The mean of the fluency and adequacy scores is used as the human assessment score of translation quality, which is used to assess the translation quality of the selected translation results.

Analysis

Once the translation sources are submitted, the translation result of an MT service is selected. Based on the human assessment, a Hit Rate is used to evaluate how well the output of the proposed mechanism matches the manual selection, and an Average Score is used to evaluate the translation quality of the output of the proposed mechanism. We explain this with the following example (spelled out in the sketch below): a user submits 2 translation sources {r1, r2}, which are translated by two MT services {sa, sb}. The corresponding human assessment scores are {score(r1, sa): 1, score(r1, sb): 4, score(r2, sa): 2, score(r2, sb): 5}. Because service sb gets larger scores for both r1 and r2, manual selection chooses it for both sources. Suppose the proposed service selection selects sa for source r1 and sb for r2. The Average Score of the proposed mechanism is then average_score = (score(r1, sa) + score(r2, sb))/2 = (1 + 5)/2 = 3. The Hit Rate of the proposed mechanism is hit_rate = (0 + 1)/2 = 50%, because for the first source r1 the proposed mechanism selects sa while the human selects sb, which differ, whereas for the second source r2 they both select sb; thus they have one common selection for two translation sources. The hit rate represents how well the proposed service selection follows manual selection. The results achieved when no service selection is performed are shown in Table 1. Google received an average human score of 3.37, J-Server 3.43, and Translution 3.06. Manual selection on the three sets of translation results yielded 116 sentences by Google, 143 by J-Server, and 41 by Translution. Compared to manual selection, the hit rate of Google is 62.8%, J-Server 67.5%, and Translution 54.0%. J-Server has the highest average score and hit rate for this Japanese-English translation task. From the hit rates, we find that no MT service dominates the others (otherwise its hit rate would be 100%). The results achieved when service selection is performed are shown in Table 2. We compare the use of only one evaluation method against the proposed two-phase selection, which selects from among the available evaluation methods. Using just one evaluation method, WER, the average score and hit rate are 3.47 and 72.0%, slightly better than J-Server alone. BLEU has a higher average score and hit rate than WER and NIST: 3.56 and 76.2%. With the proposed two-phase selection mechanism, the average score and hit rate of MT service selection are 3.81 and 81.7%. From the comparison of Average Score (see Figure 4) and Hit Rate (see Figure 5), we find that the proposed two-phase selection raises the translation quality received by users. Compared to BLEU selection in Table 2, the proposed two-phase selection has a higher hit rate and a 7% improvement in average score.
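The bookkeeping of the worked example above can be written out directly; this small sketch recomputes the Average Score and Hit Rate (the names and data structures are illustrative, not the prototype's actual Java code).

```python
def evaluate_selection(human_scores, proposed_choice):
    """human_scores: {source: {service: human score}};
    proposed_choice: {source: service picked by the mechanism}.
    Returns (Average Score, Hit Rate)."""
    hits, total_score = 0, 0.0
    for source, scores in human_scores.items():
        manual_pick = max(scores, key=scores.get)  # human picks the best service
        picked = proposed_choice[source]
        hits += (picked == manual_pick)
        total_score += scores[picked]
    n = len(human_scores)
    return total_score / n, hits / n

# The example from the text: manual selection prefers sb for both sources,
# while the mechanism picks sa for r1 and sb for r2.
human = {"r1": {"sa": 1, "sb": 4}, "r2": {"sa": 2, "sb": 5}}
proposed = {"r1": "sa", "r2": "sb"}
print(evaluate_selection(human, proposed))  # -> (3.0, 0.5)
```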
Moreover, compared to only one service in Table 1, such as just J-Server, the two-phase selection mechanism offers an 11.1% increase in average score.

Discussion

Some limitations of the proposed two-phase evaluation for MT service selection should be considered. First, the existing evaluation methods limit the gains possible with the proposed mechanism. The mechanism is not intended to establish a new evaluation method, but to make better use of existing methods. Creating a superior evaluation method is one of the hardest issues in machine translation and natural language processing, so it is meaningful to achieve progress through better utilization of existing evaluation methods. Note that it is easy to import newly created evaluation methods into the proposed mechanism. Second, data-driven classification needs a large training set, which involves time-consuming manual effort. Mining user logs to build larger training sets would be very helpful: if we tell an MT service user that his feedback will help to improve translation quality, he will be more willing to generate useful MT service usage logs. Moreover, we are already trying to integrate human activities into composite services; success in this area will make it easier to prepare large training sets. Third, application of the proposed MT service selection for ...

Related Work

First, automatic evaluation methods have been proposed based on many mechanisms, including string-based comparison, syntactic mechanisms, and semantic mechanisms. String-based comparison compares the translation result to standard references and is currently the most popular mechanism. There are several ways to compare similarity, including lexical distance and n-gram precision. Two common lexical distances are the length of the longest common subsequence, as in ROUGE-L (Lin, 2004) and ROUGE-W (Lin, 2004), and edit distance, as in WER (Nieen et al., 2000) and TER (Snover et al., 2006b). N-gram precision has also been extensively studied, as in BLEU (Papineni et al., 2002), NIST (Doddington, 2002), METEOR (Banerjee and Lavie, 2005), and ROUGE-N (Lin, 2004). The syntactic mechanism analyzes whether the translation result is in accordance with the syntax of the target language, such as linguistic error classification (Farrs et al., 2012). The semantic mechanism checks whether the translation result semantically agrees with the translation source, such as lexical semantic similarity integration (Wong, 2010). When the software for these evaluation methods has been prepared for sharing, it can be wrapped into services by the Language Grid platform, which makes our proposal more powerful. Next, human evaluation is important to confirm any automatic evaluation method using norms such as fluency and adequacy. Semi-automatic evaluation has also received a lot of attention, such as the evaluation method HTER (Snover et al., 2006a), which requires human editing. The proposed mechanism also requires human assessment for preparing the training set that builds up the data-driven classification. Last, making evaluation methods easier to access is becoming a strong demand. There is some research on how to prepare references for these evaluation methods; currently, there is no powerful way to utilize unsupervised references. Though many studies have pointed out that round-trip translation is not adequate, others treat round-trip translation as the easy approach with the lowest costs (Hu, 2009).
Research is progressing on ways to provide standardized interfaces (Cer et al., 2010) or even evaluation services (Eck et al., 2006), so that these functions can be utilized by more people. The proposed mechanism has benefited greatly from such existing research.

Conclusion

We proposed a two-phase evaluation for MT service selection that suits both end-users and community translation developers. Because of the increased number of MT services, we face the problem of selecting, for a given translation source, the best MT service. Considering ease of implementation and extensibility, we designed a two-phase architecture for selecting MT services. In the first phase, we import multiple evaluation methods, analyze attributes of the translation source, and select the most appropriate evaluation method using the decision tree approach. This data-driven classification enables one among multiple evaluation methods to be selected dynamically and avoids the deficiencies raised by employing just one evaluation method. In the second phase, the MT services are executed based on the Language Grid platform. The results of the MT services are evaluated by the selected evaluation method, and translation quality values are generated and ranked, yielding the best translation result. We designed an algorithm for this MT service selection. Finally, we implemented and tested a prototype based on the proposed mechanism. The results showed that the proposed mechanism offers better translation quality than employing just one MT service. Our proposal raises translation quality by 7% compared with the approach that employs just one evaluation method, and by at least 11.1% compared with employing just one MT service.
X-ray variability analysis of a large series of XMM-Newton + NuSTAR observations of NGC 3227

We present a series of X-ray variability results from a long XMM-Newton + NuSTAR campaign on the bright, variable AGN NGC 3227. We present an analysis of the lightcurves, showing that the source displays typically softer-when-brighter behaviour, although it also undergoes significant spectral hardening during one observation, which we interpret as due to an occultation event by a cloud of absorbing gas. We spectrally decompose the data and show that the bulk of the variability is continuum-driven and, through rms variability analysis, strongly enhanced in the soft band. We show that the source largely conforms to linear rms-flux behaviour and we compute X-ray power spectra, detecting moderate evidence for a bend in the power spectrum, consistent with existing scaling relations. Additionally, we compute X-ray Fourier time lags using both the XMM-Newton and, through maximum-likelihood methods, the NuSTAR data, revealing a strong low-frequency hard lag and evidence for a soft lag at higher frequencies, which we discuss in terms of reverberation models.

INTRODUCTION

The multiwavelength emission observed from active galactic nuclei (AGN) is thought to be powered by accretion onto a central supermassive black hole (SMBH). These systems are observed to emit across the whole electromagnetic spectrum, with components of both thermal and non-thermal emission contributing to the broad-band spectral energy distribution (Shang et al. 2011). The energy output of Seyfert galaxies is routinely seen to peak at ultraviolet (UV) wavelengths. The dominant mechanism for this is widely considered to be thermal emission originating in material in the inner regions of a geometrically-thin, optically-thick accretion disc around the SMBH (Shakura & Sunyaev 1973). Depending on the accretion-flow properties, the region where the UV-emitting matter is located typically lies 10-1,000 r_g (where the gravitational radius is defined as r_g = GM_BH/c^2) from the central SMBH. These thermally-emitted UV photons are then likely responsible for generating X-ray emission via inverse-Compton scattering off hot electrons (T ∼ 10^9 K) in an optically-thin corona, most likely located within a few tens of r_g of the SMBH (Haardt & Maraschi 1993). This produces a continuum of X-ray emission, which commonly takes the form of a power law. Our current generation of telescopes is unable to directly resolve the central regions of AGN, due to their very compact nature coupled with their large distances from Earth. Nevertheless, there are methods we can employ which allow us to indirectly infer information about the dominant structure, geometry and physical processes within these systems. Variability studies, in particular, are powerful in this regard, allowing us to probe the observed X-ray variability in a number of model-independent ways, e.g. analysis of the covariance and rms spectra. Additionally, the frequency-dependence of the variability can be measured through such methods as estimating the power spectral density (PSD) and energy-dependent time delays (see below). Curiously, the behavioural properties are often observed to be similar across a wide range of sources, with their measured frequencies and amplitudes scaling roughly inversely with the SMBH mass (e.g. Lawrence et al. 1987; Uttley, McHardy & Papadakis 2002; Markowitz et al. 2003; Papadakis 2004; McHardy et al. 2004,
Additionally, the strength of the rms variability in the X-ray band is routinely observed to scale in an approximately linear manner with the flux of the source; i.e. the 'rms-flux relation'. This constitutes another common aspect of X-ray variability that appears to exist over a large range of time-scales and masses, observed in both AGN and X-ray binary (XRB) systems (e.g. Uttley & McHardy 2001; Gaskell 2004; Uttley et al. 2005).

Time delays between the observed variations at different X-ray energies are commonly detected in bright, variable AGN (e.g. Papadakis et al. 2001; McHardy et al. 2007; Lobban et al. 2014; Kara et al. 2016; Jin et al. 2017; Lobban et al. 2018a,b). In particular, hard X-ray variations are commonly seen to lag behind soft X-ray variations. These are known as 'hard lags' and their first detection was made in XRBs (e.g. Cygnus X-1: Cui et al. 1997; Nowak et al. 1999). These typically occur at low variability frequencies (∼10^−5 − 10^−4 Hz in most AGN), with the magnitude of the lag - typically on the order of tens to hundreds of seconds in AGN, depending on the black hole mass - often observed to be larger when the separation between the two energy bands is greater. Given the similarities in their behaviour, it has been argued that the low-frequency hard lags that we observe in AGN and XRBs are analogous (e.g. McHardy et al. 2006). In this case, their properties - e.g. time-scale, magnitude, etc. - are appropriately scaled according to the size of the emitting region.

In an attempt to explain the observed time delays, various models have been proposed. These range from inverse Compton scattering of photons by the X-ray-producing corona (see Miyamoto & Kitamoto 1989) to reflection by the surface of the accretion disc (Kotov et al. 2001). A popular model, proposed by Lyubarskii (1997), is known as the 'propagating fluctuations' model. Here, variations in the local mass accretion rate inwardly propagate through the disc and then excite an extended corona of hot, relativistic electrons, ultimately producing the observed X-rays. This picture works quite cleanly for XRBs, where the variations originate in the disc and an average hard delay is produced: stratification of the corona results in the inwardly-propagating fluctuations first exciting the outer regions of the corona, producing softer X-rays, before exciting the inner regions, driving emission with a harder spectrum. However, a significant degree of complexity is added to the lag behaviour in AGN at higher frequencies. Here, the time delays are often found to be reversed, with more rapid variations in the soft X-ray band lagging behind the correlated variations at higher energies (i.e. 'soft lags'; see De Marco et al. 2013). A mechanism has been proposed by Miller et al. (2010a) in which these lags emerge via scattering of the primary X-rays in more distant circumnuclear material. In the context of this interpretation, the delayed signal is due to reverberation from absorbing material tens to hundreds of r_g from the central source (also see Turner et al. 2017; Mizumoto et al. 2018, 2019). Here, the spectral shape of the delayed component is expected to be harder than the primary X-ray continuum, with the 'reflected' contribution increasing towards higher energies. The observed time delay due to reverberation is diluted by the presence of direct emission in the time series.
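As a simple illustration of this dilution (not taken from the paper), a commonly quoted approximation (e.g. Uttley et al. 2014) scales the intrinsic reverberation delay by the fraction of reverberating flux in the band; the function and numbers below are purely illustrative.

# Illustrative sketch (assumed approximation; e.g. Uttley et al. 2014):
# if a fraction f of the counts in a band reverberates with intrinsic
# delay tau_rev, while (1 - f) is direct, the measured lag is roughly
# the flux-weighted mean delay, i.e. diluted by the factor f.
def diluted_lag(tau_rev, f_reverb):
    """Observed lag (s) for intrinsic delay tau_rev (s) and
    reverberating flux fraction f_reverb (between 0 and 1)."""
    return f_reverb * tau_rev

print(diluted_lag(1000.0, 0.2))  # a 1 ks intrinsic delay with a 20 per
                                 # cent reverberating fraction -> ~200 s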
However, at higher energies, the relative contribution of delayed-to-direct emission increases, predicting a strong reverberation signal in the hard X-ray band; i.e. peaking in the NuSTAR bandpass at energies > 10 keV (e.g. Turner et al. 2017). Alternatively, the soft lags are often discussed in terms of the reverberation signal from reflection of the primary X-rays off material very close to the central black hole (e.g. Zoghbi et al. 2011; Fabian et al. 2013). Also see Uttley et al. (2014) for a review.

Here, we report on a large series of XMM-Newton + NuSTAR observations of NGC 3227, a nearby Seyfert 1.5 galaxy at a redshift of z = 0.003859 (de Vaucouleurs et al. 1991) and an estimated luminosity distance of 20.3 Mpc (Mould et al. 2000). The source is very bright in the X-ray band, with a typical observed flux of F_0.3−10 ∼ 6 × 10^−11 erg cm^−2 s^−1 from 0.3-10 keV, and has an estimated black hole mass of M_BH = 5.96^{+1.23}_{−1.36} × 10^6 M_⊙ (Bentz & Katz 2015). The source has been known to exhibit both bright, unabsorbed and low, absorbed states based on previous X-ray analyses (e.g. Lamer et al. 2003; Markowitz et al. 2009, 2014; Rivers et al. 2011), with an outflowing warm absorber (Beuchert et al. 2015). Interestingly, the source is occasionally observed to undergo variable absorption events, apparently due to clouds of gas occulting across the line of sight (e.g. Lamer et al. 2003; Beuchert et al. 2015).

This is the second in a series of papers on this co-ordinated XMM-Newton + NuSTAR campaign. Previously, in Turner et al. (2018), we reported on a rapid occultation event occurring towards the end of the observing campaign. We detected significant spectral hardening of the source coupled with a measurable increase in the depth of the unresolved transition array (UTA), thanks to the high-resolution Reflection Grating Spectrometer (RGS) on-board XMM-Newton. We find that this transiting event, which lasts for roughly one day, comprises a mildly-ionized cloud of gas with a line-of-sight column density of N_H ∼ 5 × 10^22 cm^−2 that occults ∼60 per cent of the continuum source. We infer the likely location of this cloud to be the inner broad-line region. In this paper, we focus on the X-ray variability properties of NGC 3227, exploring time series, hardness ratios, primary spectral variations, the energy-dependent rms variability, the PSD, and Fourier time lags.

OBSERVATIONS AND DATA REDUCTION

NGC 3227 was observed six times by XMM-Newton (Jansen et al. 2001) over a month-long period from 2016-11-09 to 2016-12-09. The six observations were co-ordinated with simultaneous NuSTAR (Harrison et al. 2013) observations. There was also an additional, seventh NuSTAR observation roughly six weeks after the joint campaign, on 2017-01-21, co-ordinated with a Chandra GTO observation. An observation log is provided in Table 1. Below, we describe the data reduction procedures, which were performed using HEASOFT. For the EPIC-pn, source events were extracted from circular regions, selecting single- and double-pixel events (i.e. PATTERN ≤ 4). Meanwhile, background events were extracted from larger circular regions away from the central source and avoiding the edges of the CCD. Generally, the background level was relatively stable, although some periods of background flaring were apparent. These typically occurred towards the beginning or end of an individual observation, usually lasting for just a few ks, and so were filtered out for the subsequent analysis. The total net exposure after background filtering and accounting for the detector dead time was ∼310 ks.
The average 0.3-10 keV EPIC-pn count rate across all six observations was ∼8.6 ct s^−1, corresponding to a time-averaged source flux of F_0.3−10 = 5.7 × 10^−11 erg cm^−2 s^−1. Meanwhile, the background count rate was found to be low, at < 1 per cent of the source rate. See Table 1 for a summary of the observations and their observed broad-band count rates and fluxes.

Optical/UV Monitor

The OM was operated in "imaging" mode using the UVW1 filter, which has a peak effective wavelength of 2 910 Å. A series of images was acquired in each observation for the purpose of UV monitoring. A total of 95 images were acquired across the six XMM-Newton observations, with a total OM exposure time of 375 ks (typically ∼3-4 ks per individual exposure). All data were processed using the OMICHAIN task within SAS. This takes into account all calibration requirements and performs aperture photometry on the list of detected sources, providing count rates that are corrected for dead time and coincidence losses.

NuSTAR

NuSTAR is comprised of two Focal Plane Modules (FPMs): FPMA and FPMB, providing continuous coverage over a broad bandpass from 3-78 keV. The seven NuSTAR observations of NGC 3227 covered a total duration of ∼300 ks, with a typical duration of ∼40-50 ks per observation. To extract spectral products, we used the NUPIPELINE and NUPRODUCTS scripts within HEASOFT, using the latest version of the calibration database (v20180419). These were cleaned by applying standard screening criteria, such as filtering out passages through the South Atlantic Anomaly. For each FPM, spectral products and light curves were extracted from circular source regions with a radius of 70 arcsec. Meanwhile, background products were extracted using 75 arcsec circular regions separate from the source and away from the edges of the detector. The time-averaged FPMA+FPMB 3-78 keV background-subtracted count rate was ∼3.36 ct s^−1, corresponding to an average observed broad-band flux of F_3−78 = 1.28 × 10^−10 erg cm^−2 s^−1. See Table 1 for a summary of the observations.

RESULTS

Here, we detail our main results from an analysis of the variability of NGC 3227. All data were fitted in XSPEC v.12.9.1 (Arnaud 1996). All errors are quoted at the 90 per cent confidence level, unless otherwise stated.

Light curves

In this section, we show the XMM-Newton and NuSTAR light curves of NGC 3227. Fig. 1 (upper panel) shows the concatenated, background-subtracted XMM-Newton EPIC-pn light curve from 0.3-10 keV in 1 ks bins. This is corrected for exposure losses, and any telemetry drop-outs are interpolated over where necessary. Strong variability is clearly visible, with the count rate varying by more than a factor of four over the course of the campaign. The variability is observed to be rapid, even on within-orbit time-scales, with the source flux routinely doubling in just tens of ks. We can quantify the intrinsic source variance by calculating the 'excess variance' (Nandra et al. 1997; Edelson et al. 2002), which takes into account the measurement uncertainties that also contribute to the total observed variance. The excess variance is σ²_XS = S² − ⟨σ²_err⟩, where S² is the sample variance and ⟨σ²_err⟩ = (1/N) Σ_{i=1}^{N} σ²_err,i is the mean square error. The observed value is represented by x_i, its arithmetic mean by x̄, and the uncertainty on each individual measurement by σ_err. From this, we can calculate the fractional root mean square variability, F_var = √(σ²_XS / x̄²), which we can express as a percentage.
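For concreteness, a minimal Python sketch of the excess-variance estimator just defined; the array names are hypothetical and no bias corrections beyond the mean-square-error subtraction are applied.

import numpy as np

def excess_variance(rate, rate_err):
    """Excess variance and fractional rms (Fvar) of a binned light
    curve, following the estimator described in the text
    (Nandra et al. 1997; Edelson et al. 2002)."""
    rate = np.asarray(rate, dtype=float)
    err = np.asarray(rate_err, dtype=float)
    s2 = rate.var(ddof=1)            # sample variance, S^2
    mse = np.mean(err ** 2)          # mean square error
    sigma_xs2 = s2 - mse             # excess variance
    fvar = np.sqrt(max(sigma_xs2, 0.0) / rate.mean() ** 2)
    return sigma_xs2, fvar

# e.g. for a hypothetical 1 ks-binned light curve:
# sigma_xs2, fvar = excess_variance(rate, rate_err)
# print(100 * fvar, 'per cent')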
Note that this statistic is dependent upon the binning time-scale, which, in this case, is 1 ks. In the case of our broad-band 0.3-10 keV X-ray light curve, the fractional variability is high, at 33 per cent over the course of the campaign. In terms of within-observation variability, the fractional variability remains high, but varies across the six observations, with values of 27, 34, 40, 16, 16, and 30 per cent, respectively. This can be visualized in Fig. 1, where NGC 3227 is more variable in some observations than others.

Superimposed on the X-ray light curve in Fig. 1 is the OM UVW1 (2 910 Å) light curve (blue). Each datapoint represents a single OM exposure. The average observed corrected UV count rate is 22.9 ct s^−1. We can obtain a rough estimate of the flux in this band using the conversion factors calculated by the SAS team. This provides us with an estimated 2 910 Å average flux of 1.1 × 10^−14 erg cm^−2 s^−1 Å^−1. It is clear from the plot that the UV data are highly variable, even within the time-scale of a single observation (< 1 day), particularly during obs 2. Quantitatively, the fractional variability is measured to be ∼4 per cent (on time-scales of ∼3-4 ks). There is no clear long-term correlation between the UV data and the X-rays, although a smooth transition is observed in the UV light curve as it appears to go through a pronounced dip in the middle of the campaign. Curiously, some short-term, quasi-instantaneous correlated variability is observed - most notably the sharp emission flare during obs 3 (∼215 ks through the campaign), which is prominent in both the X-ray and UV light curves. In terms of whether these UV variations could be energetically reproduced via X-ray reprocessing, we note that the observed X-ray variations are ∼25 times larger than those in the UV band, while the X-ray luminosity is ∼3 times weaker (at 1 keV). As such, the larger X-ray variations would be sufficient to drive the observed smaller variations in the higher-luminosity UV band.

In the lower panel of Fig. 1, we show the evolution of the hardness ratio of NGC 3227 across the course of the campaign. We use the definition of the fractional hardness ratio: HR = (H − S)/(H + S), where H is the hard band and S is the soft band (see Park et al. 2006 for further discussion). In this instance, H covers 1-10 keV while S covers the 0.3-1 keV band.

Table 1. Observation log of the XMM-Newton EPIC-pn and NuSTAR observations of NGC 3227. Net exposure times after filtering and accounting for 'dead time' are provided in parentheses after the total durations. The broad-band EPIC-pn and FPM count rates and fluxes are quoted from 0.3-10 and 3-78 keV, respectively.

Variability in the hardness ratio of the source is clearly visible. Most notably, the strong flare in obs 3 is accompanied by a sharp drop in the hardness ratio, indicating that the flare dominates in the soft band. Additionally, significant evolution of the hardness ratio can be observed in obs 6, as the source gradually softens while the source flux brightens over the ∼80 ks period. In Fig. 2, we plot the hardness ratios against the broad-band 0.3-10 keV count rate in 456 bins of 1 ks across all six observations. A strong correlation is observed (Pearson correlation coefficient: r = −0.765; p < 10^−5), implying that the source becomes softer when the flux is higher. Fitting a simple linear model to the data returns values of the slope, a = −0.016 ± 0.001, and the offset, b = 0.356 ± 0.002 (χ²/d.o.f. = 4 229/454).
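A minimal sketch of the hardness-ratio calculation and the simple straight-line fit quoted above; 'hard_rate', 'soft_rate', 'total_rate', and the errors are hypothetical 1 ks-binned light curves in the bands defined in the text.

import numpy as np

def hardness_ratio(hard, soft):
    # fractional hardness ratio, HR = (H - S)/(H + S)
    return (hard - soft) / (hard + soft)

def fit_line(x, y, yerr):
    # weighted least-squares straight line, y = a*x + b
    (a, b), cov = np.polyfit(x, y, 1, w=1.0 / np.asarray(yerr), cov=True)
    return (a, b), np.sqrt(np.diag(cov))

# hr = hardness_ratio(hard_rate, soft_rate)
# (a, b), (da, db) = fit_line(total_rate, hr, hr_err)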
However, it is clear from the plot that a different mode of variability was present during obs 6 (magenta), also apparent in the marked evolution of the hardness ratio in Fig. 1. This difference in the shape of the spectrum in obs 6 formed the motivation behind the analysis in Turner et al. (2018), where we show that NGC 3227 appeared to undergo a strong occultation event by a cloud of ionized gas, manifesting itself in a strengthening of the depth of the UTA in the high-resolution RGS data. As such, we fitted obs 1-5 and obs 6 separately, finding significant differences in the hardness-ratio slopes. In the former case, a = −0.013 ± 0.001 and b = 0.329 ± 0.002 (χ²/d.o.f. = 2 212/375). Meanwhile, in the latter case, a = −0.028 ± 0.001 and b = 0.493 ± 0.005 (χ²/d.o.f. = 848/77). These two fits are shown in Fig. 2.

Figure 2. The hardness ratio plotted against the broad-band 0.3-10 keV flux in 1 ks bins. The hard, H, and soft, S, bands are 1-10 and 0.3-1 keV, respectively. The six observations are shown in different colours: chronologically, black, red, green, blue, cyan, magenta, respectively. The two grey lines show simple linear models fitted to the first five observations (dashed) and, separately, the sixth observation (dotted).

We now include the NuSTAR data by creating a concatenated light curve from all seven observations. We combined data from the two FPMs and used orbital bins (96.8 min). This is shown in Fig. 3. We firstly plotted the light curve in the 3-10 keV band, which overlaps with the XMM-Newton EPIC-pn bandpass, which we superimpose using 500 s bins. It is clear from the plot that the NuSTAR and XMM-Newton light curves show near-identical behaviour and are well correlated. We note that the start/stop times of the NuSTAR observations are sometimes offset slightly from XMM-Newton. We also show the much harder, 10-78 keV NuSTAR light curve, which shows similar behaviour. Although the absolute strength of the variability is suppressed, the strength of the fractional variability is similar (F_var = 20 per cent in the 10-78 keV band versus F_var = 23 per cent in the 3-10 keV band).

Spectral decomposition

Here, we investigate the spectral variability of NGC 3227 across the observing campaign. In Fig. 4, we show all six XMM-Newton EPIC-pn and all seven NuSTAR spectra. For the NuSTAR spectra, we combined the FPMA and FPMB data in our plots for clarity. For further clarity, we only show the NuSTAR spectra > 10 keV (below which there is overlap with the EPIC-pn band) and < 50 keV, as the spectra become noisy at the highest energies. The spectra are "fluxed" against a power law with a flat photon index (i.e. Γ = 0). It is clear that all the spectra are hard, with some modest variations in flux, with the bulk of the variability occurring at lower energies while the spectra tend to converge at the highest energies. To visualize the clearest emission and absorption components, we fitted the spectra with a simple power law, absorbed by a neutral Galactic column of N_H = 2.13 × 10^20 cm^−2, in which the measurements of Kalberla et al. (2005) are modified by taking into account the additional effect of molecular hydrogen (see Willingale et al. 2013). The Galactic column was modelled with the TBABS component within XSPEC (Wilms et al. 2000), using the appropriate photoionization cross-sections (Verner et al. 1996). We fitted the spectrum over continuum-dominated bands that are usually free from obvious emission/absorption features: 3.0-5.5 and 7.5-10 keV.
We then extrapolated the fit over the entire 0.3-50 keV bandpass, allowing the photon index and normalization to vary between observations. The spectra are all observed to be hard, with the photon index measurements falling in the range 1.4-1.7. At energies < 2 keV, it is clear that significant absorption and emission components are present. In particular, strong absorption is present in obs 6 (magenta), which is most prominent at ∼0.7-0.9 keV and is due to absorption by the UTA. This is the signature of the occultation event described in Turner et al. (2018). Meanwhile, evidence of emission is apparent at higher energies, most notably in the form of a strong emission line at 6.4 keV due to near-neutral Fe Kα. Some additional emission with respect to the simple power law is also visible > 10 keV, most likely arising from modest Compton reflection.

To investigate the spectral variability, we created high-, medium-, and low-flux broad-band spectra. Here, we split the XMM-Newton EPIC-pn and NuSTAR FPM data into three flux-resolved groups. To achieve this, we firstly defined good time intervals (GTIs) that were common to both satellites, merging them using the MGTIME task. We then used the combined GTI to create a broad-band light curve across the whole campaign in 50 s bins, yielding ∼230 ks worth of common data. We then split this into three separate flux-resolved groups with identical exposure times; i.e. ∼75 ks each of high-, mid-, and low-flux data. (We note that applying cuts according to exposure time can lead to biases; i.e. greater statistical weighting may be applied to the higher-flux spectra due to the larger number of counts. As such, we also created flux-resolved spectra applying cuts such that the total number of counts in each flux-slice was identical. In this case, cuts were applied at > 10.85, 8.39−10.85, and < 8.39 ct s^−1, with respective exposure times of 51.5, 67.7, and 106.2 ks. However, the results were consistent with those described in Section 3.2. Additionally, we also tried a simpler approach, by using the highest-flux observation (obs 5) and the lowest-flux observation (obs 2) to create difference spectra. Again, the results were consistent.) In the 0.3-10 keV EPIC-pn case (as per Fig. 1), the cuts were applied at > 9.97, 6.90−9.97, and < 6.90 ct s^−1. This allowed us to create high-, mid-, and low-flux GTIs, which we then applied to the respective processing pipelines to create high-, mid-, and low-flux spectra for each observation, as sketched below. The flux-resolved XMM-Newton and NuSTAR spectra were then combined using the MATHPHA task. Meanwhile, response files were created by averaging across the six observations using the ADDRMF and ADDARF tasks, weighting them by the appropriate exposure times. This resulted in three XMM-Newton EPIC-pn and NuSTAR FPM spectra in three separate flux bands, which we binned such that there were > 25 ct bin^−1. These spectra were then fitted within XSPEC with a broad-band model based on the spectral fitting described in Turner et al. (2018). The spectra are shown in Fig. 5 and are found to have observed broad-band 0.3-50 keV fluxes, from high to low flux, of 1.36^{+0.02}_{−0.01} × 10^−10, 1.14^{+0.01}_{−0.02} × 10^−10, and 9.51^{+0.02}_{−0.02} × 10^−11 erg cm^−2 s^−1. NGC 3227 is clearly more variable at lower energies, with the X-rays varying by up to ∼80 per cent < 10 keV compared to < 20 per cent > 10 keV.
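As an illustration of the flux-resolved splitting described above, a minimal sketch: with equal-length time bins, equal exposure per slice corresponds to the 1/3 and 2/3 quantiles of the binned count-rate distribution. The boolean masks stand in for the high-/mid-/low-flux GTIs; 'rate' is a hypothetical 50 s-binned broad-band light curve.

import numpy as np

def flux_slices(rate):
    """Split a binned light curve into three equal-exposure flux
    slices; the returned masks play the role of flux-resolved GTIs."""
    rate = np.asarray(rate, dtype=float)
    lo, hi = np.quantile(rate, [1 / 3, 2 / 3])
    masks = {'low': rate < lo,
             'mid': (rate >= lo) & (rate < hi),
             'high': rate >= hi}
    return masks, (lo, hi)

# masks, (lo, hi) = flux_slices(rate)   # cf. the 6.90/9.97 ct/s cuts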
Figure 4. The EPIC-pn spectra are shown from 0.3-10 keV, while the NuSTAR spectra are shown > 10 keV only for clarity, except for obs 7, which is shown from 3-50 keV. The seven observations are shown, chronologically, in black, red, green, blue, cyan, magenta, and orange, respectively. Lower panel: the ratio to an absorbed power law fitted in a continuum-dominated band from 3.0-5.5 and 7.5-10 keV and extrapolated over the entire bandpass. See Section 3.2 for details.

Figure 5. Upper panel: the high- (black), mid- (red), and low- (green) flux spectra of NGC 3227, obtained by splitting the data into three flux-resolved segments. The data are fitted with the model described in Section 3.2. The contributions of the blackbody and neutral reflection components are shown by the dotted and dashed lines, respectively. Lower panel: the ratio of the residuals to the model. Note that all data are binned up and that the NuSTAR data are only plotted > 10 keV for clarity. Meanwhile, the EPIC-pn data are ignored from 1.7-2.1 keV due to calibration uncertainties around the Si K edge in the detector. See Section 3.2 for details.

Note that, while we fitted the NuSTAR FPMA and FPMB spectra separately, we combined them for the purposes of plotting. Additionally, while the NuSTAR data were fitted > 3 keV, we only show them > 10 keV for clarity. The model consists of: (i) a primary power-law continuum, (ii) a PEXMON component (Nandra et al. 2007) to model neutral reflection and associated Fe Kα emission, (iii) a high-energy cut-off, (iv) a blackbody component to model the soft excess, and (v) additional multiplicative components to account for the warm absorber. The warm absorber in NGC 3227 consists of three separate zones, which we modelled with version 2.41 of the XSTAR photoionization code (Kallman & Bautista 2001; Kallman et al. 2004). Each zone is characterized by its column density, N_H, ionization parameter, ξ, and outflow velocity, v_out. The best-fitting values of the ionization parameter and outflow velocity across the three zones (zones 1, 2, 3), respectively, are as follows: log ξ = −0.7, 1.4, and 2.9, and v_out = 100, 250, and 1 300 km s^−1. These come from our analysis of the RGS data (see Turner et al. 2018) and were fixed in the fit and tied across all three spectra, while the column densities were allowed to vary. We also allowed the respective normalizations and photon indices to vary between spectra. The best-fitting photon indices (tied between the primary power law and the reflection component) were found to be Γ = 1.72 ± 0.01, 1.60 ± 0.01, and 1.40 ± 0.02 for the high-, medium-, and low-flux spectra, respectively. This is consistent with the steeper-when-brighter behaviour described in Section 3.1. The blackbody component has a best-fitting temperature of kT = 0.09 ± 0.02 keV, while the e-folding energy of the high-energy cut-off is found to be E_cut = 300 ± 80 keV. Meanwhile, the best-fitting values of the column densities suggest weak variability in the strength of the warm absorber (given the measurement uncertainties) across all three flux-selected spectra (high-, medium-, and low-flux, respectively); the best-fitting values are listed in Table 2.

An alternative test was performed to search for variability of the warm absorber in terms of its ionization parameter instead of column density. Such behaviour has been observed in time-resolved RGS analysis of other AGN (e.g. NGC 4051; Krongold et al. 2007), where log ξ is seen to correlate with the luminosity of the irradiating power law on short time-scales.
One might expect to observe such behaviour if the warm absorber is ionized by the central AGN. In this instance, we fixed the column densities at their best-fitting values from our RGS analysis in Turner et al. (2018). These are as follows: N_H = 2.1 × 10^21, 8.3 × 10^20, and 4.4 × 10^21 cm^−2 for the three zones, respectively. In this case, we find no evidence of flux-dependent variations in the ionization parameter, with the best-fitting values remaining consistent within the measurement uncertainties across all three spectra. Moreover, the fit is worse by ∆χ² = 43 than in the case where the column densities are allowed to vary.

We also performed a test to search for any variability of the reflection component by measuring its (unabsorbed) flux across the three flux-selected spectra. We find that the broad-band flux of the PEXMON component is 1.58^{+0.08}_{−0.09} × 10^−11, 1.38^{+0.10}_{−0.09} × 10^−11, and 1.24^{+0.10}_{−0.10} × 10^−11 erg cm^−2 s^−1 in the high-, mid-, and low-flux spectra, respectively. This corresponds to modest variations of around ∼20 per cent, which is significantly weaker than the variability of the primary power law and associated soft excess (see the values reported in Table 2). Its overall contribution to the model is also found to be of moderate strength, at ∼10 per cent from 0.3-50 keV (and ∼20 per cent > 10 keV).

Finally, we tested to see whether there was any variability in the component of Fe Kα emission at ∼6.4 keV. Here, we took the baseline model described above but replaced the PEXMON component with a PEXRAV component (Magdziarz & Zdziarski 1995), which only models the reflection continuum and not the associated emission line. We then parametrized the emission line at 6.4 keV independently with a Gaussian. We find that the centroid energy, E_c, and width, σ, of the line do not vary with flux, with best-fitting values of E_c = 6.43 ± 0.01 keV and σ = 70 ± 10 eV, while the equivalent width of the line is found to be, from high- to mid- to low-flux, respectively: EW = 0.10 ± 0.01, 0.13 ± 0.01, and 0.16 ± 0.02 keV.

Difference spectra

To further investigate the spectral variability, we computed broad-band difference spectra, based on the flux-selected spectra defined above in Section 3.2. Three difference spectra were created by subtracting (i) low-flux data from high-flux data, (ii) mid-flux data from high-flux data, and (iii) low-flux data from mid-flux data, across the broad 0.3-50 keV bandpass. These were then fitted within XSPEC, with the results shown in Fig. 6. Again, we fitted the NuSTAR FPMA and FPMB spectra separately, but combined them for the purposes of plotting. Likewise, the NuSTAR data were fitted > 3 keV, but are only shown > 10 keV for clarity. We applied a simple absorbed power-law model (TBABS × PL) to the spectra, tying the photon index, Γ, and column density, N_H, between the XMM-Newton and NuSTAR spectra in each flux-resolved case. We allowed the power-law normalizations to vary to account for variations in the cross-normalization. The best-fitting power-law slopes are (i) Γ = 1.81 ± 0.01, (ii) Γ = 1.91 ± 0.02, and (iii) Γ = 1.70 ± 0.02, while the neutral absorber column density lies in the range 3−5 × 10^20 cm^−2. The overall fit statistic is χ²/d.o.f. = 10 018/7 427, with clear residuals in the soft X-ray band. Indeed, the fit is very poor < 3 keV (χ²/d.o.f. = 3 579/1 371), although a visual examination of the deviations suggests that the spectral variability remains similar across all the flux ranges afforded by this observing campaign.
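A hypothetical PyXspec sketch of the simple absorbed power-law fit applied to the difference spectra (TBABS × PL); the file name and starting values are placeholders, and the blackbody and warm-absorber components added in the next step are omitted.

# Hypothetical PyXspec sketch; 'diff_high_low.pha' is a placeholder
# for a difference spectrum produced with MATHPHA.
from xspec import AllData, Fit, Model

AllData("diff_high_low.pha")
m = Model("tbabs*powerlaw")
m.TBabs.nH = 0.04            # starting value, in units of 1e22 cm^-2
m.powerlaw.PhoIndex = 1.8    # starting value
Fit.statMethod = "chi"
Fit.perform()
print(Fit.statistic, Fit.dof)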
To account for the residuals in the soft band, we then included additional soft components, as required by broad-band spectral modelling (see Turner et al. 2018). We firstly included a blackbody component to model the soft excess. Tying this component between all three difference spectra yielded a best-fitting temperature of kT = 0.09 ± 0.01 keV, with no requirement for the temperature to vary between spectra. We then also included additional multiplicative components to account for the warm absorber (see Section 3.2).

Table 2. Table showing the best-fitting parameters of the variable components fitted to the flux-resolved spectra described in Section 3.2. All spectra are fitted from 0.3-50 keV. All column densities are given in units of cm^−2 and all fluxes are given in units of erg cm^−2 s^−1.

Figure 6. Upper panel: the difference spectra of NGC 3227, obtained by splitting the data into three flux-resolved segments. The XMM-Newton EPIC-pn and NuSTAR FPMA+FPMB spectra are shown in three cases: high−low (black), high−mid (red), and mid−low (green). Middle panel: the σ residuals to a simple fit consisting of an absorbed power-law model. Lower panel: the σ residuals to a broad-band fit including three warm absorber zones. Note that all data are binned up and that the NuSTAR data are only plotted > 10 keV for clarity. Meanwhile, the EPIC-pn data are ignored from 1.7-2.1 keV due to calibration uncertainties around the Si K edge in the detector. See Section 3.2.1 for details.

Again, the ionization parameters and outflow velocities were fixed in the fit and tied across all three spectra, while the column densities were allowed to vary. The best-fitting values across all three difference spectra (high − low, high − mid, and mid − low, respectively) are listed in Table 3.

Table 3. The best-fitting variable parameters from the fits to the broad-band 0.3-50 keV flux-resolved difference spectra described in Section 3.2.1. All column densities are given in units of 10^21 cm^−2.

This significantly improves the fit in the soft band (compared with χ²/d.o.f. = 3 579/1 371 previously). We note that the photon indices are steeper than those inferred from the time-averaged broad-band spectrum. NGC 3227, like other similar sources, exhibits typical softer-when-brighter behaviour, which can naturally lead to a steeper photon index in the difference spectrum than the average spectrum; for example, were the power law to pivot, we would observe greater changes in flux at energies further away from the pivot point. We also note that additional components are likely to contribute to the enhanced variability in the soft band. For example, the spectrum requires an additional soft-band component, such as a Comptonized disc blackbody, which can add to the variability at lower energies. Additionally, any inter-orbital variations of the warm absorber may also enhance the low-energy variability at low frequencies. The residuals to this fit are shown in the lower panel of Fig. 6 and the best-fitting variable parameters are provided in Table 3. As before, we tested to see if variations in the ionization parameter could instead account for the spectral changes, but we again found that log ξ remains constant within the measurement uncertainties across all three difference spectra.

We did also create difference spectra by filtering on time as opposed to flux. A simple broad-band difference spectrum was created by subtracting the lowest-flux obs 2 EPIC-pn and FPM spectra from the highest-flux obs 5 spectra.
We again applied the model described above, finding that it provides a similarly good fit from 0.3-50 keV. The best-fitting values are largely consistent with those obtained from the flux-selected difference spectra; i.e. Γ = 2.04 ± 0.02, kT = 0.10 ± 0.01 keV, and N_H = 1.54^{+0.09}_{−0.10} × 10^21, 3.96^{+0.67}_{−0.60} × 10^21, and < 3.20 × 10^21 cm^−2 for the three warm absorber zones, respectively. The overall fit statistic is χ²/d.o.f. = 2 280/2 249.

The rms spectrum

In this section, we compute the 'rms spectrum', which is the rms amplitude of variability as a function of energy (e.g. Vaughan et al. 2003). We used the broad-band XMM-Newton EPIC-pn data, extracting light curves of equal segment-length in 20 energy bands. These were equally spaced in log(E) from 0.3-10 keV. The fractional excess variance, as given by σ²_XS/x̄², was then calculated for each of the 20 energy bands and averaged over each segment.

Figure 7. The fractional rms spectra of NGC 3227 using XMM-Newton EPIC-pn data. Upper panel: the rms spectra are calculated in three frequency bands: 5 × 10^−4 − 5 × 10^−3 Hz (red circles), 5 × 10^−5 − 5 × 10^−4 Hz (green diamonds), and ∼1.4 × 10^−5 − 5 × 10^−5 Hz (blue squares). The total averaged rms spectrum obtained using 100 s time bins and 60 energy bins is shown in black. Lower panel: the rms spectra for each of the six individual XMM-Newton observations using 1 ks time bins. See Section 3.3 for details.

To calculate the fractional rms (or fractional variability, F_var), we then took the square root of the excess variance and averaged this over all XMM-Newton observations from 2016. We investigated the fractional rms behaviour on within-observation time-scales by computing the rms spectrum over three separate frequency bands, defined by the time-bin size, ∆t, and the segment length. Note that the upper frequency bound is set by the Nyquist frequency, ν_N = 1/(2∆t). Our three frequency bands, from high to low, are: 5 × 10^−4 − 5 × 10^−3 Hz (∆t = 100 s; 2 ks segments), 5 × 10^−5 − 5 × 10^−4 Hz (∆t = 1 ks; 20 ks segments), and ∼1.4 × 10^−5 − 5 × 10^−5 Hz (∆t = 10 ks; full light curves). These are shown in red, green, and blue, respectively, in the upper panel of Fig. 7. The uncertainties on each measurement are the standard error on the mean obtained from averaging over the six observations. The larger uncertainties at lower frequencies arise from the light curves having a higher variance on longer time-scales. At the highest frequencies, the fractional rms variability is clearly low and flat across the whole bandpass. At these frequencies, the variability is likely dominated by short-term flickering of the source and shows little energy dependence. There is a hint of a drop in rms in the ∼6-7 keV band, which may be representative of a component of Fe K emission that is constant on these time-scales, thus diluting the rms variability. As we go to lower frequencies, however, the rms variability becomes significantly stronger and the spectrum steepens, with enhanced variability towards lower energies. This suggests that the soft X-ray band largely dominates the observed variability. This is supported by the steep photon index (Γ ∼ 2.1) and component of soft excess (kT ∼ 0.1 keV) required by the difference spectra described in Section 3.2.1 (also see Fig. 6). Curiously, the low-frequency rms spectrum also becomes much flatter at the lowest energies; i.e. < 1 keV. Such a flat slope may be consistent with a component that varies only in flux but not spectral shape, perhaps such as a component of soft excess - e.g. a Comptonized disc blackbody or similar - which drives the rms spectrum at the lowest X-ray energies. We discuss the rms behaviour of the source further in Section 4.2.
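A minimal, self-contained sketch of the rms-spectrum computation described above: the excess-variance estimator is applied per segment to light curves in each energy band and then averaged. 'band_lcs' (mapping a band label to rate/error arrays) and 'seg_len' (segment length in bins) are hypothetical inputs.

import numpy as np

def fvar(rate, err):
    # fractional rms from the excess variance (see Section 3.1)
    xs = rate.var(ddof=1) - np.mean(err ** 2)
    return np.sqrt(max(xs, 0.0)) / rate.mean()

def rms_spectrum(band_lcs, seg_len):
    """Mean Fvar per energy band, averaged over equal-length
    segments, with the standard error on the mean."""
    spec = {}
    for band, (rate, err) in band_lcs.items():
        n_seg = len(rate) // seg_len
        vals = [fvar(rate[i * seg_len:(i + 1) * seg_len],
                     err[i * seg_len:(i + 1) * seg_len])
                for i in range(n_seg)]
        spec[band] = (np.mean(vals), np.std(vals) / np.sqrt(n_seg))
    return spec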
We also compute the total averaged rms spectrum using all of the data across the XMM-Newton campaign with a time resolution of 100 s. Due to the enhanced signal from using all of the data, we can use a higher energy resolution, and so we compute this rms spectrum over 60 energy bins [again, equally spaced in log(E)]. The errors are given by equation B2 of Vaughan et al. (2003). Again, the shape of the rms spectrum is very steep, but also generally smooth and not particularly dominated by fine structure. However, we again observe an apparent drop in rms in the ∼6-7 keV band, likely corresponding to the Fe K emission complex.

We also investigated the rms behaviour on the time-scale of individual observations. Here, we used a timing resolution of 1 ks. Each of the six XMM-Newton observations is shown in the lower panel of Fig. 7. In general, the rms spectra are steep, again showing enhanced variability in the soft band, with a general flattening of the slope at the lowest energies. It is clear that some observations are more variable than others, particularly at lower energies. For example, the steepest rms spectrum arises during obs 3 (green), and this is most likely due to the emission flare towards the end of that observation, which is dominated by the soft band (see Fig. 1). Slightly divergent behaviour can be observed in obs 6 (magenta), as the rms spectrum appears to have a softer slope. A different mode of variability is also evident during this observation in Fig. 2. This is again consistent with the occultation event observed during this observation from a cloud of mildly ionized gas passing through the line of sight, as presented in Turner et al. (2018), which predominantly impacts the X-ray spectrum at low energies.

The rms-flux relation

The 'rms-flux' relation shows that the rms (i.e. the absolute root-mean-square) amplitude of variability linearly scales with the X-ray flux of a source. It is commonly observed in the X-ray variability of AGN and XRBs, but also in ultraluminous X-ray sources (ULXs) and cataclysmic variables (CVs), and, in essence, shows that sources typically display stronger variability during periods when they are brighter (e.g. Uttley & McHardy 2001; Gleissner et al. 2004; Heil & Vaughan 2010). To investigate this in NGC 3227, we used all EPIC-pn data from 2016 and split the data into 2 ks light-curve segments, using a timing resolution of ∆t = 50 s. For each segment, we computed both the periodogram and the mean count rate. By averaging together these periodograms over 8 count-rate bins, we then calculated the flux-dependent PSD. Finally, we calculated the rms in each bin by integrating under the average PSD and subtracting the Poisson noise before taking the square root. We calculated the rms-flux relation across three energy bands: the full 0.3-10 keV band, a soft 0.3-1 keV band, and a hard 1-10 keV band. (We note that the broad-band 0.3-10 keV rms-flux relation is lower than the sub-bands that make it up. This may imply some degree of anticorrelation between the soft and hard bands.) These are shown in Fig. 8. To test for linearity in the relationship, we fitted a straight line to each of the datasets. In the broad-band 0.3-10 keV case, this fits the data well, with a slope a = 0.056 ± 0.004 and offset b = −0.134 ± 0.032 (χ²/d.o.f. = 8.8/6). The slope corresponds to 5.6 per cent rms on time-scales of 0.1-2 ks.
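A minimal sketch of the rms-flux construction just described: per-segment periodograms in (rms/mean)² units are averaged within count-rate bins and integrated (after subtracting an assumed Poisson noise level) to give the rms in each flux bin. All inputs are hypothetical.

import numpy as np

def rms_flux(rate, dt, seg_len, n_bins=8, p_noise=0.0):
    """Return (mean rate, absolute rms) per flux bin from
    segment-averaged periodograms."""
    freq = np.fft.rfftfreq(seg_len, d=dt)[1:]
    n_seg = len(rate) // seg_len
    mu, pows = [], []
    for i in range(n_seg):
        seg = np.asarray(rate[i * seg_len:(i + 1) * seg_len], float)
        m = seg.mean()
        # periodogram in fractional (rms/mean)^2 units
        p = 2 * dt * np.abs(np.fft.rfft(seg - m)[1:]) ** 2 / (seg_len * m ** 2)
        mu.append(m); pows.append(p)
    mu, pows = np.array(mu), np.array(pows)
    edges = np.quantile(mu, np.linspace(0, 1, n_bins + 1))
    out = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (mu >= lo) & (mu <= hi)
        psd = pows[sel].mean(axis=0) - p_noise
        frac_var = np.trapz(np.clip(psd, 0, None), freq)
        out.append((mu[sel].mean(), np.sqrt(frac_var) * mu[sel].mean()))
    return np.array(out)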
The relationship appears to roughly hold over a factor of > 3 in flux. To estimate the 90 per cent uncertainty on this linear model, we randomly generated 1 000 models from the parameter distribution using the best-fitting values and the covariance matrix and used the 95 and 5 per cent rms values. The 90 per cent confidence band is enclosed by the grey dashed lines in Fig. 8. Fitting a similar linear model to the soft band returns best-fitting parameters of a = 0.064 ± 0.006 and b = −0.062 ± 0.019 (χ²/d.o.f. = 11.2/6) while, for the hard band, these values are a = 0.059 ± 0.006 and b = −0.100 ± 0.028 (χ²/d.o.f. = 5.4/6). So it appears that the slope is recovered and is consistent across the soft and hard bands. We note that, in the case of each rms-flux relation, the point at which the relation intercepts the x-axis effectively corresponds to a component of constant emission in that given energy band. As such, if constant components appear in both the soft and hard bands, they should co-add to reproduce the broad-band component of constant emission derived from the full-band light curve. Here, the x-intercepts in the soft and hard bands correspond to 0.97 ± 0.31 and 1.70 ± 0.50 ct s^−1, respectively, while in the full 0.3-10 keV band, this value is 2.39 ± 0.60 ct s^−1. Combining the soft- and hard-band components yields a value of 2.67 ± 0.59 ct s^−1, consistent with the full-band rms-flux relation. Finally, we investigated the rms-flux behaviour within each of the individual observations and, while the relation is still positive, the relationship is difficult to constrain due to the smaller number of available segments and the smaller range in flux. Similarly, by calculating and plotting the rms against the averaged flux from each individual observation, we can also largely recover the positive relation on longer time-scales.

The power spectrum

Here, we investigate the PSD of NGC 3227. This provides an estimate of the power of the observed variability and its dependence on temporal frequency. EPIC-pn light curves were extracted for all six XMM-Newton observations of NGC 3227 in 20 s time bins. Then, by computing simple periodograms from each orbit in units of (rms/mean)² (Priestley 1981; Percival & Walden 1993), we were able to estimate the PSD down to frequencies of ∼10^−5 Hz (see Fig. 9). The "raw" periodograms were fitted within XSPEC, initially with a simple model comprising a power law plus a constant:

P(ν) = N ν^(−α) + C.   (1)

This model has three free parameters. These are the spectral index, α, the power-law normalization, N, and an additive constant of zero slope, C, which accounts for the Poisson noise, which dominates at high frequencies (i.e. ≳ 10^−4 Hz). We fitted all six observations from 2016 simultaneously. All parameters were tied together between observations except for the normalization of C, which we allowed to vary to account for differing levels of Poisson noise due to changes in the count rate between orbits. To obtain the best-fitting model parameters, we minimized the Whittle statistic, S:

S = 2 Σ_i [ln m_i + y_i / m_i].   (2)

Here, y_i is the observed value of the periodogram and m_i is the spectral density of the model at a given Fourier frequency, ν (see Vaughan 2010; Barret & Vaughan 2012). 90 per cent confidence intervals on each parameter were estimated by finding the set of values for which ∆S = S(θ) − S_min ≤ 2.7 (where the behaviour of ∆S approximates to ∆χ²).
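A minimal sketch of the Whittle-statistic minimization of equations (1) and (2), here using scipy rather than XSPEC; the periodogram arrays are hypothetical.

import numpy as np
from scipy.optimize import minimize

def model_pl(freq, logN, alpha, C):
    # equation (1): power law plus a constant
    return 10.0 ** logN * freq ** (-alpha) + C

def whittle(params, freq, pow_obs, model):
    # equation (2): S = 2 * sum[ln m + y/m]
    m = model(freq, *params)
    return 2.0 * np.sum(np.log(m) + pow_obs / m)

# res = minimize(whittle, x0=[-7.0, 2.0, 1.0],
#                args=(freq, pow_obs, model_pl), method='Nelder-Mead')
# 90 per cent intervals: scan each parameter for Delta S <= 2.7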
Fitting the six observations from 0.3-10 keV with equation (1) returned best-fitting parameters of α = 2.29 ± 0.08 and log(N) = −7.3 ± 0.1. The total number of degrees of freedom was 10 970. These values, along with the measured Whittle statistic, are provided in Table 4. Allowing both N and C to vary between datasets did improve the fit by ∆S = 90, but there was no change to the measured slope. Additionally, allowing the fit to be driven by the low-frequency 'red-noise slope' by truncating the PSDs at 10^−4 Hz returns best-fitting parameters that are consistent with those listed in Table 4.

We also fitted equation (1) to PSDs generated in 'soft' and 'hard' energy bands (0.3-1 and 1-10 keV, respectively). One may expect to observe differences in the PSD at different energies based on the energy-dependence of the rms variability at low and high frequencies, as shown in Fig. 7. For example, it is clear that larger variations are observed between the slow and fast variations in the soft band compared to the hard. Subsequently, we may expect the soft band to have a steeper PSD. However, we found that the PSD does not exhibit any significant energy-dependence, with best-fitting values typically remaining consistent within the uncertainties. As such, these variations are most likely contained within the uncertainties of the PSD fitting. These values are reported in Table 4.

We then searched for evidence of a break (or bend) in the PSD by fitting a bending power law (plus a constant):

P(ν) = N ν^(−α_low) / [1 + (ν/ν_b)^(α_high − α_low)] + C,   (3)

where N is the normalization, ν_b is the bend frequency, and α_low and α_high are the spectral indices below and above the bend, respectively (see González-Martín & Vaughan 2012 for further details, who find that a significant bend is detected in the PSDs of 15 out of their sample of 104 AGN observed with XMM-Newton). Fitting equation (3) to the 0.3-10 keV PSD improves the fit by ∆S = 10 for two additional free parameters, suggesting the marginal detection of a bend. The low-frequency slope, α_low, is consistent with a value of 1, in line with the X-ray PSD slopes observed in other Seyfert galaxies. Meanwhile, the high-frequency slope above the bend is steeper, with α_high = 2.43^{+0.13}_{−0.12}, where the best-fitting bend frequency is ν_b = 3.0^{+2.4}_{−1.9} × 10^−5 Hz, largely consistent with previous results (Uttley & McHardy 2005; González-Martín & Vaughan 2012; Arévalo & Markowitz 2014). Extending this fit to the soft and hard bands, we find consistent results (∆S = 9 and ∆S = 10, respectively), with bend frequencies of ν_b = 2.8^{+2.0}_{−1.2} × 10^−5 and 3.3^{+1.8}_{−1.4} × 10^−5 Hz and high-frequency slopes of α_high = 2.42^{+0.14}_{−0.13} and 2.44^{+0.17}_{−0.15}, respectively. Again, the bend detection is marginal and we find no evidence of any energy-dependence of the PSD. The best-fitting values are summarized in Table 4 and the bending-power-law fit is shown in Fig. 9.

In Fourier-based PSD analysis, it is possible that the results may be affected by biases. One primary form of bias is 'aliasing'. However, this has a negligible effect on the data analysed here, as they are contiguously sampled. The second primary form of bias is 'leakage'. See Uttley, McHardy & Papadakis (2002) and González-Martín & Vaughan (2012), and references therein, for detailed discussion of these biases. In terms of leakage, this may be significant when the PSD slope is intrinsically steep (e.g. α ∼ 2), potentially distorting the periodogram at the lowest observed frequencies (here: ν ∼ 10^−5 Hz). Subsequently, this can reduce the sensitivity to important features, such as quasi-periodic oscillations and high-frequency bends, while also biasing slopes which are intrinsically steep towards α ≈ 2 (see Deeter & Boynton 1982; Uttley, McHardy & Papadakis 2002; González-Martín & Vaughan 2012 for more details). One simple method to reasonably recover accurate PSD spectral indices is 'end-matching' (see Fougere 1985), sketched below. The basic end-matching process essentially consists of subtracting a linear function from the observed light curve(s). The linear trend is defined such that the first datapoint (y_1) and the last datapoint (y_N) are joined. Following subtraction, y_1 = y_N. The mean of the light curve is then restored to its original value.
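A minimal sketch of the end-matching procedure just described; 'lc' is a hypothetical light-curve array.

import numpy as np

def end_match(lc):
    """Subtract the linear trend joining the first and last points,
    then restore the original mean (Fougere 1985)."""
    lc = np.asarray(lc, dtype=float)
    n = len(lc)
    trend = lc[0] + (lc[-1] - lc[0]) * np.arange(n) / (n - 1)
    out = lc - trend
    return out + (lc.mean() - out.mean())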
We end-matched the 0.3-10 keV EPIC-pn light curves for NGC 3227 and re-fitted equations (1) and (3) to the periodograms. However, we find no significant difference in the best-fitting parameters, and the results are consistent with those provided in Table 4.

Figure 9. The 0.3-10 keV PSD of NGC 3227 using all six XMM-Newton EPIC-pn observations (black, red, green, blue, cyan, magenta, respectively). The solid line represents the best-fitting bending power-law model and the horizontal dashed lines represent the respective Poisson noise levels. See Section 3.5 for details.

X-ray time lags

Here, we analyze the X-ray Fourier time lags in NGC 3227. We initially use the XMM-Newton EPIC-pn data, following the methods described in the literature (i.e. Vaughan & Nowak 1997; Nowak et al. 1999; Uttley et al. 2011; Epitropakis & Papadakis 2016). Here, we can compute the cross-spectrum, allowing us to compare the variability in two separate energy bands. The method can briefly be described as follows: (i) split the six EPIC-pn light curves into segments of identical length in two broad energy bands, (ii) compute the discrete Fourier transform for each segment, (iii) combine these, forming auto- and cross-periodograms, averaging over the number of segments. The result is an estimate for the PSD in each band, the coherence between the two bands (see below), and the phase/time lags; a sketch of these steps follows below. We binned up the light curves with ∆t = 100 s and used 50 ks segment lengths, allowing us to access frequencies down to ∼2 × 10^−5 Hz. We define our soft and hard energy bands to be 0.3-1 and 1-5 keV, respectively. The cross-spectral products are shown in Fig. 10, where the auto- and cross-periodograms are averaged over contiguous frequency bins, where each bin spans a factor of ∼1.4 in frequency. Panel (a) shows the PSDs in the two energy bands, while panel (b) shows the coherence. This is calculated from the magnitude of the cross-periodogram (Vaughan & Nowak 1997). Its purpose is to provide a linear measurement of the correlation between the two energy bands. The value of the coherence should fall between 0 and 1, where a value of 0 would mean that there is no correlation between the two bands, whereas a value of 1 would signify perfect coherence (i.e. the observed variations in one band would allow you to perfectly linearly predict the variations in the other band). As such, the coherence is a valuable measurement for assessing the reality of Fourier time lags.

Table 4. The best-fitting parameters of the two models (power-law and bending-power-law) fitted to the PSDs of NGC 3227 in three energy bands: 0.3-10, 0.3-1, and 1-10 keV. The fits are applied to all six XMM-Newton EPIC-pn observations from 2016. See Section 3.5 for details.
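A minimal sketch of steps (i)-(iii) above for contiguous, evenly-sampled light curves; Poisson-noise bias corrections to the coherence are omitted for brevity, and the sign convention is chosen such that a positive lag means the second (harder) band lags the first.

import numpy as np

def cross_spec(lc1, lc2, dt, seg_len):
    """Segment-averaged auto-/cross-periodograms, coherence, and
    phase/time lags between two simultaneous light curves."""
    n_seg = len(lc1) // seg_len
    freq = np.fft.rfftfreq(seg_len, d=dt)[1:]
    p1 = np.zeros(freq.size)
    p2 = np.zeros(freq.size)
    cs = np.zeros(freq.size, dtype=complex)
    for i in range(n_seg):
        s1 = np.fft.rfft(lc1[i * seg_len:(i + 1) * seg_len])[1:]
        s2 = np.fft.rfft(lc2[i * seg_len:(i + 1) * seg_len])[1:]
        p1 += np.abs(s1) ** 2
        p2 += np.abs(s2) ** 2
        cs += np.conj(s1) * s2
    coherence = np.abs(cs) ** 2 / (p1 * p2)   # raw, noise-uncorrected
    phase = -np.angle(cs)                     # positive => lc2 lags lc1
    return freq, coherence, phase, phase / (2 * np.pi * freq)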
In the case of Fig. 10, it is clear that the coherence is high (≳ 0.8) across the broad 10^−5 − 10^−3 Hz frequency range, implying a strong correlation between the two energy bands on long time-scales (i.e. ≳ 1 ks). At higher frequencies (> 10^−3 Hz), the coherence begins to rapidly drop off. This is due to Poisson noise beginning to dominate, as is also clear from the PSD shown in Fig. 9. Then, in panels (c) and (d), we show the frequency-dependence of the phase lags, φ, and time lags, τ, respectively. Note that the time lags are related to the phase lags by τ = φ/(2πν). At low frequencies, a significant hard lag is observed, with the 1-5 keV emission lagging behind the softer 0.3-1 keV emission with time delays of up to ∼250 s. Then, as we increase in frequency, the hard lag falls off until a negative soft lag is observed at ∼6-8 × 10^−4 Hz, with a measured time delay of τ = −70 ± 30 s.

We make an attempt to assess the robustness of the soft lag measurement by following the method described in De Marco et al. (2013) and Timmer & König (1995). Here, we employ extensive Monte Carlo simulations in order to test the reliability of the lag measurement against statistical fluctuations arising from Poisson/red noise. Based on our fitting of the underlying PSD with a bending power law (as in Section 3.5), we simulated 1 000 pairs of stochastic light curves in the 0.3-1 and 1-5 keV energy bands. These were scaled to the mean count rates of the observed light curves in the same bands and were produced with the same background rates and levels of Poisson noise. Then, for each pair of simulated light curves, we computed cross-spectral products, assuming zero phase lag (i.e. φ = 0), using the same time sampling (∆t = 100 s), segment length (50 ks), frequency-binning (factor of 1.4), and light-curve length as we use with our real data. As such, in our simulated cross-spectral products, we can assume that any frequency-dependent time delay arises from spurious statistical fluctuations. Then, to test the significance of the soft lag we see in the real data, we follow the technique described in De Marco et al. (2013). Here, we define a 'sliding-frequency' window. This contains the same number of consecutive frequency bins, N_w, as the observed lag profile. In this instance, N_w = 1. We then compute a figure of merit [χ = (τ/σ_τ)²] at each step over the frequency range below those which are dominated by Poisson noise (≲ 2 × 10^−3 Hz). In each case, we record its maximum value. Then, the number of times that χ from the simulated data exceeds χ from our real data provides us with an estimate of the probability that such a lag could be observed by chance. In the case of NGC 3227, our observed soft lags are significant at a level > 95 per cent.
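A minimal sketch of the Timmer & König (1995) algorithm used to generate such simulated light curves; 'psd_func' is any model PSD (e.g. the best-fitting bending power law), with the normalization handled loosely here.

import numpy as np

def simulate_tk(psd_func, n, dt, rng=None):
    """Generate one stochastic time series of length n with the
    target power spectrum (Timmer & Koenig 1995)."""
    rng = rng or np.random.default_rng()
    freq = np.fft.rfftfreq(n, d=dt)[1:]
    amp = np.sqrt(0.5 * psd_func(freq))
    re = rng.normal(size=freq.size) * amp
    im = rng.normal(size=freq.size) * amp
    if n % 2 == 0:
        im[-1] = 0.0                 # Nyquist component must be real
    spec = np.concatenate(([0.0], re + 1j * im))
    return np.fft.irfft(spec, n=n)

# lc = simulate_tk(lambda f: 1e-7 * f ** -2, 500, 100.0)
# scale to the observed mean rate and add Poisson noise before use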
Finally, we also explore the time lags in the harder X-ray band at energies > 10 keV by utilizing the simultaneous NuSTAR data. Due to the low-Earth orbit of the NuSTAR satellite, large gaps are introduced into the light curves. As such, standard Fourier techniques are unsuitable. Instead, we use a 'maximum-likelihood' method, as described in Miller et al. (2010a,b). This method rigorously accounts for 'gappy' time series and allows for accurate estimates of statistical uncertainties. Here, we create light curves in two broad energy bands and use the maximum-likelihood method to fit a joint model to the PSD in each of the bands and to the cross-spectral density. Then, from the phases of the cross-spectral density, we can obtain time delays as a function of temporal frequency. We note that the method was independently tested and verified by Zoghbi et al. (2013). We use all seven NuSTAR observations of NGC 3227 and create time series with ∆t = 100 s. Here, we focus on a softer 3-5 keV band and a much harder 15-50 keV band, using a Fourier frequency width of ∆ log10 ν = 0.3. (Note that the FPM instrumental response and the shape of the source spectrum should be considered when interpreting the results. For example, the 15-50 keV band is very broad, but the results will be dominated by the lower end of that bandpass due to the larger statistical weight at lower energies.) The frequency-dependent time delays between these two bands are shown in Fig. 11. Again, a positive time delay indicates that the harder band is lagging behind the softer band. Similar to the results with the XMM-Newton EPIC-pn, a hard lag is again observed at low frequencies, with the 15-50 keV band lagging behind the 3-5 keV band, with its magnitude rising towards lower frequencies with time delays of up to ∼1 ks. Meanwhile, at higher frequencies (ν ∼ 5 × 10^−4 − 2 × 10^−3 Hz), the lag becomes negative, with a time delay of ∼150 s. These results are largely consistent with those obtained with the EPIC-pn. A similar result was obtained in an analysis of NuSTAR observations of NGC 4051 (Turner et al. 2017).

Energy dependence of the lags

In addition to measuring the frequency-dependence of the lags, we also investigate their energy dependence, using the XMM-Newton EPIC-pn data. Here, for a given frequency range, we compute the cross-spectral lag for a series of consecutive energy bands against a constant, broad reference band (see Uttley et al. 2011; Zoghbi et al. 2011; Alston et al. 2014; Lobban et al. 2014 for more details). We generated cross-spectral products for nine equally-logarithmically-spaced energy bands from 0.3-10 keV against a broad reference band, which we define to be the full 0.3-10 keV band minus the band of interest. (Note that the choice of reference band should only set the arbitrary lag offset in the lag-energy spectrum. To test this, we also computed lag-energy spectra using a) a constant soft reference band [0.3-1 keV] and b) a constant harder reference band [2-5 keV]. There were no observed changes in the shape of the resultant spectrum, but just an offset in the magnitude; i.e. a shift on the y-axis.) So now, a positive lag denotes that the band of interest lags, on average, behind the reference band. The errors on the lag estimates were computed using the standard procedure of Bendat & Piersol (2010), although we note that these are expected to be conservative estimates. This is because the light curves in each band are highly correlated and so, between adjacent energy bins, the scatter in the lags may be overestimated. We note that a more detailed approach (for example, if one wanted to model the lag spectrum) is described by Ingram (2019) to avoid over-fitting the energy-dependence of the cross-spectrum. However, in this case, it does not have any impact on the qualitative shape of our observed lag spectrum.

The lag-energy spectra are shown in Fig. 12. The upper panel plots these in three broad frequency bands < 5.7 × 10^−4 Hz, where the hard lag emerges. The three frequency bands are: 1 − 7 × 10^−5, 0.7 − 2.1 × 10^−4, and 2.1 − 5.7 × 10^−4 Hz. These correspond to splitting the nine lowest frequency bins in Fig. 10 into three groups of three.
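A minimal sketch of the lag-energy construction just described, re-using the cross_spec helper sketched in the previous section; the band light curves are hypothetical, and the reference band excludes the band of interest to avoid correlated noise.

import numpy as np

def lag_energy(band_lcs, full_lc, dt, seg_len, fmin, fmax):
    """Average time lag of each energy band against the full band
    minus the band of interest, over [fmin, fmax)."""
    lags = []
    for lc in band_lcs:
        ref = full_lc - lc              # reference excludes this band
        freq, _, _, tau = cross_spec(ref, lc, dt, seg_len)
        sel = (freq >= fmin) & (freq < fmax)
        lags.append(tau[sel].mean())    # positive => band lags reference
    return np.array(lags)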
It is clear that strong energy-dependence of the lags is observed in the two lowest frequency bands, with the magnitude of the hard lag increasing towards higher energies, peaking at τ ∼ 500 s with respect to the broad reference band. In the highest of these three frequency bins, the energy-dependence drops off as the lag approaches zero. In the middle panel, we focus in on a narrower frequency band, where the hard lag is most clearly observed. This is the 1.1 − 1.5 × 10^−4 Hz frequency range. Here, the energy-dependence is very clearly defined, with a maximum delay in the hardest band (i.e. the 7-10 keV band lags behind the 0.3-7 keV band at ν = 1.1 − 1.5 × 10^−4 Hz with a time delay of τ = 864 ± 465 s). We note that the shape of the lag-energy spectrum is suggestive of a log-linear dependence, whereby τ scales roughly linearly with log(E). Therefore, we fitted the lag-energy spectrum with a model taking the form τ = A log(E) + B, where E is the energy band and A and B are constants. Here, we find that A = 303 ± 45 and B = −18 ± 34, with χ²/d.o.f. = 10.0/8. The model fit is overlaid in Fig. 12. Such a log-linear dependence is seen in various AGN (e.g. Ark 564: Kara et al. 2013; IRAS 18325−5926: Lobban et al. 2014; and PG 1211+143: Lobban et al. 2018a) and XRBs (e.g. Cygnus X-1: Nowak et al. 1999; and GX 339-4: Uttley et al. 2011) and is often discussed in terms of the propagating fluctuations model (see Kotov et al. 2001). However, we note that such a model does not account for the existence of high-frequency soft lags.

The lowest panel of Fig. 12 focuses on the 5.7 − 8.1 × 10^−4 Hz frequency range, where the high-frequency soft lag emerges [see Fig. 10, panel (d)]. Here, the energy-dependence is not so clearly defined. Below ∼4 keV, there is a tentative hint of the lag increasing in magnitude towards lower energies, as seen in other sources (e.g. Kara et al. 2013; Lobban et al. 2018a), although its significance is marginal here. We note that the magnitude of the high-frequency lag does increase at energies > 5 keV. While this tentatively appears similar in shape to the Fe K lags reported in various other sources (e.g. Alston et al. 2014; Kara et al. 2014), such a feature does not appear to be significantly detected here. This is perhaps consistent with the lack of any significantly variable component of Fe Kα emission in the X-ray spectrum.

Visualizing the low-frequency time delays in smoothed light curves

In addition to measuring time lags in the Fourier domain, it may also be useful to look for evidence of them, and of other aspects of low-frequency energy-dependent behaviour, in the time domain (e.g. see Lobban et al. 2018a, where we discuss a similar approach in the context of PG 1211+143, finding curious changing-lag behaviour). One possibility is to smooth out the high-frequency variability such that the low-frequency variations are all that remain. In Fig. 13, we show the results of such an approach, whereby we smoothed the 50 s-binned EPIC-pn light curves by convolving them with a Gaussian of width σ = 2 ks; a minimal sketch of this smoothing is given below. We did try other Gaussian widths, but found that σ = 2 ks gave the clearest results, given the rapid variability of the source. We smoothed light curves in four energy bands: 0.3-0.7, 0.7-1.5, 1.5-5, and 5-10 keV. To better compare the curves, we then normalized each light curve to the mean 0.3-10 keV EPIC-pn count rate for each given observation.
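A minimal sketch of the Gaussian smoothing and normalization applied in Fig. 13; the bin size and kernel width are as described above, and 'rate' is a hypothetical binned light curve.

import numpy as np
from scipy.ndimage import gaussian_filter1d

def smooth_lc(rate, dt=50.0, sigma=2000.0, norm=None):
    """Convolve a binned light curve with a Gaussian of width sigma
    (seconds) and normalize by a mean broad-band rate."""
    sm = gaussian_filter1d(np.asarray(rate, dtype=float),
                           sigma=sigma / dt, mode='nearest')
    return sm / (norm if norm is not None else np.mean(rate))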
Finally, we computed 90 per cent confidence bands for each light curve by performing 10 000 simulations based on the observed count rate in each bin and plotting the 5 and 95 per cent boundaries from the resultant count-rate distribution.

Figure 13. The EPIC-pn light curves of NGC 3227, smoothed via convolution with a Gaussian (σ = 2 ks). The curves are normalized to their mean 0.3-10 keV count rates and plotted in four energy bands: 0.3-0.7 (black), 0.7-1.5 (red), 1.5-5 (green) and 5-10 keV (blue). The thickness of the bands denotes the 90 per cent confidence intervals. The vertical dashed lines mark the edges of the convolution kernel (i.e. 3 × σ = 6 ks) and so, beyond these limits, we assume the light curves begin to become unreliable.

In the case of NGC 3227, the variations and subsequent time delays are rapid (i.e. τ ∼ a few hundred seconds) and so the low-frequency lags are not obviously apparent in the light curves. However, other curious behaviour is apparent. In particular, the emission flare during obs 3 (∼60 ks) shows strong energy-dependent behaviour. It is clear that the flare is dominated by enhanced emission in the softest X-ray bands, while the hardest band (5-10 keV) only increases in flux slightly before reaching a plateau. Similar, but more moderate, energy-dependence can be seen in obs 5 during the flare occurring at the beginning of the observation. Meanwhile, in obs 6, it is clear that the soft-band emission (0.3-1.5 keV inclusive) is significantly more variable than the hard band, with its steady increase in flux with respect to its harder counterpart apparent in the hardness ratio analysis presented in Figs 1 and 2. See Turner et al. (2018) for a detailed spectral analysis of this observation.

DISCUSSION

We have presented a series of fundamental variability properties of the highly variable AGN NGC 3227, through a long XMM-Newton + NuSTAR observing campaign. Below, we discuss the results.

Spectral decomposition

In Sections 3.1, 3.2, and 3.2.1, we explored the spectral variations of NGC 3227. The energy-dependent spectral evolution of the source was tracked through hardness ratio measurements, where we find that the source displays typical softer-when-brighter behaviour. This is particularly evident during the large flare midway through obs 3, which is dominated by an increase in soft-band emission. Curiously, we find evidence for the source being typically harder than average during obs 6, while the X-ray spectrum gradually becomes softer over the course of 1 day. This appears to be due to a rapid occultation event of the central continuum source arising from the passage of a mildly ionized cloud of gas (NH ∼ 10^22 cm^−2) across the line of sight. This is discussed in detail in Turner et al. (2018). In Fig. 5, we fit three flux-selected spectra in order to explore the spectral variability. We apply the baseline model from the spectral analysis by Turner et al. (2018) and find that the bulk of the variability appears to be dominated by changes in the strength of the primary power-law continuum with mild changes in photon index (ranging from Γ = 1.4-1.7), providing further evidence of softer-when-brighter source behaviour. Superimposed on this are weak variations of ∼20 per cent in the strength of the neutral reflection component, with the bulk of the modest variability in the NuSTAR bandpass simply accounted for by changes in the power-law continuum.
Meanwhile, we find that the component of Fe Kα emission at ∼6.4 keV shows hints of flux variability on these time-scales, consistent with the weak variability of the neutral reflection component at the 2σ level. This modest variability, relative to the strong continuum variability, is suggestive of an origin in material that is distant from the central X-ray source. Finally, in Fig. 6, we show the broad-band difference spectra of NGC 3227. While the hard X-rays remain largely invariant in spectral shape, with just modest differences in normalization, it is clear that large deviations from an absorbed power-law fit are apparent at low energies (≲ 2 keV), indicative of additional modes of variability. These residuals are accounted for by a multiplicative component of warm absorption and a soft excess, emerging at energies < 1 keV. The difference spectra are dominated by a steep power-law-like component (Γ ∼ 2.1), further indicative of enhanced variability in the soft band. This is also borne out by the rms variability analysis (discussed below). The difference spectra show very little excess emission in the Fe K band or at energies > 10 keV. This supports a picture whereby the neutral reflector and associated Fe Kα emission originate in distant material. Given the time-scale of this observing campaign, this would correspond to a distance of > 1 light-month from the central black hole.

The rms variability

In Section 3.3, we show the rms variability of NGC 3227 and its associated energy dependence. On short time-scales (0.1-2 ks; ν = 5 × 10^−4 to 5 × 10^−3 Hz), the magnitude of the variability is found to be low, with a flat dependence on energy: the low-energy and high-energy X-rays all typically vary by ∼5 per cent rms on time-scales < 2 ks. As such, on these time-scales, the emission in the hard and soft bands varies with roughly the same fractional amplitude. As we move towards longer time-scales (2-20 ks), however, enhanced variability begins to emerge in the soft band, with its magnitude increasing towards lower energies. This suggests that the primary source of spectral variability in this observing campaign from 2016 may be a soft power-law-like component, varying in flux, that is steeper than the Γ ∼ 1.7 power law that dominates the hard band. We recently observed similar behaviour in the sources PG 1211+143 (Lobban et al. 2016) and Ark 120 (Lobban et al. 2018b). In this scenario, the X-ray continuum can largely be described as a blend of two components, where the soft excess varies slowly and independently of the harder X-ray coronal power law, similar to the suggestion by Arévalo & Markowitz (2014) from a previous 100 ks XMM-Newton observation of NGC 3227 (also see Ton S180: Edelson et al. 2002; Ark 564: Turner et al. 2001; and Mkn 509: Mehdipour et al. 2011). If Comptonization is the mechanism responsible for producing the variable soft-excess component, one possibility is that this arises from intrinsic coronal variations, either in terms of the electron temperature or the optical depth. At the lowest frequencies (20 to ∼70 ks), the rms variability is stronger and its spectral shape becomes steeper. Curiously, at low energies (∼0.3-1.0 keV), the rms spectrum is observed to flatten out, suggestive of additional complexity in the long-time-scale variability. A flat shape could arise from a soft component which varies in flux but whose spectral shape remains largely invariant.
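As an aside, the fractional rms amplitudes quoted in this section follow from the standard excess-variance estimator (e.g. Vaughan et al. 2003); a minimal sketch, with illustrative names, is given below.

```python
import numpy as np

def fractional_rms(rate, err):
    """Fractional excess-variance amplitude F_var: the observed variance
    minus the mean-square measurement error, normalized by the mean
    count rate (cf. the standard estimator of Vaughan et al. 2003)."""
    excess = rate.var(ddof=1) - np.mean(err ** 2)
    return np.sqrt(excess) / rate.mean() if excess > 0 else 0.0

# An rms spectrum is then F_var evaluated per energy band, e.g.:
# fvar = [fractional_rms(r, e) for r, e in zip(band_rates, band_errors)]
```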
Such a component could not extend to higher energies here, though, or the rms would be equally as high, unless it were diluted by a constant hard component which dominates at higher energies. However, as the hard reflection component is weak in this source (just ∼10 per cent of the overall flux from 0.3-50 keV), this scenario can be ruled out. Instead, it may be the case that, at low frequencies, the variability from 1-10 keV is dominated by a steep power-law-like component (i.e. whose intrinsic variability increases with decreasing energy), while the rms variability < 1 keV may be dominated by a component of soft excess varying in flux while maintaining a steady spectral shape. We note that, superimposed on these broad-band spectral changes, the source exhibits additional variability due to line-of-sight variations in the warm absorber. However, these absorption changes do not appear to contribute significantly to the averaged rms spectrum, which may be the case if the warm absorber variations occur on longer time-scales than probed here. We also compute the overall time-averaged rms spectrum using all of the data with 100-s time bins and finer energy resolution. Again, the rms spectrum is steep, while it becomes flatter at energies < 1 keV. On the whole, the rms spectrum is smooth with limited discrete structure. However, we do observe a small drop in the fractional rms at ∼6-7 keV, which coincides with the emission line from near-neutral Fe Kα. This apparent drop in rms could potentially arise from dilution by a quasi-constant component of Fe K emission, i.e. if the bulk of the emission originates far from the black hole. Indeed, in the time-averaged spectrum, the Fe Kα emission line contributes roughly 10 per cent of the total observed flux from 6-7 keV. Given the variability and flux level of NGC 3227, if such a component were constant on these time-scales, it would likely produce a drop of ∼10-15 per cent in the fractional rms spectrum, roughly consistent with what we observe. Finally, in the lower panel of Fig. 7, we show the rms spectra from the six individual XMM-Newton observations of NGC 3227 using a time resolution of 1 ks. All six rms spectra are steep, although it is clear that some observations are more variable than others. In particular, the emission flare during obs 3 results in a steeper rms spectrum. Meanwhile, divergent behaviour can be observed during obs 6, which coincides with the absorption event described in Turner et al. (2018), in which a cloud of mildly ionized gas is observed to occult our line of sight to the central source. In Section 3.4, we also explore the rms variability of NGC 3227 in terms of its flux-dependence. It is clear from Fig. 8 that the source displays enhanced variability during periods of brighter flux. The observed rms-flux relation is roughly linear, which is consistent with the behaviour observed in other rapidly variable accreting sources across a wide range of luminosities and masses, such as AGN, XRBs, ultraluminous X-ray sources and cataclysmic variables (e.g. Uttley & McHardy 2001; Gleissner et al. 2004; Heil & Vaughan 2010; Scaringi et al. 2012). A linear rms-flux relation is often discussed in terms of propagating fluctuations, where the X-ray emission and mass-accretion-rate variations are multiplicatively coupled (Uttley et al. 2005), although this is not necessarily a unique explanation of this relation.
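For reference, an rms-flux relation of this kind is typically constructed as sketched below; this mirrors the standard segment-based approach (e.g. Uttley & McHardy 2001) rather than our exact pipeline, and the segment length is an illustrative parameter.

```python
import numpy as np

def rms_flux_relation(rate, dt, seg_len):
    """Mean flux and excess rms per light-curve segment, for building
    the rms-flux relation; a linear fit then tests the linearity
    discussed in the text."""
    flux, rms = [], []
    for i in range(len(rate) // seg_len):
        seg = rate[i * seg_len:(i + 1) * seg_len]
        # Poisson noise variance for a count rate is ~ mean_rate / dt
        excess = seg.var(ddof=1) - seg.mean() / dt
        if excess > 0:
            flux.append(seg.mean())
            rms.append(np.sqrt(excess))
    flux, rms = np.array(flux), np.array(rms)
    slope, intercept = np.polyfit(flux, rms, 1)  # linear rms-flux model
    return flux, rms, slope, intercept
```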
We note that a similarly linear relationship is observed in the soft (0.3-1 keV) and hard (1-10 keV) bands (albeit covering a smaller range in flux) and so a linear rms-flux relationship appears to hold across the EPIC-pn X-ray bandpass.

The power spectrum

In Section 3.5, we computed and modelled the broad-band 0.3-10 keV PSD of NGC 3227. We find a marginal detection of a bend in the PSD (ΔS = 10 for two additional free parameters) with a best-fitting bend frequency of ν_b = 3.0^{+2.4}_{−1.9} × 10^−5 Hz. The low-frequency slope, α_low, is consistent with a value of 1, while the high-frequency slope above the bend is much steeper: α_high = 2.43^{+0.13}_{−0.12}. We find consistent results fitting the PSD in the soft (0.3-1 keV) and hard (1-10 keV) bands and, hence, no energy-dependence of the PSD. To date, a number of discrepant values have been reported in the literature for the bend frequency in NGC 3227. González-Martín & Vaughan (2012) modelled the PSD from a single ∼100 ks XMM-Newton observation, finding a best-fitting bend frequency of ν_b = 2.3 ± 0.7 × 10^−4 Hz from 0.2-10 keV, which is a factor of ∼8 higher than our result. However, Uttley & McHardy (2005) and Kelly et al. (2011) independently analysed the PSD using XMM-Newton and RXTE data from 2-10 keV, reporting bend frequencies of ∼2.6 × 10^−5 and ∼3.7 × 10^−5 Hz, respectively. These results are consistent with those we report here. A relatively linear relationship between the bend time-scale of the PSD and the mass of the black hole might be expected based on simple scaling arguments. Indeed, such a correlation has been observed for a number of AGN (e.g. Uttley, McHardy & Papadakis 2002; Markowitz et al. 2003). In the case of NGC 3227, our observed bend frequency corresponds to a time-scale of T_b = 1/ν_b = 33^{+21}_{−26} ks (or 0.39^{+0.24}_{−0.31} days). A simple scaling relation between the time-scale of the bend and the mass of the black hole is provided by González-Martín & Vaughan (2012): log(T_b) = A log(M_BH) + C. Here, T_b is the time-scale of the break measured in days and M_BH is the mass of the black hole in units of 10^6 M⊙, while A and C are coefficients. In the case of NGC 3227, we assume M_BH = 6.0^{+1.4}_{−1.2} × 10^6 M⊙. McHardy et al. (2006) provide an extension to the mass-time-scale relation by including a dependence on the bolometric luminosity, L_bol (also see Körding et al. 2007). In this instance, the relation is modified to include an additional term, B log(L_bol): log(T_b) = A log(M_BH) + B log(L_bol) + C. Here, L_bol is in units of 10^44 erg s^−1, while the best-fitting values of the coefficients are derived by González-Martín & Vaughan (2012) to be A = 1.34 ± 0.36, B = −0.24 ± 0.28 and C = −1.88 ± 0.36. For NGC 3227, this predicts a value of T_b = 0.16 days (although with large uncertainties), where we assume L_bol = 7 × 10^43 erg s^−1 (Woo & Urry 2002). This is also consistent with what we observe within our measurement uncertainties.

X-ray time lags

In Section 3.6, we investigated the short-time-scale variability of NGC 3227 through X-ray Fourier time lags. Through analysis of XMM-Newton EPIC-pn data, we find a low-frequency hard lag, whereby the 1-5 keV band emission lags behind the softer 0.3-1 keV emission with time delays of up to a few hundred seconds. The magnitude of the lag appears to increase towards lower frequencies.
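Returning briefly to the PSD scaling relation above, the predicted bend time-scale can be reproduced directly from the quoted best-fitting coefficients; the short check below uses only the values given in the text, with the same units.

```python
import numpy as np

# Best-fitting coefficients from González-Martín & Vaughan (2012)
A, B, C = 1.34, -0.24, -1.88
M_BH = 6.0    # black hole mass in units of 1e6 M_sun (value assumed above)
L_bol = 0.7   # bolometric luminosity in units of 1e44 erg/s (Woo & Urry 2002)

log_Tb = A * np.log10(M_BH) + B * np.log10(L_bol) + C
print(f"Predicted T_b = {10 ** log_Tb:.2f} days")  # -> 0.16 days, as quoted
```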
Additionally, we observe a roughly log-linear energy-dependence of the hard lag that is also borne out in our analysis of simultaneous NuSTAR data, where we show that the lag extends into the higher-energy bandpass, with the 15-50 keV emission lagging behind the 3-5 keV emission with time delays of up to ∼1 ks (see Section 3.6.1). Low-frequency hard lags may be ubiquitous in accreting black-hole systems, with the amplitude and frequency of the lags scaling with M_BH, ranging from XRBs to AGN. As such, they are likely an important phenomenon, carrying crucial information regarding the structure of black-hole accretion flows. Following our approach in Lobban et al. (2018a), we do make an attempt to visualize the low-frequency lags in the time domain through creating smoothed light curves (see Fig. 13), although the small time delays (i.e. τ ∼ a few hundred s) are difficult to pick out. Nevertheless, we observe enhanced emission in the soft band during a strong flare in obs 3 and we also demonstrate the gradual softening of the spectrum during obs 6 as the source uncovers over the course of 1 day following a rapid occultation event (Turner et al. 2018). We also detect a significant high-frequency soft lag in NGC 3227. This manifests itself in the EPIC-pn data in the ∼6-8 × 10^−4 Hz frequency range, with the 0.3-1 keV emission delayed with respect to the 1-5 keV emission with a time delay of τ = −70 ± 30 s. A scaling relation was reported by De Marco et al. (2013) linking the amplitude/frequency of the soft lag to the mass of the black hole. This was based on a sample of 15 AGN in which high-frequency soft lags have been detected. They report relations between the observed frequency, ν, and time lag, τ, of the soft lag and the black hole mass. Given the mass of NGC 3227 (M_BH = 6.0^{+1.4}_{−1.2} × 10^6 M⊙), these relations predict the soft-lag frequency to lie in the range 3.1-4.5 × 10^−4 Hz, while the time delay falls in the range 36-58 s. Therefore, while our observed soft lag does occur at a slightly higher frequency than predicted by existing scaling relations, the measured time delay is consistent with the expected value. A number of models exist that attempt to explain the origin of the lags, ranging from propagating fluctuations (Lyubarskii 1997) to small-scale reverberation (e.g. Zoghbi et al. 2011; Fabian et al. 2013) to secondary Comptonization components (e.g. as per Done et al. 2012; Gardner & Done 2014), which may be associated with the outer layers of the accretion disc. For any model, it is important to explain the full spectrum of lags observed across a broad range of frequencies. A popular model to explain the origins of low-frequency hard lags in XRBs is the 'propagating fluctuations' model (Lyubarskii 1997). Due to the behavioural similarities exhibited by the hard lags in XRBs (e.g. Cygnus X-1: Kotov et al. 2001) and numerous variable AGN (e.g. McHardy et al. 2004; Fabian et al. 2013; Lobban et al. 2014), it has been argued that a comparable mechanism is responsible in all accreting black hole systems. However, this model does not account for the existence of the high-frequency soft lags and so explaining the time delays over a broad frequency range then requires a two-component model additionally involving small-scale reverberation of the primary X-rays by matter close to the SMBH, perhaps due to reflection (e.g. Zoghbi et al. 2011; Fabian et al. 2013). However, we note that there does not appear to be any requirement for any strong component of ionized reflection (blurred or otherwise) in NGC 3227 (e.g. Markowitz et al.
2009; Beuchert et al. 2015; Turner et al. 2018), although there is likely a requirement for a neutral reflector producing a modest Compton reflection hump > 10 keV. We also do not find any strong evidence of any high-frequency Fe K lags, often strongly associated with the small-scale reverberation model. In addition, by using 'maximum-likelihood' methods to measure the time lags in the simultaneous NuSTAR data, we also find that the soft lag exists (in the same frequency band as in the EPIC-pn data) between the 3-5 and 15-50 keV bands. Given that the softer reference band (3-5 keV) contains a much lower fraction of reflected emission compared to the hard 15-50 keV band (where the Compton reflection hump becomes apparent), it is difficult to envisage a mechanism whereby the softer 3-5 keV NuSTAR band could lag behind the harder band via small-scale reverberation, similar to the case of NGC 4051 (see Turner et al. 2017). In Miller et al. (2010a,b), a reverberation model is presented in which the lags emerge via scattering of the primary X-rays by circumnuclear material. This photoelectrically-absorbing material is typically placed at tens to hundreds of r_g from the central black hole and may be associated with a disc wind/outflow (also see Mizumoto et al. 2018, 2019). Here, the low-frequency hard lags are produced by scattering from this absorbing material, while the higher-frequency soft lags may arise from oscillatory, ringing features, which are a consequence of taking the Fourier transform of a sharp feature in the time domain (see Turner et al. 2017 for a detailed discussion). In this scenario, the fraction of scattered-to-direct light increases with energy, predicting a strong delayed signal in the hard X-ray band. Turner et al. (2017) analysed a series of NuSTAR observations of the highly variable Seyfert galaxy NGC 4051, discovering that a soft lag exists between the softer 2-4, 5-7.5 and 8-15 keV bands and the hard 15-70 keV band. Here, the transition from the low-frequency hard lags to the higher-frequency soft lag is accounted for by reverberation from circumnuclear material, with the high scattered fraction of light indicating a global covering fraction of the reprocessor of ∼50 per cent with respect to the continuum source. As shown in Fig. 12, we also observe the time delays increasing towards higher energies in the case of NGC 3227, ultimately peaking in the NuSTAR bandpass. Evidence for such circumnuclear material in NGC 3227 is well established through previous X-ray studies (e.g. Lamer et al. 2003; Markowitz et al. 2009, 2014; Rivers et al. 2011), with the source occasionally observed to undergo line-of-sight variable absorption events (e.g. Lamer et al. 2003; Beuchert et al. 2015). Such an occultation event was observed in this very campaign, as shown in Turner et al. (2018) and the spectral decomposition presented in Section 3.2. As such, it is also conceivable that we are seeing evidence of these 'light echoes' in this source from absorbing material tens to hundreds of r_g from the primary continuum source.

Summary

In this paper, we have presented a series of X-ray variability results from a long XMM-Newton and NuSTAR observing campaign on the bright AGN NGC 3227. We find the source to exhibit strong variability in both the X-ray and UV bands. The typical trend is for the source to be steeper when brighter, consistent with other similar-class AGN.
However, NGC 3227 does also undergo a period of significant spectral hardening due to an occultation event by a cloud of mildly ionized gas passing into the line of sight. This largely manifests itself as absorption signatures, primarily from the Fe UTA, in the soft X-ray band. We spectrally decompose the source and show that the primary components that comprise the broad-band X-ray spectrum are a power law (Γ ∼ 1.7), a modest neutral reflection component with an associated narrow Fe Kα emission line, an additional component of soft excess, and additional zones of absorption arising from mildly outflowing (v_out ∼ 100-1,000 km s^−1) ionized gas. The warm absorber is observed to exhibit moderate variability, primarily in terms of changes in the line-of-sight column density. Meanwhile, the bulk of the observed variability appears to be driven by the continuum, whose magnitude scales with the flux of the source in a roughly linear way. The broad-band X-ray variability shows strong energy dependence. On time-scales of 0.1-2 ks, the variability is weak and displays roughly the same fractional amplitude across the bandpass. On longer time-scales of 2-20 ks, enhanced variability is observed in the soft band, consistent with the spectral variations being dominated by a steep spectral component. This trend continues to longer time-scales of 20 to ∼70 ks, although with a curious flattening of the fractional variability amplitude at the lowest energies. One possible cause of this is that the soft band (≲ 1 keV) variability becomes dominated by a component of soft excess, e.g. a Comptonized disc blackbody, which varies in flux while remaining roughly invariant in spectral shape. We note that the variability of the reflection component, however, is weak (∼20 per cent) on all time-scales, suggesting an origin in material that is distant from the central black hole. Finally, we employ Fourier methods and find a marginal detection of a bend in the PSD. Given the black hole mass of NGC 3227, the slopes of the low- and high-frequency parts of the PSD and the bend time-scale are consistent with existing scaling relations in the literature. The variability of the soft and hard bands is well correlated in this source over a large range of frequencies, with a high coherence. As such, we compute X-ray time lags, finding a hard lag at low frequencies and a soft lag with a time delay of τ = −70 ± 30 s at higher frequencies (ν ∼ 6-8 × 10^−4 Hz), roughly consistent with the predictions from existing scaling relations. Through maximum-likelihood methods, we extend our analysis to the NuSTAR bandpass, finding that the hard 15-50 keV band lags behind the softer 3-5 keV band. Given that the reflection component in NGC 3227 is weak (with no requirement for any component of relativistically-blurred ionized reflection), and that the softer reference band is much more continuum-dominated, it is difficult to reconcile this as originating via small-scale reverberation. Instead, given the well-established evidence for the existence of large quantities of circumnuclear photoelectrically-absorbing material a few tens to hundreds of r_g from the black hole in NGC 3227, we instead consider that these lags may arise from this material via energy-dependent scattering. Such material may typically be associated with a disc wind or outflow.
Equal abundance of summertime natural and wintertime anthropogenic Arctic organic aerosols

Aerosols play an important yet uncertain role in modulating the radiation balance of the sensitive Arctic atmosphere. Organic aerosol is one of the most abundant, yet least understood, fractions of the Arctic aerosol mass. Here we use data from eight observatories that represent the entire Arctic to reveal the annual cycles in anthropogenic and biogenic sources of organic aerosol. We show that during winter, the organic aerosol in the Arctic is dominated by anthropogenic emissions, mainly from Eurasia, which consist of both direct combustion emissions and long-range transported, aged pollution. In summer, the decreasing anthropogenic pollution is replaced by natural emissions. These include marine secondary, biogenic secondary and primary biological emissions, which have the potential to be important to Arctic climate by modifying the cloud condensation nuclei properties and acting as ice-nucleating particles. Their source strength or atmospheric processing is sensitive to nutrient availability, solar radiation, temperature and snow cover. Our results provide a comprehensive understanding of the current pan-Arctic organic aerosol, which can be used to support modelling efforts that aim to quantify the climate impacts of emissions in this sensitive region.

The sampling duration was approximately one week but varied from six to nine days. The total sampled amount of air was typically between 330 and 500 m^3 per sample. After sampling, the filters were stored in a fridge at the station and shipped to PSI after one to three months in a Styrofoam box with a cooler. TIK: Aerosol measurements at the International Hydro-Meteorological Observatory (HMO) Tiksi (71°6'N, 128°9'E) were taken at the Clean Air Facility (CAF), located 500 m from the Laptev Sea coast and 5 km from the Tiksi settlement. The 20 m Tiksi meteorological flux tower is located around 300 m from the CAF. A total suspended particle (TSP) inlet was installed approximately 1.5 m above the CAF roof and 5 m above the ground. Aerosols were sampled at an air flow of ~45 L min^−1 and during the protocol times. TSP was collected on 47 mm quartz fiber (Pallflex) and Teflon (Zefluor) filters for subsequent analyses in the laboratory. The low concentrations of ambient aerosols necessitated sampling times ranging from one day in November up to three days in September, to allow the loading to exceed the detection limit for relevant aerosol chemistry analyses. Sampling was performed in September and November of 2014, in March, May-June, and September 2015, and in June and September 2016. Upon removal from the sampling system, the samples were wrapped in aluminum foil and a tightly closed plastic bag, and immediately put into a deep freeze. For transportation, an additional plastic box with a tight lid was used. The duration of transportation was shorter than the duration of storage. More details are provided in Popovicheva et al. (2019). UTQ: PM samples were collected on the North Slope of Alaska from June 2016 through September 2017, at the Climate Research Facility 7.4 km northeast of the village of Utqiaġvik (UTQ), Alaska (71°2'N, 156°4'W), 515 km north of the Arctic Circle. The site is approximately 1.6 km from the nearest coast. TSP samples were collected on QFF (Tissuquartz Filters 2500 QAT-UP; 20 x 25 cm) using Hi-Q high-volume samplers ~10 m above ground level. The sampling duration was on average one week at a flow rate of 1.2 m^3 min^−1.
Filters were removed from the sampler immediately after the sample period had ended and were stored in a freezer on-site when not directly in use. QFFs were baked prior to sampling at 500 °C for 12 h and stored in aluminum foil packets and storage bags in a freezer before and after sampling. Field blanks were taken periodically throughout the sampling campaigns by placing an unsampled filter in a filter holder, placing it in the sampler momentarily, and then removing it and placing the filter in storage. Field blanks were treated in the same manner as sampled filters. Filters were shipped to and from the site in plastic bins in coolers cooled with blue ice packs. VRS: Villum Research Station (VRS) is located in North Greenland (81°36'N, 16°40'W, 24 m asl). The atmospheric measurement site is located 2 km southeast of the Danish military facility on a small peninsula (Princess Ingeborg Peninsula). The region is characterized by a dry and cold climate with 188 mm of precipitation annually and an annual mean temperature of −16.9 °C. The dominant wind direction is from the southwest, and the observatory is most of the time upwind of the military outpost Station Nord. The annual average wind speed is 4 m s^−1. VRS is surrounded by sea ice, with bare ground occasionally present in the summer and appearing more and more frequently in recent years. Polar sunrise is observed at the end of February, while polar night prevails from mid-October. A high-volume sampler (Digitel DHA-80) was operated in the air observatory at a flow rate of 500 L min^−1 (STP) and regularly tested against a transfer standard and adjusted. The inlet head was heated to avoid condensation. The HVS itself is placed indoors in a temperature-controlled room. The sample passes through a PM10 head located just on top of the HVS, and sample air was collected on quartz fiber filters over one week, corresponding to ~5,000 m^3 of air. After sampling, the exposed filters are placed between two pieces of aluminum foil, placed in a Rilsan bag, and stored in the dark in a freezer at −20 °C. During transport by plane, the samples are still kept in the dark but the temperature is ambient (the transport normally takes 2 days). After being received at Aarhus University, the samples are stored again in a freezer at −20 °C. ZEP: Sixty-seven aerosol filter samples were collected at the Zeppelin Observatory (78°5'N, 11°5'E, 475 m asl) at Svalbard, Norway, between January 2017 and December 2018. The filter samples were collected using a Digitel high-volume sampler (PM10) with a flow rate of 40 m^3 h^−1 and a filter face velocity of 72.2 cm s^−1. The sampling inlet was situated 2.5 m above the roof level of the observatory and 7 m above the ground level. Aerosol particles were collected on pre-fired (850 °C; 3 h) QFF (PALLFLEX Tissuequartz 2500QAT-UP; 150 mm in diameter) for 1 week, according to the quartz fiber filter behind quartz fiber filter (QBQ) set-up, thus providing dynamic field blanks. Back and front filters were mounted in pre-cleaned filter holders, wrapped in preheated aluminum foil, and locked in two Zip-lock polyethylene bags, with all handling taking place in NILU's clean room. Shipments from NILU to the Zeppelin Observatory and vice versa were made in aluminum boxes, typically with ten filters in each parcel. During transport from NILU to the Zeppelin Observatory and back again, the parcel was kept in ambient air. At the Zeppelin Observatory, the filters were stored in a freezer (−18 °C) prior to and after being exposed.
At NILU, exposed filter samples were stored in a freezer (−18 °C). Thermal-optical analysis (TOA) was performed using the Sunset Lab OC/EC Aerosol Analyzer, using transmission for charring correction and operated according to the EUSAAR-2 temperature program 4 . Aliquots were cut from each of the exposed filters in our clean room, wrapped in preheated aluminum foil, locked in two Zip-lock polyethylene bags, and shipped to PSI for analysis. Upon combination of certain front filters, 60 ZEP samples were measured with offline AMS at PSI.

Text S2. Additional information on the AMS measurements

We have tested Teflon filters vs. QFF, as Teflon filters were also available (alternating with QFF) from the Russian stations. However, the extraction efficiency of test Teflon filters in water was found to be significantly lower than that of the QFF, with both filters collected concurrently at PSI (to represent the same ambient sample, in order to assess only the potential effect of the substrate on extraction efficiency). In general, the filter substrate should be quartz fiber if the goals of a campaign include OA monitoring, as the only offline method available to quantify the OC loadings involves thermal decomposition of the OA; a polymeric filter material could therefore partially decompose, leading to OC artifacts. The inorganic-salt artifact on the AMS CO2 and CO fragment ion signals 5 was also accounted for. We measured ammonium nitrate (AN) and ammonium sulfate (AS) standards (pure, without any filter extracts present) at three different concentrations (4, 12, 36 ppm) at the beginning and at the end of the campaign to correct the data matrices for the inorganic AN/AS effects on both the CO and CO2 signals, as a function of loading and time. The resulting "b" parameters (slopes) used to modify the fragmentation table were b-AN: 0.012 (CO2), −0.0032 (CO); b-AS: 0.0043 (CO2), −0.0026 (CO), with no significant variability between the start and end of the measurement campaign (duration ~5 consecutive days). These "b" parameters were significantly lower than the average/median values of Pieber et al. (2016) 5 , causing minor corrections.

Text S3. Auxiliary measurements

Additional offline analyses were carried out (using different punches/extracts than for the AMS measurements) to corroborate and validate the source apportionment results (Text S4), e.g. Fig. S12-S13 and Table S6. Elemental and organic carbon (Sunset-EC/OC) were quantified by thermal-optical analysis, following the EUSAAR-2 protocol 4 (UTQ: NIOSH 5040; ALT: see next paragraph); water-soluble OC (WSOC) was measured by water extraction followed by catalytic oxidation and non-dispersive infrared detection of CO2 using a total organic carbon (TOC) analyzer 6 . We measured major ions (including methanesulfonic acid, MSA) in selected samples by ion chromatography (IC) 7 . Organic markers were determined for selected samples: sugar-alcohols and sugars were measured by high-performance liquid chromatography (HPLC) with a fluorescence detector (LC 240 Perkin Elmer) and HPLC-pulsed amperometric detection 8 ; organic acids were determined by LC-MS. IC-based MSA from the same filters (selected samples) was also measured by other laboratories. The correlation between IC-based MSA and TOC+AMS/PMF MSA-OA at different stations is provided in Table S7.
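As an illustration of how the fragmentation-table correction in Text S2 above can be applied, the sketch below subtracts the ammonium nitrate (AN) and ammonium sulfate (AS) interferences from the measured CO2+ and CO+ signals using the quoted "b" slopes. The exact form of the correction (which signals the slopes multiply) follows Pieber et al. (2016) and is simplified here, so the function should be treated as schematic.

```python
# Slopes ("b" parameters) measured from pure AN and AS standards,
# as quoted in Text S2.
B_AN = {"CO2": 0.012, "CO": -0.0032}
B_AS = {"CO2": 0.0043, "CO": -0.0026}

def correct_fragment(signal_meas, fragment, an_signal, as_signal):
    """Remove the inorganic-salt interference from a measured CO2+ or
    CO+ fragment signal (schematic; see Pieber et al. 2016 for the
    exact formulation)."""
    return (signal_meas
            - B_AN[fragment] * an_signal
            - B_AS[fragment] * as_signal)
```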
The EC and OC concentrations of the filters collected at ALT were analyzed by a thermal evolution protocol, developed at ECCC as EnCan-Total-900 (ECT9), to quantify the amount of OC and EC in carbonaceous aerosol and their δ13C values 9,10 . The fractions were separated from each other according to their degree of refractoriness. Specifically, carbon fractions were released by the ECT9 protocol in three steps: (1) OC at 550 °C for 600 seconds in pure He; (2) pyrolyzed OC (PyOC) and carbonate carbon (CC) at 870 °C for 600 seconds in pure He; and (3) EC at 900 °C for 420 seconds in a mixture of 2 % O2 with 98 % He. All fractions were fully oxidized to CO2 by passing through a furnace containing MnO2 maintained at 870 °C. For concentration determination, the CO2 was passed through a methanator at 500 °C, converted to CH4, and quantified with a flame ionization detector. Based on isotope measurements (14C and 13C), it was verified that the ECT9 protocol 11 effectively isolates OC or EC from complex mixtures of reference materials with an uncertainty of about 5 %. Several environmental parameters were retrieved as well. Temperature data for TIK were obtained from Popovicheva et al. (2019) 12 . Temperature, solar radiation and snow depth data for VRS were obtained from the station website: https://villumresearchstation.dk/data/. Normal-climate average (1980-2010) snowfall data for UTQ were obtained from: http://akclimate.org/Climate/Normals; in June, July, and August the long-term absence of snow events at this site is evident, and PBOA concurrently increases significantly with decreasing snowfall, before becoming negligible starting from September. Term meteorological and upper-air observation data (2013-2020) at the research station "Ice Base Cape Baranova" were obtained from the electronic archive of the Arctic and Antarctic Research Institute (AARI): http://www.aari.ru/main.php?lg=1. Other data were retrieved from EBAS (http://ebas.nilu.no) or measured within the current project. Data were averaged to match the time resolution of the filter sample composites measured by AMS.

Text S4. Additional information on the PMF analysis

Number of factors: Compared to n = 11, other factor solutions were less stable among the different random seed runs for certain factors (Table S3): MSA-OA first appeared in the 8-factor solution, but only in certain seed runs, while it was not identified in the 7-factor solution. Starting from n = 9, the time series of MSA-OA, BSOA, and PBOA became stable among the five random seed runs and were similar to those of the 11-factor solution in both absolute (slope close to 1.0) and relative terms (high R^2). This was also the case for POA and haze, but starting from n = 10. OOA and CHN-rich also became stable starting from n = 10, but with different absolute contributions than for n = 11. The field-blank-related factor (see the "Retention of PMF factors related to ambient organic aerosols" subsection below) remained mixed with other factors for n ≤ 10 and therefore was the last one separated, in the 11-factor solution. For n > 11, the solutions resembled the 11-factor solution, except for specific factor splits; e.g. for n = 13, the CHN-rich factor was split into three factors, or the CHN-rich and BSOA factors were split into two factors each. Even though these splits might indicate some inherent variability in these components, the robust separation of associated distinct features was not possible by PMF and/or for this specific dataset and/or sample size.
Therefore, with regard to a partial exploration of the rotational ambiguity, the analysis described above showed that the 11-factor solution was the most robust.

Error/sensitivity analysis: The uncertainty analysis approach followed in the present study is described as follows. A preliminary uncertainty analysis was carried out by performing 21 free BS runs, of which 16 were similar to the 11-factor "base case" solution. The rejected runs were related to (temporal) co-variability between certain factors (e.g. of POA and haze) and to one sample from TIK (24.09.2014) representing a pollution episode (highest organic mass among all stations) likely not explained by any factor. Unlike for all other samples, the Q/Qexp of this sample (only) remained high in the 11-factor solution. Therefore, this outlier sample was not included in any discussion/presentation. We have therefore used the 11-factor average mass spectral profiles' relative fragment ion intensities from the 16 retained runs, within one standard deviation (1 SD), to proceed with running BS 100 times for obtaining a modeling error estimate. By applying these constraints on the retrieved factor mass spectral profiles, we aimed at guiding the solution towards environmentally meaningful rotations 13 , but without forcing the profiles into too narrow intervals that could potentially result in unrealistic/biased relative errors. We also introduced a "block BS" approach in these 100 runs, using a block length (l) of 7 consecutive samples, according to the semi-empirical criterion 14,15 l = N^(1/3), where N was the sample size (~350). Therefore, 50 non-overlapping blocks were created (each block contained samples from one station). These were treated as single-sample blocks in the different resampling runs, which assisted in preserving the original time series structure to a certain extent, i.e. to account for the partial co-variability between e.g. haze and POA observed in the preliminary uncertainty analysis. Besides addressing co-variability issues, the blocked bootstrap strategy is also recommended when performing fewer than ~100 BS runs 16 . By following this approach, all 100 BS runs matched the base case solution, with relative errors below 30 % on average for factor mass concentrations > 50 ng m^−3, with MSA-OA clearly being the least uncertain factor (Fig. S6). On the basis of comparable absolute mass concentrations, our relative errors were generally lower than those reported for > 100 ng m^−3 in Daellenbach et al. (2017) 17 . In parallel, we assessed the sensitivity of the 11-factor solution by testing a complementary, independent approach conceptually similar to bootstrapping (not eventually adopted as the main one). This approach consisted of running PMF on randomly reduced datasets (< 350 samples in the input matrix) with variable sample size, i.e. 33 %, 50 %, 70 % and 85 % of the 350 samples, 5 times for each sample size (20 sensitivity runs in total), following the approach of Hedberg et al. (2005) 18 . The BSOA, MSA-OA and PBOA factors were identified (matching the base case) in all of these 20 sensitivity runs. At the same time, we observed features similar to those of the selected approach described before, i.e. one TIK sample affecting the output when randomly selected or not (e.g. misattribution of the CHN-rich, OOA and FB-related factors), and temporal co-variability, e.g. between the haze and POA factors, leading to imprecise factor identification.
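For concreteness, the block-bootstrap resampling described above can be sketched as follows; the function name is illustrative, and for simplicity station boundaries are ignored here, whereas in the actual analysis each block contained samples from a single station.

```python
import numpy as np

def block_bootstrap_indices(n, block_len=7, rng=None):
    """Indices for one block-bootstrap resample: the series is cut into
    contiguous non-overlapping blocks of length l = n**(1/3) (~7 for
    n ~ 350), which are then drawn with replacement as single units."""
    rng = np.random.default_rng(rng)
    blocks = [np.arange(i, min(i + block_len, n))
              for i in range(0, n, block_len)]
    picks = rng.integers(0, len(blocks), size=len(blocks))
    return np.concatenate([blocks[p] for p in picks])

# One resampled PMF input matrix (rows = samples):
# X_bs = X[block_bootstrap_indices(X.shape[0])]
```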
Overall, 9 runs from this approach matched the base case solution, with the acceptance-to-rejection ratio increasing with increasing sample size (80 % solution acceptance for the 85 % randomly reduced datasets). This is because reduced datasets may be explained by fewer than 11 factors (if constraints are not introduced), and it indicates the importance of large PMF-input datasets in sufficiently capturing the variability in both the chemical composition and the temporal trends. Without considering the high-loading TIK sample, 13/20 runs would instead be accepted. This exercise therefore provided further support for the observed stability of the solution upon running BS with partial constraints on the retrieved mass spectral profiles, as eventually selected for this study and described above. We also compared the 11-factor solution that included all m/z (up to 191) vs. the solution with m/z up to 133 (i.e. the base case). The high correlation coefficients indicate excellent agreement for all OA factor time series (Table S4). No difference in the obtained PMF result was therefore observed by considering fragments with m/z > 133 or not. We note that the day-to-day relative contribution of these larger fragments to the total AMS signal from HR organics was 1.5 ± 0.4 %, although many of them exhibited high SNR (in the samples compared to the AMS water-blanks).

Retention of PMF factors related to ambient organic aerosols: We observed significant decreases in fCO2 after fumigation for five selected samples with very high measured initial fCO2 (~0.75), which confirmed the presence of carbonate. By contrast, the decrease in fCO2 was not as significant for five other samples with lower carbonate-related factor content in relative terms (Fig. S3). We also found excellent agreement between the sample mass spectra after fumigation vs. mathematical subtraction of the carbonate-related factor, as retrieved by PMF, from the respective original measured AMS spectra (Fig. S3), which provides strong evidence of the sufficient removal of inorganic carbonate via fumigation. These results support the mathematical subtraction approach used in the present study, especially considering that chemical damage of the organic content can occur upon fumigation 19 . The absolute concentrations of the remaining factors (median and IQR from the 100 BS runs) were corrected/rescaled to measured WSOC using a relative ionization efficiency of 1.4 for all organics 20 vs. 1.16 for carbonate 21 . We performed test AMS/PMF runs by including the field blank samples in the input matrix. The contributions of each organic factor in the blanks were then compared to the respective contributions in the samples (Fig. S4-S5). The absolute concentrations of the field-blank (FB)-related factor were not statistically different between the field blanks and the samples (Fig. S4). Further, the FB-related factor exhibits higher relative contributions at stations with non-pre-baked filters (one third, 35 ± 17 %, vs. one fifth, 21 ± 13 %, of the total signal, i.e. the sum of the 11 factors, for pre-baked filters). Also, the FB-related factor profile from the base case solution correlated with one of the two factor profiles identified by applying PMF on the FB AMS spectra from various stations (scatter plot of 578 fragment relative intensities, R^2: 0.99; slope: 1.00; both profiles had identical fCO2 = 0.43).
These provided both quantitative and qualitative support for a systematic association of the FB-related factor and its mass spectral fingerprint with the organic mass on the filter substrate at the different stations. We identified two non-interpretable factors; one was S-rich (F1) and the other was N-rich (F2). Their combined profile fingerprint appeared to be separated as a second factor when applying PMF on the field blanks, with minor relative contributions compared to the FB-related factor but enhanced for ALT. Their combined contributions in the samples were overall significantly lower than those of the FB-related factor, amounting to less than 5 % of the total signal in the base case solution (sum of the 11 factor absolute mass concentrations over all stations), with elevated contributions in samples from ALT. In many samples these factors did not contribute a real signal (Fig. S4); they exhibited relatively low absolute concentrations lacking a temporal trend, did not contain source-marker fragments, and did not correlate with available auxiliary data. Also, backward trajectory analysis did not provide any indication of specific source regions. These relatively minor factors were therefore not identified and thus were not considered for the discussion in the main text or in the results presentation. The CHN-rich factor (N:C ~0.11; Fig. S2) was rich in proteinaceous matter and dominated the variability of reduced N-containing fragments typically related to amino acids (Table S5). It exhibited yearly-average concentrations > 100 ng m^−3 at BAR, GRU and TIK. While the composition of this factor is well understood, links to natural or anthropogenic primary emissions remain elusive. We hypothesize an association with combustion emissions (trash burning, landfills at TIK), biological matter (terrestrial dust, phytoplankton production, bacteria and biological degradation) and/or sea salt aerosol arising from the marine microlayer 22,23,24 . The latter can be supported by its fair correlation with estimated sea salt concentrations at GRU (R^2: 0.5), which could partially explain the lack of a clear temporal trend at the different stations. However, anthropogenic emissions cannot be excluded. This factor was relatively less well defined, and its contributions in the samples appeared to be distinguishable from the contributions in the blanks only at high mass concentrations (Fig. S4).

Final AMS/PMF result: Recovery analysis was performed using PMF, following a simplified version of the approach of D. Bhattu et al. (pers. comm.). Briefly, the analysis was carried out on 265 samples where Sunset-OC data were available. Fifty BS runs were performed in total, where the output time series were constrained using all 11 factors (normalized median concentrations) within their IQR, as obtained from the 100 BS runs performed on the water-soluble fraction. In these 50 runs, the water-insoluble OC (Sunset-OC minus WSOC) time series was used as an additional variable in the PMF input matrix (scaled to the WSOA of the input matrix). The recovery of each factor was then defined as the water-soluble-to-total signal ratio of its output profile. The well-constrained resulting recoveries are shown in Fig. S9 for the six retained OA factors. The lowest water-solubility was ~60-80 % for POA, PBOA, haze and OOA, whereas MSA-OA and BSOA can be considered fully water-soluble. The median recoveries of the Carbonate-related, FB-related, F1, F2 and CHN-rich factors were 90 %, 100 %, 99 %, 77 %, and 100 %, respectively.
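These recoveries are subsequently used to convert water-soluble factor mass to total OA mass (cf. Fig. S9); schematically, and with placeholder recovery values lying within the ranges quoted above rather than the exact medians:

```python
# Placeholder recoveries (~0.6-0.8 for POA, PBOA, haze and OOA; ~1.0 for
# MSA-OA and BSOA); the analysis used the medians shown in Fig. S9.
RECOVERY = {"POA": 0.7, "haze": 0.7, "OOA": 0.7, "PBOA": 0.7,
            "MSA-OA": 1.0, "BSOA": 1.0}

def to_total_oa(wsoa):
    """Convert water-soluble factor mass (ng m-3) to total OA mass by
    dividing by the factor-specific recovery."""
    return {factor: mass / RECOVERY[factor] for factor, mass in wsoa.items()}
```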
Source-marker AMS fragments: We provide here specific fragments identified in our dataset as characteristic of specific sources, which were also identified in previous studies. These fragments were selected based on their highest contribution to these factors and the dominant contribution of these factors to these fragments. The numbers next to elements correspond to subscripts in standard chemical formulas, while the number behind each fragment indicates its m/z value. All AMS fragments correlating with each AMS/PMF factor full-dataset time series are listed in Table S5.

Text S5. Additional information on the back-trajectory analysis

The aim of this analysis was to identify long-term inner-Arctic vs. distant (transported) OA source components by coupling AMS/PMF with CWT. Based on the overall obtained results from all factors at all stations, and considering which factors are expected to be transported, we focused on the presentation and discussion of the obtained results for haze, POA, MSA-OA and BSOA, in order to further support their identification/interpretation, especially for haze. The atmospheric lifetime of pollutants is generally much longer in Arctic winter than at lower latitudes, due to slow dry/wet deposition. Tests were carried out with 10- vs. 14-d BTs for haze arriving at (remote) ALT (the component most likely to be most aged over all sites), considering the large mean Arctic age of air in winter 42,43 , but the result remained unchanged. Therefore, we did not extend the 10-d BTs in the subsequent main runs. We note that back-trajectories on the order of 10 d may emphasize the importance of Eurasia for Arctic pollution, while we cannot exclude transport over longer timescales and the importance of more southerly sources, e.g. in Asia. For MSA-OA, prevalent in summer, 5-d BTs proved sufficient in identifying potential source origins in our study, in line with the relatively shorter mean Arctic age of air in summer 42 . We are aware of potential inaccuracies and artifacts related to, for instance, sparse Arctic weather data 44 , complex orography around the station 45 , surface effects 46 and generally shallow planetary boundary layer heights (low inversion layers) 47 ; however, we have used long-term data and merged results (as detailed in the following) in an attempt to reduce their potential impact 48 on the main trends at a regional scale 49 . The main observations from the individual station results shown in Fig. S11 were the following: i) haze (10-d BTs; 5-d for Sub-Arctic PAL) is largely transported from Europe and mainland Russia to the different Arctic stations; ii) POA (10-d BTs unless otherwise noted) has mainly Eurasian potential source regions, possibly except for GRU and TIK, where a more local source influence can be expected (only summer samples were available), in which cases a trajectory-based approach conceptually fails to provide meaningful information; iii) MSA-OA (5-d BTs) exhibited a clear marine origin at all stations (negligible influence in TIK); iv) a similar (distant) source region in central Siberia is found for BSOA arriving at both BAR and TIK (Russian stations; 10-d BTs); a marine distant source at UTQ (10-d BTs) is not ruled out; more local/regional influence is expected at PAL (5-d BTs), in which case a trajectory-based approach conceptually fails to provide meaningful information. BT results for the BSOA factor at other stations were not considered/interpreted due to the lack of a response to temperature (see Fig. S14).

Supplementary Tables
Table S1. Filter sampling coverage.

Table S2. Polar night (wintertime absolute darkness) and midnight sun (summertime continuous sunlight) periods at the different Arctic stations. Note that these time periods vary slightly from one station to another due to their different latitudes (e.g. earlier onset and longer duration of wintertime darkness at higher-latitude stations).

Table S3. Time series correlations between identified factors in various AMS/PMF solutions for different numbers of factors (from 7 to 13), and the respective factor time series from the base case 11-factor solution. Factors colored in grey (Carbonate, FB-related, F1, F2, CHN-rich) were interpreted to not be related to sampled organics or major OA sources.

Table S4. Same as Table S3, but for the 11-factor solution including fragments up to m/z 133 vs. all m/z. The excellent agreement between the 11-factor base case solution and the median factor time series based on the 100 BS runs is also demonstrated.

Table S5. Correlation of 266 AMS fragment ion time series (used in the PMF input) with the corresponding full-dataset time series of seven PMF-output OA factors (identified "marker" fragments are indicated in bold). Fragments were classified according to the Pearson's r correlation coefficients, and then ordered by increasing m/z (25 HR fragments with SNR < 2.0 are indicated in italics). Fragments correlating with two factors are indicated with the color of the other factor.

Figure S1. PMF residual diagnostics, showing that solutions with more than 11 factors did not further reduce the residuals significantly (< 10 %). c) Scaled residuals (Q/Qexp; color scale on the right) of the base case 11-factor solution, for the full dataset (variables: HR fragment relative intensity).

Figure S2. All 11-factor AMS/PMF mass spectra (profiles; shown as normalized fragment intensities) in HR with average atomic ratios, where the fragments are color-coded by family. Factors with legends colored in grey (Carbonate-related, F1, F2, FB-related, CHN-rich) were interpreted to not be related to sampled organics or major OA sources. The spectra for m/z > 50 are magnified (right panel, ×10^−3).

Figure S3. Comparison of normalized mass spectra, averaged for five samples with higher initial fCO2 and five other samples with lower initial fCO2, with fumigation vs. without fumigation before the AMS measurement, as well as with fumigation vs. "without fumigation minus Carbonate-related factor". The former comparison indicates a substantial decrease in the CO/CO2 signal upon fumigation in samples with larger initial fCO2, while the latter comparison supports the mathematical subtraction of the Carbonate-related factor from the PMF analysis to account for the presence of inorganic carbonate in our samples.

Figure S4. Normalized cumulative distribution functions (CDFs) for the water-extracted organic carbon mass concentrations of the FB-related, F1, F2 and CHN-rich factors in the samples and in the blanks, for six stations (number of samples = 250) with available field blank filters. Open circles display actual data in thirty concentration bins (lines: fitted curves). Note that the same range of x and y variables is shown in all panels. Insets show normalized counts in log scale with a different bin number for demonstration.
The blank concentrations in each sample were estimated using the station-specific (and season/year-specific, where applicable) blank-filter relative organic factor composition (from PMF) and the sample-specific m^3 of sampled air per cm^2 of filter area. Similar ranges and distributions were found for the FB-related factor. F1 and F2 did not exhibit concentrations higher than 100 ng m^−3. Together with CHN-rich, these factors did not have statistically different contributions in the samples from those in the respective field blanks, i.e. the [IQR] of the day-to-day sample-to-blank mass ratio was not statistically different from 1.0: FB-related, [0.7, 3]; F1, [1, 6]; F2, [1, 4]; and CHN-rich, [0.9, 3].

Figure S5. Normalized cumulative distribution functions (CDFs) for the mass concentrations of the six retained/interpretable WSOC factors in the samples and in the blanks, for six stations (number of samples = 250) with available field blank filters. Open circles display actual data with bin size 10 ng m^−3 (lines: fitted curves). Insets show normalized counts in log scale with a different bin size for demonstration. The blank concentrations in each sample were estimated based on the station-specific (and season/year-specific, where applicable) blank-filter relative organic factor composition (from PMF) and the sample-specific m^3 of sampled air per cm^2. In contrast to the factors discussed in Fig. S4, these six factors did not contribute significantly to the signal of the field blanks. Specifically, all six factor concentrations in the blanks resided in the first or first two concentration bins (0-20 ng m^−3). Therefore, their detection limits were relatively low and their high concentrations in the samples can be considered real with high confidence. The full-dataset P99,sample-to-P99,blank mass ratios for POA, haze, MSA-OA, BSOA, PBOA and OOA are 7, 19, 34, 93, 12 and 10, respectively (P: percentile).

Figure S6. Scatter plot of the relative fragment intensities vs. their standard deviation (1 SD) for the six retained major WSOA factors from 100 BS runs, and the resulting time series of relative error (1 SD/average or relative IQR/2) vs. the average or median factor concentrations. The black line shows the 1:1 line. Individual factor time series linear fits are shown with lines having the same color as the WSOA factor.

Figure S7. Station-specific seasonal absolute WSOA mass concentrations, sorted in descending order of the station annual-average OA, and the respective relative factor contributions to WSOA mass (before recovery corrections). The corresponding panels for total OA are found in the main text (Fig. 1).

Figure S8. Same as Fig. 2, but for WSOA factors.

Figure S9. Estimated AMS/PMF recoveries for the major OA sources. The median values were used to convert the water-soluble to total OA mass.

Figure S10. Time series of cumulative (median) absolute factor contributions to total OA mass at each station (median composite dates shown for the sampling period at each station from start to end).

Figure S11. Back-trajectory analysis results using ZeFir, based on concentration-weighted trajectories (CWT), where the entire time series of each WSOA factor mass from each station were used as input. The receptor site is indicated with a red circle. The heat maps indicate air parcels responsible for high measured factor concentrations arriving at a receptor site (label), and thus potential major source regions for the associated long-term datasets.
Results for stations with very low year-long factor concentrations, which are not omitted here, should be interpreted with caution.

Figure S16. Same as Fig. 3, but for total OA (sum of six factors).
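The blank-correction arithmetic described in the captions of Figs. S4 and S5 can be made concrete with a short sketch. The numbers below (blank loading, factor fractions, sampled-air volume) are hypothetical placeholders, not values from the dataset; the sketch only illustrates the conversion from a blank-filter loading to an air-equivalent factor concentration.

```python
# Hypothetical illustration of the per-sample blank estimation described
# above: a station-specific blank-filter organic loading is apportioned to
# PMF factors and converted to an air-equivalent concentration using the
# sample-specific sampled-air volume per filter area.

blank_loading_ng_per_cm2 = 120.0   # blank-filter loading (ng per cm^2), hypothetical

# Relative factor composition of the blank filters from PMF (fractions sum to 1)
blank_factor_fraction = {
    "FB-related": 0.45,
    "F1": 0.20,
    "F2": 0.15,
    "CHN-rich": 0.20,
}

sampled_air_m3_per_cm2 = 2.5       # m^3 of air per cm^2 of filter, hypothetical

# Blank-equivalent concentration of each factor in this sample (ng m^-3)
blank_conc = {
    factor: blank_loading_ng_per_cm2 * frac / sampled_air_m3_per_cm2
    for factor, frac in blank_factor_fraction.items()
}
for factor, c in blank_conc.items():
    print(f"{factor}: {c:.1f} ng m^-3")
```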
Generating information-dense promoter sequences with optimal string packing

Dense arrangements of binding sites within nucleotide sequences can collectively influence downstream transcription rates or initiate biomolecular interactions. For example, natural promoter regions can harbor many overlapping transcription factor binding sites that influence the rate of transcription initiation. Despite the prevalence of overlapping binding sites in nature, rapid design of nucleotide sequences with many overlapping sites remains a challenge. Here, we show that this is an NP-hard problem, coined here as the nucleotide String Packing Problem (SPP). We then introduce a computational technique that efficiently assembles sets of DNA-protein binding sites into dense, contiguous stretches of double-stranded DNA. For the efficient design of nucleotide sequences spanning hundreds of base pairs, we reduce the SPP to an Orienteering Problem with integer distances, and then leverage modern integer linear programming solvers. Our method optimally packs sets of 20–100 binding sites into dense nucleotide arrays of 50–300 base pairs in 0.05–10 seconds. Unlike approximation algorithms or meta-heuristics, our approach finds provably optimal solutions. We demonstrate how our method can generate large sets of diverse sequences suitable for library generation, where the frequency of binding site usage across the returned sequences can be controlled by modulating the objective function. As an example, we then show how adding additional constraints, like the inclusion of sequence elements with fixed positions, allows for the design of bacterial promoters. The nucleotide string packing approach we present can accelerate the design of sequences with complex DNA-protein interactions. When used in combination with synthesis and high-throughput screening, this design strategy could help interrogate how complex binding site arrangements impact either gene expression or biomolecular mechanisms in varied cellular contexts.
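To make the reduction described above concrete: when one binding site is appended to another with maximal overlap, it contributes only the bases not covered by the overlap, and this added length can serve as the integer distance between sites in an Orienteering-style formulation. The sketch below is an illustrative, from-scratch computation of such a distance matrix in Python; it is not the authors' implementation and, for brevity, ignores reverse-complement placement on the second strand.

```python
def overlap(a: str, b: str) -> int:
    """Length of the longest suffix of `a` that is a prefix of `b`."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def added_length(a: str, b: str) -> int:
    """Integer 'distance' from a to b: the new bases b contributes
    when appended to a with maximal overlap."""
    return len(b) - overlap(a, b)

sites = ["TTGACA", "GACAAT", "CAATGC"]  # toy binding site collection
# Pairwise distance matrix, the input for an Orienteering-style solver:
dist = {(a, b): added_length(a, b) for a in sites for b in sites if a != b}
print(dist[("TTGACA", "GACAAT")])  # 2: TTGACA + GACAAT -> TTGACAAT adds "AT"
```

Minimizing the total added length over an ordering of sites (subject to a sequence-length budget) is then the string-packing objective that an integer linear programming solver can optimize.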
Reviewer 1: In the manuscript entitled "Generating information-dense promoter sequences with optimal string packing," the authors described solution methods to create promoter sequences that contain many transcription factor binding sites in a specified (typically short) length of DNA. The final solver developed for this task is available online, and the ability to generate promoters with densely packed binding sites could be of general interest to the synthetic biology or cell engineering communities. However, the functionality of at least one promoter library must be shown to demonstrate the expected value of this novel solution method. Demonstration of bacterial promoter library function would suffice.

We wholeheartedly agree that experimental validation is important. However, generating a well-designed promoter library is not a trivial task and is a substantial undertaking in its own right. This is something that we have recently started to work on, but it will be a manuscript-scale set of results, which is why we chose to break this work into two parts, one with a computational focus (this manuscript at PLOS Computational Biology), and in the future we anticipate writing a second experimentally-focused manuscript that we will submit to another journal.

In principle, conducting a functional screen of promoter variants is straightforward. However, the SPP inherently designs nucleotide sequences with complex DNA-protein interactions, and this added sequence complexity introduces additional criteria for proper functional screening. These criteria involve selecting appropriate binding sites for inclusion in promoter variants, and designing assays that involve cellular events affecting multiple transcription factors. The structure of bacterial promoters is well-defined, with the strength of core promoters being primarily driven by the presence of sigma factor recognition sites. Therefore, in the right environmental context where the target sigma factor is active, it would likely be possible to design a small functional library, and it would not be surprising for dense arrays that contain these well-known consensus sites to act as promoters. However, we are concerned that designing and presenting results from these "conventional," albeit synthetic, promoters would not convincingly showcase the strength of our method. Instead, the strength of the SPP lies in its ability to efficiently design sequences that accommodate complex, overlapping DNA-protein interactions. Furthermore, the SPP can generate these types of sequences at a scale that can facilitate studies on transcription factor signal integration, competition, and condition-dependent interactions influenced by genetic context. Studies of this complexity can require libraries ranging from tens to hundreds of thousands of variants.

The SPP itself is a considerable contribution to in silico biological sequence design. Existing studies that have attempted the design of overlapping sequences have relied on ad hoc methods or very short sequences, neither of which are appropriate for large-scale library design. There is a growing interest in designing synthetic DNA (i.e., sequences not found in natural genomes) both to develop parts with novel function and to generate training examples for machine learning models. At its core, the SPP offers a novel, scalable method for generating synthetic DNA sequences. We now describe this in further detail in the manuscript.

In summary, given the complexity of conducting a functional screen and the depth with which the current manuscript explores the SPP, we aim to maintain the focus here on the computational and application-agnostic aspects of the SPP, setting aside functional screening for more targeted future investigations.

→ See Introduction and Discussion

1. It is often unclear what the library size is for each generated library, referring to the number of sequences that would need to be screened to test for function (number of sequences generated), instead of the author's definition of library size |R| = number of binding sites. This information is necessary to gauge the usefulness of each generated library, as screening 10^6 promoters for function might be possible in one system, while testing 10^1 might be more feasible in another. As the manuscript sells the SPP method for promoter library generation, discussion of the feasibility of testing the generated libraries is warranted.
We thank the reviewer for raising this important point, which was a potential source of confusion. We acknowledge that "library size" conventionally refers to the number of variants in a functional screening context, whereas we were using it to mean this but also to describe the binding site collection size |R|. To eliminate this source of confusion, we have revised the terminology throughout the manuscript. Now, "library" refers to the sequences generated for screening, and "binding site collection" describes the number of binding sites input into the SPP to generate a dense array.

→ See points throughout manuscript

To the reviewer's second point, we agree that different promoter design projects may require varying library sizes, depending on the complexity of the desired expression profiles. In the context of integer linear programming and the SPP, the goal of generating sequences with specific characteristics (such as a defined spacing between two particular binding sites to achieve a desired promoter "likeness") often translates into layering incentives or constraints onto the solver. This approach helps steer the types of solutions that are returned. We have observed that solver time increases as more constraints are applied. For example, tasks like generating dense arrays constrained only by binding site collection size |R| and sequence length L (as in Figure 2) tend to be quicker. However, more complex generative tasks, such as creating dense arrays with "diversity-driven order" (Figure 4) or "positional bias" (Figure 6), add complexity and extend solve time. We have updated the Discussion to explicitly address this trade-off between speed and constraints. We also include actual solve times for a representative library design task, comparing results across solvers.

→ See Discussion
→ See Table S1

2. Promoter library sequences, the full sequences in addition to the inputted binding sites, should be included in the supplement or extended data, primarily for libraries that are expected to have function like the bacterial promoter libraries.

To clarify, all figures utilized mock binding sites (i.e., randomly generated sequences representing binding sites). This decision was made to highlight the SPP's capabilities in an application-agnostic manner. We have now clearly stated in the Results section and figure captions that all binding sites were random sequences to avoid future confusion.

→ See points throughout the manuscript

Because readers may benefit from access to the data shown in the manuscript, all sequences described in the manuscript are available in our open-source "dense-arrays" library, linked in the Source Code section. The /benchmarks folder provides detailed information for each dense array. It specifies which binding sites from a given binding site collection (input into the SPP) appeared in the solution sequence, along with their coordinates represented as string offset positions. To enhance accessibility, we have explicitly stated in the manuscript that these sequences are available. Additionally, we have included a README file in the /benchmarks folder to help interpret the provided tables. The Source Code section now includes a description of how to access the data.
→ See Source Code

Reviewer 2: Natural promoter regions may contain many overlapping binding sites of transcription factors, affecting transcription initiation rates. Despite the common occurrence of overlapping binding sites in nature, the rapid artificial design of nucleotide sequences with many overlapping sites remains a challenge. In this paper, the authors propose a computational approach for designing nucleotide sequences with densely packed DNA-protein binding sites, termed the nucleotide String Packing Problem (SPP). They first demonstrate that the SPP is NP-hard, and thus reduce the problem to an Orienteering Problem with integer distances, which can then be efficiently solved using various open-source and commercial solvers. The authors subsequently explore many applications of the method in the design of bacterial promoters.

1. Regarding the issue of bias in solutions, the authors attempt to explore the effects of binding site size and sequence on bias, while briefly mentioning the potential impact of different solvers due to their different internal algorithms. However, the explanation for the effects of binding site sequences and different solvers is not sufficiently clear. For the effect of sequences, one approach could be to investigate the influence of bias from the perspective of sequence overlap. Additionally, exploring different solvers and observing their specific effects on bias, if any, could also be attempted here.

We thank the reviewer for these suggestions. We have now included more analysis on sequence bias, examined sequence overlap, and tested the effects of bias with different solvers.

Working with the binding site collections presented in Figure 3, we investigated the reasons for discrepancies in the representation of binding sites, which were all the same length, across various dense arrays. We found that binding sites whose sequence was more subject to overlap with other binding sites were the ones that were privileged in the top-scoring solutions (Figure S3).

→ See Results, "Diversity of the generated solutions" section
→ See Figure S3

As for comparing the effect of different solver implementations on the transient bias in binding site representation among similarly-scored solutions, we updated Figure S4 to show that different solvers (Gurobi, SCIP, CBC) indeed return similarly-scored solutions in different arbitrary orders. None of the solvers maximize the entropy of binding site representation in transient solution sets. Additionally, we now include Table S1, which details the solve times for Gurobi, SCIP, and CBC, offering a practical perspective on performance relative to Figure S4.

→ See Results, "Diversity of the generated solutions" section
→ See Figure S4
→ See Table S1

2. The article mentions that "Meanwhile, generative AI techniques are starting to show promise in emulating the complexity of context-dependent promoters (31-37). However, these models often struggle with interpretability, and fine-tuning them to include or exclude specific binding sites still requires specialized expertise (38)." However, in practice, the method designed in this article may rely more heavily on specialized expertise, as understanding different binding sites may involve complex processes. Additionally, whether existing expert knowledge is sufficient to generate binding site libraries consistent with natural promoters is worth discussing. It is recommended that the authors provide a clearer explanation in this regard.
We thank the reviewer for pointing this out; indeed, contrasting our SPP approach with generative deep learning models on the basis of required user expertise may not be an appropriate angle. Generative models are typically trained on natural genomic sequences; consequently, all generated promoters are derived from these natural sequences, and these models may struggle to create sequences outside their training distributions. The choice of training data crucially shapes the model, reflecting our assumptions about which sequence distributions we intend to explore (DOI: 10.1038/s41587-023-02115-w). The SPP can generate DNA sequences with "extreme" cis-regulatory logic that likely do not appear in the host's genome, offering the potential to present novel expression responses not dictated by the cell's evolutionary history. To address these points, we have updated the Introduction to describe this distinction.

→ See Introduction

3. Furthermore, due to the ambiguity in determining binding sites in biology and the variation in protein-motif binding across different biological states, whether more densely distributed binding sites correspond to a more suitable promoter is still worth considering. It is hoped that more discussion on this aspect will be provided in the Discussion section.

This is a great point. As mentioned in the Introduction, previous studies have shown that natural E. coli promoters often have multiple binding sites in close proximity (DOI: 10.1371/journal.pone.0114347). We acknowledge and now highlight that there is no definitive threshold for classifying a sequence as a transcription factor binding site. Binding sites exist on a continuum, with affinities ranging from so low as to be negligible, to so high that the transcription factor is nearly always bound (DOI: 10.1038/s41576-024-00713-1). Indeed, binding affinity data for transcription factors, often derived from methods like ChIP-Seq, typically encompass a range of binding peaks rather than a single sequence. These sequences can be affiliated with varying degrees of binding affinity, determined by factors such as experimental enrichment (DOI: 10.1128/microbiolspec.MGM2-0035-2013) or their similarity to a consensus sequence. In practice, when multiple binding sites with labeled affinities are available for a transcription factor, one can choose among these binding sites as inputs for the SPP method. This flexibility allows for the tailoring of output solutions that meet specific design criteria or assumptions. However, it is important to note that comprehensive binding affinity data is available for only a limited set of transcription factors, and that representative consensus sequences do not necessarily equate to the highest thermodynamic binding affinity (DOIs: 10.1016/S0968-0004(98)01187-6; 10.1186/gb-2003-5-1-201). We now address some of these nuances in the manuscript and have added text to the Discussion.

→ See Discussion
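Returning to the binding-affinity continuum discussed under point 3, a toy position weight matrix (PWM) makes the graded notion of a "binding site" concrete. Everything below (the count matrix, pseudocount, and candidate sequences) is hypothetical and not tied to any transcription factor discussed in the manuscript.

```python
import math

# Hypothetical position frequency matrix for a 4-bp motif; rows = positions,
# values = imagined A/C/G/T counts from enriched binding peaks.
pfm = [
    {"A": 8, "C": 1, "G": 1, "T": 0},
    {"A": 0, "C": 9, "G": 1, "T": 0},
    {"A": 1, "C": 0, "G": 8, "T": 1},
    {"A": 0, "C": 1, "G": 0, "T": 9},
]
background = 0.25  # uniform background base frequency

def log_odds(counts, pseudo=0.5):
    """Convert one position's counts to log2-odds scores vs. background."""
    total = sum(counts.values()) + 4 * pseudo
    return {b: math.log2((counts[b] + pseudo) / total / background) for b in "ACGT"}

pwm = [log_odds(row) for row in pfm]

def score(site: str) -> float:
    return sum(col[b] for col, b in zip(pwm, site))

# Scores form a continuum rather than a binary site / non-site call:
for s in ["ACGT", "ACGA", "TCGT", "TTTT"]:
    print(s, round(score(s), 2))
```

Any threshold drawn on such scores is a design choice, which is why candidate sites with different labeled affinities can be offered to the SPP as alternative inputs.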
4. If experimental conditions permit, synthesizing designed promoter sequences and subsequently measuring the strength of artificially designed promoters using methods such as fluorescence protein assays would enhance the persuasiveness of the article.

We agree that a fluorescent protein assay would be suitable for experimental validation, and we are indeed preparing to screen many SPP-derived promoter sequences using this method. However, as mentioned in the response to Reviewer 1 (page 1 of this document), we believe this falls outside the scope of the current manuscript.

5. The introduction part lacks a comprehensive overview of the categories of computational methods related to promoter design, and there are additional types of computational methods relevant to promoter design that should be introduced. For example, some promoter strength predictive models (classification/regression), which may play a crucial role in in silico directed evolution.

We thank the reviewer for this comment. We recognize that the field of computational promoter design is broad, covering both predictive and generative methods, each with distinct use cases. The SPP is not related to regression or classification tasks on DNA sequences; it addresses the generative design challenge of creating contiguous DNA sequences from many overlapping binding sites. Since the SPP is inherently generative, our commentary specifically targets other generative methods related to promoter design. We have now expanded our commentary in the Introduction to better explain how the SPP contributes to the generative DNA sequence design space; for example, the SPP can supply synthetic promoters as training examples for deep learning models.

However, we acknowledge the reviewer's point that SPP-derived promoters could be analyzed using predictive models. These models, typically trained with numerous examples of natural promoters and non-promoter sequences, could then assess our dense array sequences to determine characteristics such as promoter likeness and strength. We now mention in the Discussion how the SPP can be integrated with such predictive models.

→ See Introduction
→ See Discussion

Minor issue: The title of Figure 4 is not bold, inconsistent with other figures.
Controlling polarization at insulating surfaces: quasiparticle calculations for molecules adsorbed on insulator films

By means of quasiparticle-energy calculations in the G0W0 approach, we show for the prototypical insulator/semiconductor system NaCl/Ge(001) that polarization effects at the interfaces noticeably affect the excitation spectrum of molecules adsorbed on the surface of the NaCl films. The magnitude of the effect can be controlled by varying the thickness of the film, offering new opportunities for tuning electronic excitations in e.g. molecular electronics or quantum transport. Polarization effects are visible even for the excitation spectrum of the NaCl films themselves, which has important implications for the interpretation of surface science experiments for the characterization of insulator surfaces.

On the nanoscale, materials often reveal extraordinary features. To harness this potential it is essential to grow or manufacture nanostructures in a controlled way. Ultrathin insulating films are an example for which this has been achieved on metal and semiconductor surfaces. We have recently shown that these films develop new and unique properties as their thickness approaches the limit of a few atomic layers and that such supported ultrathin films should be regarded as new nanosystems in their own right [1]. Here we go one step further and demonstrate by means of first-principles calculations that control over the film thickness means control over the polarization of the film. This in turn gives access to properties at the film's surface, for example the energy levels of molecular adsorbates, which are relevant in the context of e.g. catalysis, molecular electronics, or quantum transport.

The fact that ultrathin insulator films offer a new perspective of control at the nanoscale is increasingly being recognized. Repp et al., for example, have recently demonstrated that gold atoms adsorbed on NaCl/Cu can be reversibly switched between the neutral and negative charge state [2]. Alternatively, the charge state of Au atoms on MgO films can be controlled by the film thickness [3]. For planar molecules adsorbed on Cu-supported NaCl films, the molecular orbitals can be resolved spatially and energetically [4,5] by scanning tunneling microscopy (STM) and spectroscopy (STS), and even reactions can be followed [6]. Interestingly, STS experiments performed on pentacene molecules adsorbed on NaCl/Cu(111) show a significant influence of the film thickness on the molecular gap [5].

Ultrathin insulator films have also developed into highly valuable and intensively studied model systems for characterizing insulating surfaces. The study of insulator surfaces has proven difficult due to their lack of conductivity, which severely limits the range of applicable surface science techniques. Ultrathin insulator films grown on conducting substrates offer a solution to this dilemma because the films can exchange electrons with the substrate by tunneling [7,8]. Caution has to be applied, however, when transferring thin-film results to the surfaces of technological interest. The properties of ultrathin films may deviate considerably from those of macroscopic films [1] and the excitation spectrum may be affected by the polarization effects presented here. In this Letter we will address both of these points by means of G0W0 quasiparticle-energy calculations [9,10] for the example of a prototypical insulator-semiconductor interface (NaCl/Ge(001)) and CO as a model adsorbate.
Supported NaCl films are well-behaved model systems for studying the properties of insulator surfaces [6,11,12,13]. Although they are mostly grown on metals, notably Cu [2,13], Ge(001) is the substrate of choice for studying insulator/semiconductor interfaces [11,12,14,15,16]. In recent years, these films have also attracted increasing interest in the context of STM and STS studies of atomic and molecular adsorbates [2,3,4,5,6].

We use density-functional theory (DFT) in the local-density approximation (LDA) to determine the atomic structure of NaCl films on Ge(001). The electronic excitation spectrum is calculated with many-body perturbation theory in the GW approach as a perturbation to the LDA ground state (henceforth denoted G0W0@LDA). The GW approach has not only become the method of choice for calculating quasiparticle excitations in solids [9,10] as probed in STS or direct and inverse photoemission, but also includes long-range polarization effects. In a bulk material these encompass the screening of additional charge. At a surface or an interface, however, the abrupt change in dielectric constant gives rise to a net build-up of charge, so-called image charges. This net polarization acts back on the additional charge and increases in strength with decreasing distance between the additional charge and the interface. These polarization effects are absent from the most common density functionals (such as the local-density or generalized gradient approximation, exact-exchange and hybrid functionals), but enter the GW self-energy (Σ = iGW) through the screened Coulomb potential W. The application of the GW method is therefore necessary to capture these effects [17,18,19,20]. Polarization or image effects are present at any surface or interface, but are most commonly associated with metal surfaces, where the ratio of dielectric constants is largest. In supported ultrathin films, however, a charged excitation (e.g. an electron added to the 2π* level of CO on NaCl/Ge(001)) polarizes two interfaces (vacuum/NaCl and NaCl/Ge(001)). The combination of dielectric constants and film thickness therefore controls the strength of the polarization effects from the semiconductor/insulator interface to the insulator surface.

Before we address the polarization effects at the NaCl/Ge interface in detail we briefly describe its atomic structure, which had not been determined previously. The DFT-LDA calculations were performed with the SFHIngX code [21]. We employ a plane-wave basis set (40 Ry cutoff) and norm-conserving pseudopotentials. The NaCl/Ge system is modeled in the repeated-slab approach with a 6-layer Ge slab at the experimental lattice constant (saturated by hydrogen atoms on the bottom side) plus a varying number of NaCl layers on the top side [22]. Increasing the Ge thickness to 12 layers produces no significant change in the atomic or electronic structure of the NaCl films. In agreement with experimental indications [16], we find that the Ge dimers of the clean Ge(001) surface prevail below the NaCl film, giving rise to a 2 × 1 surface lattice [11]. The Ge dimers below the NaCl film remain asymmetric, but with a smaller tilting angle (10°) compared to the free surface (19°). The adhesion of the film is dominated by the electrostatic interaction between the ions in the film and the partial charges developing at the buckled-dimer surface of Ge(001).
The relaxation pattern in the bottom layer follows the electrostatic profile of the Ge surface, which attracts the ions next to the dimer, but repels those above the inter-dimer troughs (cf. Fig. 1). The corrugation in the higher layers is induced by the bottom layer and quickly flattens out as the thickness increases.

FIG. 2: a) 5σ-2π* splitting of CO on NaCl/Ge(001) at the LDA and G0W0@LDA levels of theory, and b) G0W0 corrections to the 5σ and 2π* molecular orbitals as a function of the NaCl film thickness.

Next, we use the adsorption of a small molecule to probe the effect of the interface polarization at the film's surface. For this purpose, we placed a single CO molecule in the 2 × 1 surface unit cell (we have no indications for relevant changes at lower coverages). CO physisorbs perpendicular to the NaCl(001) surface with the C-end down (cf. Fig. 1). The CO axis tilts along (110), i.e. the Ge dimer, by 2.6° for 2 monolayers (ML) of NaCl, 0.7° (3 ML), and 0.1° (4 ML), respectively. The adsorption energy of 0.28 eV is the same for the two inequivalent Na sites at the surface and does not depend on the film's thickness to within 0.01 eV. It also agrees with the value for a thick free-standing NaCl slab. The molecular states give rise to flat bands in the band structure and do not hybridize with NaCl states. In the following, we focus on the molecular gap (given by the 5σ-2π* splitting) at the Γ point.

The G0W0 calculations for the electron addition and removal spectra were performed with the gwst code [23,24,25]. For the correlation (exchange) self-energy, a 14 Ry (28 Ry) plane-wave cutoff and a 6 × 3 × 1 k-point sampling was used. State summations included 2500 bands (81 eV above the Fermi level). To correct for artificial polarization effects in the repeated-slab approach, a "finite vacuum" correction was applied [26].

In Fig. 2, we compare the molecular gap of CO computed for LDA and G0W0@LDA for films of 2-4 ML thickness. At the level of LDA, we observe a small reduction of the molecular gap with increasing film thickness due to the thickness-dependent structural changes in the surface of the NaCl film. The G0W0 corrections, however, reverse this trend and introduce a significant increase in the quasiparticle gap from 12.48 eV (for 2 ML) to 12.80 eV (4 ML). The other CO orbitals exhibit analogous thickness-dependent shifts (not shown). These gaps are significantly smaller than for CO on a pure NaCl surface (13.1 eV for 6 ML NaCl, no Ge) or the free molecule (15.1 eV). Similar reductions have been found in G0W0 calculations for molecules adsorbed on insulators [27] or semimetals [28] and are a result of surface polarization or charge transfer to the molecule [29].

We will now demonstrate that the reduction of the 5σ-2π* splitting is caused by polarization effects at the two interfaces. Inspection of Fig. 2b) reveals that the thickness-dependent changes are of similar magnitude, but opposite sign. Such a behavior is characteristic for long-range polarization effects [26,28]. To illustrate this, we split the electron addition/removal into two steps. First, the charged state is created on a free molecule. Secondly, we consider the electronic polarization of the NaCl/Ge substrate. If the lifetime of the charged state on the molecule is long compared to the electronic relaxation time of the substrate, we may treat the rearrangement of charge with classical electrostatics.
The polarization lowers the energy of the charged state E_{N±1} by (1/2)qΔV, where ΔV is the polarization-induced change in the electrostatic potential and q the charge. The energy of the hole state (E_N − E_{N−1}) thereby increases whereas that of the electron state (E_{N+1} − E_N) becomes smaller. To substantiate this picture, we estimate ΔV outside the supported NaCl films. For this, the Ge substrate, the NaCl film, and the vacuum region are replaced by homogeneous dielectric media (ε = 14, 2.8, and 1, respectively) with abrupt interfaces. The thickness of the NaCl region is taken to be 2.8 Å per layer. The effective polarization is then computed using the image-charge method. For ΔV, we take the value of the image potential at the position of the CO molecule outside the surface. Since this position is somewhat ambiguous due to the spatial extent of the CO orbitals, we determine it by requiring that the reduction of the 5σ-2π* splitting of 2 eV at the bare NaCl surface is reproduced by the model. This yields a value of 1.5 Å, and the model then gives 5σ-2π* splittings of 12.6, 12.7, and 12.8 eV for the 2, 3, and 4 ML films, respectively, in good agreement with the values from our G0W0@LDA calculations.
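For readers who wish to reproduce the slab model numerically, the sketch below evaluates the standard three-layer image-charge series for a point charge in vacuum above a dielectric film on a semi-infinite substrate. It is an illustration of the model described above, not the authors' code: the series structure is textbook electrostatics, and with the stated parameters it reproduces the reported thickness trend to within a few tenths of an eV, the residual offset reflecting calibration details not fully specified in the text.

```python
# Image-charge estimate of the polarization-induced potential change DeltaV
# felt by a point charge at height a above a dielectric film (eps_f,
# thickness d) on a semi-infinite substrate (eps_s), with vacuum above.

K = 14.3996  # e^2/(4*pi*eps0) in eV*Angstrom

def q_delta_V(a, d, eps_f=2.8, eps_s=14.0, nmax=200):
    """q*DeltaV in eV for a unit charge at height a (Angstrom) above a
    film of thickness d (Angstrom)."""
    b12 = (1.0 - eps_f) / (1.0 + eps_f)      # vacuum/film reflection coeff.
    b23 = (eps_f - eps_s) / (eps_f + eps_s)  # film/substrate reflection coeff.
    v = b12 / (2.0 * a)                      # primary image in the film surface
    for m in range(nmax):                    # multiple reflections inside film
        coeff = (1.0 - b12**2) * (-b12)**m * b23**(m + 1)
        v += coeff / (2.0 * a + 2.0 * (m + 1) * d)
    return K * v

a = 1.5  # Angstrom; charge height, calibrated as described in the text
for n_ml in (2, 3, 4):
    qdV = q_delta_V(a, d=2.8 * n_ml)
    # Electron and hole levels each shift by (1/2)*q*DeltaV, so the gap of
    # the free molecule (15.1 eV) is reduced by |q*DeltaV| in this model.
    print(f"{n_ml} ML: q*DeltaV = {qdV:+.2f} eV, model gap ~ {15.1 + qdV:.1f} eV")
```

The thicker the film, the farther the strongly screening Ge substrate is from the charge, so |qΔV| shrinks and the molecular gap widens, which is exactly the trend seen in Fig. 2.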
On the experimental side, a gap reduction as a function of film thickness has been reported for STS experiments on pentacene molecules adsorbed on NaCl/Cu(111) [5]. In STS, the molecular states give rise to tunneling resonances and the observed tunneling gap amounts to 3.3, 4.1, and 4.4 eV for NaCl films of 1, 2, and 3 ML in thickness (the gap of pentacene in the gas phase is 5.3 eV). The overall trend as well as the magnitude agree very well with our G0W0@LDA calculations for CO/NaCl/Ge considering that the ratio of dielectric constants is much larger in the NaCl/Cu case. A tunneling resonance gap will be hard to observe experimentally for CO/NaCl/Ge, however, because the highest occupied molecular orbital (5σ) is located ∼8.5 eV below the Ge valence band maximum (vbm) in G0W0@LDA and thus even below the NaCl valence band (4-7 eV below the Ge vbm, cf. Fig. 3).

Our results highlight that surface and interface polarization effects are important for adsorbed atoms and molecules. Supported ultrathin films thereby offer unprecedented opportunities for controlling these effects by tailoring both the film's thickness and dielectric constant to the desired properties. These additional parameters may expedite the design of devices in molecular electronics or quantum transport, where the distance of a molecular state to the Fermi level is an important quantity.

We will now demonstrate that the same polarization effects also affect excitations inside the ultrathin film and discuss the implications of this for the characterization of insulator surfaces. Fig. 3 shows the density of states (DOS) projected onto the NaCl film for films with 2-4 ML for LDA with and without the G0W0 corrections. The most important change when comparing the LDA and G0W0 DOS is the shift of the NaCl bands [30] relative to the Ge states. This is not too surprising, as the G0W0 corrections to the bulk band gaps are much larger for NaCl (3.3 eV) than for Ge (0.7 eV). Including the G0W0 corrections, the top of the film's valence band lies ∼4.2 eV below that of the Ge substrate, in excellent agreement with ultraviolet photoelectron spectroscopy [31]. More remarkable, however, is the change in the shape of the NaCl-derived features when going from LDA to G0W0@LDA and from 2 to 4 layers. This indicates that the G0W0 shifts for the NaCl states are not uniform and that corrections derived from bulk calculations are not easily transferable to thin films. Excited states are instead subject to additional thickness-dependent and substrate-specific changes that are neither encompassed by a ground-state perspective nor easily derivable from bulk G0W0 calculations for the separate fragments alone. These thickness-dependent variations should be observable in high-resolution spectroscopic experiments. In LDA the DOS of bulk NaCl (not shown) is already attained at a thickness of only three layers. A similar behavior has previously been observed for ultrathin silica, hafnia and alumina films [1]. However, this is no longer the case when charged excitations (e.g. photoemission or tunneling) are treated appropriately. The G0W0@LDA calculations demonstrate clearly that the DOS of ultrathin films differs from that of bulk NaCl (which is identical in shape to the LDA DOS for a 4-layer film). This implies that caution has to be applied when interpreting spectroscopic results. Spectra of ultrathin films are not representative of bulk samples.

The non-uniform G0W0 shifts are also a result of the interface polarization effects. The position dependence of the self-energy (Σ = iGW) can be understood with the same dielectric slab model introduced above. Solving the image-charge model yields the induced potential ΔV(z) as a function of the position z in the film. From this, we build the following model self-energy for the occupied NaCl states: Σ(z) = ΔΣ − (1/2)ΔV(z). The constant ΔΣ encompasses all GW self-energy effects that are not due to image effects. Its actual value is not important for analyzing the position dependence of the self-energy. Based on our G0W0 calculations, we estimate it to be −1.05 eV for Cl 3s and −1.3 eV for Cl 3p. In order to assess the local variation of the self-energy in the full G0W0 calculation, the valence bands are projected onto atomic orbitals. For states that are predominantly composed of orbitals from a single atomic layer (>80%) we observe a linear dependence between the G0W0 correction and the projection weight. By extrapolating to 100%, the orbital-dependent quasiparticle corrections shown in Fig. 4 are obtained. The variation of the quasiparticle shifts throughout the NaCl film (G0W0 DOS in Fig. 3) is reproduced well by a projection-weighted sum of these local-orbital contributions. In Fig. 4, we compare the extrapolated shifts to our model self-energy. The good agreement illustrates that the interface polarization effects in the Ge/NaCl system are indeed the cause of the position dependence in the G0W0 self-energy.

We acknowledge fruitful discussions with Philipp Eggert and the Nanoquanta Network of Excellence (NMP4-CT-2004-500198) for financial support.
The clinical significance of remnant thyroid tissue in thyroidectomized differentiated thyroid cancer patients on 131I-SPECT/CT

Background: To explore the 131I-SPECT/CT characteristics of remnant thyroid tissue (RTT) in differentiated thyroid cancer (DTC), and further assess the risk factors and clinical significance.

Methods: 52 DTC patients after total thyroidectomy underwent neck 131I-SPECT/CT before 131I ablation. The diagnosis of RTT was based on SPECT/CT and follow-up of at least 3 months. The anatomic locations and SPECT/CT features of RTT were assessed by reviewers. The risk factors for CT-positive RTT were analyzed by the chi-square test.

Results: A total of 80 RTT lesions were diagnosed in this study; most were located in the regions adjacent to the tracheal cartilage (37/80) or the lamina of the thyroid cartilage (17/80). On SPECT/CT of RTT, low, moderate and high uptake were noted in 10, 24 and 46 lesions, respectively; definite positive, suspected positive and negative CT findings were noted in 10, 21 and 49, respectively. The RTT lesions with definite positive CT findings were mainly located adjacent to the lamina of the thyroid cartilage (5/10). Primary thyroid tumor extent (P = 0.029) and T stage (P < 0.001) were significant risk factors for CT-positive RTT.

Conclusions: RTT has a characteristic distribution and appearance on SPECT/CT. Most RTT lesions with definite CT abnormalities were located adjacent to the lamina of the thyroid cartilage, suggesting that surgeons should strengthen careful removal in this region, especially when the primary thyroid tumor involves both lobes or is stage T4. This study can provide value for improving the quality of thyroidectomy in DTC patients.

Background

Differentiated thyroid cancer (DTC) is one of the most common endocrine malignant tumors, including papillary thyroid carcinoma (PTC) and follicular thyroid carcinoma (FTC), and its incidence and death rates have gradually increased in recent years [1-3]. Total thyroidectomy, radioactive iodine-131 (131I) ablation, and thyroid stimulating hormone (TSH) suppression are well established treatments for DTC [4]. Complete surgical resection decreases the risk of recurrence and mortality and improves the prognosis of DTC patients [5]. However, due to the complex anatomy around the thyroid, surgery can easily injure important structures such as the recurrent laryngeal nerve (RLN) and parathyroid glands, which may lead to serious complications. In order to avoid these complications, it is generally difficult for thyroidectomy to avoid leaving remnant thyroid tissue (RTT), even for very skilled surgeons. RTT can cause recurrence or metastasis of DTC and adversely affect clinical prognosis [6]. Therefore, in order to maximize patient benefit, it remains the responsibility of every surgeon to carefully remove thyroid tissue and minimize RTT. However, little progress has been made in recent years in reducing RTT and improving surgical quality. This may be because the distribution characteristics and risk factors of RTT have not been systematically summarized, leaving careful surgical clearance without clear guidance. RTT should be routinely detected and evaluated after thyroid surgery, to guide individualized 131I ablation in the next step.
The comparative diagnostic value of various methods for detecting RTT has been examined in a number of previous studies [7-10]. 131I-SPECT/CT (single photon emission computed tomography/computed tomography) is recognized to be highly sensitive and specific and has been widely used in clinical practice. However, the 131I-SPECT/CT features of RTT have not been reported in detail. In particular, when RTT lesions show definite CT abnormalities on 131I-SPECT/CT, their clinical significance for surgeons remains to be further analyzed and researched. Therefore, this study aims to explore the distribution and imaging characteristics of RTT in patients with DTC on 131I-SPECT/CT, to further analyze the risk factors, and to discuss its value for surgeons.

Ethics statement

The present study was approved by the ethics committee of the Affiliated Cancer Hospital & Institute of Guangzhou Medical University. All patients provided written informed consent for their clinical information to be reviewed by us, and all methods were carried out in accordance with the approved guidelines.

Inclusion criteria

Patients were included in this study if they (1) had a diagnosis and pathological confirmation of DTC at the Affiliated Cancer Hospital & Institute of Guangzhou Medical University between January 2020 and July 2020, (2) had undergone total thyroidectomy and cervical lymph node dissection, (3) had clear TNM staging according to the thyroid cancer staging criteria of the AJCC (American Joint Committee on Cancer) 8th edition, (4) received the first radioiodine 131I therapy within 1-3 months after surgery, and (5) underwent 131I-WBS (whole body scan) and neck 131I-SPECT/CT within 1 week before radioiodine therapy.

Exclusion criteria

Patients were excluded from this study if they (1) underwent subtotal thyroidectomy, (2) lacked information about the primary tumor, (3) had a synchronous malignant tumor at another site, or (4) had not been followed up for more than 3 months. Finally, a total of 52 patients with DTC were included in this study, 25 females and 27 males, age range 11-67 years, median age 40 years. Among them, pathology confirmed the diagnosis of PTC in 50 cases and FTC in 2 cases.

Surgery

All patients underwent total thyroidectomy and cervical lymph node dissection. Three operation modes were used: mode 1, "complete thyroidectomy + unilateral central neck lymph node dissection"; mode 2, "complete thyroidectomy + bilateral central neck lymph node dissection"; and mode 3, "standard or modified radical operation". TNM staging was determined for all patients according to the AJCC 8th edition after surgery.

WBS and SPECT/CT acquisition

All patients underwent scintigraphy (WBS and SPECT/CT) within 3 months after surgery, using a SPECT/CT scanner (Discovery NM/CT 670 Pro, GE Medical Systems, Israel) with a dual-headed gamma camera system, high-energy collimators and a 16-slice spiral diagnostic CT. The WBS images were acquired 24-48 h after oral administration of 111-185 MBq of 131I (Atomic High-Tech Co. Ltd., Guangzhou, China), using a high-energy general-purpose parallel-hole collimator with a 364 keV photopeak and a 256 × 1024 matrix. Neck hybrid SPECT/CT scanning was routinely performed after WBS acquisition. The CT scan was performed first, with the following acquisition parameters: tube voltage 140 kV, tube current 200 mA and matrix 512 × 512.
After CT acquisition, the SPECT acquisition protocol was started with the following parameters: 128 × 128 matrix, 20% energy window at 364 keV, 60 angular steps over a range of 180° per gamma camera head. A JETStream workstation (Philips Medical Systems) was used to obtain the SPECT/CT fusion images.

Image interpretation

The radionuclide images (WBS and SPECT/CT) were independently evaluated by two experienced nuclear medicine physicians with interpretation by consensus, using diagnostic software (Compass Viewer H 4.0, Medivoly Technology Co. Ltd., Shanghai, China).

Diagnosis of RTT

Reviewers were required to determine whether there were abnormal 131I uptake foci in the neck. Neck foci were diagnosed as RTT if they were located in or near the thyroid bed, the diagnosis of metastatic lymph nodes was ruled out, the possibility of common ectopic thyroid areas such as the thyroglossal duct and root of tongue was excluded, and a significant decrease of 131I radioactive concentration was observed at follow-up [11,12].

Locations of RTT

The anatomic locations of RTT on neck SPECT/CT were recorded in detail. According to the distribution characteristics, the locations of RTT were divided into six regions (I-VI). The detailed definitions and example images of the regions are shown in Table 1 and Fig. 1.

SPECT/CT findings of RTT

The SPECT/CT features of RTT were assessed by reviewers. According to the CT findings, the RTT lesions were divided into three types: type I, definite positive findings (obvious soft tissue nodule); type II, suspected positive findings (blurry patchy shadow or soft tissue thickening); and type III, negative findings (no thickened soft tissue shadow or abnormal density). The tracer uptake level of RTT on SPECT was classified as low, moderate or high, depending on whether it was lower than, equal to or higher than that of the stomach.

The clinical data of patients

Patients with RTT with positive CT findings (type I and/or II) were assigned to the positive group, otherwise to the negative group. According to the extent of invasion, the primary thyroid tumor was classified as unilateral (one lobe and/or isthmus) or bilateral (both lobes). In addition, the following data were collected for all patients: age (< 55 and ≥ 55), gender (male and female), T stage (T1, T2 and T3-4), N stage (N0 and N1), M stage (M0 and M1), and pathology (PTC and FTC).

Statistical analysis

The incidence rate of RTT after total thyroidectomy was calculated in DTC patients. The clinical data and imaging features of RTT were counted in detail. Categorical data are expressed as numbers and frequencies (%). The chi-square test was used to analyze the risk factors for RTT with positive CT findings. All data were analyzed with SPSS 23.0 for Windows (SPSS Inc., Chicago, IL, USA). P values < 0.05 were considered statistically significant.

All patients

A total of 52 patients with DTC were included in this study, in whom 131I-WBS demonstrated 84 abnormal uptake foci in the neck. SPECT/CT revealed a total of 106 neck foci, of which 75.5% (80/106) were diagnosed as RTT, 20.8% (22/106) as thyroglossal tract and 3.8% (4/106) as metastatic lymph nodes. According to the CT findings, these 80 RTT lesions were classified as type I in 10 (Fig. 2), type II in 21 (Fig. 3) and type III in 49. In total, 47 patients were confirmed to have RTT. The incidence rate of RTT was 90.4% (47/52).
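For readers who want to check the risk-factor analysis described in the Methods, the sketch below runs the same kind of chi-square test in Python with scipy. The 2 × 2 cell counts are hypothetical placeholders chosen only to be consistent with the reported group sizes (29 positive, 18 negative); the actual per-cell counts come from Table 3.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 contingency table for "primary tumor extent" vs
# "CT-positive RTT" (rows: unilateral, bilateral; columns: positive
# group, negative group). Placeholder counts, consistent with the
# reported 29 positive / 18 negative patients.
table = [
    [12, 14],   # unilateral: CT-positive, CT-negative
    [17,  4],   # bilateral:  CT-positive, CT-negative
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, P = {p:.3f}")
# As in the study, P < 0.05 would indicate that tumor extent is a
# significant risk factor for CT-positive RTT.
```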
Location and SPECT/CT features of RTT

The detailed locations and SPECT/CT findings of these 80 RTT lesions are shown in Table 2. Most lesions were found with the following features: region I (46.3%, 37/80) and region IV (21.3%, 17/80); high uptake (57.5%, 46/80) and moderate uptake (30.0%, 24/80); and type III (61.3%, 49/80) and type II (26.3%, 21/80).

The clinical data of patients with RTT

Based on the CT findings, the 47 patients with RTT lesions were classified into a positive group of 29 (61.7%) and a negative group of 18 (38.3%). The correlation between the various clinical data and the groups is shown in detail in Table 3. By chi-square test, primary thyroid tumor extent (P = 0.029) and T stage (P < 0.001) had a significant impact on the CT findings. The other data (age, gender, N stage, M stage, operation mode and pathology) were not significant risk factors.

Discussion

Total thyroidectomy is the first choice for the treatment of DTC, but RTT usually persists in post-thyroidectomy patients. The results of this study indicated that the incidence rate of RTT was as high as 90.4% (47/52). Radioactive 131I therapy is routinely recommended in DTC patients after total thyroidectomy, to ablate postoperative RTT and treat microscopic residual tumor foci [11-13]. The dose of 131I ablation is closely related to the number and distribution of RTT lesions. Accurate estimation of RTT before radioiodine therapy is essential to promote individualized treatment [14]. In addition, RTT can increase the risk of recurrence or metastasis of DTC, which may reduce patient survival. Therefore, in order to improve the overall prognosis of DTC patients, it is very important to detect RTT precisely and to minimize it.

At present, the imaging methods available for evaluating RTT include CT, ultrasound, technetium-99m pertechnetate (99mTcO4−) planar scintigraphy, 131I-WBS (whole-body scan) and 131I-SPECT/CT. Ultrasound is a convenient method that can clearly measure the shape and size of RTT. Lee SJ [15] reported that ultrasound was not as accurate as CT in assessing the volume of RTT. Planar scintigraphy can provide information on iodine uptake. Tsai CJ [16] reported that 131I-WBS detected 206 RTT lesions, while 99mTcO4− planar scintigraphy revealed only 122 (59%). 131I imaging is characterized by high sensitivity and is conventionally performed as 131I-SPECT/CT. Compared with 131I-WBS, 131I-SPECT/CT can significantly improve diagnostic accuracy and the lesion detection rate [18,19].

Although 131I-SPECT/CT has been widely used for the evaluation of RTT after thyroidectomy, the distribution characteristics of RTT have not been systematically summarized. Avoiding the occurrence of RTT depends mainly on careful clearance by surgeons, and investigation of the distribution characteristics and risk factors of RTT will undoubtedly provide some guidance for improving the quality of surgery. In view of the lack of relevant literature, the present study divided the locations of RTT into six regions (I-VI). A total of 80 RTT lesions were revealed by neck SPECT/CT, most of which were located in region I (46.3%, 37/80), followed by region IV (21.3%, 17/80). These results suggest that RTT lesions are most associated with the tracheal cartilage and the lamina of the thyroid cartilage. The detailed mechanism is not clear and needs to be explored further. It may be related to the anatomical characteristics of the thyroid.
The slender tip of the thyroid lateral lobe adjacent to the lamina of the thyroid cartilage makes it challenging for the surgeon to completely remove all thyroid tissue at that location. Long-term compression by the tumor makes it necessary to preserve some soft tissue around the tracheal cartilage to avoid collapse due to loss of support. The intimate relationship between the RLN and the inferior thyroid artery (ITA) or Zuckerkandl tubercle (ZT) means that some thyroid tissue may be retained [20-24]. In addition, other generally accepted explanations for RTT include: (1) thyroid tissue left attached to a parathyroid gland in order to preserve its vascularity [25]; (2) thyroid tissue at the superior portion of the pyramidal lobe of the thyroid; and (3) the need to minimize damage to upper respiratory structure and function during DTC resection.

This study demonstrated that most RTT lesions (87.5%, 70/80; types II-III) were negative or only suspected positive on CT; these can be interpreted as lesions invisible intraoperatively that are difficult for surgeons to detect and remove. However, the remaining RTT lesions (12.5%, 10/80) showed definite abnormalities on CT, most (50.0%) of which were located in region IV. These lesions should be visible and detectable by the naked eye during the operation and can be removed surgically. This means that the incidence of RTT may be reduced by about 10%, as long as surgeons strengthen careful removal in the correct regions, especially the region adjacent to the lamina of the thyroid cartilage. Moreover, this study also showed that patients with DTC involving bilateral thyroid tissue or in stage T3-4 were more likely to have RTT with CT abnormalities after surgery. This suggests that surgeons should pay more attention to careful removal when the primary thyroid tumor involves both lobes or is stage T4. This study may therefore provide a new idea and clear guidance for careful surgical removal.

Conclusions

In summary, RTT has a characteristic distribution and appearance on SPECT/CT. Definite positive CT findings were noted in 12.5% of RTT lesions, 50.0% of which were located adjacent to the lamina of the thyroid cartilage. This suggests that surgeons should strengthen careful removal of thyroid tissue in this region, especially when the primary thyroid tumor involves both lobes or is stage T4. This study can provide value for improving the quality of thyroidectomy in DTC patients.
sandpyper: A Python package for UAV-SfM beach volumetric and behavioural analysis

Sandpyper is a Python package that automates profile-based volumetric and altimetric analysis of sandy beaches from a large number of digital surface models and orthophotos. It includes functionalities to facilitate cleaning the elevation data of unwanted non-sand points or swash areas (where waves run up on the beach slope and 3D reconstruction is inaccurate) and to model beachface behavioural regimes using the Beachface Cluster Dynamics indices.

Introduction

Coastal zones host 40% of the world population (Martínez et al., 2007), with continued expected growth to be focused in least developed countries (Neumann et al., 2015). Sandy beaches, amongst other ecoservices (Barbier et al., 2011), protect inland assets from coastal erosion by dissipating stormy wave energy on their shores. Mitigating beach erosion typically involves the establishment of topographic monitoring programs in key locations (erosional hotspots) to quantify beach dynamics, erosion/deposition volumes and recovery times, and to model coastal resilience or risk of erosion (Ruessink et al., 2019). High temporal and spatial resolution topographic data is ideal, but expensive with most ordinary beach surveying methods. Unoccupied Aerial Vehicles (UAVs) and Structure from Motion algorithms (UAV-SfM) are emerging as the ideal platform and methodology to obtain cost-effective high-quality beach topographic data (as Digital Surface Models, DSMs; Gonçalves & Henriques, 2015) at the mesoscale, which is an appropriate spatiotemporal scale for coastal managers to work with (Thom et al., 2018). Consequently, researchers already use UAV-SfM to monitor beach dynamics around the world (Jaud et al., 2019), but so far it has been limited to a few sites and a few multitemporal dates. More recently, however, UAV-SfM technology is being increasingly used for wider-scale and longer-term monitoring projects. For instance, in Victoria (Australia), a citizen-science UAV-SfM monitoring program has mobilised more than 150 volunteers to fly UAVs at 15 sites every six weeks over the last three years. To date, volunteers have flown 350 missions, enabling the creation of a DSM and an orthophoto per survey (uncompressed file sizes from 5-100 Gb each). Citizen scientists and SfM are generating an unprecedented archive of imagery which can be reliably used to monitor high-frequency sandy beach volumetric dynamics and behaviors.

Statement of Need

A drawback of using UAV-SfM for beach monitoring is that, due to UAV regulations, flight altitude is often limited to around 80-120 m above ground, which means that the ground sampling distance of consumer-grade UAVs is sub-decimeter, resulting in very high resolution and large imagery files, especially for beach surveys exceeding 20 ha of coverage. Although managing tens of large rasters with Geographic Information Systems (GIS) such as QGIS (QGIS Development Team, 2021) or ESRI ArcGIS is technically feasible, handling tens to hundreds of such files within large monitoring projects quickly becomes impractical. Moreover, in coastal management, erosion assessments from multitemporal DSMs are usually approached by raster subtraction, also known as the DEM of Difference method (Lane et al., 2000). This is a process to compute the elevation difference between time intervals by subtracting the elevation value of each cell in the pre and post rasters.
Raster-based operations with full-resolution UAV-SfM imagery become very time consuming, with substantial computing power and memory needs that can limit their feasibility. Therefore, trade-offs for working within a GIS could include raster spatial downsampling, which risks losing information about smaller-scale but equally important geomorphological landforms (Walker et al., 2017), or tiling the rasters into smaller and more manageable units, which ultimately further increases total pre-processing time. Furthermore, beach-specific challenges are (1) the water motion as waves wash in and out of the swash zone, which prevents the SfM algorithm from modelling elevation accurately, (2) dune vegetation, and (3) stranded beach wracks (macroalgae, woody debris), which require time-consuming manual processing to remove or filter in order to avoid biasing sediment volumetric computations. Sandpyper is an open-source Python package that provides users with a processing pipeline specifically designed to overcome the aforementioned limitations, from the generation of cross-shore transects and the extraction of colour and elevation information from a collection of rasters, to the analysis of period-specific limits of detection and the plotting of beachface cluster dynamics indices. It offers users the possibility to perform volumetric and behavioural monitoring of beaches in a programmatic and organised way at the location and single-transect scale. Moreover, by using a naming convention, it allows users to manage multiple locations with different coordinate reference systems. Although originally developed for coastal areas, Sandpyper can be applied in many other environments where DSM and orthophoto time series are used to monitor changes, such as river levee, glacier or gully monitoring. Some previous works that are somewhat related to Sandpyper include Pybeach (Beuzen, 2019), a tool to automate beach dune toe identification, and the Digital Shoreline Analysis System (DSAS), a tool to analyse shoreline shifts over time. While Pybeach is no longer maintained, the popularity of DSAS within coastal erosion studies is fuelled by its simple-to-use interface and the availability of a plug-in for ESRI ArcMap GIS. However, despite Sandpyper's planned expansion to study satellite-derived shorelines with a method inspired by DSAS, DSAS's core objective is the study of horizontal shoreline migrations over time, with no functionality for three-dimensional profile extraction, volumetric and altimetric analysis, or behavioural modelling. To the best of the authors' knowledge, this is the first Python package specifically aimed at integrating with an erosion monitoring project employing UAVs and SfM. Moreover, it is the only package that currently implements the computation of the Beachface Cluster Dynamics indices (BCDs), novel metrics purposefully designed to leverage the very high spatiotemporal resolution and three-dimensionality of UAV-SfM topographic data for quantifying subaerial beach morphodynamics. Sandpyper currently provides the ability to: • automatically create user-defined georeferenced cross-shore transects (Figure 1A) along a line and extract elevation (from DSMs) and colour (from orthophotos) profiles. • facilitate unsupervised machine learning sand classification (Figure 1B) and profile masking. • compute altimetric and volumetric time series analyses and plot the results, at the transect (Figure 1C) and site (Figure 1D) scales.
• use spatial autocorrelation measures to discard spatial outliers and obtain statistically significant hotspot/coldspot areas of beach change at the site scale (Figure 2A). • compute first-order transition probabilities of magnitude-of-change classes (Figure 2B) to derive BCDs (Figure 2C). Moreover, Sandpyper is currently being developed to include raster-based volumetric and behavioural analysis and satellite-derived shoreline analysis. Some features already in Sandpyper are: • custom spatial grid generation along a line (waterline, shoreline). • shoreline error assessments with respect to ground-truth shorelines. • shoreline shift statistics. Sandpyper is intended to be further developed into a wider-scope package, as it can be applied to any task involving the extraction of information from a large number of rasters. Usage Various tutorials and documentation are available for using Sandpyper, including:
2021-10-15T15:12:13.010Z
2021-10-13T00:00:00.000
{ "year": 2021, "sha1": "4042d3b75851db996c85fd46eec24e6d3ac51f25", "oa_license": "CCBY", "oa_url": "https://joss.theoj.org/papers/10.21105/joss.03666.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "200ad9cf147cb1f6ea920fe1fde153393cf862ec", "s2fieldsofstudy": [ "Environmental Science", "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
53540269
pes2o/s2orc
v3-fos-license
Isospin fractionation and isoscaling in dynamical nuclear collisions Isoscaling is found to hold for fragment yields in the antisymmetrized molecular dynamics (AMD) simulations for collisions of calcium isotopes at 35 MeV/nucleon. This suggests the applicability of statistical considerations to the dynamical fragment emission. The observed linear relationship between the isoscaling parameters and the isospin asymmetry of fragments supports the above suggestion. The slope of this linear function yields information about the symmetry energy in low density region where multifragmentation occurs. In typical intermediate energy nuclear collisions, numerous fragments of intermediate size are produced in addition to light particles [1]. The multifragmentation phenomenon is believed to be related to the liquid-gas phase-coexistence in low density expanding nuclear matter. In a two-component system with more neutrons than protons (N tot > Z tot ) in equilibrium, the gas phase becomes more neutron-rich than the liquid phase [2]. This fractionation phenomenon should reflect the features of the symmetry energy in nuclear matter. Recently, a scaling relation (1) has been observed [3] in the measured fragment yields Y i (N, Z) for two similar systems i = 1, 2 with different neutron to proton ratios. This phenomenon is called isoscaling. If one assumes thermal and chemical equilibrium, the isoscaling parameters α and β are related to the neutron-proton content of the emitting source. In fact, statistical models have successfully explained the isoscaling data [4]. However, as fragments are formed during a dynamical evolution of the collision system, multifragmentation should be understood in the dynamical models as well. In fact, a stochastic mean field model has predicted very large scaling parameters for the dynamically produced fragments in the model [5]. Such result is difficult to understand without any dynamical effects. It is also important to determine if the scaling parameters in the data can be directly related to the asymmetry term of the equation of state (EOS) of nuclear matter in equilibrium, for a dynamic production. To explore whether any kind of equilibrium is achieved regarding the isospin fractionation and the fragmentation in dynamical nuclear collisions, we compare the result of the antisymmetrized molecular dynamics (AMD) simulation with what is expected under a statistical assumption. We first derive, under an equilibrium assumption, a linear relation between the isoscaling parameter α and the fragment isospin asymmetry (Z/A) 2 . We then test such a relationship using results from the AMD simulations. By studying the dependence on the asymmetry term of the effective force, we will explore whether the relation is useful for assessing the asymmetry term of the EOS. AMD is a microscopic model for following the time evolution of nuclear collisions [6,7,8]. It represents the colliding system in terms of a fully antisymmetrized product of Gaussian wave packets. Through the time evolution, the wave packet centroids move according to an equation of motion. Besides, the followed state of the simulation branches stochastically and successively into a huge number of reaction channels. The branching is caused by the two-nucleon collisions and by the splittings of the wave packet. The interactions are parametrized in the AMD model in terms of the effective force between nucleons and the two-nucleon collision cross sections. 
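To make the isoscaling relation concrete before turning to the effective forces: in its commonly quoted form, relation (1) states that the fragment yield ratio of the two systems behaves as R21(N, Z) = Y2(N, Z)/Y1(N, Z) ∝ exp(αN + βZ). The sketch below, which uses synthetic yields rather than AMD output, shows how α and β could be extracted by a linear least-squares fit to ln R21; it illustrates only the fitting step, not the analysis performed in this work.

```python
import numpy as np

# Synthetic fragment yields Y_i(N, Z) for two systems (illustrative numbers only):
# each entry is (N, Z, Y1, Y2).
fragments = [
    (3, 3, 120.0, 150.0),
    (4, 3, 80.0, 140.0),
    (4, 4, 60.0, 70.0),
    (5, 4, 35.0, 55.0),
    (6, 5, 20.0, 30.0),
]

N = np.array([f[0] for f in fragments], dtype=float)
Z = np.array([f[1] for f in fragments], dtype=float)
lnR21 = np.log([f[3] / f[2] for f in fragments])

# Fit ln R21 = ln C + alpha * N + beta * Z by ordinary least squares.
design = np.column_stack([np.ones_like(N), N, Z])
(lnC, alpha, beta), *_ = np.linalg.lstsq(design, lnR21, rcond=None)
print(f"alpha = {alpha:.3f}, beta = {beta:.3f}")
```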
We perform reaction simulations employing two different effective forces in order to study effects of the asymmetry term. One is the usual Gogny force [9], consistent with the saturation of symmetric nuclear matter at the incompressibility K = 228 MeV. The force is composed of finite-range two-body terms and of a density-dependent term of the form where P σ is the spin exchange operator and t 3 is a coefficient. The second force (called Gogny-AS force) is obtained by modifying the Gogny force with where x = − 1 2 and ρ 0 = 0.16 fm −3 . The two forces coincide at ρ = ρ 0 . Furthermore, they produce the same EOS of symmetric nuclear matter at all density. However, the two forces produce different density dependences of the symmetry energy, as shown in Fig. 1. The choice of x = − 1 2 has been made to ensure that the part of the symmetry energy from the direct term is proportional to the density [10]. At densities below ρ 0 , the Gogny force has somewhat higher symmetry energy than the Gogny-AS force. At densities above ρ 0 , the Gogny-AS symmetry energy continues to rise while the Gogny symmetry energy begins to fall, so that significant differences develop. The AMD simulations were performed for 40 Ca + 40 Ca, 48 Ca + 48 Ca and 60 Ca + 60 Ca collisions at the incident energy E/A = 35 MeV/nucleon and zero impact parameter. The version of AMD of Ref. [8] was utilized. It has been demonstrated that an equivalent version of AMD, for the present purposes, reproduces the experimental data of various fragment observables in 40 Ca + 40 Ca at the same energy of 35 MeV/nucleon with the Gogny force [7,11]. Each studied collision event was started by boosting two nuclei with centers separated by 9 fm. The dynamical simulation was continued until t = 300 fm/c. About 1000 events were generated for each system. In central collisions, as shown in a previous paper [7,11], two nuclei basically penetrate each other and many fragments are formed not only from the projectile-like and target-like parts but also from within the neck region between the two residues. The nuclear matter seems to be strongly expanding, one-dimensionally, in the beam direction. For the intermediate states, we define the liquid part as the part of the system to be composed of the fragments with A > 4 and any two wave packets whose spatial separation is less than 3 fm are treated as belonging to the same fragment. In the context of the results of Ref. [4], Fig. 2 shows the time evolution of the isospin asymmetry (Z/A) 2 of the liquid part for the three reaction systems. At the initial value (t ∼ 0), (Z/A) 2 liq is (Z/A) 2 of the initial nuclei. For the neutron-rich systems, (Z/A) 2 liq increases rapidly before t ∼ 100 fm/c, and then it continues to increase gradually. This effect can be regarded as the isospin fractionation because the liquid part is getting less neutron-rich and the gas part is getting more neutron-rich. Similar fractionation effects are found in other dynamical model simulations [10,13]. The diamond points in Fig. 2 are the results obtained from the Boltzmann-Uehling-Uhlenbeck calculations [12] with an interaction symmetry energy of 12.125(ρ/ρ 0 ) MeV. The isospin fractionation has a clear dependence on the asymmetry term of the effective force. The Gogny force (solid lines) always yields a system with larger (Z/A) 2 liq than the Gogny-AS force (dashed lines). The complementary effect of fractionation can also be found in the gas-phase information, such as the neutron and proton emission rates in Fig. 3. 
While for the symmetric system ( 40 Ca + 40 Ca) more protons are emitted than neutrons, because of the Coulomb force, for the very neutron-rich system ( 60 Ca + 60 Ca) Given that neutron emission costs less energy in a more neutron-rich system and proton emission costs more, it is not surprising that the isospin fractionation is observed in the dynamical simulations. However, the isoscaling is a nontrivial result, difficult to explain outside of statistical considerations. To explore the aspect of equilibrium in fragment emission in AMD simulations, we further explore the relation between the isoscaling and the fragment isospin asymmetry in equilibrium and in simulations. In the context of the expanding emitting source model [14], it has been pointed out [4] that the isoscaling parameter α, obtained from the yield ratio of the emitted fragments, is related to the (Z i /A i ) 2 of an equilibrated emitting source by where C sym is the symmetry energy and T is the source temperature. However, in the AMD simulations of collisions there are no easily identifiable emitting sources. All fragments are emitted on about equal footing. To remedy the situation, we show that, for an equilibrated system, Eq. (3) holds also when Z i /A i is replaced by Z/Ā i , whereĀ i is the average mass number for fragment charge number Z in system i, provided that fragment properties change gradually with nucleon content. When we consider a system in equilibrium at the temperature T and pressure P , the number (or yield) of nucleus composed of N neutrons and Z protons is given by where the index i specifies the reaction system, with the total neutron and proton numbers N tot i and Z tot i , and G nuc (N, Z) stands for the internal Gibbs free energy of the (N, Z) nucleus. The net Gibbs free energy G tot for the system is related to the chemical potentials µ ni and µ pi by G tot = µ ni N tot i + µ pi Z tot i . In Eq. (4) and the following equations, we suppress the (T, P ) dependence for different quantities including G nuc , µ ni and µ pi . It is clear that isoscaling [Eq. (1)] is satisfied for Eq. (4), with α = (µ n2 − µ n1 )/T and β = (µ p2 − µ p1 )/T , as long as the two systems have common temperature and pressure. For each given Z, the dependence of G nuc on N, assuming gradual changes, takes the Because the important range of N is limited for a given Z, this expansion is practically sufficient even when G nuc contains surface terms, Coulomb terms, and any other terms which are smooth in A as e.g. the term τ T ln A introduced by Fisher [15]. We can regard A straightforward calculation, using the specific form of G nuc of Eq. (5), results in withĀ i (Z) = Z +N i (Z). The equations for the two reaction systems, i = 1 and i = 2, subtracted side by side then yield relating the isoscaling parameter α, the (Z/A) 2 of fragments and the symmetry energy coefficient C(Z) which is a function of (T, P ). An interesting fact is that this relation does not depend on the terms in G nuc other than the symmetry energy term. = 4C/T. Let us check whether this equilibrium relation (9) is satisfied by the AMD simulations that do not incorporate any assumption of equilibrium. Figure 6 shows the correlation in the AMD simulations between (Z/A) 2 liq and the isoscaling parameter α from Figs. 4 and 5 for the three reaction systems. 
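For reference, and under the same statistical assumptions used above, the equilibrium relation between the isoscaling parameter and the fragment isospin asymmetry that is tested in Fig. 6 is commonly written in the compact form below, with C the symmetry-energy coefficient and T the temperature. This restatement follows the definitions given in the text and should be checked against Eq. (9) of the original article.

```latex
\alpha \;=\; \frac{4C}{T}
\left[\left(\frac{Z}{\bar{A}_1(Z)}\right)^{2}
    - \left(\frac{Z}{\bar{A}_2(Z)}\right)^{2}\right]
```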
A linear relation is observed, in accordance with the equilibrium relation (9). In conclusion, a linear relation is expected between the isoscaling parameter and the fragment isospin asymmetry (Z/A)^2 under statistical assumptions. Isoscaling is observed in the dynamical AMD simulations, and the results comply well with the linear relation, suggesting that the fragment isospin composition is subject to statistical laws even in a dynamical picture of production. The slope of this linear function yields information on the symmetry energy for the fragments.
2018-10-28T14:46:29.144Z
2003-05-14T00:00:00.000
{ "year": 2003, "sha1": "c614eff68a5dd793f3fa2fece360bb33ae0b64db", "oa_license": null, "oa_url": "http://arxiv.org/pdf/nucl-th/0305038", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "a5dfc490fa4a4d28e62ff1656d370e72aaa70cfb", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
267286638
pes2o/s2orc
v3-fos-license
Deterpenation of citrus essential oil via extractive distillation using imidazolium-based ionic liquids as entrainers Background: The deterpenation of c itrus essential oils (CEO) is crucial in many industries to promote the stability and preserve the organoleptic properties of the final product, improving considerably the oxygenated fraction purity. Methods: Two imidazolium ionic liquids (ILs), [C 4 mim][OAc] and [C 4 mim]Cl, were applied as novel separation agents in a deterpenation process constituted by an extractive distillation column and a flash separator unit, aiming to remove hydrocarbon terpenes from the essential oil. The CEO was modeled as a mixture of the commonly found limonene (monoterpene) and linalool (monoterpenoid). To support the process simulation, isobaric vapor-liquid equilibrium measurements were conducted for the binary limonene/linalool and ternary limonene/linalool/IL mixtures at 5 kPa, and the data were successfully correlated with the NRTL model. Significant findings: The simulation results demonstrate that [C 4 mim]Cl improved the purity of linalool in the final product. Moreover, both [C 4 mim]Cl and [C 4 mim][OAc] reduced the required stages in the distillation column to obtain a terpeneless CEO with a certain purity, [C 4 mim]Cl being the most effective option. Introduction Essential oils (EO) are complex mixtures mostly composed of terpenes and terpenoids that can be extracted from different parts of plants (e.g., flowers, seeds, leaves, peels) generally by steam distillation, hydrodistillation or liquid-liquid extraction [1].The EO obtained from citrus fruits are among the most commercially attractive options, being widely explored as natural fragrances and flavors [2].Besides their pleasant aromas, Citrus essential oils (CEO) show other appealing properties, such as antioxidant, antidiabetic, insecticidal, antifungal, and antibacterial activities [3].Among their vast range of applications, CEO are used as a flavoring or additive ingredient in perfumes, foods, pharmaceuticals, cosmetics, and personal care products [3,4].The broad diversity of the citrus genus also contributes to the CEO large commercial exploitation [3].The most abundant hydrocarbon component in CEO profiles is limonene, whose concentration typically varies between 25% and 98% depending on the variety of the citrus fruit [3,5].Besides, smaller quantities of oxygenated terpenes, including linalool, citral, α-terpineol, citronellol, and geraniol, are often detected in the CEO profiles [3][4][5]. Although hydrocarbon monoterpenes are frequently the major components in CEO profiles [6], these compounds contribute little to the oil aroma and organoleptic properties.Also, hydrocarbons might be easily oxidized, leading to the formation of undesired products [5].Therefore, separating hydrocarbon from the oxygenated monoterpene fraction, a process known as deterpenation, is an essential step to improve the stability and quality of the final product [7].In such a process, the obtained CEO can be classified as concentrated oil, when part of the hydrocarbon fraction is removed, or terpeneless oil, when the hydrocarbon fraction in the final product is much lower [5].These concentrated or terpeneless CEO have improved solubility in water, organic solvents, and other solvents used in food technology [5]. 
At the industrial level, the deterpenation of CEO is frequently carried out by liquid-liquid (or solvent) extraction or vacuum distillation [5].Despite the simplicity of solvent extraction, this technology usually requires large amounts of solvent, which must be purified before being recycled into the system [8], and often provides CEO with lower purities of oxygenated compounds [7,9] compared to distillation techniques [7,10,11].Regarding the latter, vacuum operation is preferred since lower operating temperatures are required, preventing possible degradation of CEO constituents [8,12]. A crucial step in liquid-liquid extraction or distillation processes is the selection of a suitable solvent or entrainer to aid in the fractionation of the CEO.In the case of liquid-liquid extraction, traditional solvents such as glycols and aqueous solutions of organic polar solvents (e.g., methanol, ethanol, acetone, ethyl acetate) have been evaluated [5].In addition, ionic liquids (ILs) [13][14][15] and, more recently, deep eutectic solvents (DES) [16][17][18] have also been reported.In contrast, alternative solvents as entrainers in distillation processes remain less explored in studies addressing the fractionation of CEOs, though a few works [7,10] show promising results for imidazolium-based ILs.Nonetheless, ILs and DESs have been recently reported as promising entrainers for other extractive distillation-based processes, such as the removal of contaminants from fuels [19] and the separation of azeotropes [20][21][22] and refrigerants [23]. Among the solvents currently addressed as green alternatives to traditional organic solvents, ILs are certainly one of the classes that have received more attention in recent years [24][25][26].Constituted by organic cations and anions, ILs exhibit appealing properties to be used as solvent media in separation processes, such as excellent chemical and thermal stability, low volatility and flammability, and high selectivity solvation ability [27][28][29].Besides, these solvents can be "tailored" by combining different cations and anions to reach specific physicochemical applications, expanding their range of applications [27,30].In previous works from our group [30][31][32], imidazolium and phosphonium-based ILs were studied as separation agents to fractionate binary terpene mixtures.Selectivities and capacities, derived from experimental activity coefficient at infinite dilution, suggested that 1-butyl-3-methylimidazolium acetate ([C 4 mim][OAc]) and 1-butyl-3-methylimidazolium chloride ([C 4 mim]Cl) were potential options to separate hydrocarbon/alcohol monoterpene mixtures.Also, Ganem et al. 
[10] reported that [C 4 mim] [OAc] improved the fractionation of CEOs via extractive distillation.Thus, in this work, [C 4 mim][OAc] and [C 4 mim]Cl were studied as separation agents to deterpenate CEO.A model mixture composed of R-(+)-limonene (hydrocarbon) and linalool (oxygenated) was selected to represent the CEO, in line with several works available in the literature [7,10,[13][14][15]33].Isobaric vapor-liquid equilibrium (VLE) measurements of the binary limonene/linalool mixture and the ternary limonene/linalool/[C 4 mim][OAc] and limonene/linalool/[C 4 mim]Cl mixtures were carried out at 5 kPa, using a dynamic recirculation ebulliometer.Then, these data were correlated with the NRTL activity coefficient model [34,35], and the fitted parameters were used to simulate an extractive distillation process with the commercial software Aspen Plus V11.For comparison purposes, the separation process was simulated with and without IL. Chemicals The terpenes and ionic liquids used in this work are listed in Table 1, along with their CAS, chemical structure, mass fraction purity, and water content (%).The latter was measured by Karl-Fisher titration (Metrohm, 848 Titrino Plus).All the compounds used in this work were stored at room temperature and used as received from the supplier. Isobaric VLE measurements The isobaric VLE experiments of the binary (limonene + linalool) and ternary (limonene + linalool + IL) mixtures were conducted using a dynamic recirculation ebulliometer (Fischer GmbH, model 0602) coupled to a vacuum pump (Edwards, model RV5), a pressure control unit (Fisher system M101), and a thermostatic bath (Nova Ética, model NT 281).The pressure and temperature ranges of the dynamic recirculation ebulliometer are (0.25-300) kPa and (293.2-523.2) K, respectively.The experiments were performed at 5 kPa to avoid higher operation temperature ranges where the organoleptic properties of the CEOs can be compromised [8,10].The ebulliometer operates under continuous recirculation of the liquid and vapor phases until reaching the equilibrium, assumed after (30)(31)(32)(33)(34)(35)(36)(37)(38)(39)(40) min of continuous and smooth vapor phase circulation at constant temperature and pressure.Two samples with volumes between (0.1-0.5) cm 3 were collected from the liquid and condensed vapor phases with a gas-tight syringe through sampling nozzles covered by silicone seals. The composition of the mixtures was determined by refractometry (Mettler Toledo, model RE40D, uncertainty of 1×10 − 4 nD) using calibration curves covering the mixture composition range.The calibration curves were built considering at least 17 standard solutions prepared independently using an analytical scale (Shimadzu, model AUY220, ± 0.1 mg).For the limonene + linalool + IL systems, the standard solutions had a constant mass fraction of IL of (0.050 ± 0.001), following the approach proposed by Souza et al. [36].To validate the experimental procedure, the VLE of the ethanol + water mixture (at 13 kPa), and the vapor pressures of pure ethanol and R-(+)-limonene (range 5-95 kPa) were measured and compared to the literature data. 
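As a schematic illustration of the calibration-curve step described above (mixture composition determined from the measured refractive index), the snippet below fits a quadratic calibration curve to made-up standard solutions and inverts it for an unknown sample. The numbers are placeholders and the procedure is a generic sketch, not the authors' exact protocol.

```python
import numpy as np

# Hypothetical standards: limonene mass fraction vs. measured refractive index (nD).
w_limonene = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
n_D = np.array([1.4616, 1.4650, 1.4683, 1.4715, 1.4746, 1.4775])  # placeholder values

# Quadratic calibration curve: nD = a*w^2 + b*w + c.
a, b, c = np.polyfit(w_limonene, n_D, deg=2)

def composition_from_nD(nD_sample):
    """Invert the calibration curve for a measured refractive index (keeps 0 <= w <= 1)."""
    roots = np.roots([a, b, c - nD_sample])
    real = roots[np.isreal(roots)].real
    valid = real[(real >= 0.0) & (real <= 1.0)]
    return float(valid[0]) if valid.size else None

print(composition_from_nD(1.4700))
```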
Process simulation A binary mixture of limonene (mass fraction w limonene = 0.90) and linalool (w linalool = 0.10) was selected to represent the citrus essential oil [7,10,33].Considering that vacuum distillation is one of the most suitable methods to deterpenate CEO [5,7], an extractive distillation column (EDC) was selected.In this case, a limonene-rich phase is obtained at the top of the distillation column (distillate), and the terpeneless CEO, enriched in linalool, exits at the bottom.Additionally, the use of ILs ([C 4 mim][OAc] or [C 4 mim]Cl) as entrainers to facilitate the deterpenation of the CEO was tested.In this scenario, the IL fed to the EDC is recovered in the bottom stream with the terpeneless CEO, and an additional step is required for its recovery.Hence, a flash separator unit (FSU) was added to separate the linalool-rich phase from the IL, which is then recycled to the distillation column.The process simulation was performed using the Aspen Plus V11 commercial software.A schematic flowsheet of the process is presented in Fig. 1.The IL input parameters are presented in Table S1.The deterpenation process flow proposed in this work is similar to those reported by Ganem and co-authors [7,10], where the authors propose a process containing an extractive distillation unit connected to a flash separator operating under a high vacuum.More details on the simulation of the deterpenation process are presented in Section S1 of the SM.CO 2 produced in the distillation unit was predicted considering the approach proposed by Gadalla et al. [37,38] and described in detail in Section S1 of the SM. VLE measurements The vapor pressures of ethanol and R-(+)-limonene are listed in Table S2 and compared with the literature data in Fig. S1 of the SM.In Table S3, the Antoine constants regressed with the experimental vapor pressures measured in this work are presented.Likewise, the VLE data obtained for the ethanol/water system (shown in Table S4) are discussed and compared with data collected from the literature (Fig. S2) in Section S3 of the SM.The NRTL parameters correlated for the ethanol + water mixtures using the VLE data obtained in this work, and the obtained deviations (σ(T), σ(y 1 ),σ(y 12 )) are given in Table S5.The VLE data obtained in this work is comparable to those available in the literature [39,40], while the dataset was found to be consistent by the method proposed by Kang et al. [41] (Q VLE = 0.8). The isobaric (5 kPa) VLE data measured for the R-(+)-limonene/ linalool binary mixture are listed in Table 2.The overall quality factor (Q VLE ) [41] obtained for the system was 0.45.Moreover, a comparison between the data obtained in this work and reported by Ganem et al. [7] (Fig. S3) reveals that the datasets present similar trends. The VLE data of R-(+)-limonene/linalool/[C 4 mim][OAc] and R- (+)-limonene/linalool/[C 4 mim]Cl mixtures are presented in Table 3 and Table 4, respectively.In these cases, the mass fraction of the IL was kept at 5%.The modified McDermott− Ellis method by Wisniak and Tamir [42,43] was applied to the ternary mixtures.This method has been successfully used to evaluate multicomponent VLE datasets' consistency [44], including systems with salts or ILs [45,46].A detailed description of this method is presented in Section S5 of SM, and the consistency test results of the ternary systems are shown in Table S6.All the experimental data points for the ternary systems passed the test.In Fig. 
2, the obtained relative volatilities (α 12 ) of the terpenic mixtures are presented. The α 12 value calculated from the VLE data reported by Ganem et al. [10] for the limonene/linalool/[C 4 mim][OAc] mixture (at w [C4mim][OAc] = 4.5%) was included in Fig. 2 for comparison purposes. The definition of α 12 is presented in SM. The results obtained in this work reveal that the introduction of [C 4 mim][OAc] (5% in mass fraction) slightly increases the relative volatility of the R-(+)-limonene/linalool mixtures for x ′ 1 < 0.8, which is somewhat lower than expected. However, the relative volatility obtained from the data available in the literature [10], for an IL mass fraction of 4.5%, is surprisingly much higher than the values found in this work. Unfortunately, no further comparison is feasible since the other VLE data points reported by Ganem et al. [10] were obtained for higher IL mass fractions (>10%) than the 5% considered in this work. In the case of R-(+)-limonene/linalool/[C 4 mim]Cl, the results showed a more significant increase in the relative volatilities of the terpenic mixture, particularly when low amounts of limonene are present in the liquid phase. An increase of at least 50% in the α 12 values is observed compared to the binary terpene mixture. These are promising results for the use of the studied IL in the deterpenation of CEO by extractive distillation. Process simulation The simulation of the process illustrated in Fig. 1 was conducted by varying the reflux ratio on the condenser (R), the theoretical number of stages of the distillation column (n), and the solvent/CEO feed ratio (S/F, in mass fraction). The first goal was to evaluate the effect of the ILs on the linalool and limonene recoveries (represented, respectively, by the amounts of linalool in the terpeneless CEO and limonene in the distillate divided by their amounts in the feed CEO). In this step, three values of R (0.5, 1.0, 1.5) and S/F (0.025, 0.050, 0.075) were tested, and n was varied from 5 to 30. The IL was supplied at the second stage of the distillation column, while the CEO was fed at the column's middle stage (n/2). The VLE data obtained in this work were correlated with the NRTL model [34,47] using the software Aspen Plus, and the model parameters were considered in the process simulation. The NRTL parameters and σ deviations are presented in Table S7 of the SM. The results are summarized in Fig. 3 (R = 0.5 and R = 1.0) and Fig. S4 (R = 1.5), where the limonene and linalool recoveries are presented as a function of n for the selected reflux ratios. The recoveries improve by increasing the number of stages, more evidently up to n = 15. Similarly, an increase in the reflux ratio leads to higher limonene and linalool recoveries. To better assess the effects of the R and S/F values on the product recovery, the results obtained varying those parameters for a fixed number of stages (n = 15 or n = 30) are presented in Fig. S5 of the SM. At a reflux ratio of 0.5 and n = 30, the linalool and limonene recoveries, without an entrainer, are 88.1% and 98.7%, respectively. The addition of [C 4 mim][OAc] slightly increases those values. On the other hand, [C 4 mim]Cl delivers a much better performance in all analyzed scenarios, with limonene and linalool recoveries varying from 94.7% and 99.4% (S/F = 0.025) to 99.99% and 99.9% (S/F = 0.075), respectively.
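Since the formal definition of α 12 is given only in the Supplementary Material of the original article, it is worth recalling its usual textbook form for a binary mixture, α 12 = (y 1 /x 1 )/(y 2 /x 2 ), with x and y the liquid- and vapor-phase mole fractions of limonene (1) and linalool (2). A minimal helper for computing it from a measured VLE point is sketched below; the data point is hypothetical and the snippet is illustrative only.

```python
def relative_volatility(x1, y1):
    """alpha_12 = (y1 / x1) / (y2 / x2) for a binary mixture, with x2 = 1 - x1 and y2 = 1 - y1."""
    return (y1 / x1) / ((1.0 - y1) / (1.0 - x1))

# Hypothetical VLE point: liquid and vapor mole fractions of limonene at 5 kPa.
print(relative_volatility(x1=0.50, y1=0.65))  # > 1 means limonene is enriched in the vapor
```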
Similar trends were likewise obtained for n = 15, revealing that [C 4 mim]Cl is still much more effective in performing the separation than [C 4 mim][OAc] in columns with fewer stages. In all scenarios, an increment in the S/F ratio leads to higher recoveries of the products. To further explore the effects of R on the processes, Fig. 4 was built, where the number of stages of the distillation column is presented as a function of linalool recovery in the terpeneless CEO, for all the studied reflux ratios, in the absence of IL and in the presence of [C 4 mim]Cl. The addition of [C 4 mim][OAc] was not considered in this scenario since the product recovery curves are very close to those obtained for the binary limonene/linalool mixture, as discussed above. On the other hand, [C 4 mim]Cl has a clear impact on the required column stages. For instance, to reach a minimum recovery of 95% of linalool when R = 1.0, the presence of [C 4 mim]Cl reduces n from 16 (IL-free) to values varying between 14 (S/F = 0.025) and 9 (S/F = 0.075). Likewise, distillation columns of 10 and 9 stages (S/F = 0.025 and S/F = 0.075, respectively) are required to achieve 95% of linalool recovery when R = 1.5, while a minimum of 13 stages is necessary in the absence of IL. When comparing the achieved linalool recoveries for columns with equal stages, the effects of [C 4 mim]Cl are evident: while the IL-free process with a 15-stage column delivers a linalool recovery of 86% (at R = 0.5), the recoveries are improved to 92% and 99% when S/F ratios of 0.025 and 0.075 of [C 4 mim]Cl, respectively, are fed to the process. Due to the positive effects of adding [C 4 mim]Cl to the deterpenation process evidenced above, the conditions to achieve a specific minimum linalool purity in the terpeneless CEO were also evaluated. The target linalool purities were fixed at 85%, 90%, 95%, 99%, or 99.9%. In such a scenario, the total specific heat duty (sum of the heat duty of the reboiler and flash separator per kg⋅h − 1 of terpeneless CEO) and the specific cooling requirements (on the condenser of the distillation column) were calculated. To avoid higher operating temperatures in the distillation column, the n and R values varied from 5-60 and 0.5-1.5, respectively. The pump energy requirements are negligible compared to the process heat duty and were, therefore, neglected. The results obtained for R = 1.0 are presented in Fig. 5, while Figs. S6 and S7 show the results for R = 0.5 and R = 1.5, respectively. Since no significant improvements were observed by adding [C 4 mim][OAc] to the process, the results with this IL were omitted from Fig. 5 and Figs. S6 and S7.
As expected, more stages in the distillation column are required to obtain higher linalool purity in the final product.Also, the limonene purities in the distillate increase as the target linalool increases, though high limonene purities (w limonene > 0.98) are achieved in all cases.Moreover, an increment in the reflux ratio reduces n in most cases, having a positive effect on the process investment cost.Nonetheless, increasing R values lead to higher total specific heat duties and cooling requirements, increasing operational costs.Indeed, the reflux ratio has a stronger impact on the specific heat duties, cooling requirements than the number of stages in the distillation column or the S/F ratio.Regarding the latter, an increase in the amount of IL tends to reduce the number of stages in the distillation column, particularly when higher linalool purities (w linalool > 0.99) are desired. Adding [C 4 mim]Cl to the process considerably reduces the minimum number of stages required to achieve the target linalool purities at certain operating conditions.On the other hand, higher specific heat duties are also observed.Although an increase in the specific heat duties leads to higher operating costs, savings in the total number of stages in the distillation column reduce the investment cost.At R = 1.0 and S/F = 0.050 (Fig. 5), reductions between 30% (CEO with 85% of linalool) and 39% (99% of linalool) in the minimum n value were registered, whereas the corresponding increment in the specific heat duties was always lower than 29%.By keeping R = 1.0 and increasing the S/F to 0.075, columns containing at least 40% fewer stages are required to achieve the target linalool purities.In these operating conditions, n decreases from 59 (IL-free) to 25 when 99.9% is the linalool target purity, with a 28% increment in the specific heat duty.Nevertheless, less expressive reductions of the required column stages (< 17%) are observed at R = 1.0 and S/F 0.025, while increments of around 24% in the heat duties were observed. At R = 0.5, [C 4 mim]Cl also significantly benefits the deterpenation process since it improves the maximum purity of the terpeneless CEO (Fig. S6).While the process free of IL delivers a maximum purity of 88%, the utilization of [C 4 mim]Cl, even at a S/F = 0.025, enables the obtention of essential oil with 95% purity (n ≥ 37); linalool purities of 99% and 99.9% are also achievable at solvent-feed ratios of 0.050 and 0.075.Besides, reductions up to 57% (S/F = 0.075) in the number of stages are achieved when [C 4 mim]Cl is added to the process to attain 85% purity in the terpeneless CEO, at R = 0.5, while the corresponding heat duty increment is 35%.At R = 1.5, higher contents [C 4 mim]Cl in the process also leads to less stages in the column, but the reductions are less pronounced than those obtained for lower studied R values, 0.5 and 1.0. 
It is worth mentioning that, at an equivalent reflux ratio, the deterpenation process with [C 4 mim]Cl as the entrainer also exhibits higher cooling requirements than the other options. The increments, however, are more pronounced when higher solvent-feed ratios are applied. At S/F = 0.050, the cooling requirements increased between 1% and 8% compared to the IL-free process, whereas increments between 3 and 20% are observed at S/F = 0.075. In contrast, the increases in this parameter were lower than 1% when an S/F = 0.025 of [C 4 mim]Cl is applied to the process. The cooling requirements (energy released on the condenser) could be considered in energy integration studies. Process optimization The optimum conditions for the proposed deterpenation process were obtained using the Aspen Plus® Model Analysis/Optimization tool with the Sequential Quadratic Programming (SQP) convergence method [48][49][50]. The objective function was defined as the distillation column reboiler specific heat duty (i.e., the ratio of the reboiler heat duty and the mass flow rate of the terpeneless CEO), and the process variables were the reflux ratio, the S/F ratio, the number of stages, and the feed position of the essential oil in the distillation column. The process constraints were the linalool mass fraction purity (>0.99 ± 0.01%) and mass flow rate (>99 ± 0.1 kg⋅h − 1 ) in the terpeneless CEO. The R and S/F values varied between (0.5-1.5) and (0.25-0.75), respectively, whereas n varied between 5 and 60. The essential oil feed conditions, the distillation column pressure, and the flash separator operating pressure and temperature values were the same as those presented in Fig. 1. The IL was fed at the second stage of the distillation column. The optimum parameters obtained for the processes with [C 4 mim][OAc] or [C 4 mim]Cl are compared with the values for the IL-free process in Table 5. The CO 2 emission at optimum conditions is included in Table 5. The results reveal that the process with [C 4 mim][OAc] requires a column with four fewer stages than the IL-free process. Similar specific heat duties (1.8 kW/kg⋅h − 1 ) and cooling requirements (1.4 kW/kg⋅h − 1 ) are obtained in both cases. When [C 4 mim]Cl is added as the entrainer, a reduction of 29% is observed in the column-required stages (n = 41) compared to the IL-free process (n = 58), whereas an increase of 19% in the specific heat duty is observed. Consequently, adding [C 4 mim]Cl to the deterpenation process might lead to lower investment costs and higher operational costs. The estimated CO 2 emission in the distillation column reboiler is 43 kg⋅h − 1 for both the IL-free process and the process with [C 4 mim][OAc], and 52 kg⋅h − 1 when [C 4 mim]Cl is added as the entrainer. The optimum specific heat duties listed in Table 5 are at least 10% lower than those presented in Fig. 5 (R = 1.0) when a linalool purity of 99% in the CEO is specified.
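To illustrate the structure of this optimisation (minimise the reboiler specific heat duty subject to purity and flow-rate constraints by varying R, S/F and n), the sketch below sets up an analogous constrained minimisation with SciPy's SLSQP solver. The surrogate model is entirely made up and merely stands in for the Aspen Plus flowsheet; the variable ranges loosely follow those used in the simulations, and the feed-position variable is omitted for brevity.

```python
import numpy as np
from scipy.optimize import minimize

def surrogate(v):
    """Toy stand-in for the flowsheet model: returns (specific heat duty, linalool purity, product flow)."""
    R, SF, n = v
    duty = 1.0 + 0.8 * R + 2.0 * SF + 20.0 / n            # made-up trend, kW per kg/h of product
    purity = 1.0 - 0.5 * np.exp(-0.15 * n) * (1.0 - SF)    # made-up trend, mass fraction of linalool
    flow = 99.0 + 10.0 * SF                                # made-up trend, kg/h of terpeneless CEO
    return duty, purity, flow

constraints = [
    {"type": "ineq", "fun": lambda v: surrogate(v)[1] - 0.99},  # linalool purity >= 0.99
    {"type": "ineq", "fun": lambda v: surrogate(v)[2] - 99.0},  # product flow >= 99 kg/h
]
bounds = [(0.5, 1.5), (0.025, 0.075), (5.0, 60.0)]  # R, S/F, n (n treated as continuous here)

result = minimize(lambda v: surrogate(v)[0], x0=[1.0, 0.05, 30.0],
                  bounds=bounds, constraints=constraints, method="SLSQP")
R_opt, SF_opt, n_opt = result.x
print(f"R = {R_opt:.2f}, S/F = {SF_opt:.3f}, n = {n_opt:.1f}, duty = {result.fun:.3f}")
```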
Conclusions The results of the process simulation reveal that [C 4 mim]Cl significantly improves the linalool recovery in the deterpenation process, while the effects of [C 4 mim][OAc] on this parameter are less substantial. Moreover, [C 4 mim]Cl is often a promising option when targeting specific linalool purities in the terpeneless CEO. The addition of [C 4 mim]Cl to the process, at R = 0.5, improves the maximum attainable linalool purity from 88% (IL-free) to 95% (S/F = 0.025) and 99.9% (S/F of 0.05 and 0.075). Besides, savings of up to 58% (R = 1.0, S/F = 0.075, w linalool = 0.995) in the minimum required stages are achieved when supplying the chloride-based IL to the distillation column. Adding [C 4 mim]Cl to the process increases the total specific heat duties, but the required number of stages (n) decreases. By aiming to minimize the process heat duty, a reduction of 29% in n is achieved when [C 4 mim]Cl is the entrainer, while an increment of 19% in the specific heat duty is observed. Reducing n lowers the investment costs, which compensates for the increase in the operating cost. Moreover, [C 4 mim]Cl increases the specific cooling requirements of the deterpenation process by up to 20% (at equivalent reflux ratios), which could be considered in energy integration studies. This work encourages further research in the field, focused on investigating potential neoteric solvents (e.g., ILs, DES, bio-based solvents) and the required technologies to improve the deterpenation of CEOs. Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Fig. 1. Schematic flowsheet of the deterpenation processes composed of an extractive distillation column and a flash separator unit.
Fig. 3. Limonene and linalool recoveries (%) as a function of the number of stages in the distillation column for the three studied reflux ratios (0.5, 1.0, 1.5) and different S/F ratios.
Fig. 4. Comparison of the required number of stages as a function of the linalool recovery for the processes without IL and with [C 4 mim]Cl. Different reflux ratios were tested: R = 0.5 (yellow lines), R = 1.0 (red lines), and R = 1.5 (blue lines). (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)
Table 1 Chemical structure, CAS, source, and mass purity of the terpenes and ionic liquids studied in this work.
Table 5 Optimized parameters for the deterpenation processes to obtain a terpeneless CEO with 99% of linalool.
2024-01-28T16:47:46.586Z
2024-03-01T00:00:00.000
{ "year": 2024, "sha1": "83a199088521ebeff16efb271ae09d740b32c828", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.jtice.2024.105367", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "dcbd2d36ef0696e7bbb16d28208db2ce2e0f70b3", "s2fieldsofstudy": [ "Chemistry", "Environmental Science" ], "extfieldsofstudy": [] }
199491659
pes2o/s2orc
v3-fos-license
Past–future information bottleneck for sampling molecular reaction coordinate simultaneously with thermodynamics and kinetics The ability to rapidly learn from high-dimensional data to make reliable bets about the future is crucial in many contexts. This could be a fly avoiding predators, or the retina processing gigabytes of data to guide human actions. In this work we draw parallels between these and the efficient sampling of biomolecules with hundreds of thousands of atoms. For this we use the Predictive Information Bottleneck framework used for the first two problems, and re-formulate it for the sampling of biomolecules, especially when plagued with rare events. Our method uses a deep neural network to learn the minimally complex yet most predictive aspects of a given biomolecular trajectory. This information is used to perform iteratively biased simulations that enhance the sampling and directly obtain associated thermodynamic and kinetic information. We demonstrate the method on two test-pieces, studying processes slower than milliseconds, calculating free energies, kinetics and critical mutations. Supplemental information to "Past-future information bottleneck framework for sampling molecular reaction coordinate, thermodynamics and kinetics" SUPPLEMENTARY NOTES Note 1 In this note we define various terms introduced in the main text as well. In all of these we use P (X), P (X, Y) respectively to denote the probability distribution of a random variable X and the joint probability distribution of two random variables X and Y. Other variables are the same as defined in the main text. Mutual information This is a commonly used information theoretic measure to describe how much information is shared between two random variables X and Y . It is defined as: Shannon entropy The Shannon entropy for a random variable X is defined as: Cross entropy The cross entropy between two probability distributions P (X) and Q(X) is given by: Kullback-Leibler Divergence The Kullback-Leibler (KL) D KL (P ||Q) divergence between two probability distributions P (X) and Q(X) is given by: Exact decoder definition The exact decoder can be defined as the the conditional probability distribution of X ∆t given a RC χ. It can be calculated by using Bayes theorem: Acceleration factor As shown in [1], and under the conditions detailed there, it is possible recover the unbiased timescales from biased simulations through the calculation of a simple acceleration factor. If the biased simulation time is t, the associated unbiased timescale τ can be calculated by: where V (χ(t )) is the bias experienced by the system at time t , constructed as a function of the RC χ. This can equivalently be calculated as a discrete sum over the M integration time-steps. Note 2 In this note we provide the missing details of various derivations given in the main text. Gibbs's inequality and variational lower bound The bottleneck function L defined in the main text for our neural network architecture is given by a difference of two Shannon entropies [2]: Gibbs's inequality [2] guarantees that that the KL-divergence between two probability distributions is always larger than 0. We thus have: Only in the limit that our approximate decoder is exactly the same as the exact inverse-Bayes decoder, D KL (P θ ||Q φ ) equals 0. By combining Eq.7 and Eq.8 , we get the relationship: Here, H(P (X ∆t )) only depends on the data set and is independent of the parametrization of the encoder and the decoder. 
Thus the term H(P (X ∆t ) can be completely ignored while optimizing the parameters θ and φ. Maximizing the objective function L = −C(P θ (X ∆t |χ)) is then equivalent to maximizing the lower bound of L, which is the expression stated in Eq.4 in the main text. PIB objective L for unbiased trajectory For a unbiased trajectory X 1 , X 2 , ..., X M +k , for given θ, we can get corresponding χ 1 , χ 2 , ..., χ M using χ i = i c i s i . We also have a sampling of the states of the system after corresponding ∆t intervals: X 1+k , X 2+k , ..., X M +k . Together the pair (χ i , X i+k ) sampled from the dataset follows the distribution P θ (X ∆t |χ). With this, we have: log Q(X n+1 |χ n ) (10) PIB objective L for biased trajectory For a biased trajectory X 1 , X 2 , ..., X M with corresponding biasing potential values V 1 , V 2 , ..., V M , the unbiased probability distribution of X can be calculated by: The encoder P (χ|X) and decoder P (X ∆t |χ) are taken to be independent of the bias. The first assumption is strictly true, while the second is valid for small enough ∆t as explained in the main text. Therefore Note 3 In this note we provide details of the mutual information calculation for critical residue prediction. We use the backbone dihedral angles to describe the motion of each residue. Here we denote them as φ i , ψ i , where i is the index of a particular residue. We use I(θ, χ) to quantify the correlation between dihedral angle θ (where θ could be φ or ψ) and the unbinding process, where I(θ, χ) is mutual information between a dihedral angle θ and the RC χ. For each residue, we have two dihedral angles. At the same time, for one dihedral angle, we can calculate its mutual information with χ 1 or χ 2 . So we have four quantities for each residue( I(φ, χ 1 ),I(ψ, χ 1 ), I(φ, χ 2 ) and I(ψ, χ 2 ) ). We rank the importance of each residue by the maximum of the four quantities. Here we only consider the parts of trajectory that is bias-free(χ 1 > 0.4 and χ 2 > 0.3). Because we require that the energy barriers between metastable states should be bias-free to ensure we can get the correct reweighted dynamics of the unbinding process. However, we only guarantee that the main energy barrier between bound and unbound state has zero bias when we perform biased MD simulation while bias can still be added on barriers between other metastable states. Looking at the unbiased region all us to reduce the influence of the biasing potential on the dynamics of the system and focus more on the transition states. Simulation set-up for Alanine dipeptide in vacuum We follow [3] to set up our simulation for alanine dipeptide in vacuum. The simulations are performed with the software GROMACS 2016/GROMACS 5.0 [4,5], patched with PLUMED 2.4 [6]. We constrain bonds involving hydrogens using the LINCS algorithm [7] and employ an integration time-step of 0.002 ps. The temperature is kept constant at 300K using the velocity rescaling thermostat (relaxation time of 0.1 fs) [8]. We employ no periodic boundary conditions and no cut-offs for the electrostatic and non-bonded Van der Waals interactions. Simulation set-up for L99A T4L-benzene We follow [9] to set up our simulation for L99A T4L-benzene. The simulations are performed with the software GROMACS 2016/GROMACS 5.0 [4,5] patched with PLUMED 2.4 [6]. The simulations are done with the constant number, pressure, temperature (NPT) ensemble with temperature 298 K and pressure 1.0 bar. 
Constant pressure is maintained using Parrinello-Rahaman barostat while the constant temperature is maintained using the v-rescale thermostat (modified Berendsen thermostat) [10]. The simulation box with periodic boundary condition is filled with TIP3P water. The side lengths of the box are 10Å and there are around 10, 000 water molecules. The interaction is described by the force field CHARMM22*. The integration time step here as well was taken to be 2 fs. SUPPLEMENTARY DISCUSSION To discuss the flexibility in choosing basis functions, we apply the same framework with another set of order parameters. As shown in Supplementary Table 1 and Supplementary Table 2, there are two central differences between this new set of order parameters and the set we discussed in the main text: (a) all helix-helix distances have been removed, (b) protein-ligand distances are defined by the distance between ligand and a new set of residues. In Supplementary Figure 6(a), the alpha carbons used to defined the protein-ligand distances in two basis function sets are colored differently, and our results demonstrate that the choices of distances are fairly arbitrary.We followed the protocol discussed in the main text. Only two minor changes were made, demonstrating further robustness of our protocol. First, we extended the simulation time from 4ns to 10ns in each trial. Second, the energy was increased by 8kJ in each round instead of 5kJ. We should stress that these two parameters can be chosen from a relatively wide range as long as the simulation time is long enough to sample local configurations and the increment of bias is not too big to include the poorly sampled region. Order parameters weights in each RC are shown in Supplementary Figure 6 (b). With these, we were able to gain unbinding trajectories to do the critical residues calculation as discussed before. Since RCs were constructed from a basis function set that can't fully describe the whole system, they only contained partial information of the unbinding process. With this new set of basis function, critical residue calculation does not give exactly the same ranking as shown in the main text, however, as shown in Supplementary Figure 6 (c), the key residues are preserved. Some of the important residues are: Phe114, Gly110, Val111, Ala112, Thr109, Ile29, Glu108, Ser136, Lys135 and Trp138. These residues can still be classified into two broad groups: (a) residue 114, residue 135, residue 136, residue 138 and residues 108-112 contribute to breathing movement (b) residue 29 lies in a disordered region and can be picked due to noise or existence of likely allosteric communication pathway. These results show that the method is robust in terms of the choice of order parameters and has the potential to be applied to studying other complex systems. The k of f reported in the main text is calculated as the inverse of the fitted time constant here, 303 ± 76 ms. In (b), we provide the fitted time constant with associated error bars for 3 different biasing protocols, demonstrating convergence of the estimated time constant, at least on a log-scale. It is worth noting that for the weakest bias, we had the lowest number of dissociation events and hence the time-constant is a lower bound due to not having captured enough slow events. Hence the agreement should in principle be even better.
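As a small numerical illustration of the acceleration factor defined in Note 1 (used to recover unbiased timescales from the biased simulations above), the snippet below evaluates the discrete sum over integration steps, τ ≈ Σ_j Δt · exp(V(χ(t_j))/k_BT). The bias values are synthetic, and the unit and prefactor conventions follow the standard hyperdynamics-style expression; they should be checked against reference [1] of the Supplementary Notes.

```python
import numpy as np

def unbiased_time(bias_kj_mol, dt_ps, temperature_k=298.0):
    """Unbiased time (ps) from per-step bias values V(chi(t_j)) via sum_j dt * exp(V_j / kBT)."""
    kB = 0.0083144621  # kJ/(mol*K)
    boost = np.exp(np.asarray(bias_kj_mol) / (kB * temperature_k))
    return dt_ps * np.sum(boost)

# Synthetic example: 0.5 ns of biased MD (2 fs steps) with bias fluctuating around 20 kJ/mol.
rng = np.random.default_rng(0)
bias = rng.normal(loc=20.0, scale=3.0, size=250_000)
print(f"biased time: {250_000 * 0.002:.0f} ps, unbiased estimate: {unbiased_time(bias, 0.002):.3e} ps")
```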
2019-08-09T16:33:55.758Z
2019-08-08T00:00:00.000
{ "year": 2019, "sha1": "e4140d67d59c61de86d2a9de31380e01a07ef85b", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41467-019-11405-4.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e4140d67d59c61de86d2a9de31380e01a07ef85b", "s2fieldsofstudy": [ "Computer Science", "Chemistry" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
17748043
pes2o/s2orc
v3-fos-license
Measuring spin of a supermassive black hole at the Galactic centre -- Implications for a unique spin We determine the spin of a supermassive black hole in the context of discseismology by comparing newly detected quasi-periodic oscillations (QPOs) of radio emission in the Galactic centre, Sagittarius A* (Sgr A*), as well as infrared and X-ray emissions with those of the Galactic black holes. We find that the spin parameters of black holes in Sgr A* and in Galactic X-ray sources have a unique value of $\approx 0.44$ which is smaller than the generally accepted value for supermassive black holes, suggesting evidence for the angular momentum extraction of black holes during the growth of supermassive black holes. Our results demonstrate that the spin parameter approaches the equilibrium value where spin-up via accretion is balanced by spin-down via the Blandford-Znajek mechanism regardless of its initial spin. We anticipate that measuring the spin of black holes by using QPOs will open a new window for exploring the evolution of black holes in the Universe. INTRODUCTION The Galactic centre, Sagittarius A* (Sgr A*), is a compact source of radio, infrared, and X-ray emissions having variability in the range of a few tens of minutes to hours (Baganoff et al. 2001;Genzel et al. 2003;Yusef-Zadeh et al. 2006). These emissions seem to originate from a hot and low-density accreting gas plunging into a supermassive black hole (Yuan et al. 2004;). A precise measurement of its mass and spin is a long-standing issue for astrophysics to investigate the mechanism of energy extraction from spinning black holes for astrophysical jet production as well as the evolution of supermassive black holes along the cosmic hitory (Bardeen 1970;Blandford & Znajek 1977;Wilson & Cobert 1995). Although the mass of Sgr A* has been constrained by using the stellar orbit method, a precise measurement of its spin for the best-estimated mass has been poorly conducted. Recently, Miyoshi and colleagues have detected multiple quasi-periodic oscillations (QPOs) of radio emissions in Sgr A* (Miyoshi et al. in prep.), whose periods are close to the Keplerian period at the innermost stable circular orbit ⋆ E-mail: kato.yoshiaki@isas.jaxa.jp (YK) of a supermassive black hole with mass 4 × 10 6 M⊙. Because of the excellent spatial resolution of the Very Long Baseline Array (VLBA), the quasi-periodic radio emission certainly originates from within the central sub-mas scale, approximately 100 rg around the central black hole at a distance of 7.6 kpc, where rg = GM/c 2 = 0.01 M/10 6 M⊙ AU is the gravitational radius (G, M , and c are the gravitational constant, the mass of black hole, and the speed of light, respectively). This is the first time that such multiple QPOs have been identified in the vicinity of a supermassive black hole. The spatial pattern of emission regions cannot be explained by the Keplerian rotation of a single emitting body at a given radius. Four simultaneous QPOs (16.8,22.2,31.4,and 56.4 min) are detected and the first three periods are identical to QPOs in the near infrared and X-ray observations during different observation epochs (see Table 1). Three identical periods in the different wavelength are stable at least for several years and the frequency ratio of last two periods is close to 3:2. Such a stable double peak QPO is a well-known feature for high-frequency QPOs (HF-QPOs) in Galactic Xray sources (Remillard & McClintock 2006). 
The multiple periodicity, its coincidence between the different wavelengths, and also between the different observation epochs, indicates that the origin of QPOs in the Galactic centre is closely related to the dynamics of an accretion disc feeding the black hole. Therefore we measure the spin parameter of a black hole in Sgr A* by using the periods of QPOs based on disc seismology (e.g., Nowak & Wagoner 1993).

METHOD AND MODEL

One promising mechanism for generating multiple QPOs is a global disc oscillation excited by the resonance between geodesic modes of the disc (the so-called resonant disc oscillation model: Abramowicz & Kluźniak 2001; Kato & Fukue 2006; Kato et al. 2008). The resonant frequency is a combination of geodesic frequencies at the radius where the resonance occurs. When the resonance condition is specified, both the resonant frequency and the resonant radius are determined uniquely in terms of the black hole mass $M$ and the spin parameter $a_* \equiv Jc/GM^2$, where $J$ is the angular momentum of the black hole. Therefore the metric of the black hole can be constrained by the frequency of the QPOs. Resonance may occur at a radius where the frequency ratio of the geodesic modes is a ratio of small integers, and the resonant response can either spontaneously grow or damp the oscillation itself (Abramowicz & Kluźniak 2001). One of the most prominent resonances is a mode-coupling between acoustic waves and non-axisymmetric modes such as a warp in the disc, the so-called wave-warp resonance (Kato & Fukue 2006; Kato et al. 2008). For example, this resonance is excited at a radius $r_{\rm res}$ where $\Omega_K = 2\kappa$. Here $\Omega_K$ and $\kappa$ are the Kepler frequency and the epicyclic frequency, respectively (see Fig. 1 of Kato & Fukue 2006 for the relation between $a_*$ and $r_{\rm res}$). $\Omega_K$ and $\kappa$ at the resonant radius $\tilde r_{\rm res} = r_{\rm res}/r_g$ measured at infinity are expressed as
$$\Omega_K = \sqrt{\frac{GM}{r_{\rm res}^3}}\left(1 + a_*\,\tilde r_{\rm res}^{-3/2}\right)^{-1} \quad (1)$$
and
$$\kappa = \sqrt{\frac{GM}{r_{\rm res}^3}}\;\frac{\left(1 - 6\,\tilde r_{\rm res}^{-1} + 8 a_*\,\tilde r_{\rm res}^{-3/2} - 3 a_*^2\,\tilde r_{\rm res}^{-2}\right)^{1/2}}{1 + a_*\,\tilde r_{\rm res}^{-3/2}} \quad (2)$$
as derived by Okazaki et al. (1987). The resulting frequencies of QPOs are $m\Omega_K \pm \kappa$ and $m\Omega_K$, where $m$ is the azimuthal mode number, and some lower-mode oscillations related to such resonances are reported by numerical studies (Kato 2004).

RESULTS

3.1 Unified model of QPOs

Figure 1 shows the periods of the observed QPOs overlaid with the lower-mode ($m = 1, 2$) resonant periods related to the wave-warp resonance as a function of the black hole mass, ranging from a stellar-mass black hole to a supermassive black hole (skipping over the intermediate-mass region). QPOs in the Galactic centre are selected with regard to the multiple detection among different wavelengths (Table 1). We found that the QPOs in Sgr A* detected at identical frequencies are consistent with a mass-period relation for the spin parameter $a_* \sim 0.4$ (see Fig. 1b). At the same time, HF-QPOs in the Galactic X-ray sources agree well with the resonant periods for the same spin parameter within the error of the estimated mass (Fig. 1a). Therefore we identify the three identical periods (16.8, 22.2, and 31.4 min) with the resonant modes $2\Omega_K$, $\Omega_K + \kappa$, and $\Omega_K$, respectively.

Fig. 1. Periods of the observed QPOs compared with the resonant periods (Table 1). The black hole mass is assumed to be $(3.7 \pm 1.5) \times 10^6\,M_\odot$ (Schödel et al. 2002). Resonant oscillations for $m = 1$ and 2 are shown as solid ($\Omega_K$), dashed ($\Omega_K + \kappa$), dotted ($\Omega_K - \kappa$), and gray solid ($2\Omega_K$) lines. Note that $2\Omega_K - \kappa$ ($= \Omega_K + \kappa$ at the resonance) and $2\Omega_K + \kappa$ are omitted for simplicity. Thin and thick lines indicate the periods for the spin parameters $a_* = 0.3$ and 0.4, respectively.
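To make this identification concrete, here is a minimal numerical sketch (ours, not the authors' code) that evaluates equations (1) and (2), solves the wave-warp resonance condition $\Omega_K = 2\kappa$ for the resonant radius, and prints the implied QPO periods. The mass and spin values come from the text; the root-search bracket is our assumption, chosen to lie outside the innermost stable orbit for these spins.

```python
import numpy as np
from scipy.optimize import brentq

G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30   # SI units

def omega_K(r, a, M):
    """Kepler angular frequency, eq. (1); r in units of r_g = GM/c^2."""
    rg = G * M / c**2
    return np.sqrt(G * M / (r * rg)**3) / (1.0 + a * r**-1.5)

def kappa(r, a, M):
    """Radial epicyclic frequency, eq. (2)."""
    fac = 1.0 - 6.0 / r + 8.0 * a * r**-1.5 - 3.0 * a**2 / r**2
    return omega_K(r, a, M) * np.sqrt(fac)

def resonant_radius(a, M):
    """Radius (in r_g) of the wave-warp resonance Omega_K = 2 kappa."""
    return brentq(lambda r: omega_K(r, a, M) - 2.0 * kappa(r, a, M), 6.1, 50.0)

M = 4e6 * M_sun                    # Sgr A* mass quoted in the text
for a in (0.3, 0.44):
    r_res = resonant_radius(a, M)
    w, k = omega_K(r_res, a, M), kappa(r_res, a, M)
    to_min = lambda om: 2 * np.pi / om / 60.0
    print(f"a*={a}: r_res={r_res:.2f} r_g | P(Omega_K)={to_min(w):.1f} min, "
          f"P(Omega_K+kappa)={to_min(w + k):.1f} min, P(2Omega_K)={to_min(2 * w):.1f} min")
```

With these formulas the computed $\Omega_K$, $\Omega_K + \kappa$, and $2\Omega_K$ triplet lands within a few per cent of the observed 31.4, 22.2, and 16.8 min periods for masses inside the quoted uncertainty, since the periods simply scale linearly with $M$.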
Unique spin parameter

Now we can determine the spin parameter of black holes by using the periods of the QPOs corresponding to $\Omega_K$. For instance, 31.4 min is used for Sgr A*, and the periods of the lower HF-QPOs are used for the Galactic X-ray sources. Note that the frequency of a single-peak HF-QPO is treated as $\Omega_K$. In order to constrain the resultant spin parameter, the estimated mass of the supermassive black hole in Sgr A* is taken from recent measurements (Schödel et al. 2002; Ghez et al. 2008; Gillessen et al. 2009). Figure 2 shows the spin parameters of all samples evaluated by using the disc-seismic measurement. All spin parameters are relatively small ($\lesssim 0.7$) in comparison with the equilibrium value of spinning black holes ($\approx 0.95$) predicted by a numerical study. When all samples are fitted by using a linear relation as a function of the black hole mass, the spin parameter becomes larger than 1 for black holes with $M \gtrsim 10^7\,M_\odot$. Instead of a linear relation, we obtain a best-fit unique spin parameter $a_* = 0.44 \pm 0.08$ (1σ uncertainty, by linear least-squares fitting), which is depicted by the gray shaded region.

Fig. 2. Crosses indicate spins for the Galactic X-ray sources, whereas diamonds indicate those for the Galactic centre, in terms of black hole masses measured by the stellar orbit method (Schödel et al. 2002; Ghez et al. 2008; Gillessen et al. 2009). The gray shaded region indicates the best-fit spin parameter $a_* = 0.44 \pm 0.08$ for 1σ uncertainty.

Evolution of BH spin and mass

Next, we should ask why black holes have a unique spin parameter in spite of the fact that their ages as well as mass accretion histories may vary in general. Actually, our results contradict recent studies that predict extremely spinning black holes (Shapiro 2005; Volonteri et al. 2005). In order to test the feasibility of such a small unique spin parameter, we have to study the spin-up process by mass accretion and the spin-down process by energy extraction via the Blandford-Znajek mechanism simultaneously. Figure 3 presents the equilibrium value of the spin and also the time evolution of black holes surrounded by a relativistic standard accretion disc (Novikov & Thorne 1973; Page & Thorne 1974; see also Kato et al. 2008), assuming given disc parameters such as the viscosity parameter $\alpha$ (Shakura & Sunyaev 1973), the magnetization parameter $\beta$ (the ratio of the gas pressure to the magnetic pressure), and the mass accretion rate $\dot m = \dot M/\dot M_{\rm EDD}$ normalized by the Eddington mass accretion rate $\dot M_{\rm EDD} = 4\pi GM/(c\,\kappa_{\rm es})$, where $\kappa_{\rm es}$ is the electron scattering opacity (e.g., Kato et al. 2008). In general, these parameters are not independent, because magnetohydrodynamic (MHD) turbulence in the disc is thought to be the source of viscosity, and their values can only be examined numerically. For instance, we employ $\alpha = 0.01$ on the basis of three-dimensional MHD simulations showing that the total stress corresponds to $\alpha \approx 0.02-0.06$ (Hawley 2000; Machida et al. 2000) for $\dot m \ll 1$ and $\alpha \approx 0.01$ for $\dot m \sim 1$ (Hirose et al. 2006). Recent MHD simulations also exhibit the natural emergence of large-scale magnetic fields (the so-called magnetic tower) in the inner region of an accretion disc (Kato et al. 2004). The formation of a magnetic tower is key to the extraction of the energy and angular momentum of a spinning black hole by the Blandford-Znajek mechanism, and it has been suggested that the necessary condition for the energy and angular momentum extraction at the innermost region of an accretion disc is $\beta \approx 1$ (McKinney & ).
The equations we solved in this study are the following:
$$\frac{d(Mc^2)}{dt} = e_{\rm in}\,\dot M - P \quad (3)$$
$$\frac{dJ}{dt} = l_{\rm in}\,\dot M - \frac{P}{\Omega_F} \quad (4)$$
where $\dot M$, $e_{\rm in}$, and $l_{\rm in}$ are the mass accretion rate, the specific energy, and the specific angular momentum at the inner edge of the accretion disc, respectively. The electromagnetic power loss $P$ from the black hole is assumed to be that of the Blandford-Znajek mechanism:
$$P_{\rm BZ} = \frac{1}{32}\,\omega_F^2\,B_\perp^2\,r_H^4\,\Omega_H^2\,c^{-1}, \qquad \omega_F^2 \equiv \frac{4\,\Omega_F\left(\Omega_H - \Omega_F\right)}{\Omega_H^2}, \quad (5)$$
where $r_H$ is the radius of the event horizon, and $\Omega_F$ and $\Omega_H$ are the angular velocity of the magnetic fields permeating the horizon and the angular velocity of the black hole, respectively (see Moderski & Sikora 1996; Beskin et al. 2003). The strength of the magnetic field $B_\perp$ permeating the event horizon is assumed to be regulated by the pressure of the accretion disc $p_{\rm disc}$, so that $B_\perp^2 = 8\pi p_{\rm disc}/\beta$. Note that the electromagnetic power loss is not negligible when $\beta$ is less than of order unity. The relativistic standard accretion disc model provides a complete set of equations describing the pressure of the accretion disc at a given radius as a function of the viscosity parameter $\alpha$, the black hole mass $m = M/M_\odot$, the spin parameter $a_*$, and the mass accretion rate $\dot m$. For a given $\dot m$, the radiation-pressure-dominated region appears within a radius determined by $\alpha$, $m$, $\dot m$, and the general relativistic correction factors $B$, $D$, $H$, and $Q$ (Page & Thorne 1974). To summarize, the pressure of the accretion disc scales as $p_{\rm disc} \propto R_1$ in the radiation-pressure-dominated region and as $p_{\rm disc} \propto R_2$ in the gas-pressure-dominated region, where $R_1 = \tilde r^{-3/2} B^{-2} D^{-1} C$ and $R_2 = \tilde r^{-51/20} B^{-14/5} D^{-9/10} C H^{-1/2} Q^{4/5}$ are the radial dependences, including the general relativistic correction factors, at the Boyer-Lindquist coordinate radius $\tilde r = c^2 r/GM$. The radius for evaluating the strength of the magnetic field is assumed to be $\tilde r_0 = 1.3\,\tilde r_{\rm ms}$, where $\tilde r_{\rm ms}$ is the marginally stable circular orbit (Bardeen et al. 1972):
$$\tilde r_{\rm ms} = 3 + Z_2 - \left[(3 - Z_1)(3 + Z_1 + 2Z_2)\right]^{1/2},$$
where
$$Z_1 = 1 + \left(1 - a_*^2\right)^{1/3}\left[(1 + a_*)^{1/3} + (1 - a_*)^{1/3}\right], \qquad Z_2 = \left(3a_*^2 + Z_1^2\right)^{1/2}.$$
Finally, we rewrite equations (3) & (4) by using the normalized variables as
$$\tau_{\rm EDD}\,\frac{d\ln m}{dt} = \tilde e_{\rm in}\,\dot m - \eta_{\rm BZ} \quad (13)$$
$$\tau_{\rm EDD}\,\frac{da_*}{dt} = \tilde l_{\rm in}\,\dot m - \frac{2\tilde r_H}{k\,a_*}\,\eta_{\rm BZ} - 2a_*\left(\tilde e_{\rm in}\,\dot m - \eta_{\rm BZ}\right) \quad (14)$$
where the symbols are the Eddington time $\tau_{\rm EDD} = M/\dot M_{\rm EDD}$, the specific energy input $\tilde e_{\rm in} = e_{\rm in}/c^2$, the efficiency of the Blandford-Znajek mechanism $\eta_{\rm BZ} = P_{\rm BZ}/\dot M_{\rm EDD} c^2$, the specific angular momentum input $\tilde l_{\rm in} = c\,l_{\rm in}/GM$, the horizon radius $\tilde r_H = c^2 r_H/GM = 1 + (1 - a_*^2)^{1/2}$, and $k = \Omega_F/\Omega_H = 1/2$ for the maximum efficiency of the Blandford-Znajek mechanism. Here we assume that the inner boundary is at the marginally stable circular orbit and that both the energy and the angular momentum of the accreting matter at the boundary are advected into the black hole. The specific energy and the specific angular momentum at the boundary are
$$\tilde e_{\rm in} = \frac{\tilde r_{\rm ms}^{3/2} - 2\tilde r_{\rm ms}^{1/2} + a_*}{\tilde r_{\rm ms}^{3/4}\left(\tilde r_{\rm ms}^{3/2} - 3\tilde r_{\rm ms}^{1/2} + 2a_*\right)^{1/2}}, \qquad \tilde l_{\rm in} = \frac{\tilde r_{\rm ms}^{2} - 2a_*\tilde r_{\rm ms}^{1/2} + a_*^{2}}{\tilde r_{\rm ms}^{3/4}\left(\tilde r_{\rm ms}^{3/2} - 3\tilde r_{\rm ms}^{1/2} + 2a_*\right)^{1/2}}.$$
We numerically integrated equations (13) & (14) with given initial parameters and tracked the evolution of the black hole mass and spin. We also determined the equilibrium spin for $m = 10, 10^6, 10^8$ by solving $da_*/dt = 0$ in equation (14) by the bisection method. Figure 3a shows the equilibrium value of the spin as a function of $\alpha\dot m$. The equilibrium spin becomes larger when either $\alpha$ or $\dot m$ becomes larger. The best-fit spin parameter determined by the disc-seismic method corresponds to an equilibrium value at $\dot m \approx 1$. Figure 3b shows the time evolution of the spin, where the spin parameter of each model converges to a unique value regardless of the initial one. When the mass accretion rate is regulated by the Eddington value ($\dot m = 1$), the spin converges to the equilibrium value $\approx 0.55$ for stellar-mass black holes within the order of $10^8$ years, and then slowly approaches the equilibrium value $\approx 0.4$ for massive black holes. When $\dot m = 0.1$, the spin converges to a value $\approx 0.5$ within the Hubble time, but never actually reaches the equilibrium spin.
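As an illustration of how equations (13) and (14) drive any initial spin toward an equilibrium value, here is a minimal integration sketch. It is ours, not the authors' code: the Blandford-Znajek efficiency $\eta_{\rm BZ}(a_*)$ is supplied as a toy quadratic scaling rather than the paper's disc-pressure-regulated expression, so the numerical equilibrium it finds differs from the values quoted in the text.

```python
import numpy as np

def r_ms(a):
    """ISCO radius in r_g for a prograde orbit (Bardeen et al. 1972)."""
    z1 = 1 + (1 - a * a)**(1 / 3) * ((1 + a)**(1 / 3) + (1 - a)**(1 / 3))
    z2 = np.sqrt(3 * a * a + z1 * z1)
    return 3 + z2 - np.sqrt((3 - z1) * (3 + z1 + 2 * z2))

def e_l_in(a):
    """Specific energy and angular momentum swallowed at the ISCO (dimensionless)."""
    r = r_ms(a)
    den = r**0.75 * np.sqrt(r**1.5 - 3 * np.sqrt(r) + 2 * a)
    return ((r**1.5 - 2 * np.sqrt(r) + a) / den,
            (r * r - 2 * a * np.sqrt(r) + a * a) / den)

def evolve(a0, mdot, eta_of_a, k=0.5, dtau=1e-4, n_steps=200_000):
    """Forward-Euler integration of eqs. (13)-(14); time in units of tau_EDD.
    eta_of_a is a caller-supplied toy BZ efficiency, NOT the paper's
    disc-pressure-regulated expression. Requires a0 > 0 (eq. 14 has 1/a_*)."""
    a, lnm = a0, 0.0
    for _ in range(n_steps):
        e_in, l_in = e_l_in(a)
        eta = eta_of_a(a)
        rH = 1 + np.sqrt(1 - a * a)
        dlnm = e_in * mdot - eta                       # eq. (13)
        da = l_in * mdot - (2 * rH / (k * a)) * eta - 2 * a * dlnm  # eq. (14)
        a = min(a + da * dtau, 0.998)
        lnm += dlnm * dtau
    return a, np.exp(lnm)

# Toy quadratic BZ efficiency, Eddington-rate accretion:
a_fin, growth = evolve(a0=0.1, mdot=1.0, eta_of_a=lambda a: 0.2 * a * a)
print(f"final spin ~ {a_fin:.2f}, mass growth factor ~ {growth:.1e}")
```

With this toy efficiency the spin settles near $a_* \approx 0.85$ regardless of the initial value, and the mass grows by roughly six orders of magnitude over ~20 Eddington times, qualitatively reproducing the convergence behaviour described above; the lower equilibria quoted in the text ($\approx 0.4-0.55$) require the pressure-regulated field strength evaluated at $\tilde r_0 = 1.3\,\tilde r_{\rm ms}$.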
Therefore the resultant spin is consistent with the small unique spin $\approx 0.44$ when the mass accretion rate is regulated by the Eddington value, $\dot m \sim 1$, with the appropriate disc parameters. On the other hand, when the accretion disc is somehow in a super-critically accreting phase, with $\dot m = 10$, the spin converges to the equilibrium value of $\approx 0.96$ within the order of $10^7$ years. Although the equilibrium spin of the super-critical accretion phase is larger than the unique value, it could approach this value during the subsequent sub-critical accretion phase in less than $10^9$ years. The evolution of the black hole mass is not affected by the initial spin parameter (see Fig. 3c). Note that the final mass becomes $10^6$ times larger than the initial mass for $\dot m \gtrsim 1$.

CONCLUSIONS

It has been suggested that the supermassive black hole in the Galactic centre used to be in a nearly critical mass accretion phase for more than the order of $10^8$ years. A possible explanation for such a large mass accretion history is massive star formation in the proximity of the Galactic centre region. During the critical accretion phase, the spin reaches the unique value and the mass becomes $\sim 10^6\,M_\odot$, which is then maintained during the subsequent low-accretion-rate phase. Note that a stochastic mass accretion history may also help to create a moderately spinning massive black hole (King & Pringle 2006). Similarly, black holes in Galactic X-ray sources have been in the nearly critical accretion rate phase for of order $10^8$ years as well, suggesting that their companion stars should be low-mass stars. Because they have reached the quasi-equilibrium state, the limit-cycle activities and also the emergence of jets do not alter their spin evolution. Thus, we conclude that the spin parameter of the supermassive black hole in the Galactic centre has a unique value of $a_* = 0.44 \pm 0.08$. Conversely, the mass of a black hole consistent with the unique spin is $M = (4.2 \pm 0.4) \times 10^6\,M_\odot$. Without detecting the event horizon, we have constrained the mass and spin of the supermassive black hole at the Galactic centre. The method we used here depends entirely on geodesic frequencies that are independent of the distance and viewing angle of a black hole. Once the unique spin parameter of the black hole in the Galactic centre has been confirmed by detection of the event horizon in future observations (e.g., Takahashi 2004), studies of QPOs in other galaxies will open a new window to survey the growth history of massive black holes (Markowitz et al. 2007; Gierliński et al. 2008).
Study of promises for the use of ultrasound in the technology of secondary beneficiation of processing wastes

The results of a study aimed at designing a beneficiation technology for the mine processing wastes of the Voznesenskiy ore district are presented in this paper. By expert appraisals, the amount of tailings in the dumps of the Yaroslavskaya Mining Company is estimated to be more than 30 million tons. Sample testing has shown that the fluorite content varies from 13% to 23%, calcite up to 14%, and zinc in the range of 0.4-0.6%. The research was conducted on tailing samples containing 20.7% CaF2 and 10.2% CaCO3. Mineral grains of the secondary raw material were examined for their surface state, and a package of necessary preliminary operations providing efficient interaction between the material and the flotation reagents was determined. It has been established that extraction of fluorite into concentrates matching existing requirements is possible in principle. The use of ultrasonic treatment of the material in its preparation for flotation is justified by the effective desorption of films of different natures from the mineral particles. The suggested scheme of fluorite flotation involves preliminary treatment of the raw material by ultrasound with removal of the released slime fractions. The optimal operating regime allows obtaining concentrates of higher quality than under the usual technology without ultrasonic exposure. In this process, the extraction of fluorite into a concentrate with 94.47% CaF2 increases to 61.11%, which is 2.96% higher than in the original studies.

Introduction

The major factor in achieving high flotation efficiency when processing low-grade, difficult raw material and processing wastes is the creation of conditions for selective adsorption of reagents on the surface of mineral particles. For finely disseminated ores, which require fine size reduction, the problem of surface cleaning is caused by the presence in the prepared material of a huge amount of slimes with a highly developed surface, which surround particles of productive size and combine with them into non-selective aggregates [1,2]. In the dressing of mine wastes the problem is aggravated, since the mineral grain surfaces carry residues of the reagents used at the primary processing stage, as well as various formations produced by long-term storage in a salt-water environment [3]. To ensure the necessary contact of mineral particles with a reagent, and their further selection, it is essential to supplement the flotation scheme with surface desorption operations. The desorbed chemical and mechanical impurities should then be separated into a distinct product and removed from the scheme, in order to avoid their negative effect on the further stages of the technology.

Materials and methods of research

Desorption can be performed using different methods: mechanical, chemical, physicochemical, and methods based on complex energy impulses. One of the most promising directions is the use of ultrasonic treatment. A wide range of transformations accompanies the ultrasonic treatment of rock slurries. According to the studies of Agranat B. A., Glembotskiy V. A.
and others [4-6], the influence of acoustic vibrations on hydro-mineral mixtures can be multifaceted. The physicochemical characteristics of the liquid phase and the characteristics of the adsorption and hydrate layers on the surface of mineral particles change. Various physicochemical characteristics of the pulp can also be transformed under the effect of ultrasound, e.g. conductivity, oxidation-reduction capability, and water pH. Moreover, the possible destruction of the colloidal structures of carboxyl-containing compounds, which are used in the flotation of the majority of calcium minerals, can be accompanied by an increase in the degree of dispersion of the collectors used, intensification of the action of the flotation reagents, and a decrease in their consumption. Traditional methods of mineral surface cleaning, such as chemical and heat treatment or mechanical attrition, usually do not guarantee the necessary quality of cleaning, especially removal of films and impurities located in microfractures and pores. The use of ultrasound for these purposes appears very promising. The removal of natural films and coatings, together with secondary formations, from the surface of mineral particles improves the contact of flotation reagents with the mineral and supports the activation of adsorption processes [7-9].

Nowadays, the use of processing wastes is the most promising direction for solving the problem of raw material shortages for a range of producers in the mining industry. According to current data, the amount of fluorite ore processing tailings in the tailings dump of the Yaroslavskaya Mining Company (YMC), located in the Voznesenskiy ore district (VOD), Primorskiy region, is estimated to be more than 30 million tons. The fluorite content is within 13-23%, and calcite does not exceed 12-14%. The average carbonate module, which largely determines the amenability of carbonate-fluorite ores to beneficiation, amounts to 1.4-1.6. Moreover, the VOD ores currently available for mining contain no more than 26-29% CaF2 and less than 20-25% CaCO3. All of these ores, being the primary raw materials of the company, are considered complex ores. The main reasons for this are the extremely fine mutual intergrowth of the mineral phases and the presence, on the fluorite particles, of tiny overgrowths of accessory minerals and calcium components whose flotation properties are close to those of fluorite [10]. In the technogenic tailings, fluorite is represented by the most problematic grains, which were not extracted at the stage of primary beneficiation. The content and characteristics of the processing material determine the need for a special approach to technological evaluation.
The possibility of obtaining fluorite concentrates from the processing material was studied on a tailing sample taken from the tailings dump of the YMC dressing plant. Chemical analysis showed the following contents of the main components: CaF2 20.7%, CaCO3 10.2%, SiO2 32.2%, Zn 0.38%. In the course of the research, technological solutions were found that ensure the possibility of fluorite concentration with satisfactory performance, owing to rational selection of the grinding mode and of the pH in the main fluorite flotation, and to the use of selectively acting compositions of modifiers and collectors [11]. In that work, concentrates containing more than 93% CaF2 were extracted at a fluorite recovery of 58-60%. A decrease in the CaF2 content by 1-1.5% makes it possible to boost the recovery (by 10% and more) owing to the redistribution of the fluorite grains most contaminated with impurities. That is why research into additional preparation of the flotation feed, aimed at cleaning the mineral grains and desorbing various chemical, slime, and other coatings from the particle surfaces, deserves attention. Such additional preparation of the surface can be accompanied by a reasonable increase in the selectivity of the desorption processes and a corresponding rise in the quality of the concentrates.

Results and discussion

The study of the optimization of flotation extraction of fluorite from the processing materials with the use of ultrasonic treatment was based on the scheme given in Fig. 1. The ultrasonic treatment was applied to the material at its original feed size, before grinding. The treatment was carried out with the ultrasonic device IL100-6, produced by the Saint-Petersburg company "Ultrasonic equipment INLAB". The ultrasound operating frequency was 22 kHz, and the treatment time was 7 min, chosen on the basis of results obtained earlier in our ore-based research [12]. Desorbed chemical compounds and slime particles were separated into a distinct product and removed from the scheme, because the probability of a negative impact on the further processing is rather high.

The flotation of the material was carried out using TOFA as the collector and a mixture of ammonium fluoride salts and lignosulfonates as modifiers. Depending on the consumption of the main reagents (three modes), concentrates of different quality were extracted. The results of the experiments, given in Fig. 2, show the major influence of ultrasound on the process. After the treatment it was possible to obtain concentrates with a CaF2 content of 93.5-94.47%, against 93.46% in the experiment conducted without preliminary insonation. At the same time, the fluorite recovery grew by 2.27-2.96%. In the experiments with the hardest mode of carbonate depression (mode 3), a concentrate with a CaF2 content of 95.12% was extracted; the recovery of fluorite under these conditions decreased to 46.05%. Taking into account that the current market demand for high-quality fluorite concentrates of the brands FF-95 and FF-97 has grown notably, the result achieved deserves attention. According to the previous research, in the usual flotation mode without the use of ultrasound, the recovery of fluorite into a high-quality concentrate with a CaF2 content of 95.07% accounted for 40.7%. Acoustic effects thus allow the recovery to be increased by more than 5%. Moreover, it is worth mentioning that the experiments with the moderate mode of the collector and modifier mixture (mode 2) resulted in a concentrate with a CaF2 content of 94.47%, which is quite close to the requirements for the concentrate brand FF-95. The recovery of fluorite in this case is reasonably higher (61.11%). The content of silica, which is a harmful impurity limited by the state standard (GOST) and technical specifications, is 1.9-2.3%, which is suitable for the brands FF-92 and FF-92-A.
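The grades and recoveries above are tied together by the standard two-product flotation balance; the short sketch below (our illustration, using the figures quoted in the text) back-calculates the mass yield and the upgrading ratio for each mode. Whether the "concentration ratio" plotted in Fig. 2 is defined exactly as this upgrading ratio is our assumption.

```python
# Two-product flotation balance: recovery eps = gamma * beta / alpha,
# with alpha = feed grade, beta = concentrate grade, gamma = mass yield.
def yield_and_upgrade(alpha, beta, eps):
    gamma = eps * alpha / beta      # fraction of feed mass pulled to concentrate
    return gamma, beta / alpha      # (mass yield, upgrading ratio)

alpha = 20.7                        # % CaF2 in the tailings feed (from the text)
for label, beta, eps in [
    ("mode 2, with ultrasound", 94.47, 0.6111),
    ("mode 3, with ultrasound", 95.12, 0.4605),
    ("no ultrasound",           93.46, 0.5815),
]:
    gamma, ratio = yield_and_upgrade(alpha, beta, eps)
    print(f"{label}: mass yield {gamma * 100:.1f}%, upgrading ratio {ratio:.1f}")
```

For mode 2, for instance, this implies that only about 13% of the feed mass reports to the concentrate while its grade is upgraded roughly 4.6-fold.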
Conclusions

Thus, the use of ultrasound at the stage of preparing processing wastes for flotation, aimed at desorbing the residues of primary-processing reagents and various coatings, with subsequent separation of the slime fraction, ensures effective preparation of the mineral particles for interaction with the collectors.

In the optimum technological mode with preliminary ultrasonic processing of the material, it is possible to obtain a concentrate containing 94.47% CaF2 at a fluorite recovery of 61.11%. The recovery of fluorite into a high-quality concentrate containing 95.12% CaF2 decreases to 46.05%. In the usual technological mode, without the use of ultrasound and under otherwise equal conditions, the extracted concentrate contains 93.46% CaF2 at a recovery of 58.15%.

Fig. 1. The scheme of processing tailings flotation with preliminary ultrasonic desorption of the surfaces remaining from the primary cleaning circuit.

Fig. 2. Comparative figures of fluorite beneficiation from the processing tailings with and without ultrasonic treatment of the material: 1 - concentration ratio; 2 - CaF2 content in the concentrate; 3 - fluorite extraction; 4 - selectivity index of fluorite and calcite.
Use, history, and liquid chromatography/mass spectrometry chemical analysis of Aconitum

Aconitum and its products have been used in Asia for centuries to treat various ailments, including arthritis, gout, cancer, and inflammation. In general, their preparation and dispensing have been restricted to qualified folk medicine healers due to their low safety index and reported toxicity. In the past few decades, official guidelines have been introduced in Asian pharmacopeias to control Aconitum herbal products. However, these guidelines were based on primitive analytical techniques for the determination of the whole of the Aconitum alkaloids and were unable to distinguish between toxic and nontoxic components. Recent advances in analytical techniques, especially high performance liquid chromatography (HPLC) and electrophoresis coupled with highly sensitive detectors, allow rapid and accurate determination of Aconitum secondary metabolites. Reports focusing on liquid chromatography/mass spectrometry analysis of Aconitum and its herbal products are discussed in the current review. This review can be used by health regulatory authorities for updating pharmacopeial guidelines on Aconitum and its herbal products.

Introduction

Aconitum (or monkshood) is a herb native to China and certain parts of Europe [1,2]. It has been used for centuries to treat pain due to arthritis, gout, cancer, inflammation, migraine, and sciatica. Despite such important therapeutic activities, it is highly toxic, and ingestion of the raw material can lead to arrhythmia, heart failure, and even death. Even topical applications can be dangerous, because its toxic alkaloids can be absorbed through the skin [3]. The toxicity of aconite is related to the C19-norditerpenoid ester alkaloids such as aconitine (AC), deoxyaconitine (DA), mesaconitine (MA), hypaconitine (HA), and yunaconitine (YA) [4]. These alkaloids significantly activate sodium channels and cause widespread membrane excitation in cardiac, neural, and muscular tissues [5-8]. Additionally, muscarinic activation may cause hypotension and bradyarrhythmia. Symptoms of Aconitum sp. poisoning include numbness followed by paralysis of the upper and lower extremities. Patients with aconite poisoning due to the consumption of herbal broth containing a large amount of Fuzi usually suffer from cardiovascular symptoms, including chest pain, palpitation, bradycardia, sinus tachycardia, ventricular ectopics, ventricular tachycardia, and ventricular fibrillation [9,10]. The main causes of death from aconite include ventricular arrhythmias and asystole [11]. There is no specific antidote for aconite poisoning, only supportive treatment to restore normal heart function. The supportive treatment of aconite poisoning includes the use of amiodarone and flecainide as antiarrhythmic agents. Intragastric lavage or oral administration of charcoal can decrease alkaloid absorption. A fatal dose can be as little as 5 mL of aconite tincture, 2 mg of pure aconite, or 1 g of the root [12,13]. To decrease its toxicity, aconite must be processed prior to use. To date, more than 70 traditional and modern methods have been applied to decrease the toxicity of aconite roots [13,14]. According to the Chinese Pharmacopeia, only two assays are accepted for the quantitative analysis of alkaloids in Aconitum sp. Based on these two assays, the maximum allowance of alkaloid content, calculated with respect to AC, is 0.15% and 0.20%, respectively.
Any herbal product with a lower concentration of the toxic alkaloids can be used in China, but such regulation is not acceptable elsewhere. Microscopic examination of the processed aconite roots (P-ARs) reveals the presence of gelatinized starch masses, which do not appear in the unprocessed aconite root (unP-AR) samples [15]. The plant is rich in diterpene and norditerpene alkaloids as the major active constituents. These compounds exhibit interesting activity toward voltage-gated Na channels, either as agonists (e.g., AC and MA) or as antagonists (e.g., lappaconitine and 6-benzoylheteratisine) [16-18]. Activities toward certain neuronal receptors were also noted for norditerpene alkaloids such as methyllycaconitine. These alkaloids exhibited selective antagonistic activity on the neuronal nicotinic acetylcholine receptor in the nanomolar concentration range, rendering them potential drug leads in Alzheimer's disease [19-21].

2. Historical glimpse

The name Aconitum comes from the Greek word akonitos, meaning "without struggle" or "without dust," or from the Greek city Acona, where a naturalist in the 3rd century once identified the plant [22]. Other historical sources suggest that the name came from the hill of Aconitus, a hill in Greek mythology where Hercules fought with Cerberus, the three-headed dog that guards the entrance to Hades. Saliva from this creature dripped onto the plants, rendering them extremely poisonous. It is also claimed that the cup-shaped flower made the poisonous cup that Medea prepared for Theseus. The plant played an important role in Roman history, as it is assumed that Nero ascended to the throne after poisoning Claudius by tickling his throat with a feather dipped in monkshood. The emperor Trajan (98-117 AD) banned the growing of this plant in all Roman domestic gardens [23]. One of the most remarkable pieces describing the role played by this plant in ancient Roman society was written by Ovid, who referred to aconite as the "step-mother's poison." In his work Metamorphoses, he described a certain period of Roman history with the following lines: "Guest was not safe from host, nor father-in-law from son-in-law; even among brothers it was rare to find affection. The husband longed for the death of his wife, she of her husband; and murderous step-mothers brewed deadly poisons, and sons inquired into their fathers' years before the time." Shakespeare highlighted the potency of this herb in his play Romeo and Juliet, in which Romeo commits suicide using this poison [22]. In addition, in Macbeth, the witches' brew calling for "tooth of wolf" refers to monkshood. Certain species are also known as wolfsbane, because arrows dipped in the poison kill wolves. Until 1930, aconite was used in the USA and Canada as a painkiller, diuretic, and diaphoretic. It was used externally in the form of ointments to treat rheumatism, neuralgia, and lumbago, and as a tincture to lower the pulse rate, relieve fever, and treat cardiac failure. Reported cases of toxicity led to the ban of its use in conventional medication.

3. Liquid chromatography/mass spectrometry techniques for analyzing Aconitum-containing samples

Modern analytical techniques have been employed in the last two decades to evaluate herbal medicines sold in Asian markets, aiming to improve their safety and reduce adulteration [24-26].
Aconitum and its products are strictly monitored due to their extremely toxic components, especially the diester diterpenoid alkaloids (DDAs: AC, MA, and HA). Initially, alkaloidal titration methods were used to monitor these alkaloids. However, compared with the high-performance liquid chromatography (HPLC) method, the alkaloidal titration method estimates not only the toxic DDAs, but also the monoester diterpenoid alkaloids (MDAs), unesterified alkaloids, and lipo-alkaloids (LPAs) [27], which are less toxic than DDAs and may have therapeutic potential, especially for inflammatory disorders such as arthritis [28]. Therefore, HPLC has recently become the popular method for detecting DDAs and MDAs [MDAs such as benzoylaconine (BAC), benzoylmesaconine (BMA), and benzoylhypaconine (BHA)]. The degradation of DDAs into MDAs was noted in methanol and ethanol [29,30], especially for AC. Thus, using ethanol or methanol as the extraction solvent or mobile phase in HPLC may lead to inaccurate results. Hydrolysis of DDAs into MDAs is observed especially with a long decoction time (> 120 minutes) [28]. This degradation phenomenon was detected upon using water as the decoction solvent, as well as cow urine and cow milk (Tables 1 and 2) [31]. Furthermore, the stability of DDAs was found to be highly pH dependent, and they were stable only in the pH range of 2.0-7.0. The relative concentrations of DDAs, especially AC and MA, decrease significantly at pH > 10 [29]. The evidence showed that storing DDAs at ambient temperature or at 4 °C may cause degradation, and samples are suggested to be kept at -20 °C to decrease the degradation rate [32]. To reduce decomposition of DDAs, chloroform and dichloromethane were found to be the most appropriate extraction solvents [29,33]. Although 1% hydrochloric acid as the extraction solvent could improve the extraction yield, degradation of DDAs occurred [34]. To shorten the extraction time, an ultrasonic bath [35] or a microwave-assisted extraction procedure [36] was applied. The choice of the mobile phase plays an important role in the resolution of alkaloidal peaks on HPLC. Acetonitrile-aqueous 0.1% formic acid [37], acetonitrile-acetic acid solution [38], or acetonitrile-ammonium bicarbonate buffer at pH 10 ± 0.5 [39,40] resulted in excellent peak resolution in several studies. Recent studies introduced new protocols that did not require sample preparation, with the herbal products analyzed in their crude forms. DART-MS (direct analysis in real time mass spectrometry) plus a multivariate data analysis method was utilized as a solvent-free method to analyze samples directly in their native condition, without the need for tedious sample manipulation and preparation [41]. Moreover, minimal amounts of sample were analyzed with high accuracy with the introduction of new methods such as the ultrahigh pressure liquid chromatography with linear trap quadrupole and Orbitrap mass spectrometry system (UHPLC-LTQ-Orbitrap-MSn) [42]. Aconite is usually combined with other Chinese herbs by practitioners of traditional Chinese medicine (TCM).
In order to determine the recovery of toxic alkaloids in these formulas, many liquid chromatography/mass spectrometry (LC/MS)-related techniques were used, such as ultraperformance liquid chromatography-electrospray ionization/mass spectrometry (UPLC-ESI/MS) [43], ultraperformance liquid chromatography-photodiode array detection (UPLC-PDA) [44], UPLC/MS [45], UPLC-quadrupole time-of-flight mass spectrometry (Q-TOF-MS) [46], ultrafast liquid chromatography-ion trap/time-of-flight mass spectrometry (UFLC/MS-IT-TOF) [47], and UPLC-ESI-Q-TOF-MS [48]. In recent years, various aconite-related medications were introduced into the Chinese market. In order to speed up the quality control process, rapid resolution liquid chromatography coupled with tandem mass spectrometry (RRLC-MS/MS) was applied to analyze the toxic alkaloids in products such as the Shen-Fu formula. This method provides an excellent limit of quantification (LOQ) (7-50 pg/mL) as well as limit of detection (LOD) (2.3-17 pg/mL) [49], which are much more sensitive than those of LC/MS. There are many Aconitum sp. in the world (> 75 species); some of them are used in folk medicine, especially in Asia. The processing methods differ from country to country and from use to use. Determination of fingerprints for raw herbs and medicinal formulas is essential for the establishment of appropriate quality control procedures. Certain studies introduced similarity evaluation, hierarchical cluster analysis, or principal component analysis (PCA) to evaluate the similarity and variation of aconite samples [37,50]. Partial least squares-discriminant analysis (PLS-DA) and orthogonal projection to latent structures analysis were extremely useful in the classification of metabolic phenotypes and in the identification of different metabolites [51]. This review covers recent attempts to establish analytical protocols developed to analyze products containing alkaloids of the raw herb or P-ARs using HPLC/MS-related techniques. Significant accomplishments have been reported, showing in detail the optimum procedures for evaluating the alkaloidal contents of aconite samples. Most of the important findings were reported in the past 15 years, starting from the year 2000, and this review summarizes important studies using different HPLC/MS-related laboratory equipment in chronological order.

Xie et al [39] developed an efficient protocol using HPLC for the separation of six aconite alkaloids, including AC, MA, HA, BAC, BMA, and BHA, in aconite roots and 12 related proprietary Chinese medicines (Tables 1 and 2). They found that ethyl acetate was the optimum solvent for extracting alkaloids from the basified solution. They also evaluated the effect of pH on the separation of the alkaloids and found that all peaks were separated at pH above 9.95, the optimum pH value for separation being 10.0 ± 0.2. The effect of using different concentrations (5mM, 10mM, or 20mM) of ammonium carbonate, the mobile phase buffer, was studied. It was found that the optimum concentration was 10mM for excellent peak separation and background noise reduction. Alkaloidal peaks were identified by comparing their retention times and UV spectra with the reported data. This method was applied for the identification of 12 aconite-root-containing Chinese medicines, and two P-AR and two unP-AR samples. As expected, the P-ARs showed lower levels of the toxic alkaloid AC.
However, the concentrations of the other toxic alkaloids, HA and MA, were significantly higher than that of AC in the P-AR samples, suggesting that the use of the AC concentration as the only marker for toxicity reduction after processing is not enough for general safety guidelines.

Quantitation of AC, MA, and HA in different P-AR and unP-AR samples was achieved using LC (Tables 1 and 2) [52]. The effect of the extraction solvent was studied through the use of different concentrations of ethanol (50%, 75%, or 90%). Based on the results, 75% ethanol showed the highest level of HA recovery. In addition, the effects of the extraction duration (30 minutes or 60 minutes) and the number of extraction cycles (3 or 5) were studied; longer extraction times or more extraction cycles showed no advantage in terms of alkaloid recovery yields. In the optimization steps of the HPLC protocol, the effect of pH was evaluated, showing that all peaks were efficiently separated at pH above 9.5. The LOD for all alkaloids was 15 ng. Marked differences in alkaloidal contents were found: the P-AR samples showed lower levels of the three alkaloids compared to those of the unP-AR samples. However, a variation among the P-AR samples was also observed, which might be attributed to the use of different aconite species, geographical sources, or processing methods. It is noteworthy that, despite the fact that the AC concentration was lower than the limit established by the Chinese Pharmacopeia, the concentrations of the other alkaloids were higher than that of AC in some samples, suggesting that the detection of multiple alkaloidal markers is needed to clarify the toxicity level of aconite preparations.

The variation in alkaloidal contents among four different species of Aconitum (Aconitum carmichaelii, Aconitum pendulum, Aconitum hemsleyanum, and Aconitum transsectum) was studied using HPLC [53]. Different solvents were tested as the mobile phase to separate BMA, MA, AC, HA, and DA in the samples of the four Aconitum sp. Methanol-water-chloroform-triethylamine/0.1% trifluoroacetic acid-tetrahydrofuran and methanol-water-acetonitrile/acetonitrile-ammonium hydrogen carbonate buffer were evaluated, and the use of the latter solvent system resulted in the best separation. Separation was optimized at 35 °C using gradient elution. The LOD of the developed method was ≤ 30 ng/mL, with a recovery percentage of > 94.65% for all five alkaloids. The results revealed a significant variation in the alkaloidal contents of the four species, highlighting the importance of using the correct species, with known alkaloidal contents, in herbal preparations intended for consumer use.

Tang et al [54] developed a protocol for the determination of AC, HA, and MA in a variety of matrixes, including raw materials, single-ingredient powder extracts, multi-ingredient powder extracts, pills, and capsules, using HPLC coupled with UV detection, with the results confirmed using tandem mass spectrometry. The authors used liquid-liquid extraction followed by solid-phase extraction to remove interferences prior to LC separation. The extraction solvent had a significant effect on the alkaloidal yields, with diethyl ether solubilizing more neutral alkaloids compared to dichloromethane. The use of a mixture of diethyl ether and dichloromethane resulted in a cleaner background but a lower yield. It was also found that the longer the extraction time under alkaline conditions, the more susceptible the alkaloids become to hydrolysis. The effect of the mobile phase additives on the efficiency of separation of the alkaloids was studied, which showed that the optimum separation was achieved using 20mM triethylamine (TEA) and 5% methanol. The results of LC-UV and LC/MS/MS were in close agreement.
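Recovery figures like the > 94.65% quoted above are typically established by spiking experiments; the sketch below (ours, with made-up numbers) shows the standard calculation.

```python
# Standard spike-recovery calculation used in method validation:
# recovery (%) = (found in spiked sample - found in unspiked sample)
#                / amount added * 100
def recovery_percent(found_spiked, found_unspiked, amount_added):
    return (found_spiked - found_unspiked) / amount_added * 100.0

# Hypothetical example: 10 ng/mL of AC spiked into an extract that already
# contained 5.1 ng/mL; 14.8 ng/mL measured after spiking.
print(f"recovery = {recovery_percent(14.8, 5.1, 10.0):.1f} %")   # -> 97.0 %
```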
The stability of the DDAs was studied using HPLC/ESI/MSn [29]. Information on stability is crucial for the estimation of the herbal products' biological activity as well as their toxicity. The stability of DDAs in different solvents was investigated, the results showing that these alkaloids are stable in dichloromethane but not in methanol. It was found that when ether was used as the extracting solvent, the AC concentration declined to 51.8%, but the concentrations of the other alkaloids, MA and HA, did not change by more than 10%. This finding suggests that the extraction process should be carried out as soon as possible to avoid decomposition of AC. Different substituents at the nitrogen atom of DDAs led to different rates of decomposition. Moreover, the effect of pH was evaluated, which showed that the three alkaloids AC, MA, and HA were stable in the pH range of 2.0-7.0. If the pH values of the buffer solutions were in the range of 7-10, the relative concentrations of AC and MA decreased significantly. If the pH was above 10, the three alkaloids decomposed. The effect of storage on the AC concentration was studied, which showed that storing aconite at pH 8 and 25 °C for 6 months resulted in a 50% reduction of the AC concentration.

An HPLC method was developed for estimating the quantity of BMA, as the main constituent of the aconite alkaloids, in Radix Aconiti Lateralis Preparata (Fuzi, aconite roots; Tables 1 and 2) [30]. BMA was reported to possess potent pharmacological activities, such as analgesic and anti-inflammatory activities. Several extracting solvents were evaluated for their efficiency in extracting BMA, revealing that the optimum solvent was 50% ethanol. In the optimization process for the development of the HPLC analytical method, it was found that the use of acetonitrile and phosphoric acid (0.1%) with triethylamine as the mobile phase improved peak symmetry. Lowering the pH below 2.6 or raising the pH above 4.9 resulted in a longer elution time, so the elution was carried out at pH 3.0. The LOD for BMA was found to be 8 ng with an injection volume of 20 µL. The study also concluded that significant variations in BMA concentrations were observed among different batches of the P-ARs and among different proprietary products, indicating the importance of strict quality control for any herbal product containing aconite.

The use of oxidatively damaged endothelial (ECV304) cells along with LC-MS was applied for the detection of bioactive alkaloids of Aconitum szechenyianum [55]. The developed system depends on the interaction of the alkaloidal extract with endothelial cells. The extract was subjected to oxidative stress using H2O2, followed by the aggregation of cell membrane proteins through changing the pH to 4.0, to release the specifically bound components from the cell receptors. Separation and analysis of the alkaloidal content were achieved using HPLC, and characterization of these components was performed by LC-MS.
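If one assumes first-order degradation kinetics (an assumption on our part; the stability study reports only the endpoint), the 50% loss of AC over 6 months at pH 8 and 25 °C quoted above translates into a rate constant as follows:

```python
import numpy as np

# First-order decay: C(t) = C0 * exp(-k t). A half-life of ~6 months
# (taking 1 month = 30 days, our simplification) gives k directly.
t_half_days = 6 * 30.0
k = np.log(2) / t_half_days                                  # per day
print(f"k ~ {k:.4f} /day")
print(f"fraction remaining after 30 days ~ {np.exp(-k * 30.0):.2f}")  # ~0.89
```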
In the obtained fingerprint of A. szechenyianum, five peaks were detected, and by studying the fragmentation patterns of these compounds, two compounds were identified: MA and AC.

A matrix-assisted laser desorption ionization mass spectrometry (MALDI-MS) method was developed for the qualitative profiling of the P-ARs of the Chinese herbal medicine A. carmichaelii, Fuzi [32]. The use of the developed protocol eliminated the need for sample preparation, and the method was applied directly to the powdered roots. The results of the MALDI-MS experiments were compared with those of LC-MS, and the two were in agreement with each other. The authors used LC-MS for the characterization of the Fuzi components, resulting in the detection of 60 peaks, which were divided into three groups: one group with retention times below 8 minutes and molecular weights (MW) of 400-500, a second group with retention times between 8 minutes and 12 minutes and MW of 500-800, and a third group with retention times of more than 12 minutes and MW above 800. The effect of storing the samples at different temperatures was evaluated by measuring the degradation rate in a standard solution of aconite upon storage at -20 °C, 4 °C, and room temperature. At -20 °C, the standard solution showed the lowest degree of degradation, suggesting the importance of storing aconite samples at a low temperature. This study concluded that the tested batches of Fuzi showed significant variation in the concentrations of DDAs, as demonstrated by MALDI-MS and LC-MS.

An HPLC coupled with a diode array detector (HPLC-DAD) protocol was developed for the identification and quantification of the three major aconite alkaloids, AC, MA, and HA, in the roots of A. carmichaelii [56]. The contents of LPAs in the roots of A. carmichaelii were evaluated using liquid chromatography atmospheric-pressure chemical ionization mass spectrometry (LC-APCI-MSn), indicating the presence of 26 LPAs. These compounds are interesting from different perspectives, because they possess several pharmacological activities and many AC-type alkaloids are converted to LPAs in the intestine. The authors showed that LPAs can be detected in both the P-AR and unP-AR samples. By contrast, the three major AC-type alkaloids, AC, MA, and HA, could not be detected in the P-AR samples. The major alkaloid in the unP-AR sample was MA. In this study, the anti-inflammatory activity of the P-AR and unP-AR samples was evaluated using a COX-inhibitory assay. Both samples showed moderate COX-2 inhibitory activity, with the P-AR samples showing a slightly more potent effect.

A detailed study on the fragmentation patterns of aconite alkaloids was conducted by Yue et al [57]. They identified 111 compounds out of 117 from A. carmichaelii using HPLC/ESI-MS/MSn and Fourier transform ion cyclotron resonance/ESI-MS in the positive ion mode. Among the identified alkaloids, 11 MDAs, 10 DDAs, and 81 LPAs, as well as novel alkaloids, including one MDA, two DDAs, and 48 LPAs, were detected in A. carmichaelii. Moreover, one DDA, seven LPAs, and two alkaloids with small MWs that possess the C19-norditerpenoid skeleton were reported in A. carmichaelii for the first time.

HPLC was also applied for the analysis of an Ayurvedic herbal product, Mahamrutyunjaya Rasa, which is composed of Aconitum ferox, Solanum indicum, Piper nigrum, and Piper longum in a ratio of 1:1:1:1 [33]. The marker compounds for these components are AC, solanine, and piperine. The effect of the extracting solvent on alkaloidal recovery was studied by comparing the alkaloidal yields with the use of chloroform, ethyl acetate, or diethyl ether; the optimum solvent for extraction was chloroform. In the optimization process for developing the HPLC method, the pH showed a significant effect on the separation of the three marker compounds, as well as on their separation from other interfering chemicals. The optimum separation was achieved at pH 7.5-8.0. The composition of the mobile phase was studied by evaluating the separation of the marker compounds using acetonitrile, KH2PO4 buffer, and methanol at different ratios, including 65:15:15, 60:25:15, and 55:35:15 (v/v), showing the best separation at 60:25:15. The LOD and LOQ for AC were 0.210 µg/mL and 0.693 µg/mL, respectively. The results indicated that the concentration of AC varied across the tested samples and that the relative standard deviation (RSD) values were > 10%. These findings highlight the importance of strict regulation during the preparation of Ayurvedic herbal products containing aconite.
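The LOD:LOQ pair just quoted (0.693/0.210 ≈ 3.3) matches the common ICH-style convention LOD = 3.3σ/S and LOQ = 10σ/S, where σ is the residual standard deviation of a calibration line and S is its slope. A minimal sketch with hypothetical calibration data:

```python
import numpy as np

# Hypothetical calibration data: concentration (ug/mL) vs. peak area.
conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0])
area = np.array([51.0, 99.0, 204.0, 498.0, 1003.0])

slope, intercept = np.polyfit(conc, area, 1)
sigma = np.std(area - (slope * conc + intercept), ddof=2)   # residual SD

lod = 3.3 * sigma / slope      # limit of detection
loq = 10.0 * sigma / slope     # limit of quantification
print(f"LOD ~ {lod:.3f} ug/mL, LOQ ~ {loq:.3f} ug/mL, ratio {loq / lod:.2f}")
```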
An HPLC-DAD method assisted by similarity and hierarchical clustering analysis was applied for the identification of the roots of four species of Aconitum [58]. The root fingerprints were established and compared. The method was validated, showing its potential in differentiating the roots of Aconitum kusnezoffii (AKR) from those of other species. The effect of the extraction method on the yields of MA, AC, and HA was evaluated. The samples were extracted using an ultrasonic bath with different volumes of diethyl ether at different extraction times; the optimum results were obtained using 10 mL of ether for 30 minutes. These results were compared with the results of the normal extraction procedure using a percolator. Soaking the roots for 24 hours yielded results similar to ultrasonic extraction for 30 minutes, and thus ultrasonic extraction was selected as the optimum extraction method. An isocratic mobile phase composed of acetonitrile-0.25% glacial acetic acid (60:40, v/v), with the pH adjusted to 10.5 using ammonia, was selected for elution. Ten samples of AKR grown in different cultivated or wild regions, in various cultivation environments, or harvested in different years were analyzed and their fingerprints compared. Ten peaks were selected as common peaks in all samples. Careful analysis of these peaks revealed that the relative peak areas varied dramatically, but the relative retention times were consistent for all 10 samples. When the chromatographic profile of Aconitum karacolicum was compared with the AKR profile, five common peaks were detected. When the chromatographic profiles of the morphologically similar species Aconitum austroyunnanense and Aconitum contortum were compared with the chromatographic profile of AKR, significant differences were observed, as illustrated by the absence of certain characteristic peaks in the profiles of A. austroyunnanense and A. contortum. These findings were confirmed using hierarchical clustering analysis, which showed that the samples could be divided into three clusters: one cluster included all AKR samples, another contained all A. karacolicum samples, and the last contained the A. austroyunnanense and A. contortum samples, suggesting the reliability of this method in differentiating AKR samples from other closely related aconite species.
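A minimal sketch of that kind of clustering on relative-peak-area fingerprints (the matrix below is hypothetical, standing in for the ten common chromatographic peaks described above):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical relative-peak-area fingerprints (rows = root samples).
fp = np.array([
    [1.0, 0.8, 0.6, 0.4],   # A. kusnezoffii
    [0.9, 0.9, 0.5, 0.5],   # A. kusnezoffii
    [0.3, 0.2, 1.1, 0.9],   # A. karacolicum
    [0.2, 0.3, 1.0, 1.0],   # A. karacolicum
    [0.6, 0.1, 0.1, 0.2],   # A. austroyunnanense / A. contortum
])

# Average-linkage clustering on correlation distance between fingerprints.
Z = linkage(fp, method="average", metric="correlation")
print(fcluster(Z, t=3, criterion="maxclust"))   # expect three clusters
```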
Lu et al [35] investigated the effect of different processing methods on the content of toxic alkaloids in A. carmichaelii (Fuzi; Tables 1 and 2). They processed the samples according to the methods reported by Chinese herbal medicine practitioners. The contents of AC, MA, and HA in the tested samples (84 samples in total) were evaluated by HPLC-DAD and LC-MS. The efficiency of different solvents [methanol, methanol-water (1:1, v/v), and water] in extracting the target alkaloids was evaluated, and methanol-water (1:1, v/v) was found to be the optimum solvent. The effects of the extraction time, volume, and number of repetitions on the alkaloidal yields were evaluated. To extract 0.5 g of the powder, ultrasonication for 60 minutes using 5 mL of methanol-water (1:1, v/v) was required to achieve 98% recovery of the three alkaloids. The optimum mobile phase was composed of acetonitrile and ammonium bicarbonate-ammonium hydroxide, with the pH adjusted to 10. Different wavelengths, 220 nm, 232 nm, and 240 nm, were evaluated for the detection of the three peaks, and 240 nm was found to be the optimum wavelength. The detection limits for AC, MA, and HA were found to be 0.9 mg/kg, 0.6 mg/kg, and 1.3 mg/kg, respectively. The quantitation limits were estimated to be 3.5 mg/kg for AC, 2.2 mg/kg for MA, and 4.8 mg/kg for HA. The results were in agreement with previous reports, suggesting that processing of aconite roots significantly reduces their toxicity, as the sum of the toxic alkaloids was 3.91-34.80% of the original value in raw Fuzi.

An HPLC/ESI-MSn method was developed for the quantification of LPAs in Radix Aconiti, Radix Aconiti Kusnezoffii, and Radix Aconiti Lateralis Preparata [59]. Different solvents were evaluated, and methanol was found to be the optimum mobile phase for the highest resolution of the LPA peaks. The developed method was applied for the analysis of the three herbs, resulting in the identification of 32 alkaloids based on their fragmentation pathways. The average recovery percentages of the alkaloids were 91.1-105.9%.

The concentrations of AC, MA, and HA in A. carmichaelii, A. pendulum, AKR, Aconitum taipeicum, and A. szechenyianum were determined using an efficient HPLC method (Tables 1 and 2) [40]. The extraction process was optimized by an L16(4^5) orthogonal test and univariate methods. These methods indicated that the optimum extraction of the alkaloids can be achieved by refluxing the sample three times in six volumes of acidic alcoholic solution for 1 hour. The average recovery rates for MA, AC, and HA ranged from 99.49% to 101.9%, from 101.2% to 103.1%, and from 96.62% to 98.43%, respectively. The mobile phase was optimized after testing the effects of acetonitrile-0.2% acetic acid (adjusted to pH 6.25 with triethylamine), acetonitrile-phosphate buffer (pH = 8.67), acetonitrile-ammonium bicarbonate buffer (pH = 8.50-9.00), and methanol-ammonium bicarbonate buffer (pH = 8.00-10.00) on peak resolution. The best separation was obtained using acetonitrile-ammonium bicarbonate buffer at pH 9.5.

Aconitum-type alkaloids were analyzed in one famous Chinese herbal formula, Yin Chen Si Ni Tang, which is used for the treatment of liver disorders and jaundice [48]. This formula contains Artemisiae scopariae (Yinchenhao), Radix Aconiti Lateralis Preparata (prepared Fuzi), Rhizoma Zingiberis (Ganjiang), and Radix et Rhizoma Glycyrrhizae Preparata Cum Melle (prepared Gancao).
Several components in this formula, including flavonoids and coumarins, hindered the analysis of the Aconitum-type alkaloids. The developed UPLC-ESI-Q-TOF-MS method, along with the post-acquisition data processing software MetaboLynx XS, succeeded in the identification of the Aconitum-type alkaloids in Yin Chen Si Ni Tang. Using the developed method, 62 ions were assigned to Aconitum-type alkaloids and identified tentatively by comparing their accurate masses and fragments with those of authentic standards, or by MS analysis and retrieval of the reference literature.

Functionalized analysis was applied for the detection of Aconitum-type alkaloids from A. carmichaelii using vascular endothelial growth factor receptor cell membrane chromatography with LC-MS [60]. This method depends on the detection of the inhibitory effect of the target compounds on the vascular endothelial growth factor receptor, thus predicting their potential as future cytotoxic compounds. Using this protocol, fractions separated by the vascular endothelial growth factor receptor cell membrane chromatography column (the 1st dimension) were transferred and adsorbed onto an enrichment column, and then sent to the LC-MS system (the 2nd dimension) for separation and preliminary identification. The results indicated that the active compounds of A. carmichaelii were MA, AC, and HA.

The efficiency of alkaloidal titration, the method recommended by the Chinese Pharmacopeia for the quality control of herbal products containing aconite, was compared to an HPLC method developed for the identification of MA, AC, and HA in commercial samples of P-ARs [27]. The results showed that no toxic alkaloids were detected in any of the commercial samples, indicating that the processing method was efficient in removing the toxic alkaloids from the samples. The validity of the method was demonstrated by subjecting the samples to in vivo tests, which showed no signs of toxicity. When the results of the HPLC method were compared to those of the alkaloidal titration method, a significant discrepancy was observed: the alkaloidal titration method indicated that the samples still contained 0.2% alkaloids. The main drawback of the alkaloidal titration method is its lack of specificity for the toxic alkaloids, because it estimates not only the toxic DDAs, but also the MDAs, unesterified alkaloids, and LPAs. These results suggest the importance of using different methodologies for estimating the toxicity of herbal products containing Aconitum-type alkaloids.

An HPLC method was developed for the analysis of AC, MA, and HA in the Chinese herbs Caowu (CW) and Chuanwu (CHW) (Tables 1 and 2) [38]. The separation of these alkaloids was highly affected by the concentration of triethylamine phosphate in the buffer solution, and the best separation was achieved using 25mM triethylamine phosphate. The average recovery rates of AC, MA, and HA were found to be 91%, 89%, and 87%, respectively. The concentrations of the three alkaloids were lower in the processed CHW than in the other processed samples. The effects of boiling the raw herbs in water for different periods of time were also studied, which showed that AC and MA disappeared after boiling in water for 150 minutes. However, HA was found to survive the heating process, suggesting its importance as a marker for herbal products containing aconite. The validity of the HPLC method was confirmed by comparing its results with the results of an automated analytical system (HPLC) and ESI/MS/MS.
The results were comparable, suggesting the future potential application of the developed method in investigating the quality of herbal products containing aconite.

An HPLC–Q-TOF–MS method was developed for the analysis of alkaloids in unprocessed Radix Aconiti and Radix Aconiti Preparata [61]. The effects of the extraction method (soaking or ultrasonic bath), extracting solvent (50% ethyl acetate–isopropanol, 80% ethyl acetate–isopropanol, ethanol, methanol, and acetone), solvent volume (50, 100, and 200 mL), extraction time (30, 60, or 120 min), and number of extractions (once, twice, or three times) were evaluated. The optimum result was obtained using the following conditions: 5.0 g of the ground powder was soaked for 10 minutes in ammonia water (5.0 mL) at pH 10, and the mixture was then extracted with a 100 mL mixture of ethyl acetate–isopropanol (1:1) by ultrasonication (30 minutes). The optimal mobile phase was acetonitrile and 0.1% v/v glacial acetic acid. The developed method was applied for the analysis of the P-AR and unP-AR samples, and the obtained peaks were identified as AC alkaloids. The detected peaks were divided into three groups: (1) alkaloids with MW 400–500, which were named nonester alkaloids; (2) alkaloids with MW 500–800, which were assigned as DDAs and MDAs; and (3) alkaloids with MW > 800, which were identified as LPAs.

A rapid method was developed for the analysis of YA in P-AR and unP-AR samples as well as in TCM preparations containing aconite herbs (Tables 1 and 2) [36]. YA is often ignored in the quality control measures of herbal preparations containing aconite despite its reported toxicity. To detect this alkaloid, a UHPLC–MS/MS method was developed, which was able to detect YA as well as AC, MA, HA, BAC, BMA, and BHA. Microwave-assisted extraction was utilized to extract the target alkaloids; the extraction solvent was 50% methanol (containing 2.5% formic acid) and the irradiation power was 420 W for 1 minute. Thirty-one samples were analyzed, and the contents of the seven alkaloids were determined. Alarmingly, the content of YA varied significantly in some of the evaluated samples, from 0.015 mg/g to 10.41 mg/g. A concentration of 10.41 mg/g is toxic and should be controlled, underlining the importance of determining the concentration of YA in aconite-containing herbal products.

A UHPLC-LTQ-Orbitrap-MSn method was developed for the analysis of DDAs in A. carmichaelii [42]. Using the developed method, the authors were able to detect or characterize 42 DDAs. The fragmentation patterns of the major diagnostic alkaloids, including AC, MA, and HA, were investigated, and 23 new compounds were suggested, including 16 esterified DDAs with short fatty acid esters along with four N-dealkyl-type DDAs. The authors showed the advantages of using UHPLC with its small-particle-size stationary phase (1.7 µm) in comparison with conventional HPLC (5.0 µm), resulting in improved resolution and shorter analysis time.

The safety of Xiaohuoluo pill, a TCM used in mainland China to treat wind cold damp impediment, limb pains, and numbness, has been the subject of recent investigations (Tables 1 and 2) [37]. This TCM is composed of Radix Aconiti Preparata and Radix Aconiti Kusnezoffii Preparata as the main herbs, accounting for 42% of the entire prescription. It is sold in herbal drug markets and produced by several suppliers without extensive quality control measures.
An efficient UPLC–ESI–MS method was developed for the rapid analysis of the Xiaohuoluo pill, and the results of analyzing different samples were evaluated using chemometric analysis by PCA and orthogonal projection to latent structures discriminant analysis. In the process of developing the analytical methods, it was found that the positive ion mode response was much higher than the negative ion mode response for MA, AC, HA, BMA, BHA, and BAC, which might be attributed to the ionization of the nitrogen atom in the alkaloids. The optimum mobile phase was found to be acetonitrile–water containing 0.1% formic acid (35:65, v/v) for the best resolution and peak shapes. Using the developed method, the lower LOQs for MA, AC, HA, BMA, BAC, and BHA were found to be 1.41 ng/mL, 1.20 ng/mL, 1.92 ng/mL, 4.28 ng/mL, 1.99 ng/mL, and 2.02 ng/mL, respectively. Recovery percentages of these alkaloids ranged from 99.7% to 101.7%. The developed method was applied for the analysis of different samples. The results indicated that, in the Xiaohuoluo pill, concentrations of the DDAs (MA, AC, and HA) were clearly lower than those of the MDAs (BMA, BAC, and BHA), indicating the potent effect of herbal processing in changing the alkaloidal concentrations. The quantitative determination of alkaloids in the Xiaohuoluo pill indicated that MA, AC, and HA concentrations were below the limits set by the Chinese Pharmacopeia. These results suggested that the studied Xiaohuoluo pill is safe for use if the indicated dosage and regimen are followed.

An ultra-performance liquid chromatography coupled with quadrupole time-of-flight high-definition mass spectrometry (UPLC–Q-TOF-HDMSn) method was developed for the analysis of the crude lateral roots of A. carmichaelii and three P-AR products, Yanfuzi, Heishunpian (HSP), and Baifupian (BFP), which are used by TCM practitioners [62]. The method utilized PCA to establish the differences between the metabolic profiles of P-AR and unP-AR samples. The authors were able to select 19 metabolites as biomarkers, and they detected the changes in their concentrations as a result of processing. The results indicated that processing was effective in decreasing the concentrations of DDAs. Concentrations of AC, MA, HA, DA, and 10-OH-mesaconitine in HSP and BFP were significantly decreased, while those of these alkaloids increased in Yanfuzi. This finding suggested that, despite the widely accepted assumption that processing is highly effective in decreasing the toxicity of herbal products containing aconite, P-AR products should be carefully analyzed.

An HPLC–ESI–MSn method was developed for the identification of alkaloids in crude and processed A. carmichaelii [34]. The alkaloids were extracted from the herb using 1% (v/v) hydrochloric acid, which extracted most of the alkaloids. It was found that the addition of ammonia to the mobile phase suppressed peak tailing; BAC and BMA could not be separated if the ammonia concentration was less than 1%. Application of the developed method to the analysis of A. carmichaelii led to the identification of 48 AC-type alkaloids by studying their MSn spectral data. The chromatograms of the crude and processed samples were compared, which indicated that the contents of MDAs increased after processing, while the concentrations of DDAs decreased.

An LC–MS method was developed for the analysis of alkaloids in the processed Fuzi decoctions, Baifupian and Heishunpian (Tables 1 and 2) [49].
During method development, the effects of the mobile phase on the separation of the marker alkaloids were investigated, which showed that the best peak shape and resolution could be achieved using a mixture of acetonitrile and an aqueous 0.1% formic acid solution. Seven alkaloids were detected, including higenamine, BHA, BMA, BAC, AC, HA, and MA. The LOQs were 7.80 pg/mL for higenamine, 25.00 pg/mL for BHA and BAC, 10.00 pg/mL for AC and MA, and 50.00 pg/mL for BMA and HA. The LODs ranged from 2.30 pg/mL to 17.00 pg/mL for the target alkaloids. Application of the method for differentiating between Baifupian and Heishunpian decoctions revealed significant variation in the alkaloidal contents among these decoctions. As demonstrated in many other studies, concentrations of certain toxic DDAs decreased significantly with processing; however, this trend was not universal for all DDAs. Contents of AC and MA were much lower than those of BAC and BMA in the Heishunpian and Baifupian decoctions. By contrast, the concentration of HA in the Heishunpian and Baifupian decoctions was higher than that of BHA. The analysis also indicated that the concentrations of BHA, HA, and BAC in the Heishunpian decoction were higher than those in the Baifupian decoction, whereas the contents of MA and BMA in the Heishunpian decoction were lower than those in the Baifupian decoction. These findings supported the notion that variation in the processing protocols can lead to significant variation in alkaloidal concentrations.

A detailed investigation of the effect of different processing techniques on the alkaloidal content of Radix Aconiti was achieved using UPLC–ESI/MSn [50]. After establishing the fingerprints of the P-AR and unP-AR samples, similarity evaluation, hierarchical cluster analysis, and PCA were performed to evaluate the similarity and variation of the samples. The authors processed the samples according to the Chinese Pharmacopeia, and they labeled the P-AR samples as the qualified Radix Aconiti and the unP-AR samples as the unqualified Radix Aconiti. The total ion chromatograms of the P-AR and unP-AR samples showed significant variations, especially in the region of the DDAs. The content of MDAs was higher in the P-AR samples than in the unP-AR samples, and BMA was the most abundant compound in the P-AR samples. Because of the concentration differences between DDAs and MDAs, along with the sensitivity limitations of the spectroscopic techniques, the authors suggested that quantitative determination of MDAs can be achieved using UPLC–UV, while DDAs can be detected using UPLC–ESI/MS. Contents of LPAs decreased significantly during the first hours of processing and then remained constant. The initial decline in concentration was attributed to the hydrolysis of LPAs under the processing conditions, while the subsequent stability was attributed to the reaction of MDAs, DDAs, and fatty acids forming LPAs. PCA was used for the analysis of the main alkaloidal markers affecting the quality of the samples. Fingerprints of the nine analyzed samples were obtained, and the characteristic peaks (39 common peaks from the total ion chromatograms of UPLC–ESI/MSn and 34 common peaks from the UPLC–UV chromatograms) were identified.
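The PCA-based fingerprint comparisons described above reduce a peak-area matrix to a score plot in which processed and unprocessed samples ideally separate into clusters. The following is a minimal sketch of that idea in Python; the peak-area matrix is invented purely for illustration and does not reproduce any data from the cited studies.

# Minimal sketch: PCA score values from chromatographic peak-area fingerprints,
# mimicking the separation of processed (P-AR) from unprocessed (unP-AR) samples.
# The peak-area matrix below is invented for illustration only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# 6 samples x 10 common peaks: first 3 rows mimic unP-AR (high DDA peaks),
# last 3 rows mimic P-AR (DDA peaks reduced, MDA peaks increased).
unprocessed = rng.normal(loc=[8, 7, 6, 1, 1, 2, 3, 2, 4, 5], scale=0.5, size=(3, 10))
processed   = rng.normal(loc=[1, 1, 1, 6, 7, 2, 3, 2, 4, 5], scale=0.5, size=(3, 10))
X = np.vstack([unprocessed, processed])

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
for label, row in zip(["unP-AR"] * 3 + ["P-AR"] * 3, scores):
    print(f"{label}: PC1={row[0]:6.2f}  PC2={row[1]:6.2f}")
# Samples that cluster together in the score plot share similar fingerprints;
# dispersed clusters indicate compositional differences between groups.

In practice the matrix would hold the common peak areas extracted from the UPLC–ESI/MSn or UPLC–UV chromatograms, one row per sample, before scaling and decomposition.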
A microcalorimetric assay along with a UPLC method was applied for the analysis of five different species of aconite, including Radix Aconiti, Radix Aconiti Singularis, Radix Aconiti Kusnezoffii, Radix Aconiti Lateralis Preparata, and Radix Aconiti Brachypodi [63]. Using the developed UPLC method, fingerprints of the five Aconitum plants were established. The biological effect of the alkaloids in the tested Aconitum sp. on Escherichia coli metabolism was studied using a microcalorimetric assay. The metabolic process of E. coli was studied, and it was found that it can be divided into different phases: the first exponential phase (A–B), the lag phase (B–C), the second exponential phase (C–D), and the decline phase (D–E). The effect of using different concentrations of Aconitum sp. was studied, which showed significant changes in the metabolic curve with various concentrations of the plant samples. A comparison of the fingerprints of the aconite samples led to the identification of 15 common peaks, among which those corresponding to AC, HA, and MA were identified. The correlation between these peaks and the changes in E. coli metabolism was established, which showed that MA and HA had a negative effect on metabolism, while AC was found to promote bacterial metabolism.

A UPLC–Q-TOF-HDMS method was developed for the analysis of the alkaloidal contents of the roots of AKR (CW), the mother root of A. carmichaelii (CHW), and the daughter or lateral roots of A. carmichaelii ("Shengfuzi" in Chinese) [51]. The results indicated that the optimum mobile phase was 0.1% formic acid in water and 0.1% formic acid in acetonitrile. After analyzing the samples using the developed method, several statistical tools were applied, including PCA, PLS-DA, and orthogonal projection to latent structures analysis. These tools helped in the classification of the metabolic phenotypes and identification of the differentiating metabolites. The PCA results are shown as score plots indicating the scatter of the samples. If the score plots are clustered together, this indicates similar metabolomic compositions, whereas the samples are considered compositionally different if the score plots are dispersed. PLS-DA predicts a list of metabolites by measuring the distance between different groups. The S-plot is utilized to rank metabolites according to their contributions to the separation of the clusters. Using these statistical tools, 22 metabolites differentiating Shengfuzi and CHW and 13 metabolites differentiating CHW and CW were identified as biomarkers. Interestingly, concentrations of MA and AC were higher in CW. This phenomenon was attributed to the fact that CW is grown in cold weather, which may induce the production of toxic alkaloids. Moreover, it was found that songorine, carmichaeline, and isotalatizidine were absent in CW, despite their presence in Shengfuzi and CHW.

The effect of adding Cinnamomum cassia on the alkaloidal content of Sini Tang, which is composed of Zingiber officinale, Glycyrrhiza uralensis, and A. carmichaelii, was studied using HPLC-DAD [64]. Certain complexes were formed, which were analyzed using proton nuclear magnetic resonance (¹H-NMR) and UV/Vis spectroscopy. To clearly study the effect of complexation on the toxic alkaloids of A. carmichaelii, eight batches of P-AR and one batch of unP-AR A. carmichaelii roots, as well as one batch of G. uralensis roots, Z. officinale rhizome, and C. cassia bark, were tested.
The A. carmichaelii roots were processed by repeatedly soaking them in salt water and boiling until the sliced roots turned black, before drying in an oven. The processed samples of A. carmichaelii were analyzed using HPLC. Based on the obtained chromatograms, AC and MA contents were below the LOD in all batches. Only HA was detected and was therefore selected as the marker compound. The effect of the extracting solvent was studied, which showed that HA could be detected only if MeOH was used as the extracting solvent; other organic solvents were unable to extract HA. HPLC analysis of the unprocessed batch of A. carmichaelii extracted with MeOH indicated the presence of HA (269.34 ± 0.58 mg/g). The use of 1% HCl yielded 251.12 mg/g HA, MeOH:H2O (1:1) furnished 199.48 mg/g HA, and 71.32 mg/g HA was obtained using H2O. However, analysis of the prepared A. carmichaelii decoction did not show any traces of HA. The effect of combining other herbs, such as G. uralensis, Z. officinale, and C. cassia, with A. carmichaelii on the HA content was studied. No HA was detected in the presence of G. uralensis, but it was detected when Z. officinale or C. cassia or both were combined with A. carmichaelii; the HA concentration was below 40% of the original concentration in A. carmichaelii. The effect of combining single components of G. uralensis, Z. officinale, and C. cassia on the HA concentration was also studied, which showed that liquiritin and isoliquiritin were able to reduce the concentration of HA; other components did not affect the HA concentration. When isoliquiritin was mixed with A. carmichaelii, the shape of the UV spectra changed. Formation of a supramolecular structure was suggested, which was found to possess a defined stoichiometry, binding constant, and molecular structure. Binding constants of HA with liquiritin in different D2O/MeOD mixtures were determined for the first time using ¹H-NMR titration experiments.

A detailed study of the effect of decoction time on reducing the toxicity of AC alkaloids was conducted using HPLC. A. carmichaelii roots were processed according to the Chinese Pharmacopeia [28]. First, the roots were washed with water and soaked in an edible mother liquor of mineral salts for several days. Second, the mixture was boiled and rinsed with water. Third, the roots were peeled, sliced, and soaked and rinsed in water. After steaming and drying, P-ARs were obtained and named processed Fuzi or Baifupian (BFP). The P-ARs were decocted over different time intervals (30 minutes, 60 minutes, or 120 minutes), forming three different decoctions (DBFP-30, DBFP-60, and DBFP-120, respectively). Each decoction was analyzed using HPLC, and the obtained chromatograms were compared to those of the raw root and BFP. The raw root was found to possess the highest concentrations of AC, MA, and HA. In BFP, the concentrations of these three alkaloids were lower than those in raw Fuzi but higher than those in the decoctions. The results showed a stepwise decrease in the concentration of each alkaloid with increasing decoction time. In DBFP-120, AC was undetectable and the concentrations of the other two alkaloids reached their lowest values. Interestingly, the total alkaloidal content was almost the same in the three prepared decoctions, suggesting that the toxic alkaloids were successfully converted to nontoxic alkaloids. The toxic effect of the prepared decoctions was evaluated using male and female Kunming mice.
The median lethal dose (LD50), maximal tolerated dose (MTD), minimal lethal dose (MLD), and no-observed-adverse-effect level (NOAEL) were determined for each decoction. The results indicated that, with increasing decoction time, the acute toxicity of the detoxified Fuzi decreased in the following order: DBFP-30 (LD50 145.1 g/kg, MTD 70 g/kg, MLD 100 g/kg, NOAEL 70 g/kg) > DBFP-60 (very large LD50, MTD 160 g/kg, MLD 190 g/kg, NOAEL 100 g/kg) > DBFP-120 (no LD50, unlimited MTD, unlimited MLD, NOAEL 130 g/kg). Additionally, adjuvant arthritis rats were used to assess the pharmacological effect of the detoxified Fuzi roots. Adjuvant arthritis rats are special experimental models that develop rheumatoid arthritis symptoms, including anorexia and body weight loss, and restoration of body weight can only be achieved using detoxified Fuzi roots. The results indicated no significant difference in the pharmacological effects of the three different decoctions. Based on these findings, the authors recommended the use of DBFP-120 over other aconite forms because it exhibited the same pharmacological effect without any acute toxicity.

An RRLC–MS/MS method was developed for analyzing the components of an ancient TCM, Shen-Fu (Tables 1 and 2) [65]. The herbal formula is composed of Radix ginseng and Fuzi (Radix Aconiti Lateralis Preparata) at a ratio of 3:2, and it is prescribed for the treatment of diseases associated with signs of Yangqi decline and Yang exhaustion. The effect of the mobile phase on the separation of the alkaloids was evaluated, and the results showed that the best peak shape and resolution were obtained using a mixture of acetonitrile and an aqueous 0.05% formic acid solution. The LODs ranged from 0.01 ng/mL to 1.25 ng/mL, and the recovery percentages ranged from 91.13% to 111.97% for all components, including AC, MA, and HA. The results indicated that AC was the least abundant component among all the analytes.

A UPLC–ESI/MS method was developed for the identification of the constituents of complex herbal preparations used in TCM, including Sanhuang Xiexin Tang (SXT) and Fuzi Xiexin Tang (FXT). SXT is composed of Rhei Radix et Rhizoma (Polygonaceae family, rhizomes of Rheum officinale), Scutellariae Radix (Labiatae family, roots of Scutellaria baicalensis), and Coptidis Rhizoma (Ranunculaceae family, rhizomes of Coptis chinensis) (Tables 1 and 2) [43]. FXT has a similar composition with the addition of Aconiti Lateralis Radix Preparata (Ranunculaceae family, roots of A. carmichaelii). SXT and FXT are prepared either through maceration or through decoction. The developed method was applied for the analysis of Aconiti Lateralis Radix Preparata decoction, SXT, FXT, macerated SXT, macerated FXT, SXT decoction, and FXT decoction. The results indicated significant variation in the compositions of the evaluated samples. Specifically, DDAs (AC, HA, and MA) were not detected in the FXT decoction and macerated FXT. However, HA was detected in the Aconiti Lateralis Radix Preparata decoction, but without AC and MA. In general, more constituents were found in the decoction products than in the maceration products, which suggested more potent pharmacological activity of the decoctions. The results also revealed possible drug–drug interactions due to the complexity of the herbal preparations and differences in their preparation procedures.
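The LOD and LOQ figures quoted throughout these method descriptions are usually derived from calibration data. One common convention (per ICH guidance: LOD = 3.3·σ/S and LOQ = 10·σ/S, where σ is the residual standard deviation of the calibration line and S its slope) is sketched below; the calibration points are invented and the specific cited papers may have used other approaches, such as signal-to-noise estimation.

# Sketch: estimating LOD and LOQ from a calibration line (ICH-style:
# LOD = 3.3*sigma/slope, LOQ = 10*sigma/slope). Calibration data are invented.
import numpy as np

conc = np.array([1, 2, 5, 10, 20, 50])               # standard concentrations, ng/mL
area = np.array([105, 198, 510, 1015, 2030, 5050])   # peak areas (illustrative)

slope, intercept = np.polyfit(conc, area, 1)
residuals = area - (slope * conc + intercept)
sigma = residuals.std(ddof=2)                        # residual SD of the regression

lod = 3.3 * sigma / slope
loq = 10.0 * sigma / slope
print(f"slope={slope:.2f}, LOD={lod:.3f} ng/mL, LOQ={loq:.3f} ng/mL")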
A UHPLC–Q-TOF/MS method was developed to compare the efficiency of the detoxification mechanisms, as described in Ayurveda and TCM, for the roots of Aconitum heterophyllum, A. carmichaelii, and AKR [31]. In Ayurveda, the detoxification mechanism, or Shodhana, is accomplished by treating the herbal products with cow urine or cow milk. In TCM, the most general protocol for the detoxification of herbal products containing Aconitum sp. is water decoction. The developed method was validated, and the LODs of AC, MA, and HA were found to be 0.383 ng/mL, 0.438 ng/mL, and 0.088 ng/mL, respectively. The LOQs for AC, MA, and HA were 1.15 ng/mL, 1.31 ng/mL, and 0.264 ng/mL, respectively. Samples treated with cow milk, cow urine, or water were analyzed using the developed method. The results demonstrated that the three detoxification mechanisms were effective in reducing the concentrations of DDAs in all samples. However, treating samples with cow urine was found to be less effective than the other two protocols.

An efficient analytical method was developed for the determination of aminoalcohol-diterpenoid alkaloids in the lateral roots of A. carmichaelii (Fuzi) using solid-phase extraction followed by chromatography–tandem mass spectrometry [66]. For the solid-phase extraction, a good recovery of the aminoalcohol-diterpenoid alkaloids was achieved using 8% ammonia in methanol as the elution solvent. The optimum mobile phase was found to be methanol–0.1% formic acid in a ratio of 80:20 (v/v); this mobile phase yielded a good peak shape, acceptable separation, and a small tailing factor. The specificity of the developed protocol was evaluated by analyzing the blank solvent, a mixed standard solution, reference solutions of the determined alkaloids, the internal standard, and the sample solution. No interference was detected among the solvent, alkaloids, or any other tested samples, confirming the specificity of the method. The developed method was successfully applied for the determination of 13 aminoalcohol-diterpenoid alkaloids in Fuzi samples obtained from different sources and subjected to different processing conditions.

A UFLC/MS-IT-TOF method was developed for the analysis of the alkaloidal constituents of Fuzi and the Fuzi–Gancao herb pair (FG), which consists of A. carmichaelii (Fuzi) and Roast Radix Glycyrrhizae (Glycyrrhiza glabra, Gancao in Chinese) [47]. Diazepam was used as the internal standard in the developed protocol. The optimum mobile phase for the separation of the alkaloids was found to be ammonium acetate and acetonitrile. Application of the developed protocol for the analysis of Fuzi and FG resulted in the detection of 60 common peaks in both samples. Among these peaks, those corresponding to 51 alkaloids were identified by accurate mass measurements and fragmentation pathways. A semiquantitative analysis of the samples, achieved by comparing the obtained alkaloidal peak areas with the IS peak areas, provided a clearer picture of the differences in alkaloidal contents between FG and Fuzi. It was found that the concentrations of DDAs and aminoalcohol-diterpenoid alkaloids were higher in the FG decoction, while the concentrations of MDAs were lower.

An ultra-performance liquid chromatography coupled with quadrupole time-of-flight mass spectrometry (UPLC–Q-TOF–MS) method was developed for the analysis of a traditional herbal medicine, Wu-tou decoction, which is used to treat rheumatic arthritis [67].
It is composed of Aconiti Radix Cocta, Ephedrae Herba, Paeoniae Radix Alba, Astragali Radix, and Glycyrrhiza Radix Preparata, indicating the complexity of the constituents and the difficulty of determining the mixture components. The optimum mobile phase for the developed method was found to be acetonitrile–water with 0.1% formic acid. Application of the developed method led to the identification of 74 components, including alkaloids, monoterpene glycosides, triterpene saponins, flavones, and flavone glycosides. The components were confirmed by comparing the obtained chromatogram with the chromatograms of the individual herbs and the available standard components. Among the detected compounds, 43 alkaloids, including Ephedra alkaloids as well as aconite alkaloids, were detected in the positive ion mode. The fragmentation patterns of the identified alkaloids were studied and compared with those in the previous literature, which aided in their identification. The developed method can be added to the quality control tools that might be used for the analysis of Wu-tou decoction.

A microcalorimetric method along with UPLC was developed for the analysis of Fuzi and three different P-AR products, including Yanfuzi, Heishunpian, and Paofupian [68]. A microcalorimetric method can provide valuable information on the growth and metabolic status of cells under the effect of the tested compounds. Under certain growth conditions, cellular heat production and the growth progress of cells produce unique power–time curves, from which the effect and potential action of certain compounds can be evaluated. Using UPLC, the fingerprints of Fuzi and its three P-AR products were developed and compared. The effect of these samples on the metabolism of rat mitochondria was investigated using the developed microcalorimetric method. Finally, the results of the UPLC and calorimetric methods were correlated using a canonical correlation analysis model. The study of rat mitochondrial metabolism suggested that the metabolic cycle can be divided into four stages: Stage I, Stage II, Stage III, and Stage IV. Certain thermokinetic parameters were also calculated, including k, Q, Pmax, tmax, tlag, and Pav. Among these parameters, the most important one was "k", the exponential growth rate of Stage II. It was found that metabolism reached its peak when a sample concentration of 4.0 mg/mL was used, and any increase in the concentration led to a decrease in metabolism. The use of 4.0 mg/mL of the samples did not produce any significant difference in the "k" values among Heishunpian, Paofupian, and Yanfuzi, which suggested lower efficiency compared to the unprocessed Fuzi but higher safety. By comparing the chromatograms of the analyzed samples, 26 common peaks were detected in the four samples. The effect of these alkaloids on metabolism could be studied using the canonical correlation analysis model. It was found that benzoylhypaconitine, MA, and HA had a positive effect on the promotion of mitochondrial metabolism, whereas benzoylaconitine had a negative effect on metabolism.

An HPLC–Q-TOF–MS method was developed for the analysis of Shen-Fu injection, a widely used Chinese herbal formulation for cardiac diseases (Tables 1 and 2) [46]. It is composed of red ginseng and P-ARs. The developed method was highly sensitive and did not require sample pretreatment. The LOQs ranged from 0.4 ng/mL to 18 ng/mL for the DDAs.
The method was applied to nine batches of Shen-Fu injection, and it showed high batch-to-batch reproducibility. Application of the developed method led to the identification of 44 compounds and the quantification of 24 major alkaloids and ginsenosides.

A functionalized analytical method was developed for the analysis of Fuzilizhong pills. These pills are a modified form of a famous TCM, Lizhong Wan, described in the Treatise on Febrile Diseases. It consists of Panax ginseng (Ren Shen), A. carmichaelii Debx. (Fu Zi, Zhi), G. uralensis, Glycyrrhiza inflata or G. glabra (Gan Cao), Atractylodes macrocephala (Bai Zhu), and Z. officinale (Gan Jiang) [45]. This medication is prescribed for dyspnea and pulmonary edema. For the analysis of Fuzilizhong pills, a UPLC–MS method and a luciferase reporter assay system were applied to simultaneously screen for nuclear factor kappa B (NF-κB) inhibitors and beta-2 adrenergic receptor (β2AR) agonists. Inhibition of NF-κB is related to a lower inflammatory reaction. Stimulation of β2AR is related to the activation of adenylyl cyclase, cyclic adenosine monophosphate (cAMP), and protein kinase A, and the phosphorylation of numerous effector proteins, which lead to the relaxation of tracheal smooth muscles. The use of 250 mg/mL or 1000 mg/mL of Fuzilizhong pills led to the inhibition of tumor necrosis factor alpha-induced NF-κB production and activation of the β2AR signal. The results indicated that MA, flaconitine, BMA, AC, and HA are potent NF-κB inhibitors.

Conclusion

Aconitum and its products have played an important role in human history. It was one of the most feared plants in all ancient civilizations. Despite its extreme toxicity, it was sought for its healing and mysterious powers. The Indian and Chinese civilizations studied this plant extensively and introduced detailed treatises on how to prepare nontoxic decoctions and extracts from it. More than 70 different preparation methods were reported in ancient scriptures, and most of these methods are still used today. The Chinese Pharmacopeia established certain criteria for the total content of toxic alkaloids, which should be referenced by any producer preparing herbal products containing Aconitum. Despite these measures, a wide variation in the concentrations of toxic alkaloids has been recorded in marketed preparations, exposing the public to unnecessary risks. Recent advances in analytical techniques, especially HPLC, MS, and CE, allow the accurate determination of each alkaloid in herbal mixtures, thus establishing more precise limits for the toxic alkaloids in aconite preparations intended for human consumption. These analytical techniques are available in quality control laboratories and can easily be applied for assessing the quality of aconite products. Based on the summary presented in this review, it is highly recommended to change the original measures set in pharmacopeias, which focused only on the determination of AC, and to replace those tests with advanced analytical methods establishing the accurate concentration of every aconite alkaloid.

Conflicts of interest

The authors declare that there are no conflicts of interest.
2018-04-03T03:58:38.985Z
2015-10-16T00:00:00.000
{ "year": 2015, "sha1": "ef47eff54b6cb407e4e39eaab024f50e63c89bd7", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.jfda.2015.09.001", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "71a64958189f25e9ae2dfeb599a51da73bf59e69", "s2fieldsofstudy": [ "Chemistry", "History", "Medicine" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
10696155
pes2o/s2orc
v3-fos-license
Influence of premolar extractions on the facial profile evaluated by the Holdaway analysis

Methods: For this study, 87 patients (31 boys and 56 girls) were selected from the private practices of three dentists certified by the Brazilian Board of Orthodontics and Facial Orthopedics. These patients were treated with a fixed edgewise appliance and divided into three groups according to the sequence in which premolars were extracted: "Group 40" comprised 22 patients treated with extractions of the two first superior premolars, adopted as the control group; "Group 44" comprised 43 patients treated with extractions of the four first premolars; and "Group 45" comprised 22 patients treated with extractions of the first superior premolars and second inferior premolars. The Holdaway analysis was used to quantify and compare the group profiles before and after treatment.

Introduction

In recent years, there has been a noticeable increase in awareness of and interest in facial aesthetics (1). Aesthetic benefits are among the main goals of orthodontic treatment, and clinicians are often asked about possible changes in the profile caused by a given treatment plan. The notion that dental extractions may cause a "flat face" (2) due to excessive retraction has discouraged this type of treatment protocol. However, extractions can benefit the profile when properly indicated (3). To protect the lip and the facial profile, Nance (4,5) suggested the extraction of first superior premolars and second inferior premolars. The choice among the possible sequences of premolar extraction is based on clinical observations, with little scientific support (6).

The study of beauty and harmony in the facial profile has long been a priority in orthodontic practice (7). Treatment mechanics have become more effective, thereby increasing the importance of soft tissues in both the diagnosis and the treatment results. Holdaway (8) and Burstone (9,10) are among the many scholars who have emphasized the importance of soft tissues in diagnosis (7).

There is general agreement that orthodontic treatment can influence the soft tissue profile of the face, but there is still disagreement on the magnitude of the soft tissue response to changes in tooth position and the alveolar process. Based on these points and using the Holdaway soft tissue analysis, this study was designed to evaluate the effects of three prescribed premolar extraction sequences (G 40, G 44, and G 45) on the lateral facial profile.

Methods

The sample was retrospectively selected from the private practices of three orthodontists certified by the Brazilian Board of Orthodontics and Dentofacial Orthopedics. The initial (T1) and final (T2) profile teleradiographs of 87 patients treated orthodontically with fixed edgewise appliances were divided into the following three groups according to the sequence in which premolars were extracted: 22 patients treated with extractions of the two first superior premolars, adopted as the control group (Group 40); 43 patients treated with extractions of the four first premolars (Group 44); and 22 patients treated with extractions of the first superior premolars and second inferior premolars (Group 45). The mean age was 15 years (ranging from 11 to 18 years), with 31 boys and 56 girls. Treatment time was 3 years (ranging from 2 to 5 years).
The sample was selected based solely on the premolar extraction sequence, regardless of other dentoalveolar or skeletal characteristics. Additional inclusion criteria for this study were: (1) all patients had their premolars extracted as part of their consented treatment plan; (2) all patients were Caucasian, without congenitally missing teeth or previous extractions; (3) all permanent teeth were present up to the second molars; (4) good quality of the pre- and post-treatment radiographs, taken with the lips relaxed, teeth in occlusion, and using the same cephalostat; (5) no prior use of functional appliances or orthognathic surgery between the two radiographs; (6) fully closed gaps at the end of treatment; (7) gaps closed with 0.019" × 0.025" steel arches; and (8) where possible, maintenance of the intercanine and intermolar distances.

The radiographs were taken in centric occlusion, according to Broadbent's technique (12), with the lips at rest, as defined by Burstone (10). The cephalometric tracings for each profile teleradiograph were performed manually by the same investigator, and the cephalometric points were digitized into the Dentofacial Planner software (2.0, Toronto, Ontario, Canada) to obtain the cephalometric measurements. Nine linear and two angular measurements demarcated as per Holdaway (8) and defined by Basciftci et al. (13) were analyzed (Figs. 1 and 2).

Error Assessment

To assess the intra-examiner error, 30 lateral teleradiographs were randomly selected and traced again after a 3-week interval. To evaluate the agreement between the first and second measurements, we used Student's t test for paired samples at a 5% significance level. None of the measurements presented significant differences, which confirmed the calibration of the examiner.

Comparative analysis among the groups in the sample

The changes from T1 to T2 were evaluated to determine statistically significant variations that occurred separately in each of the groups.

Statistical Analysis

Normal distribution of the data was confirmed by the nonparametric Kolmogorov-Smirnov test, and parametric tests were therefore applied. Initial (T1) and follow-up (T2) measurements were compared by Student's t-test for paired samples. Comparison among groups was performed with analysis of variance (ANOVA). The results were considered statistically significant at a level of 5%. All statistical tests used to process and analyze the data were performed with the SPSS software package (SPSS 10.0, Chicago, IL).

Intragroup analysis (Table 1)

In all groups there was an increase in Nose prominence (P<0.05) and a decrease in Upper lip sulcus depth, Subnasale to H line, Skeletal profile convexity, and H angle (P<0.05). Upper lip base thickness and Upper lip strain increased in Groups 40 and 44 (P<0.05). Inferior sulcus to the H line increased in Groups 44 and 45 (P<0.05). Lower lip to H line decreased in Groups 44 and 45 (P<0.05).

Intergroup analysis (Table 2)

The results of the ANOVA test indicate that only the measurement Inferior sulcus to the H line showed a significant difference among the study groups (P<0.05). There was a greater increase in Inferior sulcus to H line for Group 44.
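The statistical workflow described above (a paired t-test of T1 against T2 within each group, followed by a one-way ANOVA of the incremental changes across groups) can be reproduced with standard tools. The sketch below mirrors that workflow in Python with scipy; the group sizes match the study (22, 43, 22), but the measurement values themselves are simulated purely for illustration.

# Sketch of the statistical workflow: paired t-test (T1 vs T2) within a group
# and one-way ANOVA on the T2-T1 changes across groups. Values are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated H angle (degrees) before (T1) and after (T2) treatment for one group
t1 = rng.normal(17.0, 2.0, size=22)
t2 = t1 - rng.normal(1.5, 0.8, size=22)       # simulated reduction after treatment
t_stat, p_paired = stats.ttest_rel(t1, t2)
print(f"paired t-test: t={t_stat:.2f}, p={p_paired:.4f}")

# Simulated incremental changes (T2-T1) for the three extraction groups
g40 = rng.normal(-1.2, 0.9, size=22)
g44 = rng.normal(-1.6, 0.9, size=43)
g45 = rng.normal(-1.4, 0.9, size=22)
f_stat, p_anova = stats.f_oneway(g40, g44, g45)
print(f"one-way ANOVA: F={f_stat:.2f}, p={p_anova:.4f}")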
Discussion

The growth process by itself creates facial changes. Therefore, the goal of many studies has been to establish a prognosis of the changes that will occur in the faces of patients under the cumulative effect of growth, development, and orthodontic treatment (14). This work was performed with patients in an actively growing age group, which was part of the study inclusion criteria. Additionally, the study sample was compatible in terms of age at the beginning and end of treatment and in the duration of treatment (3,15). According to Talass (16), growth is associated with minimal changes in the soft tissues when the treatment period does not exceed 36 months.

Intragroup analysis (Tables 1 and 3)

The nose prominence increased significantly in all three groups. This increase was favorable because the values were all below the standard at the beginning of treatment. Hoffelder and Lima (17) highlighted the aesthetic implications of the size and shape of the nose, which changes up to 18 years of age. Subtenly (18) recommends that treatment during adolescence should be completed with more prominent lips due to the large expected increase in the nose and chin. Castro (5) comments that the nose "is individual", i.e., it is difficult to predict its growth because it varies significantly during the treatment period and from one patient to another. A number of studies have suggested evaluating the posture of the lips and the aesthetics (19,20), but most are influenced by nose growth. Holdaway (8) removed the nasal influence from the labial posture assessment (7). Holdaway's soft tissue analysis is the only one that determined values for the upper lip sulcus depth (2,8). There is a need to consider the upper lip curve during treatment planning to reduce the potential for undesirable expressions in this region, apparently a result of excessive retraction of the upper and lower teeth during treatment. The upper lip sulcus depth and the extent of soft tissue subnasale to the H line were, on average, significantly reduced after treatment in all groups. Wholley and Woods (6) and Moseling and Woods (11) found large individual variation, with both increases and decreases in the two measures. In this study, only Group 40 showed both increases and decreases in the upper lip sulcus depth (Fig. 3). Holdaway (8) indicated that skeletal profile convexity is not really a measure of soft tissue; instead, convexity is directly related to the harmonious position of the lips, being a reference for the dental relationship necessary to create balanced facial features. In this study, a significant reduction during the treatment period established favorable changes in the aesthetics of these patients in all groups, approximating the ideal values.
Burstone (10) stated that one of the goals of orthodontic treatment is to minimize the stretching of the lips upon sealing in patients with dentofacial disharmonies. In the present study, the upper lip base thickness and upper lip strain increased in the three groups. The increase was statistically significant in Groups 40 and 44. When comparing Groups 44 and 45, there was a significant increase only in Group 44, probably because maximum anchorage is required there for greater retraction of the incisors. Other factors are associated with the lip response in addition to the kind of extraction that the patient has undergone: the complex anatomy of the lip, which often has an intrinsic response property (7,14,15,21), and the tension at the time of radiography (6,7,16). Lip tension varies among individuals and, within the same person, between time points.

The H angle measures the prominence of the upper lip in relation to the overall soft-tissue profile. This measure was significantly reduced in all groups during orthodontic treatment and approached the norm without matching it. This is in agreement with Cappeli (22), who reported similar findings, suggesting that the differences could be attributed to variability among the Caucasians evaluated in that study.

According to Burstone (9), support for the inferior incisors and extrusion of the superior incisors project the lower lip, in the same way that a flaccid lower lip or an abnormal lip morphology affects the lower lip inclination. The lower lip was significantly reduced from T1 to T2 in Groups 44 and 45. Group 44 had more retraction of the lower lip (1.33) than Group 45 (0.94), which is consistent with some previous studies (4,23). In the lower arch, the extraction of second premolars is a strategy to camouflage Class II maxillary relationships. The mesial movement of the inferior molars is stimulated, thereby closing the inferior gaps and correcting the molar Class II relationship. The degree of mesial movement of the molars depends on the amount of space left after the alignment of the lower incisors and canine retraction. Therefore, the extraction of first superior premolars and second inferior premolars is indicated in cases with an absence of severe crowding or excessive protrusion of the lower incisors (so that the extraction spaces are available for anteroposterior tooth movement and not for the correct alignment of the incisors). The molar moves further forward (5,23-25) when the second premolar is extracted, and less retraction of the lower lip is expected when this tooth is extracted (4,23,24). However, the lower lip retraction was not statistically significantly different between Groups 44 and 45. Hershey (14) and Wisth (21) emphasize that individual variations make lip retraction impossible to predict.
According to Holdaway (8), the contour of the inferior sulcus to the H line should be in harmony with the shape of the upper lip groove. This measure is an indicator of good handling of the axial position of the lower incisors. Groups 44 and 45 had an increase in the sulcus of 1.23 and 0.62, respectively, with no statistical difference between them; this change brought these values closer to the standard values. In Group 40, there were no changes (0.02). In contrast, Wholley and Woods (6) found a significant reduction of the sulcus in their group 45. Our lower lip groove observations also contrast with those of Wholley and Woods (6) and Moseling and Woods (11), who found large individual variation, with both increases and decreases in the measures. In the present study, only Group 40 showed both increases and decreases in the sulcus (Fig. 4). Holdaway (8) and Hershey (14) suggest that there is more variation in the lower lip area compared to the upper lip area; yet Moseling and Woods (11) found more correlation and more predictability in the inferior sulcus than in the superior.

Intergroup analysis of incremental changes (Table 2)

In this study, the inferior sulcus to the H line was the only variable that showed a significant difference among groups. This suggests that, when extracting the first inferior premolars in Group 44, a greater retraction of the lip was desired than in Group 45, where second premolars were extracted and there was a greater mesial movement of the molars. Group 40 did not exhibit changes in the measurements of the inferior sulcus, which was expected as no extraction was performed in the lower arch. The measure was in accordance with the recommended 5 mm, and changes were not desired (Table 3). As the changes caused by treatment in all other variables were similar between groups, it may be suggested that the treatment protocols produce equivalent results in patients.

Conclusions

The facial profile results after treatment with the various extraction protocols were similar when evaluated using the Holdaway soft tissue analysis.

Fig. 4. Changes in the inferior sulcus to the H line.

Table 1. Results of the comparison between the times T2 and T1. * Significant difference, P<0.05. Mean values followed by the same letter do not differ.

Table 2. Comparison of the T2-T1 differences among the study groups (incremental changes between groups).

Table 3. Descriptive comparison of the T1 and T2 values of the sample using Holdaway Norms.
2017-08-30T23:08:24.339Z
2011-01-01T00:00:00.000
{ "year": 2011, "sha1": "285aacdb911ebc0ad4eee5ca881029461b626d7a", "oa_license": "CCBYNC", "oa_url": "https://www.scielo.br/j/roc/a/ChrXyjVrkvwNLbjR6hF8YFN/?format=pdf&lang=en", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "285aacdb911ebc0ad4eee5ca881029461b626d7a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
5679527
pes2o/s2orc
v3-fos-license
Study of Salmonella Typhimurium Infection in Laying Hens

Members of Salmonella enterica are frequently involved in egg and egg product related human food poisoning outbreaks worldwide. In Australia, Salmonella Typhimurium is frequently involved in egg and egg product related foodborne illness, and Salmonella Mbandaka has also been found to be a contaminant of the layer farm environment. The ability of Salmonella Enteritidis to colonize reproductive organs and contaminate developing eggs has been well described. However, there are few studies investigating this ability for Salmonella Typhimurium. The hypothesis of this study was that Salmonella Typhimurium can colonize the gut for a prolonged period of time and that horizontal infection through feces is the main route of egg contamination. At 14 weeks of age, hens were orally infected with either S. Typhimurium PT 9 or S. Typhimurium PT 9 and Salmonella Mbandaka. Salmonella shedding in feces and eggs was monitored for 15 weeks post-infection. The egg shell surface and internal contents of eggs laid by infected hens were cultured independently for the detection of Salmonella spp. The mean Salmonella load in feces ranged from 1.54 to 63.35 and from 0.31 to 98.38 most probable number/g (MPN/g) in the S. Typhimurium and S. Typhimurium + S. Mbandaka groups, respectively. No correlation was found between the mean fecal Salmonella load and the frequency of egg shell contamination. Egg shell contamination was higher in the S. Typhimurium + S. Mbandaka infected group (7.2% S. Typhimurium, 14.1% S. Mbandaka) compared with birds infected with S. Typhimurium alone (5.66%); however, co-infection had no significant impact on egg contamination by S. Typhimurium. Throughout the study, Salmonella was not recovered from the internal contents of eggs laid by the hens. Salmonella was isolated from different segments of the oviduct of hens from both groups; however, pathology was not observed on microscopic examination. This study investigated Salmonella shedding for up to 15 weeks p.i., which is a longer period of time than in previously published studies. The findings of the current study demonstrated intermittent but persistent fecal shedding of Salmonella after oral infection for up to 15 weeks p.i. Further, egg shell contamination, together with the lack of internal egg content contamination and the low frequency of reproductive organ infection, suggested that horizontal infection through contaminated feces is the main route of egg contamination with S. Typhimurium in laying hens.

INTRODUCTION

Foodborne gastric infections due to Salmonella enterica are of major concern worldwide. Typically, contaminated eggs and egg related products are primary vehicles for human salmonellosis. Globally, S. Enteritidis represents a dominant serotype in commercial poultry isolated from eggs and is frequently involved in egg related food poisoning in humans (Foley et al., 2011). S. Enteritidis, however, is not endemic in Australian poultry flocks (OzFoodNet Working Group, 2009). This niche has been filled by S. Typhimurium, which is a leading cause of foodborne outbreaks linked to contaminated egg and egg related products (OzFoodNet Working Group, 2009). In 2010, S. Typhimurium was the most commonly notified Salmonella serotype, accounting for 5241 (44%) of all notified Salmonella infections in Australia (OzFoodNet Working Group, 2012). The external and internal contamination of eggs by Salmonella during poultry production is a complex issue, influenced by many variables.
As a result, the implementation of appropriate control measures is extremely difficult (Whiley and Ross, 2015). Egg contamination can occur by two routes, vertical or horizontal. Vertical transmission is a result of reproductive organ colonization (ovary and oviduct) before shell formation, whereas horizontal transmission occurs due to external egg shell contamination (De Reu et al., 2006). After oral challenge, both S. Enteritidis and S. Typhimurium have the potential to invade the reproductive organs. However, only S. Enteritidis has been recovered from egg contents (Keller et al., 1997; Okamura et al., 2001a; Gast et al., 2004, 2007, 2013; Gantois et al., 2008). The intrinsic properties and resistance to antibacterial compounds enabling S. Enteritidis to colonize the oviduct and contaminate the internal egg contents are well known (Gantois et al., 2009). There is, however, limited information on the long term shedding, colonization of reproductive organs, and egg contamination by S. Typhimurium.

Previous studies have examined reproductive organ colonization and egg contamination by S. Typhimurium in laying hens. Results from these experiments, however, are inconsistent due to variation in experimental design, route of inoculation, and inoculum dose, as well as the strain of S. Typhimurium selected (Baker et al., 1980; Williams et al., 1998; Leach et al., 1999; Okamura et al., 2001a,b, 2010). Moreover, the majority of these previous studies examined the capability of S. Typhimurium to colonize reproductive organs and/or the frequency of egg contamination only up to 3 weeks post-infection, which could fail to unveil the ability of S. Typhimurium to cause egg contamination over a prolonged period (Wales and Davies, 2011). Altogether, there is a lack of published data arising from long term experiments aimed at fecal shedding, reproductive organ colonization, and egg contamination by S. Typhimurium in laying hens.

On commercial layer farms, environmental contamination with multiple Salmonella serovars is common and represents a serious concern for poultry industries worldwide (Gole et al., 2014c; Im et al., 2015). A recent epidemiological survey examining the prevalence of Salmonella spp. on layer farms demonstrated that S. Mbandaka (54.40%, 68/125) was the most frequently recovered serovar along with S. Typhimurium (11.54%, 15/130) (Gole et al., 2014a,c). S. Mbandaka has also been isolated from egg shells, animals, feed, and sporadic cases of human salmonellosis (Hoszowski and Wasyl, 2001; Little et al., 2007; Im et al., 2015). Given the diversity of poultry associated Salmonella serovars, there are few reports on how the presence of serovars commonly isolated from layer farm environments (such as S. Mbandaka) might influence the shedding patterns of S. Typhimurium. In addition, how co-infection with two Salmonella serovars affects organ invasion and egg contamination in vivo is still unclear. Given the potential public health threat posed by S. Typhimurium associated with the consumption of contaminated egg and egg products, this study sought to investigate the dynamics of egg contamination over an extended time course. In this study, the duration of fecal shedding, its relation to the frequency of egg contamination, and reproductive organ colonization after oral infection with S. Typhimurium alone and in combination with S. Mbandaka were investigated in commercial layer hens.
To our knowledge, this is the first report of a Salmonella oral challenge model conducted in a controlled environment employing strict biosecurity measures for up to 30 weeks of age.

Experimental Animals

Fertile eggs were obtained from a commercial layer parent flock. Eggs were fumigated using formaldehyde as previously described (Samberg and Meroz, 1995) and incubated for 21 days at 37.7°C. Relative humidity was maintained at 45-55% until day 18 and increased to 55-65% up to hatching. A total of 32 birds were hatched, raised in pens until week 10, and then moved to cages contained within positive pressure rooms at the Roseworthy Campus of The University of Adelaide until the end of the experiment (week 30). The sample size for this study was calculated using the OpenEpi tool (Dean et al., 2011). This tool, along with the sample size, determines the power of the experimental trial. For the sample size calculation, the assumed percentages with the outcome in the S. Typhimurium and S. Typhimurium + S. Mbandaka infected groups were 20% and 70%, respectively, with a confidence interval of 95%. This gave an 80% chance of detecting differences between treatment groups with the normal approximation. Prior to the experiments, all animal rooms and equipment were fumigated with formaldehyde and cleaned with commercial disinfectants (Chemtel, Australia). Throughout the experiment, feed was sterilized by fumigation (Samberg and Meroz, 1995) and water purification tablets (Aquatabs, Ireland) were added to the drinking water. Feed and water were provided ad libitum. The lighting program recommended in the commercial management guide of Hy-Line Australia Pty Ltd was followed in this study. Feces, feed, and water samples were tested at fortnightly intervals for the detection of Salmonella spp. by the culture method described previously (Gole et al., 2014a). All experiments were conducted according to the protocol approved by the institutional animal ethics committee of The University of Adelaide (Protocol No. S-2014-008) and in compliance with the Australian code for the care and use of animals for scientific purposes.

Bacterial Strains, Culture, and Inoculum Preparation

The Salmonella isolates used for oral infection in this study were recovered previously from layer hen fecal samples (Gole et al., 2014a,c). S. Typhimurium PT 9 has been frequently implicated in egg product related human salmonellosis in Australia (OzFoodNet Working Group, 2009, 2012); hence, this strain was selected. The antimicrobial resistance profile of the Salmonella isolates was characterized earlier (Pande et al., 2015). The S. Typhimurium PT 9 isolate used in this study was resistant to amoxicillin, ampicillin, and tetracycline, and susceptible to trimethoprim, cefotaxime, cephalothin, chloramphenicol, gentamycin, neomycin, and streptomycin. On the other hand, the S. Mbandaka isolate used in this study was resistant to amoxicillin, ampicillin, and trimethoprim and susceptible to cefotaxime, cephalothin, chloramphenicol, gentamycin, neomycin, streptomycin, and tetracycline (Pande et al., 2015). For oral inoculation, stocks of the bacterial strains were cultured overnight at 37°C on nutrient agar. Twenty-four hours prior to infection, a single colony of each Salmonella strain was added to a separate tube containing 5 ml of Luria Bertani (LB) broth (Oxoid, Australia) and incubated for 6 h with shaking (110 rpm). From this LB culture, 10 µl was transferred to 5 ml of LB and grown overnight at 37°C with shaking. Bacterial suspensions were diluted to 10⁹ bacteria per ml for oral inoculation.
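Inoculum doses of this kind are typically confirmed by plating 10-fold serial dilutions and back-calculating the titre from plates with a countable number of colonies. A minimal sketch of that back-calculation follows; the colony counts, dilutions, and plated volume are illustrative assumptions, not data from this study.

# Sketch: back-calculating an inoculum titre (CFU/ml) from colony counts on
# 10-fold serial dilution plates. All counts below are illustrative only.
import numpy as np

plated_volume_ml = 0.1                 # volume spread per plate (assumed)
dilutions = [1e-5, 1e-6, 1e-7]         # 10-fold serial dilutions plated
colonies = [np.array([850, 790]),      # duplicate plates per dilution
            np.array([95, 102]),
            np.array([11, 8])]

estimates = []
for d, counts in zip(dilutions, colonies):
    countable = counts[(counts >= 25) & (counts <= 250)]   # commonly used countable range
    if countable.size:
        estimates.append(countable.mean() / (d * plated_volume_ml))

titre = float(np.mean(estimates))
print(f"estimated titre ~ {titre:.2e} CFU/ml")   # target here was about 1e9 CFU/ml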
Bacterial cell counts (CFU) were determined by plating 10-fold serial dilutions of the inoculum on nutrient agar to confirm dose. Experimental Design At week 10 after hatch, birds were divided in three treatment groups and housed in separate rooms in individual cages. At the age of 14 weeks, birds were orally challenged with either 10 9 CFU of S. Typhimurium PT 9 (T group, n = 14) or 10 9 CFU of S. Typhimurium PT 9 and S. Mbandaka (TM group, n = 14). Control birds (C group, n = 4) received only sterile LB broth. Following infection, all experimental birds were monitored twice a day for clinical signs of infection. All hens were humanely euthanized at the age of week 30. Ovary and segments of the oviduct (infundibulum, magnum, isthmus, uterus (shell gland) and vagina) were removed aseptically and processed for bacteriological and histopathological analysis. Throughout the study, all eggs laid (n = 1004) during 5, 7, 9, 11, 13, 15 weeks post-infection (p.i.) were tested for presence of Salmonella spp. Enumeration and Isolation of Salmonella from Feces Fecal samples were aseptically collected from individual hens in Whirl-Pack plastic bags (Thermo Fisher Scientific, Australia) on days 0, 1, 3, 6, 9, and 12 followed by weeks 3, 5, 7, 9, 11, 13, and 15 p.i. Fecal samples were processed for enumeration of Salmonella by three tube most probable number (MPN) method (Santos et al., 2005;Pavic et al., 2010). Briefly, 10 g of fecal sample were weighed in sterile Whirl-Pack plastic bag (Thermo scientific, Australia) followed by the addition of 90 ml Buffered peptone water (BPW, (Oxoid, Australia) (1:10); bags were then homogenized for 1 min. From this bag 10 ml of homogenate was placed into three different sterile tubes (10 0 dilution). Then 1 ml of homogenate sample was transferred to three different tubes containing 9 ml of BPW, and then serially diluted in triplicate tubes of BPW. The tubes were incubated overnight at 37 • C. After incubation, 10 µl of BPW from each MPN tube was plated on modified semisolid Rappaport-Vassiliadis (MSRV, Oxoid, Australia) agar plates and incubated at 42 • C for 24 h. A loopful of media from the leading edge of white zones from MRSV plate was streaked onto XLD and or Salmonella Brilliance agar plates (Oxoid, Australia) for confirmation of Salmonella. Bacteriological Analysis of Egg Shell and Internal Contents Eggs from both control and Salmonella infected hens were collected aseptically in individual Whirl-Pack plastic bags. Each egg was processed for the presence of Salmonella on the egg shell and in the internal contents. Briefly, an individual egg was immersed in 10 ml of BPW in Whirl-Pack plastic bag, massaged for 2 min and then removed. The egg shell rinse was then processed for Salmonella isolation as previously described (Gole et al., 2014a). Each egg was dipped in 70% ethanol for 2 min to prevent internal content contamination from the egg shell surface. Each egg was then broken aseptically and contents emptied into a Whirl-Pack plastic bag. The egg contents were homogenized thoroughly. Five ml of internal egg contents were mixed with 45 ml of BPW (1:10) and incubated at 37 • C overnight. Salmonella enrichment and isolation from egg shell and internal content samples was carried out as described previously (Gole et al., 2014a). Salmonella positive egg shell wash enriched in Rappaport-Vassiliadis (RVS) broth was stored in 20% glycerol at −80 • C for further PCR testing. 
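The three-tube MPN readings from the fecal enumeration described above are normally converted to MPN/g with published tables; the same estimate can be obtained by maximum likelihood under a Poisson assumption, as sketched below (the tube pattern and per-tube sample amounts are illustrative values, not data from this study).

```python
from math import exp, log
from scipy.optimize import minimize_scalar

def mpn_per_gram(amounts_g, n_tubes, n_positive):
    """Maximum-likelihood MPN estimate for a dilution series.

    amounts_g[i]  : grams of original sample in each tube of dilution i
    n_tubes[i]    : tubes inoculated at dilution i (3 for a three-tube MPN)
    n_positive[i] : tubes showing growth at dilution i
    """
    def neg_log_like(lam):
        ll = 0.0
        for m, n, x in zip(amounts_g, n_tubes, n_positive):
            p = 1.0 - exp(-lam * m)              # P(tube positive) under a Poisson model
            p = min(max(p, 1e-12), 1 - 1e-12)
            ll += x * log(p) + (n - x) * log(1.0 - p)
        return -ll
    res = minimize_scalar(neg_log_like, bounds=(1e-6, 1e6), method="bounded")
    return res.x

# Example pattern (illustrative only): 3, 2 and 0 positive tubes at
# 1 g, 0.1 g and 0.01 g of feces per tube.
print(f"{mpn_per_gram([1.0, 0.1, 0.01], [3, 3, 3], [3, 2, 0]):.1f} MPN/g")  # ~9 MPN/g
```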
Bacteriological Analysis of Reproductive Organs Samples (0.1-0.2 g) of the ovary, infundibulum, magnum, isthmus, uterus (shell gland), and vagina were collected in sterile tubes. The tissue samples were homogenized using a Bullet Blender R (Next Advance Inc. USA) at full speed for 2 min and serial 10-fold dilutions were prepared in phosphate buffer saline (PBS). From each dilution 100 µl was spread directly onto XLD agar plates (Oxoid, Australia) and incubated overnight at 37 • C. After 24 h the number of colonies was enumerated and concentration of Salmonella in tissues was expressed as mean log 10 CFU/g of tissue. DNA Extractions from Fecal Samples, Egg Shell Wash, and Reproductive Organs DNA was extracted from all fecal samples of control, T and TM groups using QIAamp DNA Stool Mini Kit (Qiagen, Australia) according to manufacturer instructions. DNA extraction from all Salmonella isolates recovered from egg shell washes of T and TM hens was performed as previously described (Pande et al., 2015). Briefly, the frozen stock of RVS broth was thawed and 50 µl of broth was mixed with 450 µl of LB broth and incubated overnight at 37 • C. One hundred microliter of overnight bacterial culture was mixed to 1 ml of sterile water and centrifuged at 14,000 g for 2 min. After decanting the supernatant, the bacterial pellet was re-suspended in 200 µl of 6% Chelex R (Bio-Rad, Sydney, NSW, Australia) prepared in TE (10 mM Tris and 1 mM EDTA). Tubes were incubated at 56 • C for 20 min, vortexed and further incubated at 100 • C for 8 min. Samples were placed on ice for 5 min and centrifuged at 14,000 g for 10 min. Supernatants were recovered from each sample and used as a DNA template for PCR. DNA was extracted from reproductive organs using DNeasy Blood & Tissue Kit (Qiagen, Australia) as per manufacturer instructions. PCR Detection of S. Typhimurium and S. Mbandaka Salmonella positive egg shell wash samples from T and TM group, all fecal samples and culture positive reproductive organs from T and TM groups were screened for Salmonella specific invA gene and S. Typhimurium serovar specific genomic region TSR3 (Akiba et al., 2011) by multiplex PCR to detect S. Typhimurium PT9. TSR3 gene was not amplified in S. Mbandaka isolates (Akiba et al., 2011). Further, to differentiate S. Mbandaka from S. Typhimurium PT 9 in the TM group, DNA extracted from feces, egg shell wash and reproductive organs were tested for dhfrV gene that confers resistance to trimethoprim (Pande et al., 2015). Samples from T infected group were also tested for dhfrV gene. S. Typhimurium PT9 used in this study was sensitive to trimethoprim and negative for dhfrV gene (Pande et al., 2015). PCR reactions for invA and TSR3 gene were performed in a total reaction volume of 20 µl including 2 µl DNA template. PCR reaction mixture consisted of final concentration of 1.5 mM MgCl 2 , 2.5 µM of each dNTP (Bioline, Australia), 0.5 µM each forward and reverse primer and 2.5 U of Taq polymerase (Bioline, Australia). DNA amplification was carried out in T100 thermal cycler (Bio-Rad, Australia) using the following protocol: 2 min initial denaturation at 94 • C, following 30 cycles of denaturation at 95 • C for 30 s, annealing at 60 • C for 30 s, extension at 68 • C for 30 s and a final extension at 72 • C for 5 min. PCR reactions for dhfrV gene were performed in a total reaction volume of 25 µl including 2 µl DNA templates. 
Each PCR reaction mixture consisted of final concentration of 1.5 mM MgCl2, 2.5 µM of each dNTP (Bioline, Australia), 0.28 µM of each primer and 2.5U of Taq polymerase (Bioline, Australia) using the following PCR cycle conditions: 2 min initial denaturation at 95 • C, following 30 cycles of denaturation at 94 • C for 30 s, annealing at 64 • C for 30 s, extension at 72 • C for 30 s, and a final extension at 72 • C for 5 min. PCR products were electrophoresed at 60 V for 1.5 h on 1.5% agarose gel in 0.5X Tris borate EDTA buffer and stained with GelRed ™ nucleic acid gel stain (Biotium, USA). The size of PCR products was determined by comparing with standard 100 bp ladders (Thermo Fisher, Australia). Negative and positive controls were used in each PCR reaction for all the samples. In order to investigate the detection limit of S. Typhimurium by multiplex PCR, Salmonella negative fecal samples were spiked with S. Typhimurium or S. Typhimurium + S. Mbandaka at doses ranging from 10 1 to 10 9 CFU/ml. Following DNA extractions from spiked samples using QIAamp DNA Stool Mini Kit (Qiagen, Australia), multiplex PCR was performed as described above. Histopathology of Reproductive Organs Infundibulum, magnum, isthmus, uterus, and vagina were collected individually to evaluate histomorphological alterations in response to Salmonella infection. Tissue samples of reproductive organs were fixed in 10% neutral buffered formalin, embedded in paraffin wax and 5 µm sections were stained with Haematoxylin and Eosin stain (H &E). Statistical Analysis Significant differences between groups in the isolation rate of Salmonella from feces and eggs were determined by Fisher's exact probability test. MPN data was analyzed by two way analysis of variance. The relationship between recovery of Salmonella (MPN/g) from feces and isolation of Salmonella from egg shell was determined by Pearson correlation test (R 2 -value). All data generated in this study was analyzed statistically either using GraphPad Prism version 6 software or IBM R SPSS Statistics R version 21. P < 0.05 were considered statistically significant. Clinical Symptoms and Mortality During the first week p.i., mucoid and blood tinged feces were observed in two birds from each treatment group. No mortality was recorded in any of the treatment groups throughout this study. Fecal Shedding of Salmonella at Different p.i. Intervals All fecal, water and feed samples collected from experimental birds before oral challenge were negative for Salmonella spp. The number of Salmonella positive fecal samples for both T (S. Typhimurium) and TM (S. Typhimurium and S. Mbandaka) groups over the course of the experiment is presented in Table 1. No significant difference (p = 0.848) was observed in number of fecal samples positive for Salmonella between T (152/168, 90.47%) and TM groups (154/168, 91.66%). There were more fecal positive samples until week 5 p.i. An overall decline after week 5 in the number of birds shedding Salmonella in feces was observed in both groups. Overall, persistent Salmonella shedding in feces was observed in both groups throughout the experimental period after oral infection. Salmonella spp. was not isolated from any bird in the control group (data not shown). Enumeration of Salmonella from Feces by MPN Method The viable counts of Salmonella (MPN/g) in feces over the course of the experiment are presented in Figure 1. 
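Before turning to the enumeration results, the two statistical comparisons described in the Statistical Analysis section can be illustrated with scipy as below; the fecal-positive counts are the ones reported for the T and TM groups, while the paired values for the correlation are placeholders standing in for the per-time-point data of Supplementary Table 1.

```python
from scipy.stats import fisher_exact, pearsonr

# Fisher's exact test on Salmonella-positive fecal samples:
# T group 152/168 positive, TM group 154/168 positive (values from the Results).
table = [[152, 168 - 152],
         [154, 168 - 154]]
odds_ratio, p_value = fisher_exact(table)
print(f"Fisher's exact test: OR = {odds_ratio:.2f}, p = {p_value:.3f}")  # close to the reported p = 0.848

# Pearson correlation between mean fecal load (MPN/g) and egg shell
# contamination frequency; the numbers below are placeholders for the
# per-time-point values in Supplementary Table 1.
fecal_mpn = [1.5, 12.0, 63.4, 40.0, 20.0, 8.0, 3.0]
shell_pos_frac = [0.00, 0.05, 0.17, 0.08, 0.03, 0.06, 0.02]
r, p = pearsonr(fecal_mpn, shell_pos_frac)
print(f"Pearson: r^2 = {r**2:.3f}, p = {p:.3f}")
```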
Throughout the experimental period, viable counts of Salmonella were detected in the feces, with mean values ranging from 1.54 to 63.35 MPN/g in the T group and from 0.31 to 98.38 MPN/g in the TM group. The mean Salmonella load peaked at week 5 p.i.; thereafter, a decline in the viable Salmonella load was observed in both groups (Figure 1). Mean Salmonella counts were variable between days p.i. and groups over the course of the experiment. The mean load of Salmonella was significantly higher (p = 0.0001) in the TM group compared to the T group at day 12 p.i. The variables group and days p.i. showed significant effects on the viable Salmonella counts recovered from feces of orally infected birds (p = 0.0004). Analysis of Salmonella from Egg Shell All eggs tested (n = 136) from control hens were negative for Salmonella. The frequency of egg shell contamination after oral infection ranged from 0 to 16.67% in the T group and from 16.67 to 21.11% in the TM group. Overall, the frequency of egg shell contamination was significantly higher (p = 0.001) in the TM group (18.69%, 83/444) than in the T group (5.66%, 24/424) (Table 2). In order to determine the effect of co-infection (TM group) on the recovery rate of S. Typhimurium on the egg shell surface, multiplex PCR that specifically differentiates S. Typhimurium from S. Mbandaka was carried out. Overall, the frequency of recovery of S. Typhimurium from egg shells of the TM group (7.20%, 32/444) did not differ significantly from that of the T group (5.66%, 24/424). PCR results indicated that, overall, 14.1% (63/444) of egg shell samples were positive for S. Mbandaka (Table 2). The correlation between Salmonella shedding in feces (MPN/g) and subsequent egg shell contamination was analyzed using a Pearson correlation test. No correlation was evident between mean fecal Salmonella load and the observed frequency of contaminated eggs laid by orally infected birds of the T and TM groups (p = 0.624, R^2 = 0.002 for the T group; p = 0.177, R^2 = 0.022 for the TM group). Fecal shedding and egg contamination data per bird/egg over time are presented in Supplementary Table 1. In the TM group, Salmonella shedding in feces and eggs was variable in individual birds across the 15 weeks p.i. Comparison between Culture and PCR Based Detection of S. Typhimurium The sensitivity of multiplex PCR for the invA and TSR3 genes to detect S. Typhimurium was determined by spiking fecal samples with various doses of S. Typhimurium or S. Typhimurium + S. Mbandaka. The PCR detection limit was 10^2 CFU/reaction for samples spiked with S. Typhimurium alone and 10^4 CFU/reaction for samples spiked with S. Typhimurium + S. Mbandaka (see Discussion). Analysis of Salmonella from Internal Egg Contents Over the course of the experiment, Salmonella was not isolated from the internal contents of eggs (n = 1004) laid by either control or infected hens. Bacteriological and Histopathological Analysis of Reproductive Organs The recovery rate of Salmonella from reproductive organs is summarized in Table 3. The mean concentration of Salmonella (mean log10 CFU/g) was highest in the vagina (3.54 ± 0.64) of T group birds and in the uterus (3.00 ± 0.45) of TM group birds (Table 3). Colonization of reproductive organs was not frequent: only 0-3 of the 14 hens in each group showed Salmonella in the bacteriological analysis of their reproductive organs, and no histopathological lesions were detected in any case. Detection of S.
Typhimurium in Reproductive Tissues by PCR The reproductive organs from the T and TM groups found positive for Salmonella by culture method were analyzed by multiplex PCR to detect S. Typhimurium. Only one reproductive tissue (uterus) from T group was found positive for S. Typhimurium by multiplex PCR assay (data not shown). All other samples from T group tested negative by PCR for Salmonella spp. DISCUSSION The present experiment was designed to study the long term shedding, egg contamination and colonization of oviduct by S. Typhimurium. It is considered that adult birds are more resistant to salmonellae than young chicks due to the developed gut microflora (Gast, 2008). Continued harboring of the organism and intermittent fecal shedding has also been noted for up to 1 year following infection of day old chicks (Gast, 2008) however, in our study older birds (14 wk) were infected with Salmonella. Previous studies reported low colonization of S. Typhimurium in adult birds (Groves, 2011), however, the results of the current study demonstrate that S. Typhimurium can colonize the gut and shed bacteria up to 15 weeks p.i. In this study, intermittent but prolonged fecal shedding of bacteria was observed in both infected groups. A significant difference between the T and TM group at day 12 p.i. could be due to the intermittent Salmonella shedding. The magnitude of Salmonella shedding was higher up to 5 week p.i. Thereafter, the level of Salmonella in feces dropped but persisted for 15 weeks p.i. The increased Salmonella shedding in feces observed up to 5 week p.i. in this study could be attributed to the stress associated with the onset of lay. In layer birds, the stress occurring as a result of lay could negatively impact their immunity (El-Lethey et al., 2003;Humphrey, 2006) consequently resulting in higher shedding of Salmonella. Higher rate of fecal Salmonella shedding at the early onset of lay has also been reported previously (Gole et al., 2014a). The decrease in Salmonella load in feces after 5 weeks p.i. in both treatment groups could be the result of recovery from laying stress or development of effective humoral response. In addition, previous studies have reported that gastrointestinal microflora of older birds was responsible for protection against food poisoning Salmonella serovars (Barrow et al., 1988;Gast, 2008). Fecal Salmonella counts from this study could not be compared with previous reports because the majority of these studies have examined post-infection fecal shedding of Salmonella for a shorter duration. A field survey investigating the prevalence of Salmonella shedding on commercial layer farms found significant variability in Salmonella prevalence at various stages of lay (Gole et al., 2014a). On farm, shedding of S. Typhimurium from the known positive laying hens can be intermittent and remain undetected for several weeks (Gole et al., 2014c). Such results suggest that Salmonella spp. can remain in the caeca for long periods of time and persistently infected hens could transmit the infection to unexposed and susceptible birds thereby maintaining the Salmonella infection cycle in the flock (Lister and Barrow, 2008). Hence, it is essential to frequently monitor the Salmonella free status of the birds used for the infection trials. No correlation between fecal Salmonella counts and the recovery of bacteria from egg shell surface in experimentally infected birds was observed in this study. 
A recent longitudinal survey on two commercial layer farms found a significant relationship between Salmonella fecal contamination and egg shells testing positive for Salmonella (odds ratio 91.8; p < 0.001) (Gole et al., 2014c). In contrast, egg shells were found negative for S. Typhimurium in experimental infections although the bacterium was excreted in the feces (Baker et al., 1980; Okamura et al., 2001a,b). In the present study, although egg shell contamination failed to relate positively with fecal shedding of Salmonella, fecal carriage of Salmonella was observed throughout the experimental period. The egg shell surface contamination observed in this study stresses the importance of proper egg handling and hygienic practices in food preparation and processing premises to avoid cross contamination of other food products. The multiplex PCR was validated to detect S. Typhimurium positive samples in the T and TM groups. In experimentally spiked fecal samples, the multiplex PCR demonstrated good sensitivity and was able to detect 10^2 CFU/reaction of S. Typhimurium. On the other hand, the PCR assay was able to detect 10^4 CFU/reaction of S. Typhimurium and S. Mbandaka in the fecal samples spiked with S. Typhimurium + S. Mbandaka. The poorer detection limit observed in the feces experimentally spiked with S. Typhimurium + S. Mbandaka may under-represent the positive samples detected in the TM group using the PCR assay. The lower PCR sensitivity compared with the standard culture method for detecting S. Typhimurium in fecal samples is similar to previous studies and could be attributed to the gradual reduction of Salmonella in feces, the presence of PCR inhibitors, and other abundant microflora DNA interfering with the PCR assays (Wilson, 1997; Gole et al., 2014a,c). This study examined egg shell contamination following oral infection with Salmonella for a prolonged period (15 weeks p.i.) compared to previous short term experimental infection studies (up to 3 weeks), and our results demonstrated that egg shell contamination by Salmonella occurred at longer p.i. intervals. Egg shell contamination following oral infection with S. Typhimurium observed in this study has also been reported previously (Cox et al., 1973). In the current study, the overall rate of egg shell contamination by Salmonella was significantly higher in the co-infected group (TM group) compared to the T group. However, the effect of co-infection on egg shell contamination analyzed by PCR demonstrated no significant difference in the number of S. Typhimurium positive egg shells between the T (S. Typhimurium) and TM groups (S. Typhimurium + S. Mbandaka). There is little literature indicating the effect of mixed Salmonella infection on egg contamination after oral infection. The high experimental infection doses used in our study do not mimic field situations and had non-significant effects on the recovery rate of S. Typhimurium from the egg shell in the co-infected group. To compare these results with the field scenario, further experiments using different routes and doses of multiple Salmonella serotypes are needed. In the present study, the internal contents of eggs laid by birds infected with S. Typhimurium alone or in combination with S. Mbandaka were negative for Salmonella up to week 15 p.i. The results of this study are also in agreement with field surveys in Australia (Daughtry et al., 2005; Gole et al., 2013, 2014c) and previous reports in which oral or crop infection with S.
Typhimurium was not associated with the contamination of egg contents (Cox et al., 1973;Baker et al., 1980;Keller et al., 1997;Okamura et al., 2010). On the other hand, contamination of egg internal contents with S. Typhimurium has been documented after experimental infection of hens at the onset of lay via oral and aerosol routes (Williams et al., 1998;Leach et al., 1999;Okamura et al., 2010). Altogether, the possibility of egg content contamination with S. Typhimurium seems to be a rare event. However, in those studies where experimental infection has caused internal contamination, sexual maturity, or the onset of lay was found to be an important factor for internal egg contamination. It is well-known that colonization of reproductive organs with S. Enteritidis results in the deposition of bacteria within the egg contents of developing eggs in experimentally infected laying hens (Thiagarajan et al., 1994;Keller et al., 1995). However, the frequency of S. Typhimurium isolation from reproductive organs and corresponding frequency of internal egg content contamination is unclear. The present study determined that colonization of reproductive organs of S. Typhimurium infected (T group) hens and coinfected (TM group) hens varied after oral infection. The magnitude of S. Typhimurium recovery from each section of oviduct except for uterus was higher in the T group than TM group where Salmonella was localized to certain parts of the oviduct. To assess the effect of mixed infection, reproductive tissues from T and TM groups found positive for Salmonella by culture method were also analyzed by multiplex PCR to detect S. Typhimurium. In spite of positive culture results, S. Typhimurium was recovered from only one reproductive tissue (uterus) by multiplex PCR assay. This finding suggests that culture methods are more sensitive than multiplex PCR in detecting S. Typhimurium. The lack of additional stand-alone S. Mbandaka group and sacrifice of birds at regular intervals are some of the limitations of this study. However, it is interesting to note that despite the low Salmonella colonization in the oviduct of hens from TM group, frequency of egg shell contamination was significantly higher in the TM group (particularly for S. Mbandaka) as compared to the T group. The results of prolonged Salmonella fecal shedding observed in this study indicated that colonization was present somewhere within the animal after several weeks p.i. However, though the persistence of Salmonella in the reproductive tissues of very few infected birds was evident after a long p.i. interval, the internal egg contents were negative throughout the experimental period in both T and TM groups. Moreover, this study demonstrates that the mere presence of S. Typhimurium in reproductive tissues would not give rise to the production of internally contaminated eggs. The observations of the present study also support the previous findings which concluded that S. Typhimurium has the potential to colonize both the reproductive organs and developing eggs prior to oviposition but cannot be recovered from internal egg contents after oviposition (Keller et al., 1997;Okamura et al., 2001a;Gantois et al., 2008). Overall, the results of the present and previous studies demonstrate that S. Typhimurium was found to colonize the reproductive organs of laying hens. However, why S. Typhimurium is not associated with contamination of laid eggs is still unclear. S. 
Typhimurium is able to penetrate and survive in the egg albumin and the yolk at 20 or 25 • C (Gantois et al., 2008;Gole et al., 2014b). In addition, the S. Typhimurium genome possesses virulence associated genes involved in cellular adhesion, invasion and survival of S. Typhimurium . Therefore, it could be possible that S. Typhimurium is unable to survive and proliferate in egg contents during egg formation at host body temperature (42 • C) or there could be down regulation of genes critical to colonization of S. Typhimurium. This could partly explain why S. Typhimurium despite their colonization in reproductive organs was never isolated from egg contents in this study. Salmonella pathogenicity islands (SPIs) are the gene clusters that encode virulence factors present in Salmonella genome (Foley et al., 2013). It has been observed that SPI-1 and SPI-2 contribute to the colonization of caecum, liver, and spleen in chickens (Dieye et al., 2009;Rychlik et al., 2009). A recent study also demonstrated that poultry body temperature may regulate systemic colonization (Troxell et al., 2015). However, the role of these pathogenicity islands in reproductive organ colonization in laying hens is less understood and needs further research. In addition, the possible role of several factors such as immunoglobulins, iron sequestering, and proteins inhibiting bacterial protease and antibacterial enzymes present in the egg yolk and albumin have been identified to inhibit the growth of Salmonella before shell formation is complete and eggs are laid (Keller et al., 1995;Gantois et al., 2009;Bedrani et al., 2013). In order to determine the course of reproductive organ invasion after oral Salmonella infection, histopathology of reproductive tissues was carried out in this experiment. The regions of reproductive tract which were found positive after cultural analysis did not show lesions suggestive of bacterial infection. As there is lack of published information on histopathological alterations in oviduct tissue after prolonged infection interval, these findings could not be compared to previous studies. In addition, examination of infected birds at periodic intervals was not a part of this study but may have identified a time window for establishment of oviduct lesions as a result of bacterial infection. The possible explanation for the absence of inflammatory lesions after a long p.i. interval in response to oral Salmonella infection in this study could be related to either the low level of tissue colonization or development of strong immune response to clear the infection. Further, research examining the localization of Salmonella at different time intervals, cellular involvement and why Salmonella clearance from reproductive tissues does not take place is warranted. In summary, intermittent but persistent fecal shedding of Salmonella after oral infection was observed up to 15 weeks p.i. Further, egg shell contamination together with lack of internal egg contents contamination and the low frequency of reproductive organ infection suggested that horizontal infection through contaminated feces is the main route of egg contamination with S. Typhimurium during lay. Previously, it has been hypothesized that effective and more immune response generated by S. Typhimurium compared to S. Enteritidis is likely to limit the disease progression and quickly clears the S. Typhimurium infection from birds (Wales and Davies, 2011). 
The egg shell contamination observed in this study also stresses the importance of proper egg handling and hygienic practices in food preparation and processing premises to avoid cross contamination of other food products. Considering the productive life span of commercial laying hens (75-80 weeks), further studies are required to examine the shedding of S. Typhimurium beyond 15 weeks p.i.
2016-06-17T07:23:13.718Z
2016-02-25T00:00:00.000
{ "year": 2016, "sha1": "5f80451873205b96fba6776fce15d9ae5004810c", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fmicb.2016.00203/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5f80451873205b96fba6776fce15d9ae5004810c", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
119650205
pes2o/s2orc
v3-fos-license
SUSY Unparticle and Conformal Sequestering We investigate unparticle physics with supersymmetry (SUSY). The SUSY breaking effects due to the gravity mediation induce soft masses for the SUSY unparticles and hence break the conformal invariance. Unparticle physics observable in near-future experiments is only consistent if the SUSY breaking effects from the hidden sector to the standard model sector are dominated by the gauge mediation, or if the SUSY breaking effects on the unparticle sector are sufficiently sequestered. We argue that the natural realization of the latter possibility is the conformal sequestering scenario. Introduction Recently, there has been much interest in unparticle theories [1] (see [2,3,4,5] for further works on the subject), where the unparticle sector shows conformal invariance below a certain high energy scale and couples to our standard model sector through higher dimensional interactions. The nontrivial conformally invariant unparticle sector leads to a novel type of observable effects in the standard model sector, which may be accessible in near-future experiments at the TeV scale. On the other hand, since one of the most appealing candidates for new physics at the TeV scale is supersymmetry (SUSY), it is very natural to consider the supersymmetric extension of the unparticle sector. The first aim of this paper is to investigate the supersymmetric extension of the unparticle sector based on superconformal field theory. Technically, superconformal field theory in four dimensions [6] is powerful enough to yield some crucial dynamical information about the unparticle physics. For example, the relation between the R-charge and the conformal dimension determines the conformal dimensions of the chiral operators beyond perturbation theory. We also have stronger inequalities for conformal dimensions that are not available in non-supersymmetric theories. In this sense, the introduction of SUSY into the unparticle sector is theoretically appealing. Physically, however, SUSY, in particular the SUSY breaking effect, introduces an extra constraint on the unparticle physics. The original studies on unparticle physics [1] assume that the unparticle sector remains conformal at least down to the weak scale, at which experimental evidence for the unparticle physics is expected. On the other hand, as we will discuss in the main part of this paper, the gravity mediation of the SUSY breaking necessarily gives rise to soft masses for the unparticle sector at the gravitino mass scale. If the gravitino mass is as large as the weak scale, the supersymmetric unparticle physics cannot be realized at the weak scale. We propose two ways to resolve this tension between the SUSY breaking and the unbroken conformal invariance in the unparticle sector. One possibility is to use the gauge mediation [7], where the gravitino mass scale can be low enough to avoid the problem. The other possibility is to tune the Kahler potential so that the gravity mediation does not occur. The latter "tuning" can be naturally realized through the conformal dynamics by using the conformal sequestering mechanism [8,9] (see also [10,11,12,13,14]) between the SUSY breaking hidden sector and the unparticle sector. The organization of the paper is as follows. In section 2, we present a concrete example of the SUSY unparticle sector based on the SQCD in the conformal window. In section 3, we discuss the conformal symmetry breaking in the unparticle sector.
In section 4, we investigate the conformal symmetry breaking induced by the SUSY breaking mediation from the hidden sector. We propose two ways to avoid the high energy breaking of the conformal symmetry, by using the gauge mediation or the conformal sequestering. In section 5, we summarize the paper and give some concluding remarks. SQCD as SUSY unparticle sector Our set up is based on three sectors: the first sector is the SUSY standard model sector, the second sector is the SUSY breaking hidden sector, and the last sector is the SUSY unparticle sector. They interact weakly with each other through higher dimensional operators. In this section, we discuss some important properties of the SUSY unparticle sector by presenting a concrete example. One of the simplest examples of the SUSY unparticle sector is given by the SU(N_c) SQCD with N_f flavors [15,3], which is a natural SUSY extension of the non-SUSY Banks-Zaks model [16]. We take (3/2)N_c ≤ N_f ≤ 3N_c so that the unparticle SQCD is in the conformal window. We denote the chiral superfields for the N_f flavors (and their scalar components) by Q_i and Q̃_j (i, j = 1 . . . N_f); Q_i transforms in the fundamental representation of SU(N_c) and Q̃_j in the anti-fundamental representation. In the unparticle SQCD in the conformal window, we have three possible scalar operators with ultraviolet (UV) dimension two (see footnote 2). In unparticle physics, these operators will play the dominant roles because they are the lowest dimensional gauge invariant scalar operators in the unparticle sector. The first one is given by the mesonic (anti-)chiral operators M_ij = Q_i Q̃_j (and their conjugates); their conformal dimensions are fixed by the R-symmetry. The second one appears as a scalar component of the conserved current supermultiplets J_ij, where we only take the SU(N_f) part. Since they are part of a conserved current supermultiplet, they are not renormalized; as a consequence, the dimensions of these operators stay at their canonical value of two. Similarly, the scalar part of the baryon current supermultiplet J_B is not renormalized either. The final one is given by a scalar component of the Konishi supermultiplet J_A, which is not conserved due to the anomaly: it has dimension d_{J_A} = 2 + γ_A, where γ_A > 0 is the anomalous dimension of the Konishi supermultiplet (see footnote 3). This anomalous dimension is related to the slope of the beta function at the fixed point as γ_A = β′(α_*) (see [18]; the same formula is re-derived in the context of the conformal sequestering in [9,11]); in the lowest Banks-Zaks approximation it is perturbatively small. (Footnote 2: ...seems overlooked in some unparticle literatures. For a hypothetical scale invariant field theory that is not conformally invariant, this bound might be violated, and observation of such a violation would be of great interest theoretically.) (Footnote 3: The inequality γ_A > 0 is guaranteed by the superconformal algebra.) Even in the strongly coupled regime, we know that the dimensions of J_ij, J_B and J_A are bounded from below by two as a consequence of the superconformal symmetry, so the most relevant operator for unparticle physics is given by the (anti-)chiral mesonic operators M_ij. It is interesting to note that the unparticle SQCD in the conformal window has a Seiberg-dual description, where the UV dimension of M_ij is given by 1 instead of 2. Since the unparticle physics does depend on the UV dimension of the operator, we have a chance to distinguish the electric and magnetic UV completions of the low energy effective field theory.
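For reference, the dimension assignments invoked above can be summarized compactly. These are the standard superconformal relations for SQCD in the conformal window (d = (3/2)R for chiral operators and non-renormalization of conserved currents), quoted here as textbook results rather than taken from the displayed equations of the paper.

```latex
% Standard dimension assignments for SU(N_c) SQCD with N_f flavours in the
% conformal window, (3/2)N_c <= N_f <= 3N_c. Chiral operators obey d = (3/2)R.
\begin{align}
  R(Q) &= R(\tilde Q) = \frac{N_f - N_c}{N_f}, &
  d_{M_{ij}} &= \tfrac{3}{2}\,R(Q_i\tilde Q_j) = \frac{3\,(N_f - N_c)}{N_f},\\
  d_{J_{ij}} &= d_{J_B} = 2 \ \ \text{(conserved currents)}, &
  d_{J_A} &= 2 + \gamma_A,\qquad \gamma_A = \beta'(\alpha_*) > 0 .
\end{align}
```

In particular, d_M runs from 1 at the lower edge N_f = (3/2)N_c up to 2 at N_f = 3N_c, consistent with M_ij being the most relevant operator in this sector.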
For later purposes, we discuss some properties of the SQCD with soft SUSY breaking terms. As we will see, the SUSY breaking in the hidden sector induces soft parameters in the unparticle SQCD sector as well. We typically have two options. The first option is that the gaugino and the squarks in the unparticle SQCD have comparable soft masses (see footnote 4). In this case, the low energy effective field theory below the mass scale of the unparticle sector is given by the non-SUSY SU(N_c) QCD with N_f fundamental quarks. The unparticle QCD is supposedly in the confining phase (see footnote 5), and we do not have conformal invariance in the low energy physics. Consequently, we conclude that the unparticle sector shows usual particle physics below the soft mass scale. The second option is that the unparticle gaugino is much lighter than the unparticle squarks. Then, below the mass scale of the unparticle squarks, we have SU(N_c) QCD with one adjoint Majorana fermion and N_f fundamental quarks. In this case, there is a narrow chance of having a conformally invariant theory (above the unparticle gaugino mass scale) when we take N_f very close to 3N_c. However, since the lower bound of the conformal window for such a QCD is a delicate issue, we will not pursue this possibility further in this paper. (Footnote 4: We assume that the soft mass squared for the unparticle squarks is positive.) (Footnote 5: The conformal fixed point would be obtained for 4N_c ≲ N_f ≤ (11/2)N_c. Recent estimations of the lower bound can be found e.g. in [19].) In summary, with the SUSY breaking, the conformal invariance of the unparticle sector is broken at the soft mass scale of the unparticle squarks and gauginos. Below this mass scale, the special properties of unparticle physics are lost and will not be observed experimentally, as we will discuss in the following. Unparticle physics and conformal breaking scale We assume that the unparticle sector and our standard model sector interact through a higher dimensional effective operator that couples a standard model operator O_SM to an ultraviolet operator of the unparticle sector, suppressed by powers of the scale M_U. Below the scale Λ_U, where the unparticle sector becomes (approximately) conformally invariant, this interaction can be replaced by a coupling of O_SM to the unparticle operator O_U with a coefficient c′. In general, the unparticle sector cannot remain conformal below the scale of the weak interaction without any further assumption or fine-tuning. This is because we can take O_SM as the Higgs mass operator |H|^2 [3,4]. Then the conformal invariance of the unparticle sector is spontaneously broken by the vacuum expectation value of the Higgs field: v^2 = ⟨|H|^2⟩. The resulting conformal breaking scale Λ_/U was estimated in [3]; unparticle physics can then be observed only in the energy range Λ_/U < E < Λ_U. In order to obtain a reasonable hierarchy between Λ_/U and Λ_U, we have to assume that either c′ or v/Λ_U is small (for d_UV = 2). The latter possibility suggests that the unparticle scale M_U, which is larger than Λ_U by theoretical consistency, is too high to yield any experimentally accessible unparticle effects. Therefore, we have to assume that c′ is sufficiently small in order to have experimentally observable unparticle physics. Such a possibility can be (technically) naturally realized by using the (unbroken) SUSY (see also [5]). Still, we have to face the breaking of the conformal symmetry due to the SUSY breaking and its mediation to the unparticle sector, which will be the main scope of the next section.
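The explicit form of the conformal breaking scale is not reproduced above, but its scaling can be reconstructed by dimensional analysis. Assuming the Higgs coupling below Λ_U takes the form (c′ Λ_U^{d_UV−d_U}/M_U^{d_UV−2}) |H|² O_U, the tadpole generated by ⟨|H|²⟩ = v² breaks the conformal invariance at roughly the scale sketched below; this is a hedged reconstruction consistent with the statement that a hierarchy requires small c′ or small v/Λ_U, not a quotation of the original equation.

```latex
% Dimensional-analysis estimate of the Higgs-induced conformal breaking scale.
% d_UV and d_U are the UV and IR dimensions of the unparticle operator; the
% assumed form of the coupling is stated in the text above this block.
\begin{equation}
  \Lambda_{/U} \;\sim\;
  \left[ c'\, v^{2}\, \frac{\Lambda_U^{\,d_{UV}-d_U}}{M_U^{\,d_{UV}-2}} \right]^{\frac{1}{4-d_U}}
  \;\;\xrightarrow{\;d_{UV}=2\;}\;\;
  \Lambda_U \left[ c' \left(\frac{v}{\Lambda_U}\right)^{2} \right]^{\frac{1}{4-d_U}} .
\end{equation}
```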
Conformal breaking from SUSY breaking From the discussion in section 2, if one realizes the SUSY unparticle sector by the SQCD in the conformal window, the soft mass scale m_0 for the unparticle SQCD sets another conformal breaking scale Λ_/U ∼ m_0 in addition to the Higgs mass scale. In order for the unparticle physics to be accessible in near-future experiments, the condition Λ_/U ≪ 1 TeV should be satisfied. In this section, we investigate the effects of SUSY breaking mediated from the SUSY breaking hidden sector. First of all, we have to consider the (almost) model-independent universal effects from the anomaly mediation [20]. There are none, however: since the unparticle sector is conformally invariant, there is no conformal anomaly, and hence there are no soft parameters induced by the anomaly mediation. In the following, we discuss the model-dependent contributions to the soft masses of the unparticle sector one by one. Let us begin with the case where the hidden sector SUSY breaking effects are mediated to the standard model sector by the gravity mediation. In this scenario, the gravitino mass m_3/2 and the standard model sfermion masses m_0 are of the same order and are given by m_3/2 ∼ m_0 ∼ F/M_pl, where F denotes the SUSY breaking F-term of the hidden sector and M_pl denotes the Planck scale. When the SUSY breaking mediation to the unparticle sector is also given by the gravity mediation, the soft masses for the unparticle SQCD are also of order F/M_pl. Given that the standard model sfermion masses are above the weak interaction scale, this scenario leads to no near-future-observable unparticle physics. Alternatively, when the SUSY breaking mediation to the unparticle sector is given by the gauge mediation, where the "messenger" is charged under the unparticle SQCD, the soft mass for the unparticle SQCD is much higher than the gravitino mass. Therefore, this scenario also does not lead to observable unparticle physics. Let us move on to the case where the hidden sector SUSY breaking effects are mediated to the standard model sector by the gauge mediation. In this scenario, the gravitino mass is much smaller than the standard model sfermion/gaugino masses. Furthermore, when the SUSY breaking mediation to the unparticle sector is given by the gravity mediation, the unparticle SQCD acquires soft masses of the order of the gravitino mass. Then a light unparticle soft mass is available. Alternatively, when the SUSY breaking mediation to the unparticle sector is also given by the gauge mediation, the unparticle sector may have a soft mass comparable to those of the standard model SUSY particles. (Footnote 7: If we assume that the mass scales of the standard model-hidden sector messengers and those of the unparticle-hidden sector messengers are of the same order, the unparticle soft mass tends to be larger than the standard model soft masses due to the large loop factor near the strongly coupled conformal fixed point. However, there is no fundamental reason to take the same masses for messengers living in different sectors, and it is possible to obtain a small soft mass for the unparticle sector.) Finally, let us consider the case where the hidden sector SUSY breaking effects are mediated to the standard model sector by the anomaly mediation. In this scenario, the gravitino mass is heavier than the standard model sfermion/gaugino masses, as m_1/2 ∼ (β_g/g) m_3/2, where g is the coupling constant of the standard model gauge interaction and β_g is its beta function. When the SUSY breaking mediation to the unparticle sector is given by the gravity mediation, the unparticle sector obtains soft masses of the order of the gravitino mass. Then unparticle physics near the weak scale is unavailable. The situation with the gauge mediation is even worse, and there would be no observable unparticle physics in near-future experiments.
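The parametric estimates used in this comparison can be collected as follows; these are standard expressions, with the messenger scale M_mess and the loop factor introduced here as generic placeholders rather than quantities defined in the text.

```latex
% Parametric soft-mass scalings for the three mediation scenarios compared in
% the text. M_mess is a generic messenger scale (a placeholder assumption);
% g and beta_g are a standard-model gauge coupling and its beta function.
\begin{align}
  \text{gravity mediation:}\quad  & m_0 \sim m_{3/2} \sim \frac{F}{M_{\rm pl}}, \\
  \text{gauge mediation:}\quad    & m_{\rm soft} \sim \frac{g^{2}}{16\pi^{2}}\,\frac{F}{M_{\rm mess}}
      \;\gg\; m_{3/2} \sim \frac{F}{\sqrt{3}\,M_{\rm pl}}, \\
  \text{anomaly mediation:}\quad  & m_{1/2} \sim \frac{\beta_g}{g}\, m_{3/2} \;\ll\; m_{3/2}.
\end{align}
```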
In summary, without any fine-tuning of the Kahler potential, the only SUSY breaking scenario compatible with the unparticle physics is the gauge mediation between the SUSY breaking hidden sector and the standard model sector. The effect of the SUSY breaking is mediated to the unparticle sector by the gravity mediation. So far, we have assumed that the Kahler potential is generic (except for the anomaly mediation case, where the direct coupling between the hidden sector and the standard model sector should be sequestered; see footnote 8), but since the unparticle sector is in the conformal window, it is natural to study the possibility of the conformal sequestering in the gravity (or anomaly) mediation scenario. (Footnote 8: We stress, however, that we will focus on the sequestering between the hidden SUSY breaking sector and the unparticle sector. Conceptually, the sequestering between the hidden SUSY breaking sector and the standard model sector is an independent issue.) For the sequestering of the coupling between the hidden sector and the unparticle sector to succeed, we should demand that the Kahler potential in the Einstein frame is in the sequestered form, i.e. without direct cross couplings between the hidden sector and the visible fields, where Φ represents the standard model chiral superfield and S represents the SUSY breaking hidden sector chiral superfield. (Footnote 9: The Einstein frame Kahler potential and the conformal frame Kahler potential are related by K_E = −3 log(1 − K_c/3).) In the conformal sequestering scenario, we first assume that the global symmetry forbids the couplings between the conserved currents and the hidden sector superfields, such as S†S J_ij, S†S J̃_ij and S†S J_B, in the (conformal frame) Kahler potential, which can be achieved by demanding such symmetries or their discrete subgroups. This is necessary because the conserved current supermultiplets are not renormalized and the conformal sequestering does not apply to such interactions. We furthermore demand that interactions such as S†S M_ij be forbidden, e.g. by requiring a (discrete) R-symmetry. This is because the dimension of M_ij is less than 2, so the conformal dynamics would rather enhance the coupling. With these symmetry assumptions, we can see that the remaining interaction S†S J_A, which cannot be forbidden by the symmetry argument, is indeed conformally sequestered in our unparticle SQCD model. This is due to the anomalous dimension of the Konishi supermultiplet. More precisely, the coefficient c of this operator in the conformal frame Kahler potential at the conformal scale Λ_U is reduced to c(Λ) ≃ c(Λ_U)(Λ/Λ_U)^γ_A at the scale Λ. In this way, for large enough γ_A, by taking N_f ∼ 2N_c in our example, we could obtain a reasonably low energy conformal breaking in the gravity mediation scenario while keeping the unparticle energy scale M_U low enough for the unparticle physics to be observed in near-future experiments.
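As a quick numerical illustration of this suppression, the sketch below evaluates the sequestered soft mass for a few values of γ_A; the gravitino mass and the running interval are assumptions chosen for illustration only, not values taken from the paper.

```python
# Illustrative estimate of conformal sequestering of gravity-mediated soft
# masses. All input numbers are assumptions for illustration only.

def sequestered_soft_mass(m_gravitino_gev, lam_uv_gev, lam_ir_gev, gamma_a):
    """Soft mass after running the Kahler coupling c from lam_uv down to lam_ir.

    The coefficient of the dangerous operator scales as
    c(lam_ir) ~ c(lam_uv) * (lam_ir / lam_uv) ** gamma_a,
    and the induced soft mass squared is proportional to c.
    """
    suppression = (lam_ir_gev / lam_uv_gev) ** gamma_a
    return m_gravitino_gev * suppression ** 0.5  # m_soft^2 proportional to c

if __name__ == "__main__":
    m32 = 1.0e3        # gravitino mass ~ 1 TeV (assumed)
    lam_uv = 1.0e16    # scale where the unparticle sector becomes conformal (assumed)
    lam_ir = 1.0e4     # scale down to which it runs, ~10 TeV (assumed)
    for gamma_a in (0.1, 0.5, 1.0, 2.0):
        m_soft = sequestered_soft_mass(m32, lam_uv, lam_ir, gamma_a)
        print(f"gamma_A = {gamma_a:4.1f}:  unparticle soft mass ~ {m_soft:.3g} GeV")
```

With an O(1) anomalous dimension, as expected near a strongly coupled fixed point with N_f ∼ 2N_c, the induced soft mass in this toy estimate falls far below the weak scale.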
In the anomaly mediation scenario, the situation is even harder because we have to compensate the loop factor as well: we have to take large Λ_U and hence large M_U to obtain sufficient conformal sequestering, and the unparticle physics then tends to be inaccessible in future experiments. Conclusion In this paper, we have investigated the SUSY unparticle physics. The SUSY breaking effects induce soft masses in the SUSY unparticle sector, and these masses should be small enough for the unparticle physics to be observed in near-future experiments. The gravity mediation, however, typically gives rise to soft masses of the order of the gravitino mass. We have proposed two solutions to this problem. One is to use the gauge mediation, which yields a low-scale gravitino mass. The other is to use the conformal sequestering to suppress the effect of the gravity mediation. In this sense, the observation of the unparticle physics together with the discovery of SUSY will highly restrict the possible mediation scenarios of the SUSY breaking, which could be regarded as a probe of the high energy physics beyond the TeV scale. It would be very interesting to pursue this direction to invent a novel probe for the SUSY breaking pattern. It would also be intriguing to realize the SUSY unparticle sector in superstring theory. These issues are left for future studies.
2007-07-18T13:08:56.000Z
2007-07-17T00:00:00.000
{ "year": 2007, "sha1": "1b7d4bce8d144289ed87b3124607f19e8243d49d", "oa_license": null, "oa_url": "http://arxiv.org/pdf/0707.2451", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "1b7d4bce8d144289ed87b3124607f19e8243d49d", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
252888926
pes2o/s2orc
v3-fos-license
Label-free imaging and biomarker analysis of exosomes with plasmonic scattering microscopy Exosome analysis is a promising tool for clinical and biological research applications. However, detection and biomarker quantification of exosomes is technically challenging because they are small and highly heterogeneous. Here, we report an optical approach for imaging exosomes and quantifying their protein markers without labels using plasmonic scattering microscopy (PSM). PSM can provide improved spatial resolution and distortion-free image compared to conventional surface plasmon resonance (SPR) microscopy, with the signal-to-noise ratio similar to objective coupled surface plasmon resonance (SPR) microscopy, and millimeter-scale field of view as a prism-coupled SPR system, thus allowing exosome size distribution analysis with high throughput. In addition, PSM retains the high specificity and surface sensitivity of the SPR sensors and thus allows selection of exosomes from extracellular vesicles with antibody-modified sensor surfaces and in situ analyzing binding kinetics between antibody and the surface protein biomarkers on the captured exosomes. Finally, the PSM can be easily constructed on a popular prism-coupled SPR system with commercially available components. Thus, it may provide an economical and powerful tool for clinical exosome analysis and exploration of fundamental issues such as exosome biomarker binding properties. a, Bright field, PSM and SPR images of the A431 cells in the prism coupled PSM system. Because the laser is used in this system to achieve high incident intensity for PSM, the interference effect covers the SPR images, making it only suitable for ensemble SPR measurement. b, SPR images of the A431 cells in another optimized prism coupled SPR imaging system. It can be seen that the PSM provides a high spatial resolution to image the cell adhesion sites, which has been revealed by total internal reflection fluorescence microscopy (Journal of cell science 2010, 123(21), 3621-3628). These results show that the PSM can provide higher spatial resolution than traditional SPR imaging systems. Fig. S3 a, Experimental SPR curve and PSM scattering curve. The intensity is achieved by averaging the intensities of all pixels in the raw image. b, PSM intensity response during changing the PBS buffer to 80% PBS buffer in water, where the PSM intensity variation is ~5.18 grayscales. The standard deviation σ of PSM intensity measuring PBS buffer is ~0.063 grayscale. The refractive index variations between PBS buffer and 80% PBS buffer can be estimated to be (46 mDeg)/(130 Deg/RIU) ~ 3.54 × 10 -4 RIU, where 46 mDeg is the ensemble SPR intensity difference between 100% and 80% PBS buffer (Nat Methods 2020, 17, 1010-1017, 130 Deg/RIU is the ensemble SPR sensitivity factor, and RIU represents refractive index unit. Then, the sensitivity factor (SF) of PSM channel can be estimated by (5.18 grayscales)/( 3.54 × 10 -4 RIU) ~ 1.46 × 10 4 grayscales/RIU. Finally, the refractive index resolution of PSM for ensemble measurements can be determined to be σ /SF = (0.063 grayscale)/( 1.46 × 10 4 grayscales/RIU) ~ 4.3 × 10 -6 RIU, which is comparable to most ensemble SPR sensors (Chemical Reviews 2008, 108 (2), 462-493). c, SPR intensity response during changing the PBS buffer to 80% PBS buffer in water, where the PSM intensity variation is ~6.02 grayscales. The standard deviation σ of SPR intensity measuring PBS buffer is ~0.11 grayscale. Using the same protocol as Fig. 
S3b, the sensitivity factor SF of the SPR channel can be estimated to be ~1.70 × 10^4 grayscales/RIU, and the refractive index resolution of the SPR channel can be estimated to be ~6.4 × 10^-6 RIU, which is comparable to most ensemble SPR sensors (Chemical Reviews 2008, 108 (2), 462-493). Fig. S4 Size distributions of extracellular vesicles from different cells measured by a nanoparticle tracking analysis instrument (NanoSight NS300, Malvern Panalytical, Malvern, UK). The solid lines are Gaussian fittings. The EV sample was diluted before measurement. The dilution factor for achieving ~50 vesicles in one frame and the mean diameter of the vesicles are marked in the figures. Fig. S5 Ensemble PSM and SPR measurement of 5 × 10^10 mL^-1 HeLa EVs binding to the goat anti-mouse IgG antibody. Compared with Fig. 2, we can see that the nonspecific binding is very weak. Fig. S6 Ensemble PSM and SPR measurement of 5 × 10^7 mL^-1 HeLa EVs binding to the anti-CD63 antibody. The curve is hard to fit because no obvious dissociation was observed within the measurement period. Fig. S7 Ensemble measurement of 5 × 10^10 mL^-1 HeLa EVs binding to the low-density anti-CD63 antibody. The low-density anti-CD63 sensor surface was prepared by incubating the gold surface with a solution mixing 20 nM BSA with 20 nM anti-CD63 antibodies. Note S1. Effective diameter correction The surface plasmon field decreases exponentially from the surface (z-direction) into the solution. In other words, the scattering of the evanescent field by a finite-size object depends on the distance (z) from the surface. The effective scattering diameter D_eff and volume V_eff of the analyte are therefore obtained by weighting the analyte volume with the exponentially decaying evanescent field, where z is the distance from the gold surface, R is the radius of the analyte, D is the diameter of the analyte, and l = 100 nm is the decay length of the evanescent field (Fig. S4). Note S2. Signal-to-noise ratio analysis To estimate the theoretical signal-to-noise ratio (SNR) limit for the PSM system, the total Rayleigh scattering intensity I_total of one small object can be estimated from the Rayleigh scattering formula, which is proportional to P t d^6 λ^-4 [(n_s^2 - n_m^2)/(n_s^2 + 2n_m^2)]^2, where n_s and n_m are the refractive indices of the analyte and the medium, λ is the incident wavelength, d is the analyte diameter, P is the incident light intensity, and t is the averaging period. For the polystyrene nanoparticles measured by PSM here, P = 4 W cm^-2, n_s = 1.58, n_m = 1.33, λ = 660 nm, and t = 0.1 s. Considering the single photon energy of ~1.2398/(0.66 µm) eV and the 30x intensity enhancement of the surface plasmon field, the total scattering intensity of one object in the PSM system can be expressed (for the parameters above) as I_total ≈ 2 × 10^-5 × (d_eff/nm)^6 photons. (3) The objective collects the scattering photons perpendicular to the propagation direction of the surface plasmon wave, and the collection efficiency can be calculated by integrating over the collection solid angle in spherical coordinates, where θ and φ are the polar angle and azimuthal angle, respectively. The objective collection angle for the PSM can be calculated as θ_max = arcsin(NA/n_m), where NA is the objective numerical aperture, and n_m = 1.33 is used to correct for the effect of water refraction on the scattering light collection. For the objective with NA of 0.28, the collection efficiency is calculated to be ~1.1%. For one polystyrene nanoparticle with an effective diameter of 80.7 nm, the objective can collect ~60766 scattering photons, which can be converted to 32814 electrons after considering the camera quantum efficiency of 54% at the incident wavelength of 660 nm. The camera sensitivity is ~2.4 electrons/grayscale.
Thus, one polystyrene nanoparticle with a real diameter of 93.7 nm and an effective diameter of 80.7 nm can produce a total intensity of ~13672 grayscales in the image, agreeing with the experimental result of 13040 grayscales. The standard deviation of the intensities of images recorded in the absence of nanoparticles is ~90 grayscales. Thus, the SNR of our system measuring a 93.7 nm polystyrene nanoparticle is determined to be ~145. Note S3. Refractive index correction The extracellular vesicles, including the exosomes, have a refractive index of 1.39 (Journal of Extracellular Vesicles 2014, 3, 25361), and polystyrene nanoparticles have a refractive index of 1.58 (refractiveindex.info). Based on the equation S2, the extracellular vesicles will have ~16 times smaller intensity than polystyrene nanoparticles of the same diameter. Note S4. Exosome concentration estimation On the basis of Fick's law of diffusion, one expects the binding frequency of analytes to decrease with time, as described by the classical Cottrell equation f(t) = A C (D/(π t))^(1/2), where f(t) is the binding frequency as a function of time, A is the area of the observation region, and D and C are the diffusion coefficient and the concentration of the nanoparticles, respectively (Anal. Chem. 2016, 88, 2380-2385). After integration, one can estimate the number of total binding events F(t) to be F(t) = 2 A C D^0.5 t^0.5 / π^0.5. (7) The analyte diffusion coefficient can be estimated with the Stokes-Einstein equation D = k_B T / (6 π η r), where k_B is the Boltzmann constant, T is the temperature, η is the viscosity of the liquid, and r is the hydrodynamic radius. For exosomes with a diameter of ~100 nm, the diffusion coefficient can be estimated to be 4.9 × 10^-12 m^2/s.
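The arithmetic of Notes S2 and S4 can be reproduced with a short script. The inputs are the values quoted in the notes (NA = 0.28, 54% quantum efficiency, 2.4 electrons/grayscale, d_eff = 80.7 nm, a ~100 nm exosome in water at room temperature); the prefactor 2 × 10^-5 per nm^6 is read off from Eq. (3) as reconstructed above, and the collection efficiency uses a simple solid-angle fraction for an isotropic emitter, so the outputs agree with the quoted numbers only to within roughly 10%.

```python
import math

# --- Note S2: photon budget and SNR for a polystyrene nanoparticle ---------
d_eff_nm = 80.7            # effective scattering diameter (nm)
prefactor = 2e-5           # photons per nm^6 per 0.1 s frame, from Eq. (3)
na, n_medium = 0.28, 1.33  # objective NA and water refractive index
quantum_eff = 0.54         # camera quantum efficiency at 660 nm
electrons_per_gray = 2.4   # camera sensitivity
noise_gray = 90.0          # background standard deviation (grayscales)

total_photons = prefactor * d_eff_nm ** 6
theta_max = math.asin(na / n_medium)                # collection half-angle in water
collection_eff = (1.0 - math.cos(theta_max)) / 2.0  # solid-angle fraction (isotropic approx.)
collected = total_photons * collection_eff
grayscales = collected * quantum_eff / electrons_per_gray
print(f"collected photons ~ {collected:.0f}, signal ~ {grayscales:.0f} grayscales, "
      f"SNR ~ {grayscales / noise_gray:.0f}")   # within ~10% of the Note S2 values

# --- Note S4: Stokes-Einstein diffusion coefficient of a ~100 nm exosome ---
k_b, temp = 1.380649e-23, 298.0   # J/K, K (room temperature assumed)
eta, radius = 8.9e-4, 50e-9       # Pa*s (water), m
diff = k_b * temp / (6.0 * math.pi * eta * radius)
print(f"diffusion coefficient ~ {diff:.2e} m^2/s")  # ~4.9e-12 m^2/s, as in Note S4
```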
2022-10-14T15:22:42.081Z
2022-10-12T00:00:00.000
{ "year": 2022, "sha1": "d7d2de857a3ea8b5173a3c7842d25bd572fe016f", "oa_license": "CCBY", "oa_url": "https://pubs.rsc.org/en/content/articlepdf/2022/sc/d2sc05191e", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8612db1036c0b9309ed89de23af03e145881c165", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
119251397
pes2o/s2orc
v3-fos-license
Clock Auto-synchronizing Method for BES III ETOF Upgrade An automatic clock synchronizing method implemented in field programmable gate array (FPGA) is proposed in this paper. It is developed for the clock system which will be applied in the end-cap time of flight (ETOF) upgrade of the Beijing Spectrometer (BESIII). In this design, an FPGA is used to automatically monitor the synchronization circuit and deal with signals coming from external clock synchronization circuit. By testing different delay time of the detection signal and analyzing state signals returned, the synchronization windows will be found automatically in FPGA. The new clock system not only retains low clock jitter which is less than 20ps root mean square (RMS), but also demonstrates automatic synchronization to the beam bunches. So far, the clock auto-synchronizing function has been working successfully under a series of tests. It will greatly simplify the system initialization and maintenance in the future. Introduction 1 The Beijing Electron Positron Collider (BEPC) and the Beijing Spectrometer (BES) [1], [2] are upgraded to BEPC Ⅱ and BES Ⅲ [3]- [5] respectively since the summer of 2008. The time-of-flight (TOF) system , with the physical goal of particle identification (PID), is a very important part of the BES Ⅲ. To improve the time resolution of PID, the newly developed gaseous and widely used detector, multi-gap resistive plate chamber (MRPC) [6][7][8], is chosen for the upgrade ETOF detector instead of plastic scintillator bars read out by fast fine mesh photomultiplier tubes (PMT). After upgrade, the total time resolution will be improved significantly from 138ps in total to better than 80ps, and among which only 25ps is limited to be caused by electronics [9]. To ensure the 25ps time resolution of the TOF electronics, the clock jitter must be less than 20ps RMS and the clock phase should be highly synchronized to the beam collision time [10]. As MRPC detectors are utilized in upgrade system, there is a significant increase in the number of electronic channels and two more VME64xP crates as well as another two clock modules will be needed to be dedicated for ETOF electronic system [11]. Thus, the original clock system will be unable to meet the needs of the upgraded system other than upgrading it with more clock output channels. On the other hand, each time the original TOF system is powered on, it requires a series of manual operations to configure the clock synchronization which is not very intelligent. As the TOF clock system is under upgrade, the configuration operations will be simplified with the help of a new algorithm implemented in FPGA. Automatic clock synchronizing method 2.1 Proposed Clock System To meet the needs of TOF electronics, the achieved clock system consists of two parts -one is the transmission of the RF signal, and the other is VME clock modules which are responsible for providing multi-channel high quality clocks as well as synchronization and phase-control among them. To minimize the effects of temperature change, the optical transmitters and receivers from Ortel company and the Phase-Stabilized Optical Fiber (PSOF) from Furukawa company have been utilized. At last, the RF clock signal is transmitted to VME clock modules for clock generation and fan-out. Considering the structure of the upgraded TOF read-out system, the new system will be made up of four clock modules-one is master module and the others are slave modules. 
In addition, the master module is also required to generate clocks and keep them synchronized for the whole TOF read-out electronics. To simplify the design, both master and slave clock modules share the same circuit design, the scheme of which is shown in Fig. 2.

Fig. 2 Scheme of the clock module

By selecting different clock sources of the clock fan-out chip, a clock module works in master or slave mode. In master mode, the system clock is the 499.8 MHz RF clock from the accelerator divided by 12; in slave mode, it is the 41.67 MHz optical signal output by the master clock module. In addition, every clock module has an 83.3 MHz crystal oscillator on board for stand-alone clock generation in off-line mode. The clock fan-out chip SY89829 has 20 output channels, of which five are converted to optical signals for the trigger system and the slave modules, while the other fifteen are transmitted as LVPECL electrical signals to the other VME modules located in the same VME64xP crate. An FPGA is used for system control. It supports communication between the computer and the electronics via the VME interface protocol.

Clock Synchronizing and Monitoring

The period of the beam bunches in the accelerator is 8 ns and the synchronized RF signal is 499.8 MHz. As mentioned before, the TOF clock is generated by a simple divider, so there are four possible phases between the beam bunches and the TOF clock. To fix the phase of the TOF clock, a BSYNC signal whose leading edge carries the phase information of the beam bunches is derived from the accelerator for phase adjustment. Therefore, clock synchronization is achieved once the time interval between the leading edge of BSYNC and that of the TOF clock is determined. A simplified diagram of the synchronization control and clock generation circuit is shown in Fig. 3. Once the clock phase is determined, a synchronization window can be obtained by adjusting the delay of the BSYNC signal. Theoretically the window width should be 2 ns, but considering the instability of the edges of the signals output from the DFFs, the actual window width is slightly less than 2 ns. Corresponding to the different phases of the TOF clock, there is more than one synchronization window, and the windows adjoin one another on the delay-time axis. The final synchronization window is chosen according to the desired phase of the TOF clock.

Algorithm in FPGA

In the previous TOF clock system, many VME read and write operations had to be performed manually to measure the synchronization windows and to calculate the center value. Even a small change in the system, for instance a replacement of transmission cables, required the synchronization windows to be re-measured. To simplify the synchronization procedure, an automatic clock synchronizing method implemented in the FPGA has been developed. As mentioned above, the BSYNC signal is delayed by an SY89295 chip, a programmable delay line that delays the input signal according to a 10-bit digital control word. The synchronization state can therefore be adjusted by writing different configuration data from the FPGA to the delay chip. The automatic synchronization starts with a reset signal to the dividers according to an initial delay value.
If the feedback synchronization flag (SynFlag) is '10', the remaining operations can continue; otherwise, the initial delay value is increased slightly so that the system settles into a stable region of some specific synchronization window. Fig. 5 shows a flowchart of the automatic synchronization logic, which mainly consists of two parts, Step 1 and Step 2, whose purposes are to measure the maximum and the minimum values of the synchronization window, respectively. By changing the delay value from a coarse count to a fine count and testing whether the value of SynFlag is '10', the boundary values of the synchronization interval are found. Finally, the center value is calculated by averaging the boundary values; it is then used as the delay value for synchronization calibration, which ensures that the whole TOF system works at the same clock phase after power-on. On both sides of the boundary of the synchronization window there is a short unstable interval, whose state cannot be determined to be inside or outside the synchronization window by a single detection. Thus, during the measurement of the maximum and minimum values, a result is considered correct only if the synchronization flag reads '10' more than 8 times. Figure 6 shows a schematic of the TOF clock synchronization window.

Fig. 6 Schematic of synchronization window

This procedure can be called to control the TOF clock synchronization every time the system is powered on. While power remains on, the logic module can also be invoked via VME reads and writes to re-measure the synchronization window automatically. Meanwhile, the new logic not only measures the representative values of the synchronization window automatically but also retains the manual operation and detection functions of the old TOF clock system, which means that the information can be obtained in both ways and the results can be cross-checked for reliability.

Test Results

To test the new TOF clock modules, two optical fibers of different lengths were used to transmit the Pickup signal of the previous TOF system. The synchronization information is shown in Table 1. There are significant differences in the positions of the two synchronization windows, which are caused by the different initial phases between the pickup signal and the RF clock. One bit of the delay chip corresponds to approximately 9 ps, so the actual delay values in the table are obtained by multiplying the decimal register values by 9 ps. To find all synchronization windows, different initial delay values were provided. As mentioned before, there are unstable intervals at the edges, which may cause overlaps of adjacent windows. After improving the correction algorithm by measuring repeatedly near the edges of the synchronization windows, the overlaps are all gone, as shown in Table 2.

Table 2 Detection of synchronization windows

                        Before correction              After correction
Register  Function      Window1  Window2  Window3      Window1  Window2  Window3
0xf040    Initial       0x100    0x1c0    0x290        0x100    0x1a0    0x280
0xf050    Center        0x11d    0x1fc    0x2d5        0x10d    0x1e3    0x2b9
0xf0c0    Minimum       0xb0     0x18c    0x268        0xa4     0x178    0x250
0xf0d0    Maximum       0x18b    0x26c    0x343        0x177    0x24e    0x322

When the synchronization configuration is done, the rising edge of the delayed pickup signal determines the system clock phase. The waveforms of the pickup_delay signal and the divided-by-12 (41.67 MHz) clock signal are illustrated in Fig. 7. The system clock has been demonstrated to be well synchronized to the beam bunches.
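To make the automatic search concrete, the sketch below models the Step 1/Step 2 logic in Python against simulated hardware; the function read_synflag and the simulated window edges are illustrative assumptions, not the real register map. With the after-correction Window 1 boundaries from Table 2 (0xa4 and 0x177), the computed center comes out near 0x10d, in agreement with the table.

```python
import random

WINDOW = (0xa4, 0x177)   # simulated Window 1 boundaries (after correction, Table 2)
EDGE = 2                 # a couple of delay codes of simulated edge jitter

def read_synflag(delay):
    """Stand-in for reading the 2-bit SynFlag from the clock module registers."""
    lo, hi = WINDOW
    if lo + EDGE <= delay <= hi - EDGE:
        return 0b10                            # stable inside the window
    if lo - EDGE <= delay <= hi + EDGE:
        return random.choice((0b01, 0b10))     # unstable edge region
    return 0b01                                # outside the window

def stable(delay, trials=9):
    # A delay code is accepted only if SynFlag reads '10' more than 8 times,
    # which filters out the unstable edges of the window.
    return sum(read_synflag(delay) == 0b10 for _ in range(trials)) > 8

def find_window(start, coarse=8, max_code=2**10 - 1):
    """Step 1: coarse scan up, fine scan back, to find the maximum;
    Step 2: the same downward for the minimum; the center is their average."""
    d = start
    while d <= max_code and stable(d):
        d += coarse
    hi = min(d, max_code)
    while not stable(hi):
        hi -= 1
    d = start
    while d >= 0 and stable(d):
        d -= coarse
    lo = max(d, 0)
    while not stable(lo):
        lo += 1
    return lo, hi, (lo + hi) // 2

lo, hi, center = find_window(0x100)
print(f"min=0x{lo:x} max=0x{hi:x} center=0x{center:x} (~{center * 9} ps)")
```

Requiring more than 8 consistent '10' readings mirrors the repeated-measurement correction that removed the window overlaps in Table 2.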
Since there is almost no change in the clock module circuit, the system clock jitter remains less than 20 ps RMS, which has been reconfirmed by new tests.

Conclusions

In this paper, a method of automatic clock synchronization implemented in an FPGA is proposed for the clock system of the BESIII ETOF upgrade. It combines an FPGA algorithm with external off-the-shelf devices. The synchronization intervals are calculated automatically by adjusting the external delay chip and analyzing the returned synchronization flags. According to the test results, automatic clock synchronization to the beam collision time has been achieved without any degradation of the system clock quality.
IL17/IL17RA as a Novel Signaling Axis Driving Mesenchymal Stem Cell Therapeutic Function in Experimental Autoimmune Encephalomyelitis

The therapeutic effect of mesenchymal stem cells (MSCs) in multiple sclerosis (MS) and the experimental autoimmune encephalomyelitis (EAE) model has been well described. This effect is, in part, mediated through the inhibition of IL17-producing cells and the generation of regulatory T cells. While proinflammatory cytokines such as IFNγ, TNFα, and IL1β have been shown to enhance MSC immunosuppressive function, the role of IL17 remains poorly elucidated. The aim of this study was, therefore, to investigate the role of the IL17/IL17R pathway in MSC immunoregulatory effects, focusing on Th17 cell generation in vitro and on Th17-mediated EAE pathogenesis in vivo. In vitro, we showed that the immunosuppressive effect of MSCs on Th17 cell proliferation and differentiation is partially dependent on IL17RA expression. This was associated with a reduced expression level of MSC immunosuppressive mediators such as VCAM1, ICAM1, and PD-L1 in IL17RA−/− MSCs as compared to wild-type (WT) MSCs. In the EAE model, we demonstrated that while WT MSCs significantly reduced the clinical scores of the disease, mice injected with IL17RA−/− MSCs exhibited a clinical worsening of the disease. The inability of IL17RA−/− MSCs to reduce the progression of the disease paralleled their inability to reduce the frequency of Th17 cells in the draining lymph nodes of the mice as compared to WT MSCs. Moreover, we showed that the therapeutic effect of MSCs was correlated with the generation of classical Tregs bearing the CD4+CD25+Foxp3+ signature in an IL17RA-dependent manner. Our findings reveal a novel role of IL17RA in the immunosuppressive and therapeutic potential of MSCs in EAE and suggest that the modulation of IL17RA in MSCs could represent a novel method to enhance their therapeutic effect in MS.
Keywords: cellular therapy, experimental autoimmune encephalomyelitis, mesenchymal stem cells, IL17, IL17RA

Introduction

The critical role of IL17 in the development of autoimmune and inflammatory disorders such as multiple sclerosis (MS) has been largely investigated (1-4). The pathogenesis of experimental autoimmune encephalomyelitis (EAE), a rodent model of human MS, is characterized by central nervous system inflammation associated with inflammatory cell infiltration and demyelination (5, 6). An increased production of IL17 has been found in the brain lesions and blood of patients with MS (1, 7). In EAE-induced mice, lymphocytes produce significantly higher levels of IL17 as compared to healthy mice (8). Mice deficient for IL17 are protected from the development of EAE (2), and anti-IL17 treatment reduces the clinical score of EAE mice (9). The principal receptor of IL17A is IL17RA, which also binds IL17F with a lower affinity (10). IL17RA is ubiquitously expressed; however, the main responses to IL17A have been reported to occur in fibroblasts and endothelial cells (11-13). This widespread expression of IL17RA is dynamically modulated on cells according to the stimuli they receive. For instance, in CD8+ T cells, cytokines such as IL15 and IL21 increase the expression of IL17RA, while phosphoinositide 3-kinase was shown to restrain its expression on T cells (14, 15). In contrast to most interleukin receptors, the expression level of IL17RA is functionally significant, since high levels of the IL17RA receptor are required for an effective response (16, 17). Another particularity of IL17RA resides in its capacity to limit the signaling pathway via the internalization of its ligand. Indeed, after IL17 binding, the ligand is internalized and removed from the milieu, in parallel with a decrease of the IL17RA expression level at the cell surface (15). Mesenchymal stem cells (MSCs) exert potent anti-inflammatory and immunomodulatory effects via the suppression or regulation of the function and proliferation of different immune cell subsets both in vitro and in vivo (18-21). Using activated mouse CD4+ T cells under Th17-skewing conditions in vitro, we previously reported that MSCs inhibit the proliferation of Th17 cells as well as their cytokine production (19). Additionally, in an experimental model of arthritis, we have shown that the therapeutic effect of MSCs is associated with a decreased frequency of pathogenic Th17 cells and the generation of Treg cells bearing the CD4+RORγT+IL17+IL10+ signature (22). These immunomodulatory properties of MSCs and their ability to expand in vitro without losing their phenotype, multi-lineage potential, and immunomodulatory potential have generated increased interest in MSCs as a therapeutic cell of choice for immune-mediated diseases (18, 23). Despite the evidence for a therapeutic potential of MSCs, the underlying mechanisms are not completely understood. MSC immunoregulatory functions are mediated by the secretion of soluble factors and/or direct cell-to-cell contacts (18, 24, 25).
Proinflammatory cytokines such as IFNγ, alone or in combination with TNFα, IL1α, or IL1β, have been shown to enhance MSC immunosuppressive functions (26-28). Indeed, these cytokines, alone or in combination, trigger the expression of suppressive factors involved in MSC-mediated immunosuppression, such as Programmed Death-Ligand 1 (PD-L1), hepatocyte growth factor, transforming growth factor β1 (TGF-β1), inducible nitric oxide synthase (iNOS), and prostaglandin E2 (PGE2), as well as the expression of adhesion molecules such as VCAM1 and ICAM1 (19, 29-32). More recently, IL17 has been shown to further enhance the immunosuppressive effect of MSCs induced by IFNγ and TNFα by promoting the expression of iNOS, revealing an unexpected role of IL17 (33). In accordance with these observations, we have shown that IL17, in the presence of IFNγ and TNFα, significantly increases nitric oxide (NO2) production and cyclooxygenase 2 expression in MSCs (19). Furthermore, Sivanathan et al. have shown that MSCs pretreated with IL17A have an enhanced T cell suppressive effect as well as an enhanced capacity to generate regulatory T cells (34). However, inconsistent effects have also been described for IL17-stimulated MSCs. Indeed, IL17 has also been described to reduce the immunosuppressive capacity of olfactory ecto-mesenchymal stem cells (OE-MSCs), mainly by downregulating the levels of inhibitory factors produced by OE-MSCs, such as NO, IL10, TGF-β, and PD-L1 (35). Thus, the exact role of IL17 regarding the immunosuppressive effect of MSCs remains to be clarified. Despite the evidence in favor of an enhancing effect of IL17 treatment on MSC-suppressive actions, the involvement and the role of its receptor, IL17RA, have not yet been investigated. The aim of this study was, therefore, to establish whether IL17RA is involved in triggering the MSC-suppressive effects on Th17 cell function in vitro, and in the therapeutic potential of MSCs in the EAE model in vivo.

Materials and Methods

IL17RA−/− mice were generated by homologous recombination in ES cells as described (36), and long bones kindly donated by Wim B. Van Der Berg from Radboud University, Nijmegen, were the source for growing the bone marrow-derived cells.

MSCs Culture and Characterization

Wild-type (WT) and IL17RA−/− MSC isolation, culture, and characterization were performed as we previously described (20) and according to the criteria of the International Society for Cellular Therapy (37). For adipogenic differentiation, cells were cultured at 2 × 10^4 cells/cm² in culture medium supplemented with 1 µM dexamethasone, 60 µM indomethacin, and 10 µg/mL insulin (Sigma-Aldrich, Germany). After 3 weeks, cell differentiation into adipocytes was confirmed by staining of intracellular lipid inclusions with Oil Red O (Sigma-Aldrich). For chondrogenesis, 2.5 × 10^5 cells were suspended in 15 mL polypropylene conical tubes and centrifuged for 5 min at 600 g. The resulting pellets were cultured in 500 µL of chondrogenic medium containing 10 ng/mL recombinant TGF-β3 (R&D Systems) for 21 days. Chondrogenic differentiation was confirmed by staining with Safranin O (Merck). Finally, to induce osteogenic differentiation, MSCs were seeded at 3.5 × 10^4 cells/cm² in culture medium supplemented with 0.1 µM dexamethasone, 0.05 mM ascorbic acid, and 10 mM β-glycerophosphate (Sigma-Aldrich). After 21 days of culture, calcium deposits were detected by Alizarin Red staining (Sigma-Aldrich).
For the three differentiation processes, the medium was changed every 3 days for 3 weeks. The MSC phenotype was assessed by flow cytometry based on positivity for CD29, CD44, and Sca-1 and the absence of the CD34 and CD45 antigens, as we previously described (20).

Inhibition of IL17RA Expression Using RNA Interference

Four sequences (GCGCCGAUCAAGAGAAACA; CUGCUUUGAUGUCGUUAAA; CGUAAGCGGUGGCGGUUUU; CCGACUGGUUCGAGCGUGA) that recognize, in parallel, the mRNA encoding sequence of IL17RA, and a control siRNA (UAAGGCUAUGAAGAGAUACTT), were obtained from Dharmacon (Dharmacon, England). Briefly, cells were seeded at 70% confluence and transfected with Oligofectamine and IL17RA or control siRNA in Opti-MEM® medium following the manufacturer's instructions. After 12 h, cells were washed and cultured in complete DMEM medium (DMEM, 10% FBS, glutamine, and penicillin/streptomycin), and 48 h later the effectiveness of the siRNA was assessed by measuring IL17RA expression by quantitative real-time PCR (qRT-PCR).

Th17 Cell Differentiation and Coculture with MSCs

Purified CD4+ T cells were stained with the CellTrace™ Violet probe (CTV) (Molecular Probes, USA), activated with antibodies against CD3/CD28 (BD Pharmingen, USA), and cultured in complete RPMI medium as mentioned above. Th17 cell differentiation was induced with 2.5 ng/mL TGF-β1 (PeproTech, USA), 20 ng/mL IL6 (R&D Systems, USA), and 2.5 µg/mL of both anti-IFNγ and anti-IL4 capture antibodies (BD Pharmingen, USA). WT or IL17RA−/− MSCs were cocultured with CD4+ T cells at day 0 of the differentiation process toward the Th17 cell lineage at a 1:10 MSC:T cell ratio. After 5 days of coculture, proliferation and differentiation of T cells were assessed by intracellular staining and FACS analysis.

Flow Cytometry Analysis

Lymphocytes obtained from the spleen and cocultured with MSCs in vitro, or from the lymph nodes of EAE mice, were stimulated ex vivo for 4 h with 50 ng/mL phorbol myristate acetate (Sigma-Aldrich), 1 µg/mL ionomycin (Sigma-Aldrich), and 10 µg/mL brefeldin A (Biolegend, USA). Then, cells were washed in PBS and analyzed for intracellular cytokines. For surface antigen staining, cells were first incubated for 20 min at 4°C in the dark with antibodies against CD4-PerCP 5.5 and CD25-APC (Miltenyi, USA) in the presence of LIVE/DEAD® Fixable Near-IR stain (Molecular Probes, USA) to exclude dead cells. Then, they were fixed for 30 min at 4°C with the FoxP3 staining buffer set (eBioscience, USA) in order to perform intracellular staining following the manufacturer's instructions. Specific antibodies against Foxp3-PE (Miltenyi, USA), IFNγ (FITC), and IL17-PE (BD Pharmingen, USA) were used. To study the phenotype of activated MSCs in response to proinflammatory cytokines, MSCs were stimulated with TNFα at 10 ng/mL, IFNγ at 20 ng/mL, and IL17A at 10 ng/mL for 24 h. To that end, specific antibodies against VCAM1, ICAM1, and PD-L1 (eBiolegend, USA) were used. Acquisition was performed with a FACSCanto II flow cytometer (BD Pharmingen) and analyzed with FlowJo software (Tree Star, USA).

Cytokine Quantification

Plasma concentrations of a panel of cytokines were measured with the Milliplex mouse Th17 magnetic bead panel kit (Millipore, USA). Plasma samples were obtained by centrifugation (300 × g, 20 min) at day 22 after EAE induction. Plasma samples from 3-4 mice in each group were pooled and analyzed in triplicate according to the manufacturer's instructions.
The enzyme-linked immunosorbent assay (ELISA) for GM-CSF and the enzyme immunoassay kit for PGE2 (R&D Systems, USA) were used following the manufacturer's instructions with supernatants from MSCs activated or not with proinflammatory cytokines. NO2 production was quantified in the supernatants of MSCs cocultured with Th17 cells for 5 days, using a modified Griess reagent (Sigma-Aldrich) as previously described by our group (38, 39).

Reverse Transcription-Polymerase Chain Reaction

Total RNA from cell cultures was isolated using Trizol (Invitrogen) and then treated with DNase I before reverse transcription to remove genomic DNA contamination. Relative gene expression was calculated as described (40) and normalized to the expression of WT MSCs in basal conditions.

Statistical Analysis

Statistical analyses were performed using GraphPad Prism 5.0 software (San Diego, CA, USA). Data are expressed as mean ± SD, except for the clinical EAE score, which is expressed as mean. Differences between two groups were analyzed by a non-parametric two-tailed Mann-Whitney test. To compare data from more than two groups, the non-parametric Kruskal-Wallis test was used. For the analysis of draining lymph nodes (dLNs), since these data follow a Gaussian distribution, we used ANOVA. All P-values <0.05 were considered statistically significant.

Results

IL17RA Deficiency Impairs the MSC Immunosuppressive Effect on Th17 Proliferation and Differentiation In Vitro

To determine the role of the IL17/IL17RA axis in the MSC-mediated immunomodulatory effect, we first evaluated whether MSCs express the different subunits of the IL17 receptor (A and C) that bind the IL17A isoform. MSCs express both subunits of the IL17R (A and C) (Figures 1A-C), but only subunit A is induced in MSCs cocultured with Th17 cells (Figures 1B,C). Next, we inhibited the IL17RA subunit using siRNA technology to study the role of this receptor in the suppressive effect of MSCs on Th17 cells (Figure 1D). As compared to MSCs transfected with a control siRNA (siRNA ctl), MSCs silenced for IL17RA (siRNA IL17RA) exhibited a significantly lower inhibitory effect on Th17 cells. Indeed, no significant difference in the percentage of CD4+IL17+ cells was observed when comparing non-cocultured Th17 cells with Th17 cells cultured with MSCs silenced for IL17RA, whereas substantial suppression was observed in the cocultures with control MSCs (Figure 1E). Of note, the loss of the MSC immunosuppressive effect on Th17 cells upon IL17RA silencing was associated with a significant decrease of NO2 production, one of the main mediators of MSC-suppressive function, in the cultures with IL17RA-silenced MSCs as compared to control MSCs (Figure 1F). Finally, we showed that IL17RA−/− MSCs did not express the IL17RA subunit but did express the IL17RC subunit, as revealed by qRT-PCR (Figures S1C,D, respectively, in Supplementary Material). We then studied, in vitro, the immunoregulatory potential of WT and IL17RA−/− MSCs in a proliferation assay with ConA-stimulated splenocytes. Our results showed that after 3 days of coculture, IL17RA−/− MSCs exhibited a lower suppressive potential on splenocyte proliferation as compared to WT MSCs (Figure 2A). Finally, we analyzed the effect of IL17RA-deficient MSCs on naïve CD4+ T cells cultured in Th17-inducing conditions (CD4-Th17) (Figure 2B).
Our results showed that the expression of IL17RA in MSCs did not significantly affect the proliferation of CD4-Th17 cells (Figure 2B). However, we observed that the suppressive effect of IL17RA−/− MSCs on the differentiation of CD4+ cells into CD4-Th17 cells was partially, but significantly, impaired when compared to WT MSCs (Figure 2C). These results reveal that the immunomodulatory effects of MSCs on Th17 cell differentiation and proliferation depend, in part, on IL17RA expression.

The Production of MSC-Derived Immunosuppressive Mediators Depends on IL17RA Expression

The enhanced suppressive effect of MSCs has been shown to depend on their activation with proinflammatory molecules such as IFNγ and TNFα. However, the role of IL17, which is mainly produced by Th17 cells, has been poorly studied in the context of MSC immunoregulatory functions. We thus assessed the expression profile of the mediators of the MSC immunosuppressive properties after IFNγ and TNFα activation for 24 h in the presence or absence of IL17A (Figures 3A,B). After TNFα and IFNγ activation, the levels of iNOS were significantly increased in both MSC types, although to a greater extent in WT MSCs. The TGF-β1 expression level was significantly lower in both non-activated and TNFα/IFNγ-activated IL17RA−/− MSCs as compared to WT MSCs (Figure 3B). Interestingly, IL6 expression was significantly increased in IL17RA−/− MSCs compared to WT MSCs under TNFα and IFNγ activation in the presence or absence of IL17A (Figure 3C). We then studied the production of PGE2 and GM-CSF by ELISA and found that MSCs deficient for IL17RA produce lower levels of PGE2 than WT MSCs, in basal conditions or after treatment with TNFα and IFNγ with or without IL17A (Figure 3D). No significant difference between the two MSC types was observed in terms of PGE2 secretion under IL17A treatment alone (Figure 3D), and no significant differences were observed in GM-CSF secretion under any stimulation (data not shown). The increase in the production of PGE2 by IL17RA−/− MSCs when stimulated with IL17 alone, and their high production of TGF-β1 under proinflammatory cytokines plus IL17, could be explained by the presence of the other IL17 receptor subunit, RC, which could partially compensate for the loss of IL17RA activity (Figure S1D in Supplementary Material). Finally, we performed FACS analysis on WT and IL17RA−/− MSCs to evaluate the expression of PD-L1, VCAM1, and ICAM1 on both types of MSCs. Our results showed that the cytokine-induced (TNFα plus IFNγ, or TNFα, IFNγ, and IL17A) expression of PD-L1, VCAM1, and ICAM1 was significantly lower in IL17RA−/− MSCs as compared to WT MSCs (Figures 3E-G). Of note, IL17 activation alone did not induce the expression of VCAM1, ICAM1, or PD-L1 (Figures 3E-G). The lower expression level of VCAM1, ICAM1, and PD-L1 on IL17RA−/− MSCs as compared to WT MSCs could be associated with their reduced immunosuppressive capacities, since the PD-L1 molecular pathway has been described to be critical for the modulating effect of MSCs on Th17 cells (19).
These results indicate that the IL17RA subunit is essential for the expression of key mediators of the MSC-suppressive activity on Th17 cells.

IL17RA Deficiency Restrains the Effect of MSCs in EAE

To assess the therapeutic potential of IL17RA-deficient MSCs, we induced EAE in C57BL/6 mice by MOG35-55 immunization. Four conditions were tested: a control group of healthy mice (Control), EAE-induced mice (EAE), EAE mice treated with WT MSCs, and EAE mice treated with IL17RA−/− MSCs. All MSCs were injected i.p. (1 × 10^6 cells per mouse) 5 days after EAE induction (Figure 4A). Clinical scores and weight were evaluated daily for 22 days. Consistent with previous reports (38, 41), WT MSCs injected during the first days of EAE induction, before the onset of the disease, induced a significant improvement of the clinical scores in EAE animals (Figure 4A). In contrast, IL17RA−/− MSC treatment did not exert any beneficial effect on EAE development or progression. Indeed, the administration of IL17RA−/− MSCs induced a worsening of the disease, characterized by increased EAE clinical scores (Figures 4A,B). Finally, in order to study the role of IL17A activation on MSCs, we pre-incubated WT MSCs for 48 h with 30 µg/mL of IL17A prior to i.p. injection in EAE mice. Our results showed that both non-activated and IL17A-activated MSCs induced a significant decrease of the EAE symptoms: MSCs treated with IL17A exhibited an improved therapeutic effect between days 17 and 24 after disease induction, whereas control MSCs did so only at days 18 and 24. Area under the curve (AUC) analysis confirmed that both control MSCs and IL17A-pretreated MSCs significantly reduced the overall severity of the disease.

IL17RA Deficiency in MSCs Impairs Their Capacity to Inhibit the Th17 Cell Response and Regulatory T Cell Generation in the EAE Model

The EAE model is associated with a pathological T cell response leading, in part, to an imbalance between proinflammatory Th cells and classical regulatory T cells (Tregs) (42). Therefore, we analyzed the frequency of Th1, Th17, and Treg cells in the dLNs of mice of our experimental groups on the day of euthanasia. While both WT and IL17RA−/− MSCs were able to reduce the percentage of Th1 cells (CD4+IFNγ+) (Figures 5A,B), only WT MSCs were able to decrease the percentage of Th17 cells (CD4+IL17+) (Figure 5C). The percentage of CD4+IFNγ+IL17+ cells was also reduced in response to treatment with WT MSCs (*P < 0.05) (Figure 5D); no effect was observed upon injection of IL17RA−/− MSCs on this latter T cell subset. Since the balance between Th17 cells and Treg cells is crucial for the progression of the disease, we also studied the percentage of classical Treg cells. Our results showed that only WT MSC injection was able to increase the percentage of classical CD4+CD25highFoxp3+ Tregs (*P < 0.05) (Figures 5E,F). In order to evaluate the effect of MSC injection on cytokine plasma levels in EAE mice, we used a Milliplex mouse magnetic bead panel kit to measure a large panel of cytokines produced by Th1, Th17, and Treg cells. We found that the plasma levels of IL6 and IL27 showed differences between the treatment groups (Figures 5G,H). Although no significant difference was detected between the groups, IL6 concentration in the plasma of mice treated with IL17RA−/− MSCs tended to be higher than in control, EAE, and WT MSC-treated mice (Figure 5G).
On the other hand, we found that the IL27 concentration tended to be higher in the plasma of mice treated with WT MSCs as compared to the mice of the three other groups (Figure 5H). We detected no differences in other cytokines such as IFNγ, TNFα, TNFβ, IL12P40, IL10, IL17A, IL17F, and IL17E (data not shown).

Discussion

In the present study, we evaluated the role of the IL17/IL17RA pathway in the immunomodulatory capacities of MSCs in vitro and in a Th17-mediated disease model, EAE. Our results demonstrated both that the expression of the IL17RA subunit by MSCs is crucial for their Th17-suppressive functions and that the IL17/IL17RA axis contributes significantly to the activation of MSCs. It is well known that the activation or "priming" of MSCs with proinflammatory cytokines is required to trigger their immunosuppressive function. In particular, it has been widely shown that IFNγ together with TNFα, IL1α, or IL1β triggers the expression of critical suppressive factors that mediate MSC immunomodulatory properties (29). More recently, the effects of IL17A on MSC function have also been investigated. IL17A strongly promotes the proliferation and the generation of colony-forming unit-fibroblasts of both human and murine MSCs (43). Furthermore, treatment of MSCs with IL17A significantly modulates their migration, motility, and osteoblast differentiation potential, confirming an important role of IL17A in MSC function (44). Regarding the effect of IL17A on MSC-mediated immunoregulatory properties, conflicting results have been published. IL17, alone or together with IFNγ and TNFα, has been shown to have a positive effect on MSC immunosuppressive functions, since the addition of IL17A considerably enhances their immunosuppressive potential (33, 34). However, it has also been shown that IL17 can reduce the suppressive capacity of OE-MSCs, mainly by downregulating their production of immunosuppressive factors including NO2, IL10, TGF-β1, and PD-L1 (35). In the present study, we demonstrate that the IL17/IL17RA axis plays a key role in MSC immunomodulatory properties, since the inhibition of the IL17A receptor significantly impairs the capacity of MSCs to inhibit the proliferation and generation of proinflammatory Th17 cells. This loss of MSC immunoregulatory function upon IL17RA deficiency is associated with a reduced capacity of such MSCs to produce many of the well-established mediators of MSC-associated immunosuppression. In addition, we have evaluated the effect of the IL17/IL17RA axis in a Th17 immune-mediated disease, the EAE model (45). It has been widely demonstrated that MSCs exert a significant therapeutic effect in EAE (38, 46, 47). Thus, in order to validate our in vitro results, we compared the treatment of EAE mice with either WT or IL17RA−/− MSCs. We showed that while WT MSCs significantly improved EAE symptoms, MSCs deficient for IL17RA exacerbated the progression of the disease, providing the first evidence that the expression of the IL17RA subunit is critical for the therapeutic effect of MSCs in EAE. Moreover, when we pretreated MSCs with IL17A prior to their injection into diseased mice, their therapeutic effect was enhanced. Further analysis of the treated and control EAE mice indicated that IL17RA deficiency in MSCs suppresses their capacity to restrain pathogenic Th17 cells as well as their ability to induce Tregs in regional lymph nodes.
Of note, while IL17RA−/− MSC administration exacerbated the clinical symptoms of EAE mice, we did not observe any increase in the number of proinflammatory T cells (Th1 or Th17), or a decrease in the number of regulatory T cells, as compared to untreated mice. This might be because the analysis of these different T cell subsets was performed at day 22 after EAE induction, which might have been too distant from the time-point at which we observed the changes in the clinical score associated with each MSC treatment. The lack of a therapeutic effect of IL17RA−/− MSCs could be due in part to the reduced expression of VCAM1, ICAM1, and PD-L1 on IL17RA−/− MSCs as opposed to WT MSCs. This could result in an impaired capacity to migrate and/or interact with proinflammatory Th17 cells and thus to suppress their activity or their eventual Treg conversion. Moreover, in the absence of an inflammatory environment (characterized by low levels of proinflammatory cytokines), MSCs, as sensors and switchers of inflammation (48), adopt a proinflammatory phenotype and activate T cell responses through the release of chemokines that recruit lymphocytes to the inflammation site (29, 49). We could thus reasonably conceive that IL17RA−/− MSCs are less responsive to inflammatory cytokines and thus more prone to adopt a proinflammatory phenotype, which could explain why, in the EAE model, they exacerbate the symptoms of the disease. Finally, this proinflammatory phenotype adopted by IL17RA−/− MSCs, derived from a global (not cell-type specific) knockout, could also reflect the altered immunological environment in which they developed, i.e., in the presence of immune cells lacking the IL17 receptor.

Conclusion

These results demonstrate that the expression of IL17RA on MSCs allows them to respond efficiently to a Th17-related proinflammatory microenvironment, enhancing their immunosuppressive properties on Th17 cells in vitro and, in vivo, in Th17-mediated disorders. Altogether, our data propose that the IL17/IL17RA axis plays a significant role in the process of MSC "licensing" that is at the core of their immunosuppressive and therapeutic potential.

[Figure 4 legend, continued: the area under the curve (AUC) of the clinical score was calculated and compared for each treatment and each experimental group; line curves represent the mean of the daily clinical score; bars represent mean ± SD; * indicates the comparison between the EAE group and EAE animals treated with MSCs (WT or IL17RA−/−); statistical differences were calculated using the Kruskal-Wallis test and declared significant at P < 0.05 (*P < 0.05, **P < 0.01); results represent two independent experiments with 12-20 mice per experimental group.]
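As a small illustration of the quantification and statistics pipeline described in the Methods, the Python sketch below computes relative expression and a two-group comparison; the Ct values are invented placeholders, and the 2^-ΔΔCt normalization is an assumption about the relative-quantification method cited as (40).

```python
import numpy as np
from scipy import stats

# Placeholder Ct values (target gene and housekeeping reference) per group.
ct = {
    "WT MSC":        {"target": np.array([22.1, 22.4, 21.9, 22.2]),
                      "ref":    np.array([17.0, 17.2, 16.9, 17.1])},
    "IL17RA-/- MSC": {"target": np.array([24.0, 23.7, 24.2, 23.9]),
                      "ref":    np.array([17.1, 16.9, 17.0, 17.2])},
}

dct = {g: v["target"] - v["ref"] for g, v in ct.items()}      # delta-Ct per replicate
calibrator = dct["WT MSC"].mean()                             # WT basal condition
fold = {g: 2.0 ** -(d - calibrator) for g, d in dct.items()}  # 2^-ddCt fold change

# Two groups: non-parametric two-tailed Mann-Whitney test, as in the paper.
u, p = stats.mannwhitneyu(fold["WT MSC"], fold["IL17RA-/- MSC"],
                          alternative="two-sided")
for g, f in fold.items():
    print(f"{g}: mean fold change {f.mean():.2f}")
print(f"Mann-Whitney P = {p:.3f}")
```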
Bounds for mixing time of quantum walks on finite graphs

Several inequalities are proved for the mixing time of discrete-time quantum walks on finite graphs. The mixing time is defined differently than in Aharonov, Ambainis, Kempe and Vazirani (2001), and it is found that for particular examples of walks on a cycle, a hypercube, and a complete graph, quantum walks provide no speed-up in mixing over the classical counterparts. In addition, non-unitary quantum walks (i.e., walks with decoherence) are considered and a criterion for their convergence to the unique stationary distribution is derived.

INTRODUCTION

The origin of the concept of quantum walk lies in quantum computation theory, where a quantum version of the classical random walk was invented in an attempt to improve over classical computational algorithms. The early papers that formulated the main ideas of quantum walks are [2] and [11]. Among numerous later papers, we would like to point out [3], where the continuous-time quantum walk was defined, and [1], which defined and studied the discrete-time quantum walk on finite graphs. An introductory review of quantum walks written from the perspective of quantum computation can be found in [4]. For recent developments the reader can also consult [8]. From the beginning, it became clear that quantum walks on both finite and infinite graphs differ in many ways from the classical walk. For example, the probability to find a particle at a particular vertex of a finite graph does not converge to a limit but in general oscillates forever. However, the average of this probability over time does converge to a limit, which can be interpreted as follows. We start the quantum walk in a certain state and measure the particle at a random time t, distributed uniformly over the interval [0, T]. This measurement finds the particle at a particular vertex v with a probability p(v, T), which converges to a limit as T → ∞. How large should T be if we want to make sure that p(v, T) is close to its limit? Let us introduce some definitions to make this question more precise.

The quantum walk on a finite graph is a 4-tuple (G, S, ψ, U), where G = (V, E) is a finite graph, S is a finite set, ψ is a function in $L^2_{\mathbb{C}}(V \times S)$, and U is a unitary operator on $L^2(V \times S)$. It is assumed that $\|\psi\| = 1$. Elements of S are called chiralities and the function $\psi_t = U^t \psi$ is the wave function at time $t \in \mathbb{Z}$. If a measurement is performed on the system at time t, then the walking particle is found at vertex v in state s with probability $|\psi_t(v, s)|^2$. We assume that the quantum walk is local. That is, let x and x′ denote the pairs (v, s) and (v′, s′), respectively. A quantum walk is local if $U_{x'x} \equiv \langle \delta_{x'}, U\delta_x \rangle \neq 0$ implies that $v \sim v'$, that is, vertices v and v′ are connected to each other. A local quantum walk is called a general quantum walk in [1]. A special case of the general quantum walk is the coined quantum walk ([1]). Here is how it is defined. Let G be a d-regular graph and let S = {1, ..., d}. Assume that the neighbors of each vertex v are labelled as $v_i$, where i = 1, ..., d. In addition, assume that if v = w and $v_i = w_j$, then i = j. (Such a labelling L always exists on Cayley graphs of finitely-generated groups, where we can identify elements of S with generators and inverses of generators of the group and write $v_g = vg$ and $v_{g^{-1}} = vg^{-1}$. In this case the choice of labelling is equivalent to the choice of ordering of generators and their inverses.) Define U as follows.
Let x and x′ denote the pairs (v, s) and (v′, s′), respectively. Then
$$U_{x'x} = C_{s's}\,\delta_{v',\,v_s},$$
where C is a unitary matrix acting on $L^2(S)$, which is called the coin of the quantum walk. It is easy to check that the matrix U is unitary. Intuitively, let the particle be at vertex v in state $|s\rangle$. Then, at the next moment the particle will be at vertex $v_s$ in the superposition state $C|s\rangle$. This is the coined quantum walk on G corresponding to labelling L and coin C. A typical example of the coined quantum walk is the Hadamard quantum walk on the cycle $\mathbb{Z}_n$. In this case, the coin is the Hadamard transformation:
$$C = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}.$$
Another popular choice of the coin is Grover's transformation:
$$C_{s's} = \delta_{s's} - \frac{2}{d}.$$
That is, the state s remains unchanged with amplitude $1 - \frac{2}{d}$ and moves to $s' \neq s$ with amplitude $-\frac{2}{d}$. We will call walks with this coin Grover quantum walks. A generalization of this concept is the non-unitary quantum walk [1]. A non-unitary quantum walk is specified by a 4-tuple (G, S, ρ, T), where G and S are, as before, a finite graph and a finite set, ρ is a density matrix (i.e., a positive unit-trace operator on $L^2_{\mathbb{C}}(V \times S)$), and T is a completely positive trace-preserving operator acting on density matrices. In the literature, T is called a superoperator [14], or a quantum channel [14], or a trace-preserving quantum operation [13]. We will use these terms as synonyms. Let x denote a pair (v, s). The probability to find the particle at vertex v in state s at time t is given by
$$p_t(x) = \langle x|\,T^t(\rho)\,|x\rangle.$$
(The concept of locality is more complicated in the non-unitary case and this definition is different from the definition in [1].) An example of a non-unitary quantum walk is given by a weighted sum of unitary quantum walks. In this example,
$$T(\rho) = \sum_i p_i\, U_i\, \rho\, U_i^{\dagger}, \qquad p_i \ge 0,\ \sum_i p_i = 1.$$
Intuitively, the operator $U_i$ is used at each step of the walk with probability $p_i$. If all $U_i$ are local, then T is also local. Another example is
$$T(\rho) = (1-p)\,U\rho U^{\dagger} + p\sum_{i=1}^{k} P_i\,\rho\,P_i,$$
where the $P_i$ are projections and $\sum_{i=1}^{k} P_i = I$. This is a walk in which, with probability p, the particle is measured and, with probability 1 − p, it is evolved according to the unitary operator U.

First, let us consider the case of unitary quantum walks. The probability distribution $|\psi_t(x)|^2$ in general does not converge to any particular limit. Indeed, all eigenvalues of the matrix U have unit absolute value. As a consequence, every eigenvector of U corresponds to a stationary probability distribution. If the initial wave function ψ is a non-trivial superposition of eigenvectors with different eigenvalues, then $\psi_t$ continues to oscillate indefinitely. In the classical case this phenomenon occurs only when the random walk corresponds to a periodic Markov chain, and this case is not typical. The time averages of the probabilities $|\psi_t(x)|^2$ do converge, and the limit exists although it may depend on the initial function ψ. We will call this limit the time-averaged probability distribution of the particle. In order to quantify the convergence of the initial distribution to this limit, let us define the distance of the initial distribution from its time-average by the formula
$$d(T, \psi) = \sum_{x} \left| \frac{1}{T}\sum_{t=0}^{T-1} |\psi_t(x)|^2 - \pi_{\psi}(x) \right|,$$
where $\pi_{\psi}$ denotes the time-averaged limit. This is the total variation distance between the averaged probability distribution at time T and its limit. By analogy with the classical case, the mixing time of a general quantum walk is defined as follows:
$$t_{\mathrm{mix}}(\varepsilon) = \min\bigl\{T : \sup_{\psi}\, d(T', \psi) \le \varepsilon \text{ for all } T' \ge T\bigr\}.$$
That is, this is the minimal time needed to reduce the distance between the worst initial distribution and its time-averaged limit to a quantity less than ε. Another definition of the mixing time restricts the choice of initial wave functions.
Namely,
$$\overline{t}_{\mathrm{mix}}(\varepsilon) = \min\bigl\{T : \sup_{\psi \in B}\, d(T', \psi) \le \varepsilon \text{ for all } T' \ge T\bigr\},$$
where B is the set of basis states; that is, $\psi \in B$ if $|\psi|^2$ is a delta-function concentrated at some (v, s). This is the definition used in [1]. Clearly, $\overline{t}_{\mathrm{mix}}(\varepsilon) \le t_{\mathrm{mix}}(\varepsilon)$. In the case of classical random walks, these two mixing times are always equal to each other. In the case of quantum walks, they can be different. We intend to estimate the mixing time in terms of the distance between eigenvalues of U. Let $\lambda_k = e^{i\beta_k}$, k = 1, ..., m, be the distinct eigenvalues of U. We define the distance between $\lambda_k$ and $\lambda_l$ as the smallest distance along the unit circle:
$$d(\lambda_k, \lambda_l) = \min_{n \in \mathbb{Z}} |\beta_k - \beta_l + 2\pi n|.$$
The relaxation time of the operator U is defined as
$$t_{\mathrm{rel}} = \Bigl[\min_{k \neq l} d(\lambda_k, \lambda_l)\Bigr]^{-1}.$$
Finally, let us define the overlap of two functions φ and ψ by the formula
$$Q(\varphi, \psi) = \sum_{x} |\varphi(x)|\,|\psi(x)|.$$
Note that if φ and ψ are two wave functions, then $0 \le Q \le 1$ by the Cauchy-Schwarz inequality.

Theorem 1.1. Let U be the unitary transformation associated with a discrete-time quantum walk, and let m be the number of its distinct eigenvalues. Then $t_{\mathrm{mix}}(\varepsilon) \le c\,\varepsilon^{-1}\, t_{\mathrm{rel}} \log m$ for an absolute constant c > 0.

(The proofs of all theorems are in the Appendix.) It is interesting to compare this bound with the corresponding result for the classical random walk, where $t_{\mathrm{mix}}(\varepsilon) \le c \log\bigl(\tfrac{1}{\varepsilon\,\pi_{\min}}\bigr)\, t_{\mathrm{rel}}$, where $\pi_{\min}$ is the smallest probability in the limit distribution (see for example Theorems 12.3 and 12.4 on p. 155 in [10]). In many cases the limit distribution is uniform and this bound can be written as $t_{\mathrm{mix}}(\varepsilon) \le c \log(n/\varepsilon)\, t_{\mathrm{rel}}$, where n is the number of vertices in the graph. Note, however, that $t_{\mathrm{rel}}$ has a different meaning in the classical case, where it denotes the inverse of the difference between 1 (the largest eigenvalue) and the second largest eigenvalue (i.e., the inverse of the "spectral gap"). Another significant difference in the formulas for the mixing time is that ε enters as log ε and $\varepsilon^{-1}$ in the classical and quantum cases, respectively. This is due to the fact that the convergence is exponentially fast in the classical case and polynomial (in fact linear) in the quantum case. Finally, it is worthwhile to note that in many cases the classical bound $t_{\mathrm{mix}} \le c \log(n)\, t_{\mathrm{rel}}$ is not optimal, and a large literature is devoted to improving this result to $t_{\mathrm{mix}} \le c\, t_{\mathrm{rel}}$ with a sharp constant c. For the lower bound we prove the following result.

Theorem 1.2. Let U be the unitary transformation on $L^2(V \times S)$ associated with a discrete-time quantum walk (not necessarily coined). Suppose that U has only real eigenvectors. Let λ and λ′ be two distinct eigenvalues with corresponding eigenvectors ψ and ψ′. Assume that $d(\lambda, \lambda') \le 2$ and $\varepsilon \le Q(\psi, \psi')/80$. Then,
$$t_{\mathrm{mix}}(\varepsilon) \ge \frac{c\, Q(\psi, \psi')}{\varepsilon\, d(\lambda, \lambda')}$$
for an absolute constant c > 0. In particular, if λ and λ′ are two eigenvalues with the smallest distance between them along the circle, then $d(\lambda, \lambda')^{-1} = t_{\mathrm{rel}}$ and we obtain the estimate
$$t_{\mathrm{mix}}(\varepsilon) \ge c\,\varepsilon^{-1}\, Q(\psi, \psi')\, t_{\mathrm{rel}}.$$
The main message of Theorems 1.1 and 1.2 is that the relation between the mixing and relaxation times in the quantum case is similar to the analogous relation in the classical case. However, the relaxation time is defined differently in the quantum case: it is not the inverse of the difference between the largest and the second largest eigenvalue, but the inverse of the minimal distance between all distinct eigenvalues. Previously, the speed of convergence of (unitary) discrete-time quantum walks was investigated in [1]. The upper bound for quantum walks that we obtain in Theorem 1.1 is similar to the bound in Theorem 6.1 of [1]. The mixing time is $O(t_{\mathrm{rel}} \log(m))$, where $t_{\mathrm{rel}}$ is the inverse of the minimal distance between the distinct eigenvalues of the matrix U. The main difference of our result from the result in [1] is that we have log(m) instead of log(n), where m and n are the numbers of distinct and all eigenvalues, respectively.
This difference is significant for the case of the discrete walk on the hypercube, where the number of eigenvalues is $2^d$ and the number of distinct eigenvalues is d + 1. In particular, we show that the mixing time on the hypercube is O(n log n/ε) and not exponential, as was suggested in [12] based on previous estimates in [1]. The lower bound that we obtain is in terms of the relaxation time $t_{\mathrm{rel}}$: it essentially says that the mixing time is $\Omega(t_{\mathrm{rel}})$. This bound is different from the bound obtained in [1], which is formulated in terms of a geometric property of the underlying graph. In addition, the mixing time is defined differently in [1]. As a result, the mixing time of the Hadamard walk on the cycle is of order $O(n \log n/\varepsilon^3)$ in Theorem 4.2 of [1], and of order $O(n^2 \log n/\varepsilon)$ in our Example 1. In the classical case, the mixing time is of order $O(n^2 \log \varepsilon^{-1})$.

Now let us consider non-unitary quantum walks. The study of these walks helps us to understand how decoherence affects the performance of quantum algorithms. It was noted (see [7]) that decoherence in quantum walks can be useful for quantum algorithms. In particular, it appears that a small amount of decoherence can speed up the mixing of the walk. Numerical evidence in [7] was later corroborated by analytical estimates in [15]. More information about decoherence in quantum walks and additional references can be found in the review article [6].

Let M denote the linear space of Hermitian linear operators acting on $L^2_{\mathbb{C}}(V \times S)$. The space M is a Hilbert space with respect to the norm $\|\rho\|_2 = \bigl(\mathrm{Tr}\,\rho^2\bigr)^{1/2}$ (which we call the $L^2$-norm). Other useful norms on M are $\|\rho\|_1 = \mathrm{Tr}(|\rho|)$, where $|\rho| = \sqrt{\rho^2}$, and $\|\rho\|_{L^2(\mu)} = \bigl(\mathrm{Tr}\,\mu\rho^2\bigr)^{1/2}$, where μ is a density matrix. We call these norms the trace and $L^2(\mu)$ norms, respectively. Superoperators are operators on M which possess some additional properties. Some well-known properties of superoperators are summarized in the proposition below. This proposition is an immediate consequence of Theorem 9.2 and Exercise 9.9 in [13].

Proposition 1.3. Let T be a superoperator. Then (i) T is a contraction with respect to the trace norm; (ii) all eigenvalues of T are less than or equal to 1 in absolute value; and (iii) there exists a density matrix $\rho_{st}$ such that $T(\rho_{st}) = \rho_{st}$.

Note that in many cases T is not self-adjoint in the $L^2$ norm. Moreover, recall that in the classical case the stochastic matrix of a random walk is always self-adjoint with respect to the norm $L^2(\mu)$, where μ is the stationary probability distribution. (This result can be traced to the fact that every random walk is a reversible Markov chain.) In contrast, the superoperator T of a non-unitary quantum walk is not necessarily self-adjoint with respect to the norm $L^2(\rho_{st})$. In fact, it appears that T is not even a normal operator (i.e., $T^*T \neq TT^*$) in many situations of interest. Proposition 1.3 establishes the existence of the stationary density matrix. However, it does not say anything about uniqueness or convergence properties, and we cannot expect these properties to hold in general. For example, a unitary quantum walk typically has many stationary density matrices, and convergence fails unless we average density matrices over time. The following theorem establishes the uniqueness and convergence properties provided that the quantum walk satisfies a certain condition. Let us call a density matrix ρ strictly positive, and write ρ > 0, if $\langle x, \rho x\rangle = 0$ implies that x = 0. Next, let T be a linear operator acting on M. We will call T strongly positive if for every density matrix ρ there exists an integer n > 0 such that $T^n \rho > 0$.
(This definition is similar to a corresponding definition in the theory of Markov chains, in which it is shown that the stochastic matrix of a Markov chain is strongly positive if and only if the Markov chain is ergodic, that is, aperiodic and irreducible.) The multiplicity of an eigenvalue λ is defined as $\dim \ker(\lambda I - T)$. The rank of λ is $\sup_{p>0} \dim \ker(\lambda I - T)^p$. The eigenvalue is called simple if its rank equals 1.

Theorem 1.4. Let T be the superoperator of a non-unitary quantum walk and suppose that T is strongly positive. Then (i) the stationary density matrix $\rho_{st}$ is unique and strictly positive; (ii) the eigenvalue 1 of T is simple; and (iii) for every density matrix $\rho_0$, $T^n \rho_0 \to \rho_{st}$ as $n \to \infty$.

The proof is in the Appendix. After convergence to the stationary distribution is established, it is natural to ask for an estimate on the mixing time. First, let us define the mixing time for non-unitary quantum walks. The definition is different from the definition for unitary walks, since no time-averaging is necessary. A measurement at time t finds the walking particle at vertex v in state s with probability $p_t(x, \rho) = \langle x|T^t(\rho)|x\rangle$, where x denotes the pair (v, s) and ρ is the initial density matrix. If T is strongly positive, then these probabilities converge to a limit $p(x) = \langle x|\rho_{st}|x\rangle$, which does not depend on the initial density matrix. Hence, we can define the total variation distance as
$$d(t, \rho) = \sum_{x \in V \times S} \bigl|\langle x|T^t(\rho)|x\rangle - p(x)\bigr|.$$
The corresponding mixing time can be defined as
$$t_{\mathrm{mix}}(\varepsilon) = \min\bigl\{t : \sup_{\rho}\, d(t', \rho) \le \varepsilon \text{ for all } t' \ge t\bigr\}.$$
Unfortunately, while it is easy to see that the asymptotic behavior of $T^t$ is governed by the spectral radius of T, it is difficult to estimate the mixing time because of the non-normality of the operator T. The essential difficulty is that for such operators it is hard to estimate the duration of the transient behavior. It is the same problem that makes it difficult to estimate the mixing time for non-reversible Markov chains. (In one particular example of a non-unitary continuous-time walk on the cycle this difficulty has been overcome and an estimate on the mixing time has been derived in [15].)

We consider several examples of unitary walks in this paper. The table below summarizes the results for unitary quantum walks on a complete graph, a cycle, and a hypercube.

Graph              Mixing time
Complete graph     $\sim \varepsilon^{-1}$
Cycle              $\sim \varepsilon^{-1}\, n^2 \log n$
Hypercube          $\sim \varepsilon^{-1}\, n \log n$

It appears from this table that the mixing time for quantum walks is of similar order as that for the corresponding classical random walks. In particular, unitary quantum walks do not allow a quadratic speedup over classical walks, in contrast to the results for the mixing time in [1]. The reason for this difference is that the mixing time defined in [1] restricts the initial distributions of the particle to the class of distributions concentrated on a particular vertex of the graph, while we allow arbitrary initial distributions. Note that this result does not rule out that a quadratic speedup can be achieved by non-unitary quantum walks. Some evidence in favour of this conjecture can be found in [7] and [15]. The rest of the paper is organized as follows. In the next section, we apply the bounds on mixing times to particular examples of quantum walks on the cycle, hypercube, and complete graph. The proofs of the theorems are relegated to the Appendix.

Example 1. (Cycle). Proposition 2.1. The mixing time for the Hadamard quantum walk on the n-cycle satisfies the following inequalities:
$$c_1\,\varepsilon^{-1} n^2 \;\le\; t_{\mathrm{mix}}(\varepsilon) \;\le\; c_2\,\varepsilon^{-1} n^2 \log n,$$
where $c_1$ and $c_2$ are positive constants.

Proof: The eigenvalues of the Hadamard walk on the cycle with n vertices were found in [1]. They are $\lambda_k^{\pm} = e^{\pm i\beta_k}$, where $\cos\beta_k = \frac{1}{\sqrt{2}}\cos\theta_k$ and $\theta_k = 2\pi k/n$, for k = 0, ..., n − 1. In order to describe the eigenvectors, let $\chi_k$, 0 ≤ k ≤ n − 1, be the functions in $L^2(\mathbb{Z}_n)$ defined by the formula $\chi_k = \sum_{r=0}^{n-1} \exp\bigl(2\pi i\,\frac{kr}{n}\bigr)\,\delta_r$. Then all eigenvectors have the form $v \otimes \chi_k$, where v is a 2-vector that depends on k.
Indeed, if S and S* are the left and right shift operators on $L^2(\mathbb{Z}_n)$, respectively, then we can write U as a 2-by-2 block matrix, with blocks $U_{11}$ and $U_{12}$ equal to $S/\sqrt{2}$, and blocks $U_{21}$ and $U_{22}$ equal to $-S^*/\sqrt{2}$ and $S^*/\sqrt{2}$, respectively. Since $S\chi_k = e^{i\theta_k}\chi_k$ and $S^*\chi_k = e^{-i\theta_k}\chi_k$, it follows that U acts on the subspace spanned by the vectors $v \otimes \chi_k$ as the 2-by-2 matrix
$$A_k = \frac{1}{\sqrt{2}}\begin{pmatrix} e^{i\theta_k} & e^{i\theta_k} \\ -e^{-i\theta_k} & e^{-i\theta_k} \end{pmatrix}.$$
Then, the eigenvectors of $A_k$ can be written as $v_k^{\pm}$, with eigenvalues $\lambda_k^{\pm} = e^{\pm i\beta_k}$. Note that $\cos\beta_k = \frac{1}{\sqrt{2}}\cos\theta_k$. The smallest difference between the $\beta_k$ occurs when k = 0 and 1, and it can be estimated by $c/n^2$ for a suitable constant c. It follows that the relaxation time is $t_{\mathrm{rel}} \sim c n^2$, and by Theorem 1.1 the mixing time is
$$t_{\mathrm{mix}}(\varepsilon) \le c_2\,\varepsilon^{-1} n^2 \log n$$
with a certain constant $c_2 > 0$. It is easy to estimate the overlap of the eigenvectors that correspond to the eigenvalues $e^{i\beta_0}$ and $e^{i\beta_1}$: it is greater than 0.97 for all n. By Theorem 1.2, we have $c_1\,\varepsilon^{-1} n^2 \le t_{\mathrm{mix}}(\varepsilon)$. QED.

Example 2. (Hypercube). The mixing time of the quantum walk on a hypercube was previously studied in [12], and we use their setup in the definition of the quantum walk. The quantum walk on the hypercube is also analyzed in [5], with emphasis on the hitting time of the walk. Consider the hypercube graph $(\mathbb{Z}_2)^n$ with $2^n$ vertices. We think of the vertices as indexed by the numbers from 0 to $2^n - 1$ in the binary representation with n digits. The edges of the graph connect numbers that differ in one bit only. The set of states S consists of n elements. We consider the Grover quantum walk. That is, a particle at vertex v in state s goes to the vertex w which differs from vertex v only in bit s. It remains in state s with amplitude 2/n − 1 and goes to state s′ ≠ s with amplitude 2/n.

Proposition 2.2. The mixing time for the Grover quantum walk on the n-dimensional hypercube satisfies the following inequalities:
$$c_1\,\varepsilon^{-1} n \;\le\; t_{\mathrm{mix}}(\varepsilon) \;\le\; c_2\,\varepsilon^{-1} n \log n,$$
where $c_1$ and $c_2$ are positive constants.

Proof: The eigenvalues and eigenvectors of the Grover quantum walk on the n-dimensional hypercube were found by Moore and Russell in [12]. The eigenvalues are
$$\lambda_k^{\pm} = 1 - \frac{2k}{n} \pm \frac{2i}{n}\sqrt{k(n-k)},$$
where k = 0, ..., n. We describe the eigenvectors below. For the convenience of the reader, we also give a short verification of the result. For each sequence $t = (t_1, t_2, \ldots, t_n)$ of 0s and 1s, define $\chi_t \in L^2(\mathbb{Z}_2^n)$ by the formula $\chi_t(x_1, \ldots, x_n) = 2^{-n/2}(-1)^{\sum_i t_i x_i}$. All eigenvectors of the matrix U have the form $v \otimes \chi_t$, where v is an n-vector that depends on t. Indeed, the unitary matrix U can be written as an n-by-n block matrix, in which the ij-th block is $aS_j$ if i = j and $bS_j$ if i ≠ j. Here a = 2/n − 1, b = 2/n, and $S_k : L^2(\mathbb{Z}_2^n) \to L^2(\mathbb{Z}_2^n)$ is the shift operator which flips the k-th bit: $(S_k f)(x_1, \ldots, x_n) = f(x_1, \ldots, x_k \oplus 1, \ldots, x_n)$. Since $S_j \chi_t = (-1)^{t_j}\chi_t$, the operator U acts on the vectors $v \otimes \chi_t$ as the n-by-n matrix A with entries $A_{ij} = \bigl[a\,\delta_{ij} + b\,(1-\delta_{ij})\bigr](-1)^{t_j}$. It is easy to verify that the following vectors are eigenvectors of A. Let k be the number of non-zero entries in the vector t. First, assume that n > k ≥ 1 and define $x = \pm i\sqrt{k/(n-k)}$ and $v_r = x^{1-t_r}$. Then $v = (v_1, \ldots, v_n)$ is an eigenvector of A with eigenvalue $\lambda_k^{\pm}$. In addition, note that every non-zero vector v such that $v_r = 0$ if $t_r = 0$ and $\sum_{r=1}^{n} v_r = 0$ is an eigenvector of A with eigenvalue 1. The set of such vectors forms an eigenspace of dimension k − 1. Similarly, every non-zero v such that $v_r = 0$ if $t_r = 1$ and $\sum_{r=1}^{n} v_r = 0$ is an eigenvector of A with eigenvalue −1. The set of such vectors forms an eigenspace of dimension n − k − 1. By counting the dimensions of the eigenspaces, it is clear that these are all the eigenvalues of the matrix A. Since there are $2^n$ different choices of the vector t, we have also found all eigenvalues of the matrix U. It follows that these eigenvalues are ±1 and $\lambda_k^{\pm}$ for k = 1, ..., n − 1. From the formula for the eigenvalues, the distance between distinct eigenvalues can be estimated from below as $\Delta > \frac{2}{n}$. Hence, $t_{\mathrm{rel}} < \frac{n}{2}$.
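As a quick numerical check of this spectrum and of the bound $\Delta > 2/n$, one can build U explicitly for a small hypercube; the following Python/NumPy sketch uses an arbitrary basis ordering and is only a verification aid, not part of the proof.

```python
import numpy as np

n = 4                       # hypercube dimension, kept small for a dense check
N = 2 ** n                  # number of vertices
C = 2.0 / n - np.eye(n)     # Grover coin: off-diagonal 2/n, diagonal 2/n - 1

# Basis |v> tensor |s>, index v*n + s: move along direction s, then apply the coin.
U = np.zeros((N * n, N * n))
for v in range(N):
    for s in range(n):
        w = v ^ (1 << s)                 # flip bit s of the vertex label
        for sp in range(n):
            U[w * n + sp, v * n + s] = C[sp, s]
assert np.allclose(U @ U.T, np.eye(N * n))   # U is real orthogonal, hence unitary

# Predicted eigenvalues: 1 - 2k/n +/- (2i/n) sqrt(k(n-k)), k = 0..n
predicted = {complex(1 - 2 * k / n, sg * 2 * np.sqrt(k * (n - k)) / n)
             for k in range(n + 1) for sg in (1, -1)}
eigs = np.linalg.eigvals(U)
assert all(min(abs(e - q) for q in predicted) < 1e-8 for e in eigs)

# Minimal circle distance between distinct eigenvalue arguments vs. 2/n
args = sorted({round(np.angle(q), 10) for q in predicted})
gaps = [b - a for a, b in zip(args, args[1:])] + [2 * np.pi + args[0] - args[-1]]
print(f"min gap = {min(gaps):.3f} > 2/n = {2 / n:.3f}")
```

For n = 4 the minimal gap is π/6 ≈ 0.524, consistent with the estimate Δ > 2/n = 0.5.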
By applying Theorem 1.1, we find For the lower bound, consider for simplicity the case of even n = 2m. (The case of odd n is similar.) Let v x,k denote the value of function v ∈ L 2 ((Z 2 ) n × Z n ) on vertex x and state k, and consider the eigenvectors that correspond to eigenvalues λ + m and λ + m+1 respectively. Then, it is easy to compute the overlap of these eigenvectors as 1 − (m + 1) / (2m 3 ) ∼ 1 for large n. The distance between arguments of eigenvalues λ + m and λ − m+1 is approximately 2/n. Hence, by Theorem 1.2 we have the inequality t mix (ε) n 2ε . QED. Example 3. (Complete graph) There are several ways to define a discrete-time walk on the complete graph with n vertices. We will consider the following variant. Let |S| = n. Define the entries of the unitary matrix as follows. In words, let the particle start at vertex v in state s. Then at the next moment of time it will be at vertex w = s. The particle moves to state s ′ with amplitude (−2/n) for s ′ = v. If s ′ = v, then the amplitude of the transition is (1 − 2/n) . Proposition 2.3. The mixing time for the quantum walk on the complete graph satisfies inequalities: where c is a positive constant. Proof: We will show that the eigenvalues of U are 1, −1, i and −i with multiplicities n(n − 1)/2, 1 + (n − 1)(n − 2)/2, n − 1, and n − 1, respectively. Let X vs = ψ (v, s) . Then, the action of U can be written as follows: where X T is the transposed matrix X, 1 n is the column n-by-1 vector that consists of all ones, and 1 T n is the corresponding row vector. If X = X T and all columns of X sum to 0, then U (X) = X. This gives us an eigenspace of operator U with eigenvalue 1 and dimension n(n − 1)/2. Similarly if X T = −X and all columns of X sum to 0, then U (X) = −X. This gives us an eigenspace of U with eigenvalue −1. In addition, U (I) = −I. Hence, the dimension of the eigenspace with eigenvalue −1 is 1 + (n − 1)(n − 2)/2. In order to find the eigenspaces with eigenvalues ±i, consider U 2 . It acts as follows: Let c 1 , . . . , c n and r 1 , . . . , r n be arbitrary numbers satisfying the conditions c i = r j = 0, and define X ij = r i + c j . Then U 2 (X) = −X. Hence, these matrices belong to the eigenspace of U 2 with eigenvalue −1. The dimension of this space is 2n − 2. It follows that U has two eigenspaces of dimension n − 1 which correspond to eigenvalues i and −i, respectively. By counting dimensions we confirm that we have found all eigenvalues and eigenspaces of matrix U. QED. Proof of Theorem 1.4: The proof of (i) and (ii) is an application of results by Krein and Rutman from [9]. In this paper a cone K in a Banach space X is fixed and operator A ∈ L (X) is called strongly positive if for every non-zero x ∈ K, there is an integer n > 0, such that A n x is in the interior of K. Theorem 6.3 of this paper (on page 70 of the English translation) shows that if A is compact and strongly positive, then there exists one and only one eigenvector of A in the interior of K and the corresponding eigenvalue exceeds all others in absolute value. Moreover, the proof of the theorem shows that this eigenvalue is simple. The claim of our theorem follows if we apply the Krein-Rutman theorem to the cone of positivedefinite matrices. Indeed, by 1.3 there exists ρ st such that T ρ st = ρ st . Since T is strongly positive, hence ρ st is in the interior of K (i.e., strictly positive); by the Krein-Rutman theorem it is the only eigenvector in the interior of K, and its eigenvalue 1 is simple. 
For (iii), let Z be the space of Hermitian matrices with zero trace, $Z = \{\rho : \mathrm{tr}(\rho) = 0\}$. Then $TZ \subset Z$ and all the eigenvalues of $T|_Z$ are less than 1 in absolute value, because 1 is a simple eigenvalue and $\rho_{\mathrm{st}} \notin Z$. It follows that the spectral radius $\eta$ of $T|_Z$ is smaller than 1. Hence, $\lim_{n\to\infty} \|(T|_Z)^n\|^{1/n} = \eta < 1$, which implies that $T^n z \to 0$ for every $z \in Z$. Since $\rho_0 - \rho_{\mathrm{st}}$ belongs to Z for every density matrix $\rho_0$, we conclude that $T^n \rho_0 \to \rho_{\mathrm{st}}$ for every $\rho_0$. QED.
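As a concrete illustration of part (iii), the following sketch (an ad hoc qubit example, not taken from the paper) iterates a trace-preserving map that sends every density matrix into the interior of the positive cone in a single step, and shows the iterates settling on the unique stationary state regardless of the starting point.

```python
# Ad hoc qubit illustration of part (iii): amplitude damping mixed with depolarizing noise
# maps every state strictly inside the cone, so iterating it converges to the unique fixed point.
import numpy as np

def apply_channel(rho, kraus):
    return sum(K @ rho @ K.conj().T for K in kraus)

p, g = 0.3, 0.4                                   # depolarizing weight and damping strength
K0 = np.sqrt(1 - p) * np.array([[1, 0], [0, np.sqrt(1 - g)]])
K1 = np.sqrt(1 - p) * np.array([[0, np.sqrt(g)], [0, 0]])
paulis = [np.eye(2), np.array([[0, 1], [1, 0]]),
          np.array([[0, -1j], [1j, 0]]), np.array([[1, 0], [0, -1]])]
kraus = [K0, K1] + [np.sqrt(p / 4) * P for P in paulis]   # Kraus operators summing to identity

rho = np.array([[0.2, 0.1j], [-0.1j, 0.8]])       # an arbitrary initial density matrix
for _ in range(200):
    rho = apply_channel(rho, kraus)
print(np.round(rho, 6))                            # the limit does not depend on the start
```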
2010-07-21T23:33:53.000Z
2010-04-01T00:00:00.000
{ "year": 2010, "sha1": "21e006666fbb029209d874ddce7c3fa4852b4550", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1004.0188", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "8cf7067aec322612d705e70f75a9cc73a8c7d0bb", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Mathematics", "Physics" ] }
233476457
pes2o/s2orc
v3-fos-license
Experimental demonstration of conjugate-Franson interferometry Franson interferometry is a well-known quantum measurement technique for probing photon-pair frequency correlations that is often used to certify time-energy entanglement. We demonstrate the complementary technique in the time basis, called conjugate-Franson interferometry, that measures photon-pair arrival-time correlations, thus providing a valuable addition to the quantum toolbox. We obtain a conjugate-Franson interference visibility of $96\pm 1$% without background subtraction for entangled photon pairs generated by spontaneous parametric down-conversion. Our measured result surpasses the quantum-classical threshold by 25 standard deviations and validates the conjugate-Franson interferometer (CFI) as an alternative method for certifying time-energy entanglement. Moreover, the CFI visibility is a function of the biphoton's joint temporal intensity and is therefore sensitive to that state's spectral phase variation, something which is not the case for Franson interferometry or Hong-Ou-Mandel interferometry. We highlight the CFI's utility by measuring its visibilities for two different biphoton states, one without and the other with spectral phase variation, and observing a 21% reduction in the CFI visibility for the latter. The CFI is potentially useful for applications in areas of photonic entanglement, quantum communications, and quantum networking. Time-energy entanglement is the quintessential quantum resource for enabling next-generation quantum technologies such as one-way quantum computation [1], quantum-enhanced sensing [2][3][4], and quantum-secured communications [5,6]. Franson interferometry is a wellknown technique for measuring the nonlocal timing coincidence of photon pairs [7]. Because Franson interference visibility resembles the Clauser-Horne-Shimony-Holt (CHSH) inequality, it is often used to characterize the quality of a biphoton's time-energy entanglement [8]. Nevertheless, Franson interferometry only quantifies the photon pair's correlation in the frequency domain and does not provide correlation information in the time domain [9]. Without time-domain characterization, Franson interferometry by itself cannot reveal a full picture of the biphoton's nonclassical correlations. Characterization of entangled photon pairs in the time domain is challenging because there is no readily available experimental method to directly measure two-photon timing correlation. One can extract two-photon time correlation from their joint temporal intensity (JTI) measurements but they typically require sub-picosecond temporal gating and single-photon nonlinear conversion that tend to limit measurement efficiencies [10,11]. The conjugate-Franson interferometer (CFI) was proposed as a quantum measurement technique for probing two-photon correlation in the time domain in contrast to the Franson interferometer's frequency-domain probing [9]. The two interferometric techniques form a complementary quantum-measurement duo for quantifying biphotons' time-energy entanglement. Indeed, the CFI was proposed to work together with the Franson interferometer to provide a tighter bound on an eavesdropper's accessible information in high-dimensional quantum key distribution than is achievable with the Franson interferometer alone [9]. The addition of the CFI to the expanding quantum toolbox offers new or improved measurement capability in quantum photonic studies. 
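The quoted margin over the quantum-classical threshold is simple arithmetic, shown here for concreteness:

```python
# Arithmetic behind the "25 standard deviations" statement: a 96 +/- 1% visibility versus
# the 1/sqrt(2) ~ 70.7% quantum-classical threshold.
visibility, sigma = 0.96, 0.01
threshold = 2 ** -0.5
print((visibility - threshold) / sigma)   # ~ 25
```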
Although biphoton spectral phase information can be obtained using frequencyresolved [12] or time-resolved [13] two-photon local interference, these techniques require nearly-degenerate photon pairs. The CFI, however, is a nonlocal two-photon measurement that is suitable for nondegenerate photon pairs. Other means to probe temporal correlations include the use of an electro-optic spectral shearing interferometer [14] with femtosecond pulse gating, and phasesensitive detection with a stable and well-characterized classical field [15]. The CFI, on the other hand, does not require a reference field and can work with photon pairs generated by pulsed or continuous-wave (cw) pumping. Recent studies on quantum frequency combs have underscored the inability of Hong-Ou-Mandel interference (HOMI) [16] or Franson interference to distinguish two frequency combs that differ only in their spectral phase content. Lingaraju et al. made HOMI measurements on biphoton frequency combs with different spectral phase variations and found identical HOMI signatures [17]. To understand what properties of biphoton frequency combs can be extracted by different interferometric measurements, Chang et al. argues that both HOMI and Franson interference are functions of the biphoton's joint spectral intensity (JSI), whereas the CFI measures the state's JTI [18]. Because spectral phase variation does not affect the JSI, it confirms the observation in [17] and suggests that the CFI is the appropriate measurement tool to distinguish combs with spectral phase variations. A classicaloptics analog is how linear dispersion of a transformlimited optical pulse imposes a phase chirp that results in pulse broadening which is detectable by time-domain but not frequency-domain measurements. In this Letter, we report implementing the CFI and obtaining a 96 ± 1% CFI fringe visibility without background subtraction for time-energy entangled photon pairs generated by cw pumped spontaneous parametric down-conversion (SPDC) Our measured visibility surpasses the quantum-classical threshold by ∼25 standard deviations, thus validating the CFI as a valuable tool for quantifying a biphoton's time-energy entanglement. Moreover, we demonstrate the CFI's unique capability by utilizing it to distinguish between two biphoton states that differ only in their spectral phase content, one having a uniform phase and the other with a nonuniform phase. Our CFI measurements show a visibility degradation of 21.2% for the biphoton state with a nonuniform spectral phase when compared to the visibility obtained with a uniform phase (which is transform limited), in agreement with our theoretical calculation. The visibility degradation indicates a decrease in timing correlation as the result of the presence of spectral phase, whose information cannot be obtained using standard tools for analyzing the joint properties of photon pairs, such as HOMI, Franson interference, and JSI measurements [17,19,20]. We expect that the addition of the CFI to the quantum toolbox provides a simpler way to characterize time-domain correlation and a new method to monitor spectral phase information of time-energy entangled photon pairs. Hence we believe the CFI will enhance future developments of entanglement systems for computing, communication, and sensing applications. The conjugate-Franson interferometer comprises two Mach-Zehnder interferometers (MZIs) that are separated in space with each MZI having equal-length arms. 
For time-energy entanglement characterization, signal (idler) photons of entangled signal-idler photon pairs are sent to one (the other) MZI, and their coincidence outputs are monitored to measure the conjugate-Franson interference. An optical frequency shifter is placed in one of the arms within each interferometer, implementing a ∆Ω frequency shift for the signal photons and a −∆Ω frequency shift for the idler photons, with ∆Ω large enough to rule out single-photon interference. The frequencyshifted and the frequency-unshifted paths interfere at a 50/50 beam splitter and acquire a phase difference of φ S (φ I ) within the signal (idler) interferometer. The outputs from both MZIs are sent to dispersive elements that impose second-order dispersions with equal magnitudes but opposite signs. The dispersed signal and idler photons are then detected by superconducting nanowire single-photon detectors (SNSPDs) and their timing coincidences are recorded. The second-order dispersions imposed by the dispersive elements correlates the frequency content of the inputs to their measured arrival times, thus effectively converting the performed timedomain measurement result to a frequency-domain measurement. The opposite signs of the two dispersive elements, together with nonlocal dispersion cancellation, recover the signal-idler frequency coincidences as signalidler timing coincidences and thus distinguish between different signal-idler sum frequencies. The biphoton for time-energy entangled photon pairs produced by cw pumped SPDC can be written in its timedomain representation as [21]: where t + = (t S + t I )/2 and t − = (t S − t I ), with t S (t I ) representing the time for the signal (idler) photon. ψ SI (t − ) is the joint temporal amplitude and its magnitude squared is the joint temporal intensity JTI(t − ) = |ψ SI (t − )| 2 . The CFI's coincidence probability is given by [21] where φ T = φ S +φ I is the sum of the signal and idler MZI phase differences in the CFI and η is the measurement efficiency in each MZI. The resulting visibility is This visibility result is similar to those obtained in Franson interferometry for time-energy entangled photons [7] and the CHSH test with polarization-entangled photons [8] in that the CFI is in the same class of quantum measurements for testing the violation of local hiddenvariable theory and quantifying the nonlocal feature of entanglement. To demonstrate conjugate-Franson interferometry, we built a CFI as shown in the experimental schematic of Fig. 1a with inputs of time-energy entangled photon pairs generated by cw pumped SPDC from a type-II phase-matched periodically-poled potassium titanyl phosphate (PPKTP) waveguide. The signal and idler photons were separated using a fiber polarizing beam splitter and sent to their respective MZIs. We repurposed two quadrature phase-shift keying (QPSK) modulators as the frequency shifters and set the frequency shift at ±∆Ω/2π = ±15.65 GHz [22]. We first characterized the frequency-shifted outputs from both frequency shifters using a narrowband cw laser at 1560 nm, as shown in Fig. 1b. Within the desired frequency range from −∆Ω to ∆Ω, a minimum of 25 dB carrier-to-sideband ratio was achieved for both blue and red frequency shifters. The outputs from the signal and idler MZIs were sent to fiber Bragg-grating dispersion modules that imposed equal magnitude but opposite sign dispersions of ±10 ns/nm, after which the photons were detected using SNSPDs and the detections were time-tagged electronically. 
Because the SPDC signal-idler photon pairs are timeenergy entangled, the imposed opposite dispersions cancel and their arrival times remain correlated [6]. Nevertheless, the existence of dispersion reveals the incoming photons' frequency information. The resolution of our frequency-domain measurement is 1.8 GHz, which is determined by the detectors' timing jitter and the amount of applied dispersion. A sample signal-idler coincidence measurement from the CFI is shown in Fig. 1c. The locations of the coincidence peaks correspond to the signalidler sum frequencies which in turn indicate the possible paths the signal and idler photon have traveled. There are four possible path configurations as signal and idler photons can travel along either the frequency-shifted or the frequency-unshifted arms. The two side peaks correspond to the case in which only one of the signal and idler photons has been frequency shifted such that the signal-idler frequency sum is detuned by ±∆Ω/2π = ±15.65 GHz. For the center peak the sum frequency remains unchanged, requiring that both photons travel along their frequency-unshifted arms or they both go through their respective frequency shifters. The two different paths are indistinguishable and they interfere as a function of the MZI phase sum φ T , producing the CFI's nonlocal coincidence interference similar to that of the Franson interferometer. We note that if the dispersion modules were not present, the three peaks could not be separated and the maximum interference visibility achievable would be limited to 50%. We observed that the center coincidence peak of Fig. 1c varied as a function of the phase sum φ T . The CFI was thermally insulated but we still observed that the center coincidence peak changed its magnitude due to residual thermal drift at an estimated rate of 0.3 rad/min for φ T . We recorded the signal-idler coincidences and plotted the coincidence counts of the center peak as a function of the accumulated phase sum φ T , as shown in Fig. 2a. The result shows a clear oscillatory signature as a function of the phase drift. To eliminate the possibility that the change of the coincidence counts was caused by changes of the photon flux, we also recorded the singles rates of both detectors at the same time during the coincidence measurement, as shown in Fig. 2b. The measured singles rates remain constant throughout the thermal drift duration and show that the oscillatory fringe is not a result of single-photon interference. To obtain an accurate value for the CFI's interference visibility, we attached a piezoelectric transducer (PZT) stack to the signal MZI's frequency-unshifted arm as a fiber stretcher to impose a controllable phase shift on φ S . We repeatedly scanned φ S from 0 to 2π while keeping φ I constant. The fringe visibility was calculated based on the observed minimum and maximum coincidence counts within each phase scan. We obtain a CFI visibility of 96 ± 1% based on 23 phase-scan measurements and an uncertainty of 1 standard deviation. We estimate that degradation of our CFI visibility measurements was due to phase fluctuations of the CFI (1.2%), modulators' extra sidebands (0.7%), modulator dispersion (0.5%), dark counts and noise background (0.5%), and SPDC multipair events (0.4%). The achieved visibility validates the quantum nonlocal correlation between our SPDC photon pairs, surpassing the quantum-classical threshold of 1/ √ 2 =70.7% by ∼25 standard deviations. 
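A back-of-the-envelope consistency check of the 1.8 GHz frequency-domain resolution quoted above; the 150 ps figure below is an assumed effective coincidence timing uncertainty, not a number taken from the text.

```python
# Convert the +/-10 ns/nm dispersion into delay per unit frequency detuning, then ask how much
# detuning is needed to move the dispersed arrival time by one timing-jitter width.
c, lam = 3e8, 1560e-9                   # speed of light (m/s), operating wavelength (m)
D = 10e-9 / 1e-9                        # dispersion: 10 ns of delay per nm, i.e. 10 s/m
s_per_hz = D * lam**2 / c               # seconds of delay per Hz of detuning
jitter = 150e-12                        # assumed effective coincidence timing uncertainty (s)
print(jitter / s_per_hz / 1e9, "GHz")   # ~1.8 GHz, consistent with the resolution quoted above
```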
Although our current measurement setup is affected by the postselection loophole, it can be modified to match the two side peaks temporally and eliminate the post-selection loophole [23]. Furthermore, our 96% measured visibility is on par with the 96% visibility threshold set by a modified inequality that patches the post-selection loophole [24]. As a result, the high CFI visibility confirms that our photon-pair source indeed produces time-energy entanglement and validates conjugate-Franson interferometry's being a promising quantum measurement technique for certifying time-energy entanglement. To show that the CFI brings new capability to the increasingly expanding photonic quantum toolbox, we demonstrate that the CFI visibility is sensitive to the spectral phase of a biphoton state, something which cannot be sensed by Franson or Hong-Ou-Mandel interferometers. First consider a cw pumped SPDC source generating a time-energy entangled biphoton state with a flat spectrum spanning 320 GHz and no spectral phase variation, i.e., its frequency-domain description is where Ψ SI (ω) = 1/ √ 2ω max is its joint spectral amplitude (JSA), ω S0 (ω I0 ) is the signal (idler) center frequency, and ω is the state's frequency detuning with a range of ±ω max where ω max /2π = 160 GHz. Now consider the state |ψ (2) SI whose JSA is where ω 1 /2π = 80 GHz. Although |ψ (2) SI differs from |ψ (1) SI when 0 < φ < 2π, these states cannot be distinguished by Franson or Hong-Ou-Mandel interference because |ψ (2) SI and |ψ (1) SI have identical JSIs, as shown in Fig. 3a, and those interferometers' interference patterns are determined by the JSI. On the other hand, the JTIs of |ψ (2) SI and |ψ (1) SI are different, because of JTI's spectral phase dependence. This difference is shown in Fig. 3b and c, which display the JTIs of |ψ (2) for φ = 0 and π, respectively., with the former also being the JTI of |ψ (1) SI . Equation 3 indicates that the CFI visibility is a function of the JTI and thus sensitive to spectral phase. Our theoretical calculation for the CFI visibility yields 95.1% for |ψ (1) SI and 75.5% for |ψ (2) SI with φ = π. This represents a ∼20% drop in CFI visibility that should be measurable experimentally. We used a type-0 phase-matched periodically poled lithium niobate (PPLN) crystal to generate time-energy entangled photon pairs with a flat spectrum across the telecommunication C band. We applied a programmable amplitude and phase spectral filter to shape the signal and idler spectra to be rectangular with a 320 GHz bandwidth and to impose an adjustable phase e iφ on both signal and idler light for frequency detuning |ω|/2π between 80 to 160 GHz, thus producing the biphoton state |ψ (2) SI . We measured the CFI visibility at φ = 0, π/2, π, 3π/2, and 2π and Fig. 4 displays our results along with the theoretically calculated values. Because φ = 0 or 2π makes |ψ (2) SI = |ψ (1) SI , the 93.2 ± 2.0% visibility we obtained for φ = 0 and the 91.4 ± 2.0% we got for φ = 2π, with the uncertainty value being the standard deviation of 3 measurements, are consistent with that equivalence. Figure 4 shows that the CFI visibility degrades when spectral phase variation was introduced, reaching a minimum visibility of 72.0 ± 3.1% for φ = π, in good agreement with our calculation. In this simple example, the substantial visibility reduction of 21.2% from φ = 0 to φ = π clearly confirms the ability of the CFI to distinguish between states with different spectral phase content. 
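A short Fourier manipulation of the coincidence probability shows that the CFI visibility can be written as the normalized overlap |∫ dω Ψ_SI(ω + ΔΩ) Ψ*_SI(ω)| / ∫ dω |Ψ_SI(ω)|² of the joint spectral amplitude with a copy of itself shifted by ΔΩ. The sketch below (not the authors' code; the grid spacing and range are arbitrary choices) evaluates this overlap for the two states defined above and reproduces the quoted theoretical values of roughly 95% and 75%.

```python
# Numerical evaluation of the JSA self-overlap that gives the CFI visibility.
import numpy as np

df = 0.01e9                                        # 10 MHz frequency grid
f = np.arange(-200e9, 200e9 + df, df)              # detuning grid (Hz)
f_max, f_step, f_shift = 160e9, 80e9, 15.65e9      # band edge, phase-step edge, and dOmega/2pi

def cfi_visibility(phi):
    jsa = np.where(np.abs(f) <= f_max,
                   np.exp(1j * phi * (np.abs(f) >= f_step)), 0)   # flat 320 GHz JSA with phase step
    n = int(round(f_shift / df))                   # frequency shift expressed in grid points
    overlap = np.sum(jsa[n:] * np.conj(jsa[:-n]))  # proportional to the shifted self-overlap integral
    return abs(overlap) / np.sum(np.abs(jsa) ** 2)

print(round(cfi_visibility(0.0), 3), round(cfi_visibility(np.pi), 3))   # ~0.951 and ~0.755
```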
Our experimental results show that conjugate-Franson interferometry can be used not only for quantifying time-energy entanglement of biphotons but also for detecting their spectral phase differences, which is helpful in characterizing entangled systems with high-dimensional encoding [17,18]. In summary, we reported experimental realization of the conjugate-Franson interferometer, demonstrating a CFI visibility of 96±1% without any background subtraction for time-energy entangled photon pairs generated by cw pumped SPDC. The achieved visibility surpasses the quantum-classical threshold of ∼71% by 25 standard deviations and clearly validates the quantum entanglement feature between the SPDC signal and idler photons. To illustrate its application potential, we utilized the CFI as an enabling quantum measurement technique to distinguish two biphoton states with identical joint spectral intensities but different joint temporal intensities due to spectral phase variation. By introducing an adjustable spectral phase shift to a cw pumped SPDC biphoton state, we observed a significant CFI visibility drop of 21% between the two biphoton states, matching our theoretical calculations. Our results show that conjugate-Franson interferometry quantifies correlation in the time domain and is complementary to the well-known Franson interferometry. The CFI's dependence on the joint temporal intensity makes it a valuable addition to the suite of quantum measurement techniques for entanglement characterization and verification. The PPKTP waveguide was type-II phase-matched and pumped by a 780 nm cw laser. The orthogonallypolarized signal and idler photons were nondegenerate with ∼200 GHz offset between their center frequencies and each had a full-width at half-maximum (FWHM) bandwidth of 320 GHz. We used polarization control paddles and polarizing beam splitters to balance the flux between the frequency-shifted arm and the frequencyunshifted arm within each MZI of the CFI. The frequency shifters were dual-drive quadrature phase-shift keying (QPSK) modulators (Fujitsu FTM7961EX) operating in a configuration for single sideband generation [22]. The radio frequency (RF) electrical inputs to both modulators were derived from the same RF synthesizer. For each modulator, the 15.65 GHz RF signal was amplified and split by a 0 • -phase 50/50 power splitter to serve as inputs to the frequency shifters. The cables connecting the two power-splitter outputs to the modulator had a 10.96 cm length difference so that the split RF signals had a π/2 phase shift when they arrived at the modulator to satisfy the single-sideband generation condition. The signal's frequency shifter was configured to blue shift its input while the idler's shifter was configured to red shift its input. Tunable air gaps on translation stages were used to match the path lengths between the frequencyshifted and frequency-unshifted arms. We used a 50-nmbandwidth superluminescent diode (SLD) at 1560 nm to ensure the two path lengths were well matched. The path-length mismatch was upper bounded by the SLD source's 16 µm coherence length, which is much less than the expected ∼ 200 µm biphoton coherence length. The polarization between the two arms of each MZI was calibrated to be the same using classical light to ensure optimal interference at the 50/50 beam splitter. During operation, the signal's MZI had an 18.6 dB insertion loss and the idler's MZI had a 22.7 dB insertion loss. 
These high insertion losses were mainly due to the low conversion efficiencies of the frequency shifters [22]. The different insertion losses of the two MZIs was caused by performance difference of the two frequency shifters and tunable air gaps. The two fiber Bragg-grating dispersion modules had 3 dB insertion loss and passband from 1557.37 nm (192.50 THz) to 1562.74 nm (191.84 THz). After the dispersion modules, the photons were detected with WSi SNSPDs with ∼80% system efficiency and 120 ps timing jitter. The detected signal and idler spectral ranges were limited by the dispersion modules' 660 GHz passband. The detection events were timetagged using a time-tagger (Hydraharp 400) with 128 ps timing resolution. We placed the CFI in a custom-built two-stage thermal enclosure. Both the outer and inner layers were made from cardboard and thermal-isolation foam. This passive thermal enclosure slowed down the ambient thermal fluctuation and also restricted the inside air current flow so that the phase of the fiber interferometer was rela-tively stable for the duration of measurements. During measurement, the temperature outside the enclosure was kept reasonably stable. Nevertheless, we observed that residual environmental fluctuations imposed a phase drift on the CFI's φ T at a ∼0.3 rad/min rate, which we measured by monitoring the power drift of both MZIs simultaneously. The observed phase drift was relatively small so that the phase during measurement could be approximated by a constant. The signal-idler coincidences were recorded as the CFI underwent the thermal phase drift process and each coincidence data point was integrated for 30 seconds. For measuring the CFI visibility, we focused on obtaining the coincidence counts near the phase locations where the maximum and minimum coincidence rates would occur. We used a 150 V PZT stack to serve as a controllable phase shifter by stretching the fiber length on the frequency-unshifted arm of the signal MZI. The stretched fiber increased the optical path length and introduced more precise phase shifts. Using the PZT stack, we were able to apply phase shifts from 0 to 2π when we applied voltages from 0 to 120 V. We changed the phase from 0 to 2π adaptively to measure the coincidences near their maximum and minimum values. When the measured coincidence was within 10% of either the maximum or minimum, we applied phase-change steps of 0.15 rad. When the measured coincidence was outside of the 10% range of the target maximum or minimum counts, a larger phasechange step of 0.52 rad was used. This adaptive strategy allowed us to capture the maximum and minimum coincidences in a more efficient and controllable manner. For the measurement to distinguish between two biphoton states with different spectral phases, we used a PPLN crystal that was type-0 phase-matched and pumped by a 780 nm cw laser. A 50/50 beam splitter was used to separate the co-polarized signal and idler photons that incurred a 3 dB loss for postselected signalidler coincidence measurements. The signal and idler had flat spectra across the telecommunication C band. We used a Finisar waveshaper 1000S as the programmable spectral filter to control both the amplitude and phase of signal-idler joint spectral amplitude. I. INTRODUCTION The conjugate Franson interferometer (CFI) was introduced in Ref. [1] as a tool for securing high-dimensional quantum key distribution based on time-energy entangled biphotons. 
That paper's Supplemental Material includes a detailed derivation of the CFI's coincidence-count behavior. Thus we can content ourselves with a briefer presentation that gets at an essential feature of the CFI that was not made explicit in Ref. [1], i.e., the CFI's coincidence behavior is controlled by the biphoton state's joint temporal intensity (JTI). As such, the CFI complements the conventional Franson interferometer (FI), whose coincidence behavior is controlled by the biphoton state's joint spectral intensity (JSI). The biphoton's JSI is the squared magnitude of its joint spectral amplitude (JSA), which is the biphoton's properly normalized frequency-domain wave function. Similarly, the biphoton's JTI is the squared magnitude of its joint temporal amplitude (JTA), which is the biphoton's properly normalized time-domain wave function. It follows that knowing both the JSI and the JTI will allow the biphoton's full state to be determined by applying standard phase-retrieval techniques to recover the JSA's missing spectral phase, see, e.g., [2]. II. PRELIMINARIES We are interested in single spatial mode signal and idler fields produced by a type-II or type-0 phase matched spontaneous parametric downconverter. The scalar, photon-units, positive-frequency field operators for the relevant polarizations of the signal and idler will be taken to beÊ For what will follow it will be valuable to have these operators' frequency-domain decompositions [3,4], where ω S0 (ω I0 ) is the center frequency of the signal (idler). Our interest will be in biphoton states of these signal and idler, i.e., states of the form Here, |ω S0 + ω S S and |ω I0 − ω I I are signal and idler states consisting of single photons at detunings ω S and −ω I , respectively, from those field's center frequencies. The preceding states are properly normalized, viz., SI ψ|ψ SI = 1. Hence, taking we find that dω S dω I |Ψ SI (ω S , ω I )| 2 = 1. In other words, this biphoton's JSA is and its JSI is For a time-domain representation of this biphoton, we introduce signal and idler states |t S S and |t I I consisting of single photons at times t S and t I , respectively. These states are the Fourier duals of |ω S0 + ω S S and |ω I0 − ω I I , i.e., we have arXiv:2104.15084v1 [quant-ph] 30 Apr 2021 2 and |ω S S = dt S e −i(ω S 0 +ω S )t S |t S S and |ω I I = dt I e −i(ω I 0 −ω I )t I |t I I , from which we get Then, direct evaluation using Eqs. Later we shall employ the Gaussian biphoton wave functions, and where the root-mean-square (rms) coherence time, σ coh , is determined by the downconverter's pump linewidth and the rms correlation time, σ cor , is determined by the downconverter's phase-matching bandwidth. This biphoton can be realized by engineered phase matching of a periodically-poled nonlinear crystal, see, e.g., [5]. III. COINCIDENCE PROBABILITY OF CONJUGATE FRANSON INTERFEROMETRY To instantiate the CFI configuration shown in Fig. 1 of the main text, let the downconverter's signal beam undergo the ∆Ω > 0 frequency shift and normal dispersion, while the downconverter's idler beam undergoes the −∆Ω < 0 frequency shift and anomalous dispersion. Taking all the optics to be lossless, we then have that the positive-frequency field operators illuminating the single-photon detectors in Fig. 1 arê andÊ (+) 3 As shown in Ref. 
[1], for sufficiently high frequency shifts and a sufficiently high dispersion coefficient, $\beta_2$, there will not be any second-order interference and, in the absence of dark counts, the probability of registering a coincidence from a biphoton emitted by the downconverter is given by Eq. (19), where $\eta$ is the detectors' quantum efficiency. To rewrite Eq. (19) in terms of $\psi_{SI}(t_S, t_I)$, we first invert Eq. (12) to obtain $\Psi_{SI}(\omega_S, \omega_I) = \frac{1}{2\pi}\int dt_S\, dt_I\, \psi_{SI}(t_S, t_I)\, e^{i[(\omega_{S0}+\omega_S)t_S + (\omega_{I0}-\omega_I)t_I]}$. Using this result in Eq. (19) gives us the result we are seeking, $P_{\rm CFI}(\phi_S, \phi_I) = \frac{\eta^2}{8}\left\{1 + \int dt_S\, dt_I\, |\psi_{SI}(t_S, t_I)|^2 \cos[\Delta\Omega(t_S - t_I) + (\phi_S + \phi_I)]\right\}$, where $\phi_T \equiv \phi_S + \phi_I$ is the interferometer's phase sum. As an illustration of the coincidence probability's behavior, let us evaluate Eq. (22) using $\psi(t_S, t_I)$ from Eq. (16). The double integral is easily performed if we change to sum and difference coordinates, i.e., $t_+ \equiv (t_S + t_I)/2$ and $t_- \equiv t_S - t_I$. The result we obtain implies an interference fringe visibility $e^{-\Delta\Omega^2 \sigma_{\rm cor}^2/2} \approx 1$ for $\Delta\Omega \ll 1/\sigma_{\rm cor}$. IV. CONJUGATE-FRANSON INTERFEROMETER PHASE STABILITY CHARACTERIZATION To characterize our CFI setup's phase stability, we simultaneously monitored the power variations in the signal and idler's balanced Mach-Zehnder interferometers (MZIs) with a 1560 nm cw laser. In this phase-stability measurement, the frequency shift was not implemented. The measured results are shown in Fig. 1. From the MZIs' output powers, we calculated their relative phase changes $\Delta\phi_S$ and $\Delta\phi_I$. The change in the total phase shift is the sum of the two individual phase-shift changes, i.e., $\Delta\phi_T = \Delta\phi_S + \Delta\phi_I$. Overall, the calculated average drift rate of the sum phase was $\Delta\phi_T/\Delta t \sim 0.3$ rad/min.
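A quick numerical confirmation of the Gaussian-biphoton visibility derived above; the 1 ps correlation time used here is an illustrative choice rather than the experimental value.

```python
# Gaussian-biphoton check: for JTI(t_-) proportional to exp(-t_-^2 / (2 sigma_cor^2)), the
# fringe visibility should equal exp(-dOmega^2 sigma_cor^2 / 2).
import numpy as np

sigma_cor = 1.0e-12                           # illustrative 1 ps rms correlation time
d_omega = 2 * np.pi * 15.65e9                 # frequency shift used in the experiment (rad/s)
t = np.linspace(-20e-12, 20e-12, 20001)
jti = np.exp(-t**2 / (2 * sigma_cor**2))
v_num = abs(np.trapz(jti * np.exp(1j * d_omega * t), t)) / np.trapz(jti, t)
print(v_num, np.exp(-d_omega**2 * sigma_cor**2 / 2))   # the two values agree
```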
2021-05-03T01:16:17.531Z
2021-04-30T00:00:00.000
{ "year": 2021, "sha1": "3f54fc97514b746adebf52557d53ed9ec123af98", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2104.15084", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "3f54fc97514b746adebf52557d53ed9ec123af98", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Medicine", "Physics" ] }
1558295
pes2o/s2orc
v3-fos-license
Yang-Mills thermodynamics: The preconfining phase We summarize recent nonperturbative results obtained for the thermodynamics of an SU(2) and an SU(3) Yang-Mills theory being in its preconfining (magnetic) phase. We focus on an explanation of the involved concepts and derivations, and we avoid technical details. Introduction. This is the second one in a series of three papers summarizing the thermodynamics of an SU (2) and an SU (3) Yang-Mills theory as it is analysed nonperturbatively in [1]. The goal is to give a nontechnical presentation of the concepts involved in quantitatively describing the preconfining or magnetic phase. In the phase diagram of either theory this phase is sandwiched inbetween the deconfining phase (bosonic statistics) and the absolutely confining (fermionic statistics) phase. While fundamental, heavy, and fermionic test charges are confined by condensed magnetic monopoles dual gauge modes are massive but propagate. To derive the phenomenon of quark confinement from a microscopic Lagrangian based on an SU(3) gauge symmetry, that is, from Quantum Chromodynamics (QCD), is a major challenge to human thinking. This is due to the theory being strongly interacting at large distances which disqualifies a perturbative approach to this problem. The difficulties are overwhelming even in the somewhat simplified situation, where quarks are considered heavy and nondynamical test charges that are immersed into pure gluodynamics. Two main proposals for the confining mechanism are discussed in the literature: The dual Meissner effect, which takes place within a condensate of massless magnetic monopoles [2], and the condensation of infinitely mobile magnetic center-vortex loops [3] implying a dielectricity of the ground state which strongly increases with distance. In its electrically dual form the former mechanism is observed in Nature if a superconducting material is subjected to an external, static magnetic field. Within the Cooper pair condensate the field lines of the latter are forced into thin flux tubes [4]: an immediate consequence of infinitely mobile, that is, stiffly correlated electric charges. In the hypothetical situation, where the magnetic field is sourced by a pair of a heavy magnetic monopole and antimonopole at distance R the squeezing-in of flux lines leads to a linear potential at large R and thus to monopole confinement. A similar situation would hold if the condensate of Cooper pairs would be replaced by infinitely mobil magnetic dipoles. The magnetic dual of this phenomenon is that a condensate of electric dipoles confines a heavy electric charge and its anticharge. But an electric dipole originates from a circuit of magnetic flux. In an SU(N) Yang-Mills theory the latter is naturally provided by a magnetic center-vortex loop. The confinement mechanism involving a dual superconductor and a center-vortex condensate are mutually exclusive. One of the results in [1] is, however, that either of the two mechanisms belongs to one of the two separate phases with test-charge confinement in SU(2) and SU(3) gluodynamics. The objective of the present paper is the discussion of a phase of SU(2) and SU(3) gluodynamics whose ground state is a dual superconductor (magnetic phase). We focus on the SU(2) case and mention generalizations to the SU(3) case and the associated results in passing only. 
In discussing the properties of the ground state and those of its (noninteracting) quasiparticle excitations we pursue the following program: First, we remind the reader why pairs of nonrelativistic and screened Bogomol'nyi-Prasad-Sommerfield (BPS) [5] magnetic monopoles and antimonopoles arise as spa-tially isolated defects in the deconfining (electric) phase. Second, a continuous parameter, namely, the magnetic flux originating from a zero-momentum pair of a monopole and its antimonopole, both located at spatial infinity, is computed in the limit of total screening. The limit of total magnetic charge screening is dynamically reached at the electric-magnetic phase boundary [1,6]. Recall, that total screening means masslessness for each individual monopole or antimonopole and that the projection onto zero three-momentum corresponds to a spatial coarse-graining. The obtained magnetic flux, being an angular parameter, is identified with the euclidean time τ . Since only charge-modulus one monopoles and antimonopoles contribute to the flux the winding number, associated with the phase of a complex scalar field ϕ, is ± unity. In the (at this stage hypothetical but in a later, intermediate step shown to be selfconsistent) absence of interactions between the monopoles and antimonopoles in the condensate the field ϕ is energyand pressure-free. That is, its euclidean time dependence is BPS saturated. Assuming the existence of an externally provided mass scale Λ M , this determines ϕ's BPS equation. The latter, in turn, uniquely fixes ϕ's potential V M . Similar to the case of the adjoint scalar field φ in the electric phase the statistical and quantum mechanical inertness of the complex field ϕ is established by comparing V M 's curvature with the squares of temperature T and the maximal resolution |ϕ|. Notice that |ϕ| is the maximal spatial resolution after a coarse-graining up to a length scale |ϕ| −1 has been performed. Third, having derived the coarse-grained monopole physics in the absence of interactions the full effective theory is obtained by a minimal coupling of coarsegrained dual plane waves to the inert monopole sector. (The electric-magnetic transition is survived only by gauge fields transforming under U(1) D (SU(2)) and U(1) 2 D (SU(3)) [1]). A pure-gauge solution a D,bg µ to the dual gauge-field equations of motion emerges in the effective theory. This is the coarse-grained manifestation of monopole and antimonopole interactions mediated by plane-wave quantum fluctuations in the dual gauge fields. (The averaged-over quantum fluctuations are of off-shellnes larger than |ϕ| 2 .) By virtue of the pure-gauge configuration a bg D the vanishing pressure and the vanishing energy density of a (hypothetical) condensate of noninteracting monopoles and antimonopoles are shifted to ρ gs = −P gs = π Λ 3 M T (SU (2)). (For SU(3) two monopole condensates ϕ 1 , ϕ 1 and two pure-gauge configurations a D,bg µ,1 = a D,bg µ,2 exist, and one has ρ gs = −P gs = 2π Λ 3 M T .) The negative ground-state presssure can, on a microscopic level and at finite magnetic coupling g, be understood in terms of magnetic flux loops which collapse as soon as the are created. The collapse takes place under the influence of negative pressure occuring away from the vortex cores [1]. The coarse-grained manifestation of dual gauge modes scattering off the magnetic monopoles or antimonopoles along a zig-zag like path within the condensate is the abelian Higgs mechanism. The latter gives rise to quasiparticle masses. 
Fourth, the required invariance of the Legendre transformations for thermodynamical quantities under the applied coarse-graining yields a first-order differential equation which governs the evolution of the magnetic coupling g with temperature. This evolution allows to analyse the electric-magnetic phase transition and the transition to the absolutely confining (center) phase where center-vortex loops emerge as fermionic particles from the decaying ground state. Moreover, it determines the evolution of thermodynamical quantities. This evolution is exact on the one-loop level. Screened BPS magnetic monopoles. To understand the origin of isolated and screened magnetic monopoles and antimonopoles in the deconfining phase [7] we recall some properties of certain, topologically nontrivial, (anti)selfdual solutions to the euclidean Yang-Mills equations at finite temperature: calorons and anticalorons of nontrivial holonomy, topological charge modulus |Q| = 1, and no net magnetic charge [9]. As it turns out [1,8] only these configurations are relevant for a coarsegrained re-formulation of the fundamental theory at high temperatures. (The term holonomy refers to the behavior of the Polyakov loop when evaluated on a gauge-field configuration at spatial infinity. A configuration is associated with a trivial holonomy if its Polyakov loop is in the center of the group (Z 2 for SU (2) and Z 3 for SU (3)) and associated with a nontrivial holonomy otherwise.) On a microscopic level nontrivial holonomy is excited out of trivial holonomy by gluon exchanges between calorons and anticalorons. On the classical level the (anti)selfdual nontrivial-holonomy solution possesses a pair of a BPS monopole and its antimonopole, which do not interact, as constituents. Switching on one-loop quantum fluctuations, a situation investigated for an isolated (anti)caloron in [10], the monopole and its antimonopole either attract (small holonomy) or repulse (large holonomy) each other. The former process is much more likely than the latter and leads to a negative ground-state pressure after spatial coarse-graining: a monopole and its antimonopole attract each other so long until the annihilate and subsequently get re-created elsewhere. The repulsion between a monopole and its antimonopole, which both originate from a quantum blurred large-holonomy caloron, fades with an increasing number of small-holonomy caloron fluctuations taking place inbetween these particles. This facilitates the life of screened monopoles and antimonopoles in isolation. The screening of the magnetic charge g = 4π e by small-holonomy calorons is, on average, described by the gauge coupling e in the coarse-grained theory. A fact, important when considering the limit e → ∞ below, is that the sum of the masses of a screened monopole and its screened antimonopole, both originating from a dissociating large-holonomy caloron, is independent of the holonomy [9]. Notice that M m+a → 0 for e → ∞ and that this limit dynamically takes place at the electric-magnetic transition, where e ∼ − log(λ E − λ c,E ) (λ c,E ≡ 2πT Λ E ), through a total screening of the magnetic charge of an isolated monopole and its antimonopole by intermediate small-holonomy caloron fluctuations. The average needs to be performed in a physical (unitary) gauge where the magnetic flux emanating of each isolated monopole or antimonopole is compensated for by an associated Dirac string. 
A continuous dimensionless parameter, eventually to be identified with τ β , arises when considering the magnetic flux in the limit e → ∞ which belongs to a pair of noninteracting, stationary (with respect to the heat bath) monopole and antimonopole situated outside of an S 2 ∞ with infinite radius R = ∞. The latter acts as a heat bath to the pair. Notice that a monopole-antimonopole pair situated inside the S 2 ∞ does not contribute to the flux. (To readers having trouble distinguishing inside from outside for an S 2 ∞ we propose to consider R < ∞ first and then take the limit R → ∞.) More specifically, we are interested in the average flux through S 2 ∞ as a function of the angle δ between the monopole's and the antimonmopole's Dirac string. If there were no coupling to the heat bath of the monopole-antimonopole pair outside of S 2 ∞ the mean flux (average over the absolute orientation of the Dirac strings) would read [1] In the limit e → ∞ we haveF ± → 0 and no continuous parameter determining the τ dependence of ϕ ′ s phase (SU (2)) would arise. After the coupling to the heat bath is switched on and after performing a spatial average the mean occupation number for a massless monopole-antimonopole pair (with vanishing spatial momenta of its constituents) diverges in a such a way that the mean magnetic fluxF ±,th (δ) is finite. (The total spatial momentum of the monopole and antimonopole system vanishes such that each individual momentum is zero. Notice that in the absence of a dynamically generated scale Λ M the volume V , over which the spatial average is performed, is undetermined: The only available mass scale T , which could determine V , cancels in the Bose-distribution for p → 0 and e → ∞ since M m+a 's explicit T dependence is linear, see Eq. (1).) Let us show this in a more explicit way. The thermal average to be performed is [1] After setting p = 0 in exp β M 2 a+b + p 2 − 1 and by appealing to Eq. (1), the expansion of this term reads 8π 2 e 2 + · · · . (4) The limit e → ∞ can now safely be performed in Eq. (3), and we have This is finite and depends on the angular variable δ continuously. Now δ π is a (normalized) angular variable just like τ β is. Thus we may set δ π = τ β . Since ϕ (SU (2)) is spatially homogeneous (spatial average, p → 0!) its phaseφ ≡ ϕ |ϕ| depends on τ β only and this in a periodic way. Moreover, since the physical flux situation for the thermalized monopole-antimonopole pair does not repeat itself for 0 ≤ δ π ≤ 1 we conclude that this period is ± unity: To derive ϕ's modulus, which together with T determines the length scale |ϕ| −1 over which the spatial average is performed, we assume the existence of an (at this stage) externally provided mass scale Λ M . Since the weight for integrating out massless and noninteracting monopole-antimonopole systems in the partition function is T independent and since the cutoff in length for the spatial average defining |ϕ| is |ϕ| −1 an explicit T dependence ought not arise in any quantity being derived from such a coarse-graining. That is, in the effective action density any T dependence (still assuming the absence of interactions between massless monopoles and antimonopoles when performing the coarse-graining) must appear through ϕ only. Moreover, since integrating massless and momentum-free monopoles and antimonopoles into the field ϕ means that this field is energy-and pressure-free ϕ's τ dependence (residing in its phase) must be BPS saturated. 
On the right-hand side of ϕ's orφ's BPS equation the requirement of analyticity (because away from a phase transition the monopole condensate should exhibit a smooth T dependence) and linearity (because the τ dependence of ϕ's phase, see Eq. (6), needs to honoured) in ϕ orφ yields the following first-order equation of motion Notice that Eq. (7) is not invariant under Euclidean boosts: A manifestation of the existence of a singled-out rest frame -the spatially isotropic and spacetime homogeneous heat bath. Substituting ϕ = |ϕ|φ into Eq. (7) and appealing to Eq. (6), . Notice that the 'square' of the right-hand side in Eq. (7) uniquely defines ϕ's potential V M . (In contrast to a second-order equation of motion, following from an action by means of the variational principle, Eq. (7) does not allow for a shift V M → V M + const.) For the case SU(3) a BPS equation (7) arises for each of the two independent monopole condensates ϕ 1 and ϕ 2 . Finally, one shows by comparing the curvature of their potentials with the square of temperature and the squares of their moduli that the field ϕ (SU (2)) and the fields ϕ 1 , ϕ 2 (SU (3)) do neither fluctuate on-shell nor off-shell, respectively [1]: Spatial coarse-graining over nonfluctuating, classical configurations generates nonfluctuating macroscopic fields. Effective theory: Thermal ground state and dual quasiparticles. To obtain the full effective theory the spatially coarse-grained and topologically trivial (dual) gauge fields a D µ (SU (2)) and a D µ,1 , a D µ,2 (SU (3)) are minimally coupled (with a universal effective magnetic coupling g) to the inert fields ϕ (SU (2)) and ϕ 1 , ϕ 2 (SU (3)). Since the effective theory is abelian with (spontaneously broken) gauge group U(1) D (SU (2)) and U(1) 2 D (SU (3)) and since the monopole fields do not fluctuate thermodynamical quantities are exact on the one-loop level. Before we discuss the spectrum of quasiparticles running in the loop we need to derive the full ground-state dynamics in the effective theory. The classical equations of motion for the dual gauge field a D µ are where G D µν = ∂ µ a D ν − ∂ ν a D µ and D µ ≡ ∂ µ + ig a D µ . (For SU(3) the right-hand sides for the two equations for the dual gauge fields a D µ,1 , a D µ,2 can be obtained by the substitutions ϕ → ϕ 1 or ϕ → ϕ 2 in Eq. 8.) The pure-gauge solution to Eq. (8) with D µ ϕ ≡ 0 is given as In analogy to the deconfining phase, the coarse-grained manifestation a D,bg µ of monopoleantimonopole interactions, mediated by dual, off-shell plane-wave modes on the microscopic level, shifts the energy density ρ gs and the pressure P gs of the ground state from zero to finite values: ρ gs = −P gs = π Λ 3 M T (SU (2)) and ρ gs = −P gs = 2π Λ 3 M T (SU (3)). In contrast to the deconfining phase, where the negativity of P gs arises from monopole-antimonopole attraction, the negative ground-state pressure in the preconfining phase originates from collapsing and re-created center-vortex loops [1]. (There are two species of such loops for SU(3) and one species for SU (2)). The core of a given center-vortex loop can be pictured as a stream of the associated monopole species flowing oppositely directed to the stream of their antimonopoles [11]. 
Since by Stoke's theorem the magnetic flux carried by the vortex is determined by the dual gauge field a D,tr µ transverse to the vortex-tangential and since a D,tr µ is -in a covariant gauge -invariant under collective boosts of the streaming monopoles or antimonopoles in the vortex core it follows that the magnetic flux solely depends on the monopole charge and not on the collective state of monopole-antimonopole (2)) and λ c,M = 6.467 (SU (3)) g diverges logarithmically, motion. This, in turn, implies a center-element classification of the magnetic fluxes carried by the vortices justifying the name center-vortex loop.) Viewed on the level of large-holonomy calorons an unstable center-vortex loop is created within a region where the mean axis for the dissociation of several calorons represents a net direction for the monopole-antimonopole flow. Notice that each so-generated vortex core must form a closed loop due to the absence of isolated magnetic charges within the monopole condensate. In contrast to the deconfining phase a rotation to unitary gauge a D,bg µ = 0, ϕ = |ϕ| is facilitated by a smooth, periodic gauge transformation which leaves the value 1 of the Polyakov loop invariant: the electric Z 2 degeneracy, observed in the deconfining phase, is lifted in the ground-state. (For SU(3) it is an electric Z 3 degeneracy that is lifted.) By the dual (abelian) Higgs mechanism the mass of (noninteracting) quasiparticle modes is Evolution of magnetic coupling. The requirement that Legendre transformations between thermodynamical quantities need to be invariant under the applied spatial coarse-graining determines the evolution of the magnetic coupling g with temperature in terms of a first-order differential equation [1]. In Fig. 1 the temperature evolution of g is shown for SU(2) and SU (3). By virtue of Eq. (10) and Fig. 1 it is clear that inside the preconfining phase dual gauge modes are massive. Thus the full Polyakov-loop expectation is extremely suppressed as compared to the deconfining phase: Infinitely heavy, fundamental, and fermionic test charges are confined by the dual Meissner effect while dual gauge modes still propagate with a maximal offshellness |ϕ| −1 . Notice the logarithmic singularity at T c,M . Since the energy E v and the pressure P v (r) of a (nonselfintersecting) center-vortex loop, whose center of mass is at rest with respect to the heat bath, scale like +g −1 and −g −2 , respectively, this soliton starts to be generated as a massless and stable (spin-1/2) particle at T c,M [1]. Moreover, all (dual) gauge modes decouple by a diverging mass: For T ≤ T c,M no continuous gauge symmetry can be observed in the system at an extranlly applied spatial resolution smaller than ϕ(T c,M ). Results and summary. In Figs. 2 and 3 we present the results for the temperature evolution of the pressure and the energy density for both the deconfining (electric) and preconfining (magnetic) phase. To express the magnetic scale Λ M in terms of the electric scale Λ E continuity of the pressure is demanded across the electric-magnetic transition. We have Λ M ∼ 1 4 −1/3 Λ E (SU(2)) and Λ M ∼ 1 2 −1/3 Λ E (SU (3)). Notice the increasing negativity of the pressure with decreasing temperature in the magnetic phase. 
On the microscopic level this is understood in terms of an increasingly large caloron-holonomy, implying an increasingly large repulsion of its constituent monopole and antimonopole, which, in turn, means an increasingly large collimation of the monopole-antimonopole motion in the condensate. The effect is an increasing rate for the creation of (unstable) center-vortex loops implying, after spatial coarse-graining, an increasingly negative ground-state pressure and, at the same time, an increase of the quasiparticle masses. At T c,M all quasiparticles are infinitely heavy and the equation of state is P = −ρ, compare Figs. 2 and 3. The (continuous) order parameter of mass-dimension one for the spontaneous breakdown of the dual gauge symmetry U(1) D (SU(2)) and U(1) 2 D (SU (3)) across the (second-order like) electric-magnetic phase transition is the 'photon' mass in Eq. (10). The associated critical exponents ν are mean-field ones, ν = 0.5, for both SU(2) and SU(3) [1]. The transition is, however, not strictly second order because of a negative latent heat, see Fig. 3. The reason for this discontinuous increase of the energy density across the electric-magnetic transition is the discontinuous increase of the number of polarizations from two to three due to the dual 'photon' becoming massive. This effect is important because it stabilizes the temperature of the cosmic microwave background T CMB , described by an SU(2) pure gauge theory of Yang-Mills scale Λ CMB ∼ T CMB , against gravitational expansion. The latter increasingly liberates a formerly locked-in, spatially homogeneous Planck-scale axion field [1,12] which, eventually, will drive the Universe out of thermal equilibrium globally. The Polyakov-loop expectation, which is an order parameter associated with the spontaneous breaking of the electric Z 2 (SU(2)) and the electric Z 3 (SU (3)) symmetry, though strongly suppressed on the magnetic side, remains finite across the electric-magnetic transition. This happens despite the fact that the magnetic ground state is nondegenerate with respect to these symmetries.
2014-10-01T00:00:00.000Z
2005-07-04T00:00:00.000
{ "year": 2005, "sha1": "ad6861f426fbcf7c176b9f8a93b99085552a6838", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "ad6861f426fbcf7c176b9f8a93b99085552a6838", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
9579712
pes2o/s2orc
v3-fos-license
Recurrent Paecilomyces Keratitis in a Patient with Jones Tube after Conjunctivodacryocystorhinostomy Dear Editor, Paecilomyces, which is found in soil and decaying vegetables, is a rare pathogen causing local and systemic infections [1]. We report a rare case of recurrent Paecilomyces keratitis 5 years after Jones tube placement in conjunctivodacryocystorhinostomy (CDCR). A 69-year-old woman was referred by a local ophthalmologist for the management of presumed fungal keratitis in the left eye. One month before presentation, she noted ocular discomfort with decreased vision. She had undergone left eye ocular surgery of CDCR 6 years previous and cataract surgery 2 months previous. After CDCR, she had used daily topical tobramycin and 0.02% fluorometholone eye drops. No known history of ocular trauma, contact lens use, or herpes simplex keratitis was evident. At initial examination, her best corrected visual acuity was finger counting at 30 cm.
Slit-lamp examination demonstrated a 2.0 × 2.0-mm-sized epithelial defect with corneal stromal infiltration. There was moderate anterior chamber reaction and linear hypopyon. Corneal scrapings were cultured and confirmed the diagnosis of Paecilomyces infection. Topical amphotericin B 0.125% and voriconazole 1% were started. After 4 weeks of topical antifungal therapy, the epithelial defect and hypopyon were resolved. Her vision was improved to 20 / 50, but corneal opacity and thinning remained. There was no evidence of recurrence during the follow-up. Five years later, the patient presented with reduced vision in the left eye; her visual acuity in the left eye was 20 / 1,000 with spectacle correction. Slit-lamp examination showed geographic ulceration and radial Descemet's membrane folding at the central cornea including the site of previous corneal opacity and thinning (Fig. 1A and 1B). The additional presence of mild anterior chamber reaction and no hypopyon led to a diagnosis of herpes simplex keratitis, for which acyclovir ointment and topical moxifloxacin were started. Cultures showed no growth of any organism. Geographic ulceration and chamber reaction were improved, but the corneal thinning resulted in a perforation despite treatment. The patient emergently underwent amniotic membrane transplantation and corneal button graft. Two weeks later, she developed a recurrence of keratitis in the graft and 2.0-mm hypopyon. A therapeutic keratoplasty was performed. The previous corneal graft was sent for culture, as was a sample collected from the Jones tube placed during CDCR (Fig. 1C and 1D). These cultures were identified as Paecilomyces lilacinus based on DNA sequencing analysis of internal transcribed spacer regions (Fig. 1E and 1F). Despite treatment, no improvement was shown; the patient then underwent repeat penetrating keratoplasty, and the Jones tube was removed. After penetrating keratoplasty and removal of the Jones tube, she showed marked improvements and no complaint of epiphora. There was no clinical recurrence during 6 months of follow-up. Unfortunately, the patient developed acute graft rejection 6 months later, although she refused an additional surgery. Common causes of Paecilomyces keratitis are chronic keratopathy, ocular surgery, steroid therapy, and ocular trauma. Nissenkorn and Wood [2] reported that the use of topical steroid is an important risk factor of secondary infection after herpes simplex keratitis. Presence of an epithelial defect due to dendritic keratitis caused by herpes simplex virus can result in a secondary infection [3]. Boisjoly et al. [4] reported that persistent corneal epithelial defect and prolonged use of steroid eye drops are predisposing risk factors of fungal superinfection. In our patient, possible predisposing factors were chronic use of steroid eye drops after CDCR and the presence of a Jones tube that acted as a reservoir for microbes. In particular, the recurrence of keratitis was related to a persistent epithelial defect due to herpes simplex keratitis and secondary infection from Paecilomyces species present in the Jones tube. Paecilomyces species are difficult to eradicate because of their resistance to common antifungal agents such as natamycin, amphotericin B, fluconazole, and ketoconazole [5]. In this case, initial Paecilomyces keratitis was successfully treated with topical amphotericin B and voriconazole.
However, recurrent Paecilomyces keratitis required a temporary corneal button graft because of corneal perforation and repeated penetrating keratoplasty for complete fungal clearance. This case report suggests that the careful use of steroid eye drops after ocular surgery is necessary in order to prevent postoperative keratitis, and that the Jones tube placed during CDCR can act as a source of anterior segment infection or worsen an existing infection. Therefore, patients presenting with infectious keratitis who underwent CDCR with Jones tube placement require additional culture of the Jones tube.
2018-02-05T16:19:08.292Z
2016-12-01T00:00:00.000
{ "year": 2016, "sha1": "45c5109be24211e9cd6c11cd3c1ffdbc165ffa2a", "oa_license": "CCBYNC", "oa_url": "https://europepmc.org/articles/pmc5156622?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "45c5109be24211e9cd6c11cd3c1ffdbc165ffa2a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
808430
pes2o/s2orc
v3-fos-license
Statistical Considerations in Biosimilar Assessment Using Biosimilarity Index When an innovative biologic product goes off patent, biopharmaceutical or biotechnological companies may file an application for regulatory approval of biosimilar products. Unlike small molecule drug products, biosimilars are not exact copies of their brand-name counterpart, and they are usually very sensitive to changes in environmental factors and have greater variabilities due to their complexity and sensitivity to variation in manufacturing processes. Facing these challenges, a biosimilarity index based on reproducibility probability is proposed to assess biosimilarity. In this article, we have demonstrated how to assess biosimilarity between the test and reference product in relative to a reference standard that is established in a study where reference product is compared with itself. Biosimilairty index approach is robust against biosimilarity criteria and has the advantage of allowing the assessment of the degree of similarity. Statistical Considerations in Biosimilar Assessment Using Biosimilarity Index Introduction According to the Biologics Price Competition and Innovation (BPCI) Act passed by the United States (US) congress in 2009, a biosimilar product is defined as a biological drug product that is highly similar to the reference product notwithstanding minor differences in clinically inactive components and there are no clinically meaningful differences in terms of safety, purity, and potency. Biological drug products contain active ingredients that are derived from or made by living cells or organisms. Unlike generic chemical drugs, biosimilar products are expected to have similar but not identical active ingredients as the innovative brand name product. Furthermore, biological products are very sensitive to small changes at various stages of the manufacturing process and environmental factors such as light and temperature. Therefore, biosimilars have greater inherent variability than chemical drugs. Current regulations for the assessment of bioequivalence (BE) for generic chemical drugs only concern with average bioequivalence. The main criticism against the criteria for BE studies is that they do not take variabilities of the drug products into consideration. Given the greater variabilities of the biological drugs it is recognized that current regulations and/or criteria for the assessment of BE may not be applicable for the assessment of biosimilarity between biologic products. The BPCI Act as part of the Affordable Care Act was signed into law in March 2010 which gave the US Food and Drug Administration (FDA) the authority to approve similar biological drug products. However, currently the FDA has not put forward clear standards for biosimilar approval [1] due to the complexity of the biological drug products. Facing these challenges, a new biosimilarity index approach was proposed by Chow et al. [2] to assess biosimilarity. This approach has the advantage of being robust to the study endpoints, criteria and study designs. Thus a universal approach for biosimilarity assessment could be implemented without well-accepted criteria by the regulatory agencies. The BPCI Act also introduced the term "highly similar" in the definition of biosimilarity, but there is little or no discussion regarding how similar is considered highly similar. The proposed biosimilarity index approach can also answer the question of "how similar is highly similar" as the index quantifies the degree of similarity. 
The purpose of the paper is to illustrate how to operationalize the biosimilarity index approach, and to evaluate the performance of the biosimilarity index under a crossover design using the average similarity criterion. In the next section, the biosimilarity index based on reproducibility probability is briefly introduced. In Section 3, the statistical properties of the biosimilarity index are discussed through simulation studies. In Section 4, an example is given to further illustrate the impact of the variability on the conclusion of biosimilarity. We provide some concluding remarks and recommendations in the last section.

Biosimilarity Index In order to reflect the characteristics and the impact of variability on the therapeutic effect of biologic products, Chow et al. in 2011 proposed the development of an index based on the concept of the reproducibility probability to evaluate the degree of similarity between two drug products [2]. Reproducibility probability was first considered by Shao and Chow [3] to address the question of whether the observed significant result from a clinical trial is reproducible. It can be evaluated as the estimated power of the testing procedure when the alternative hypothesis is true, replacing the parameter by its estimate based on the observed data. The hypotheses of the similarity testing are often expressed as two sets of one-sided hypotheses: H01: θ ≤ θ_L versus Ha1: θ > θ_L, and H02: θ ≥ θ_U versus Ha2: θ < θ_U, where θ is the study parameter chosen to assess biosimilarity, and θ_L and θ_U are the biosimilarity limits, i.e., the accepted lower and upper bounds for declaring biosimilarity. The evaluation of the biosimilarity index depends on the form of the test statistics, which in turn depend on the study design and the criteria chosen. For the 2×2 crossover design, we consider the following statistical model: Y_ijk = µ + S_ik + P_j + T_(j,k) + ε_ijk, where Y_ijk is the response for subject i in the k-th sequence at the j-th period, with i = 1, ..., n_k indicating subject, j = 1, 2 indicating period, and k = 1, 2 indicating sequence; µ represents the overall mean; S_ik represents the random effect of the i-th subject in the k-th sequence, assumed independently and identically distributed (i.i.d.) as N(0, σ_S²); P_j is the fixed period effect; T_(j,k) represents the fixed effect of the treatment administered at the j-th period in the k-th sequence; ε_ijk is the within-subject random error, assumed i.i.d. as N(0, σ_e²). Finally, the S_ik's and ε_ijk's are assumed to be mutually independent. When we choose the average biosimilarity criterion, i.e., θ = µ_T − µ_R, the test statistics for Equation (1) are the two one-sided statistics T_L = (Ȳ_T − Ȳ_R − θ_L)/SE and T_U = (Ȳ_T − Ȳ_R − θ_U)/SE, where Ȳ_T and Ȳ_R are the least squares means for the test and reference products, obtainable from the sequence-by-period means, and SE is the estimated standard error of Ȳ_T − Ȳ_R. By the estimated power approach, the biosimilarity index P̂_BI for the 2×2 crossover study using the average biosimilarity criterion can be obtained as the probability that both one-sided tests reject, evaluated at the estimated non-centrality parameters, where T_L and T_U are the test statistics given in Equation (3). Both T_L and T_U follow non-central t-distributions with n_1 + n_2 − 2 degrees of freedom and non-centrality parameters δ_L and δ_U, respectively; δ_L and δ_U relate to the population means, variances and biosimilarity limits, and their estimates δ̂_L and δ̂_U can be obtained by substituting the observed means and variance estimate into the corresponding expressions. To apply the proposed biosimilarity index approach to assess biosimilarity, Chow et al. [2] proposed the following steps [4]: Step 1: Assess the average biosimilarity based on a given biosimilarity criterion. The criterion could be based on mean, ratio or variability.
Step 2: Once the product passes the test for biosimilarity in Step 1, calculate the biosimilarity index of Equation (4) based on the observed mean difference and standard deviation. The calculated biosimilarity index thus takes the variability and the sensitivity to heterogeneity in variances into consideration. Step 3: We then claim the product highly biosimilar if the calculated 95% confidence lower bound of the biosimilarity index is larger than p_0, a pre-specified limit for declaring high biosimilarity. To establish p_0, we recommend that it be based on p_RR, the biosimilarity index obtained in an R-R study where the reference product is compared with itself. By basing p_0 on p_RR, the biosimilarity index approach allows us to assess the degree of similarity relative to the reference product [5]. From the definition of the biosimilarity index and the testing steps outlined above, we can see that this approach has several advantages. First, it is robust with respect to the selected study endpoint, biosimilarity criterion and study design [4], because the biosimilarity index utilized in the second stage of testing "highly similar" is calculated using the same selected study endpoint, biosimilarity criterion and study design. Second, it takes variability into consideration for the calculation of the index, and is sensitive to the variance of the test products. Third, it allows the assessment of the degree of similarity. In other words, it provides an answer to the question of "how similar is considered similar?".

Numerical Results The proposed biosimilarity index approach as outlined in Section 2 is implemented in simulation studies to demonstrate the statistical properties of the index. A standard 2×2 crossover study design and the average biosimilarity criterion are used. The biosimilarity index is calculated as in Equation (4). Simulation design The study parameter θ is the mean difference between the test and reference products, i.e., θ = µ_T − µ_R; a design with equal allocation is also investigated. The parameter settings for the simulation studies are summarized in Table 1. Results A total of 1,000 random trials are generated for each combination of the parameter specifications. Table 2 records the percentage of trials that have passed the Step 1 biosimilarity test, i.e., the probability of claiming biosimilarity on the basis of the average biosimilarity criterion. As the mean difference between the test and reference products increases, the probability of claiming biosimilarity decreases. When the variance of the test product increases, the probability decreases as well. Increasing the sample size can help increase the probability of claiming biosimilarity. In typical BE studies, the sample sizes range from 18 to 24. To assess biosimilarity, the sample sizes for the comparative nonclinical and clinical studies are expected to be larger than those chosen in BE studies, but the studies are still conducted in a limited number of patients compared with those used in the pivotal trials for the innovative drugs. For those trials that have passed the Step 1 test, Table 3 reports the average of the p-values obtained from Schuirmann's two one-sided tests (TOST) procedure. As the mean difference between the test and reference products increases, the p-value increases. When the variance of the test product increases, the p-value increases as well. In other words, as the mean difference and/or variance increase, for those trials where we are able to declare biosimilarity, the evidence against the null hypotheses nonetheless weakens.
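As a rough illustration of how the index in Equation (4) and the Step 1-3 decision might be computed, the following is a minimal sketch. It uses the standard estimated-power approximation for Schuirmann's TOST based on the non-central t CDF; the function names, the standard-error argument and the example numbers are illustrative, not the paper's exact expressions or data.

```python
from scipy import stats

def biosimilarity_index(diff, se, theta_L, theta_U, df, alpha=0.05):
    """Estimated reproducibility probability for Schuirmann's TOST.

    diff             : observed mean difference (test minus reference)
    se               : estimated standard error of that difference
    theta_L, theta_U : lower and upper biosimilarity limits
    df               : degrees of freedom (n1 + n2 - 2 for the 2x2 crossover)
    Uses the common approximation P(T_U <= -t_crit) - P(T_L <= t_crit), with the
    non-centrality parameters replaced by their plug-in estimates.
    """
    t_crit = stats.t.ppf(1 - alpha, df)
    delta_L = (diff - theta_L) / se       # estimated non-centrality of T_L
    delta_U = (diff - theta_U) / se       # estimated non-centrality of T_U
    p_hat = stats.nct.cdf(-t_crit, df, delta_U) - stats.nct.cdf(t_crit, df, delta_L)
    return max(p_hat, 0.0)

def claim_highly_similar(index_lower_bound, p_RR, d=0.8):
    """Step 3: claim 'highly biosimilar' if the 95% lower confidence bound of the
    index exceeds p0 = d * p_RR (d = 0.8 and p_RR = 80% in the simulations)."""
    return index_lower_bound > d * p_RR

# Illustrative numbers only (not data from the paper):
p_hat = biosimilarity_index(diff=0.02, se=0.05, theta_L=-0.2, theta_U=0.2, df=46)
print(p_hat, claim_highly_similar(p_hat, p_RR=0.8))  # point estimate used naively here
```

For the 2×2 crossover discussed above, `se` would be the standard error of the difference in least squares means from Equation (3) and `df = n1 + n2 - 2`.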
The biosimilarity index, i.e., the reproducibility probability in Equation (4), is calculated following the steps outlined in Section 2. Table 4 further shows the value of p_TR for those trials that have passed the Step 1 test. As expected, the results show that p_TR decreases as the mean difference or variance increases, and it increases as the sample size increases. Next we calculate the percentage of trials that pass the "highly similar" test based on the biosimilarity index p_TR. For the p_0 in Step 3 of the testing procedure proposed in Section 2, we choose it to be 0.8·p_RR, where p_RR is assumed known and constant, set at 80%. As the mean difference between test and reference products increases or the variance of the test product increases, the percentage of passing the "highly similar" test decreases. The percentage of passing increases as the sample size increases. Note that when the mean difference between test and reference products is large, such as 0.15, the test drug could not pass the "highly similar" test even if we declared biosimilarity in Step 1. This is due to the fact that the null hypothesis in Step 1 was rejected on weak evidence. In other words, when we want to make claims on the degree of similarity, additional information is utilized in order to quantify the similarity in comparison with the reference product. Finally, when the test product has a larger variance than the reference product, the results show that it gets harder to conclude the same level of similarity. This demonstrates that the proposed biosimilarity index approach is sensitive to the heterogeneity in variances, and can reflect the impact of variability of the biological products.

Example As shown in the simulation studies, as the coefficient of variation (CV) gets bigger, it is less likely that we are able to declare similarity even when there is no true mean difference. In this section, we use example data to further illustrate the impact of high variability on the conclusion of biosimilarity and how the biosimilarity index assesses the degree of similarity (Table 5). In the simulation studies above, we have considered the scenario where p_RR is constant. This constant p_RR could be obtained from a separate R-R study [5]. In the following example, we set out to obtain p_RR concurrently as we assess the test product [6], and thus it is considered random. To obtain p_RR concurrently, a slight variation of the 2×2 crossover design is used. Namely, for the first sequence, subjects are treated with T at the first dosing period, and cross over to the second dosing period to receive R. However, for the R treatment, subjects are again randomly split into two groups, and treated with either R_1 or R_2. Similarly, for the second sequence, subjects will be split into two groups treated with either R_1 or R_2, respectively, and then both of these groups will cross over to receive T in the second dosing period. In this case, the design essentially becomes a 4×2 crossover design (Table 6). We generate sample data with a total sample size of 160, i.e., 40 subjects per sequence. The means and variances of the two reference products are assumed to be the same, i.e., µ_R1 = µ_R2 and σ_R1 = σ_R2. We further assume there is no true mean difference between the test and reference products either. A CV of 30% is chosen for all reference and test products. When R is compared with itself, biosimilarity could not always be declared; the probability of declaring biosimilarity depends on the CV when the sample size is fixed.
An example where similarity between different batches of the R product could not be established is obtained, and the sample means are given in Table 7. The observed mean difference between the two reference batches was too large to declare similarity. When similarity could not be demonstrated between reference products, it is impossible to assess whether or not the test product is similar to this reference product. Careful studies should be conducted to avoid such a situation. Under the same parameter setting, another set of data is generated. From this set of data, we are able to declare biosimilarity between R_1 and R_2. The sample means are given in Table 8; the 90% CI of the mean difference is (-0.108, 0.117), that is, the similarity between T and R is also declared. From the observed mean differences and variabilities, p̂_RR and p̂_TR as evaluated from Equation (4) are 0.884 and 0.894, which could be used to further assess the degree of similarity.

Conclusion The numerical results in Section 3 have shown that as variance increases, the probability of declaring biosimilarity in the Step 1 test decreases, and the biosimilarity index decreases. The biosimilarity index is calculated for the trials that have passed the Step 1 test, and thus reflects the characteristics of the biological products that have already been declared biosimilar based on the average biosimilarity criterion. For the assessment of biosimilars, we should especially be aware of the higher variability of biological drugs, and its impact on the conclusion of biosimilarity testing. The results from the numerical studies have demonstrated that the biosimilarity index approach is sensitive to the variance of the products. Other methods have been proposed in the literature to assess variability in addition to the assessment of average biosimilarity. Chow et al. [7] considered an approach based on the probability-based criteria for evaluating average biosimilarity, and demonstrated that the probability-based method is more sensitive to the change of variability than the moment-based method. Hsieh et al. [8] developed the statistical methodology for comparing variabilities for the assessment of biosimilarity and examined its performance under combinations of essential parameters. The advantage of the biosimilarity index approach is that it can be applied to whatever criteria are chosen: the criteria used in Step 1 testing are used again in Step 3 to quantify the level of similarity. If we define d = p_0/p_RR, then d can be used to address the degree of similarity and the question of "how similar is highly similar?". In this article, we have set d = 0.8 and claim it as highly similar. If we set d = 0.7 and claim it as moderately similar, then the percentage of passing the Step 3 test would be greater. Thus this factor d allows us to quantify the level of similarity relative to the reference product. The regulatory agency would be able to consider the class of drugs and the impact of variabilities on clinical performance, and decide what is the tolerable difference, and what level of similarity relative to the reference drug should be required of the biosimilar products.
2018-04-03T03:08:25.964Z
2013-09-02T00:00:00.000
{ "year": 2013, "sha1": "3d972b954ce38204aa1b6d496f1cf626fcc23502", "oa_license": "CCBY", "oa_url": "https://www.omicsonline.org/statistical-considerations-in-biosimilar-assessment-using-biosimilarity-index-jbb.10000160.pdf", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "bc8d598d4e62a7d37c53599bfe59de0406ccb3a2", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
54665360
pes2o/s2orc
v3-fos-license
УДК 544 Micro-sized carbon fiber : a new supporting material for microorganisms in the decomposition of nitrogen and phosphorus nutrients in wastewater with high salinity Eutrophication, which kills fish, mussels and other animals in aquatic ecosystem, is the response to the excess of nitrogen and phosphorus nutrients. In this study, activated carbon fiber prepared from poly(acrilonitrile) PAN, a carbonaceous material of micro size with high specific area, has been evaluated to be able to create microbiological membranes that could be used in the decomposition of nitrogen and phosphorus compounds in wastewater with salinity up to 30 ppt. Utilization of carbon fiber in the sustainable treatment of highly contaminated aquaculture wastewater with organic and inorganic pollutants was considered as a promising application in Khanh Hoa province, Vietnam, with great treatment capacity, low system’s price and implementation’s cost. Introduction Aquaculture, just behind tourism, is the most important economical branch in Khanh Hoa -a province in the middle of Vietnam.Its benefits are obvious.It improves the quality of farmers' life and it is a nutrient supply to growing high tourism demand for food service.But, simultaneously, aquaculture has negative impact on the surrounding environment in the region.Produced wastewater from farms is mainly discharge into the coastal sea without treatment [1][2][3][4].Untreated wastewater in the huge volume with high content of organic components, nitrogen and phosphorous nutrients leads to serious environmental problems one of which is eutrophication -the explosive growth of algae in the excess of phosphates and dissolved nitrogen compounds. The concentration of the pollutants can be illustrated according to the total nitrogen (N) and phosphorus (P) criteria.At present, various direct biological methods are being used mainly in the purification of aquaculture wastewater with low salinity [5,6].The application of technological solutions with low price and transportation cost for using in the treatment of contaminated water with higher salinity in aquaculture areas is vital and practical, but could be a challenge. Recently, carbon fiber, a fibrous material with a diameter of 5-10 µm, whose main ingredient is carbon atom (> 90%), has been found out to be capable of purifying water effectively with a number of advantages [7][8][9][10][11]: high-rate treatment due to the high activity of microorganisms; successful treatment of Nitrogen and Phosphorus; simplicity of installation, ease of control; absence of harmful chemicals to human being and aquatic fauna.Each carbon fiber contains bulk of smaller fibers with diameter of 5-10 µm: type 1K -1000 filaments, type 3K -3000, type 6K -6000, etc.This type of material could be synthesized from various precursors: petroleum pitch, cellulose resin and polyacrylonitrile (PAN) [12].In Japan, carbon fibers are installed in ponds, lakes, and coastal sea, to improve water quality of many locations by removing organic contents in the eutrophic natural waters. In this study, the method of high salinity wastewater treatment applying carbon fiber prepared from PAN is utilized.This is a different method of significant high rated forming microbiological membranes, treatment contaminated wastewater with the listed advantages and low price, suitable for wastewater with considerable salinity, i.e. aquaculture. 
Materials and methods

Materials. Activated carbon fiber with high surface area, type 12K (one fiber consists of 12,000 micro-sized fibrils), was purchased from DowAksa, USA. Steel and PVC tubes used for the supporting frame were purchased from local stores.

Sampling and analysis. The degree of stability of the criteria characterizing the pollution level of aquaculture wastewater allows the experiment to be carried out over a period of three months. Operating parameters of the process: aeration, stirring with the flow in/out, 4-hour hydraulic retention. The input wastewater samples have been taken from the collection pit before being pumped to the treatment system, and the treated wastewater samples at the designed output after the retention time of 4 hours. The water samples are filled into 1.5 L PET plastic bottles, cryopreserved and taken immediately to the laboratory for the following analysis. The progress of water treatment is evaluated by the following parameters: BOD5, total N, total P, and the ammonium, nitrite and nitrate forms of nitrogen; salinity as well as the number of nitrifying bacteria in the water are also analyzed according to Vietnamese standards (TCVN), SMEWW and HACH.

Treatment Efficiency. The treatment efficiency for the nutritional constituents, characterized by total N and total P and expressed as the waste purifying proportion (%), is evaluated according to the following formula: E (%) = (X_i − X_o)/X_i × 100, where X_o is the value of the total N (or total P) at the output and X_i is the value of the total N (or total P) at the input of the treatment system.

Results and discussion

Biofilm development. Adhesive micro-sized carbon fibers enable microorganisms to attach to their high-area surface by overcoming the energy barrier between the microorganisms and the surface to form a biofilm. In less than a week of «swimming» in water with a high density of nitrifying bacteria (Nitrosomonas, Nitrobacter, about 10^4 cells/mL), thick layers of biofilm have developed on each bulk of the fibers, which can be observed in Fig. 1. The nature of the biofilm can be illustrated from scanning electron micrographs of the carbon fibers after biofilm formation on their surface (Fig. 2).

Fig. 1. Biofilms on micro-sized carbon fiber.
Fig. 2. Scanning electron micrograph of a biofilm: left, before biofilm formation; right, after biofilm formation [13].

Carboxyl and phosphate groups on the cell surface of the bacterium are dissociated in water, making the microorganisms negatively charged. The surface of the biofilm is negatively charged in the water as well. The two negative surfaces form an energy barrier for bacteria attaching to the surface. By Brownian motion or the movement of various cellular structures, the smaller-size bacteria first get as close as possible to the carbon fiber surface and contact it reversibly. Then extracellular polymeric substances and micro-sized fibers formed by the microorganisms pass through the energy barrier and adsorb to the surface of the supporting fiber, where Van der Waals' forces are able to attach the bacteria to the surface [14][15][16][17].
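Before continuing, a quick worked version of the Treatment Efficiency formula defined in the Methods above; the influent and effluent values below are purely hypothetical, not measurements from this study.

```python
def removal_efficiency(x_in, x_out):
    """Purifying proportion E (%) = (X_i - X_o) / X_i * 100 for total N or total P."""
    return (x_in - x_out) / x_in * 100.0

# Hypothetical influent/effluent values (mg/L), not measurements from this study:
print(removal_efficiency(12.0, 3.0))  # -> 75.0 % removal
```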
Carbon fiber consists almost exclusively of carbon atoms, thus this fiber type is less negatively charged compared to all other synthetic fibers used for biofilm supporting purpose [12].Therefore the energy barrier between negatively charged bacteria negative surface of carbon fibers is smaller compared to other adsorbing materials.After attachment, Van der Waals' forces are also more intensive between microorganisms and carbon fiber compared to other fibers.Moreover, carbon fibers derived from PAN precursor are turbostratic, thus fibers tend to have high tensile strength. Biofilm formed on the surface of carbon fibers also provides nutritious substances for microorganism in the aquatic environment.However, until now it has not been reported the reason why carbon fibers are more biocompatible than other synthetic fibers in efficient biofilm development.Important microorganisms used in water treatment, such as nitrifying bacteria to convert ammonium to nitrate and nitrite, are also absorbed to the surface of carbon fibers.Utilization of nitrifying bacteria could be applied in the treatment of heavily polluted water, likewise wastewater from various sources, including aquaculture.Carbon fiber is also safe, toxicity to the human and animal body is not reported.In the consequence, carbon fiber with high adhesive quality and micro size can be a serious candidate for a new supporting material for biofilm development in the aquatic environment. Wastewater treatment using micro-sized carbon fiber.The input water from tank cultivation aquaculture and water from the treatment system using carbon fiber are characterized by the values in the table 1. Wastewater was pumped directly from collection pit of the breeding farm in Vinh Luong district.Compared to maximum allowed values according to Vietnam's norms, biochemical oxygen demand (BOD5) are all beyond the specified thresholds, which cannot be reused for aquaculture.High ammonia nitrogen level with unpleasant odors is not suitable for the purpose of water supply.BOD5: N proportion is smaller than 20 and BOD5: P -smaller than 100.Consequently, the source is excess of nutritional components -nitrogen and phosphorous compounds, which may lead to serious water eutrophication.The water is also high of salinity, determined by titration with Ag-NO 3 standard solution, which seriously trouble the selection of treatment technology.Evaluated according to the sensory and physical chemistry criteria, the output water from the treatment system using micro-sized carbon fiber was improved considerably: the water was fresh without odors, pollution criteria like BOD5, total N, total P and ammonium nitrogen decreased sharply. 
Removal of Nutritive Salts - Nitrogen. The removal of nitrogen compounds is one of the key matters in wastewater treatment to prevent eutrophication in the receiving aquatic environment. The reduction of nitrogen substances is due to their involvement in the constitution of microbiological cells. During the process of anaerobic treatment, the organic nitrogen transforms into ammonia (NH4OH) under catalytic hydrolysis. Ammonia nitrogen can then be oxidized to nitrate (NO3−) through the nitrite form (NO2−). There are denitrifying microorganisms in the biofilm on the surface of the carbon fiber, which decompose nitrate nitrogen into nitrogen gas (N2). Nitrogen, in the free form of gas, is then released into the air (Fig. 3). Dinitrogen monoxide (N2O) generation was suppressed by the microorganisms on the carbon fiber, which is different from treatment by activated sludge. The oxidation of nitrite to nitrate occurs at a high rate, which is evidenced by the low level of nitrite concentration in the water: the nitrite form constantly does not exceed 0.5 mg/L of nitrogen (Fig. 4).

Fig. 4. Variability of nitrogen environmental forms and removal efficiency of nitrogen and phosphorous compounds.

The speed of forming nitrate, or the rate at which the ammonia deficit develops, is slow. This can be explained by the high organic content (high BOD5 value) and the low ammonia content, which is unsuitable for autotrophic microorganisms to develop, slowing down the oxidizing process from ammonia to nitrate. The counts of nitrifying bacteria are consistent with these results.

Removal of Nutritive Salts - Phosphorus. Phosphorus is removed from the water due to the accumulation of phosphate ion (PO4^3−) as polyphosphoric acid by microbes. The carbon fiber also releases iron ions (Fe3+), which can react with phosphate ions in the water to form insoluble iron phosphate (FePO4). Iron phosphate could be fixed and returned to farmland (Fig. 5). The single aerobic treatment system using carbon fiber cannot completely remove phosphorus compounds, except for a small amount of 1-2 mg P/L involved in the constitution of microbiological cells (Table 1). Thus, phosphorus treatment by the method of creating microbiological membranes on carbon fiber, and on other supporting materials, can only remove phosphorus along with biomass during the process of discharging excessive sludge.

Conclusion The treatment method of microbiological membrane formation on micro-sized carbon fibers is a highly efficient treatment technique that could be applied in aquaculture with a high content of nitrogen and phosphorous compounds to prevent eutrophic water pollution. Nitrogen pollutants are removed through a sequence of catalytic reactions, while the phosphorous nutrient is consumed mainly by microbes as a food supply. Carbon fibers were also shown to be a highly biocompatible and non-toxic supporting material for aquatic fauna and humans. The results of the experiment in this work demonstrate the ability of carbon fiber to be applied in the treatment of aquaculture wastewater with high salinity. This method is also promising in the treatment of various contaminated water sources, such as domestic wastewater, industrial wastewater, and contaminated natural water in ponds, lakes, rivers and coastal seas.

Fig. 3. Nitrogen gas (N2) generation from nitrogen nutrients in water using carbon fiber.
Fig. 5. Phosphorus compounds accumulation by microorganisms on the surface of micro-sized carbon fiber.
Table 1. Efficiency of high salinity wastewater treatment using micro-sized carbon fiber.
2018-12-06T20:46:20.528Z
2018-02-21T00:00:00.000
{ "year": 2018, "sha1": "fbaec32ea40b647ce17f725389535c997a83e067", "oa_license": "CCBY", "oa_url": "https://journals.vsu.ru/sorpchrom/article/download/408/380", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "fbaec32ea40b647ce17f725389535c997a83e067", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Environmental Science" ] }
4477611
pes2o/s2orc
v3-fos-license
Controllable design of super-oscillatory lenses with multiple sub-diffraction-limit foci The conventional multifocal optical elements cannot precisely control the focal number, spot size, as well as the energy distribution in between. Here, the binary amplitude-type super-oscillatory lens (SOL) is utilized, and a robust and universal optimization method based on the vectorial angular spectrum (VAS) theory and the genetic algorithm (GA) is proposed, aiming to achieve the required focusing performance with arbitrary number of foci in preset energy distribution. Several typical designs of multifocal SOLs are demonstrated. Verified by the finite-difference time-domain (FDTD) numerical simulation, the designed multifocal SOLs agree well with the specific requirements. Moreover, the full-width at half-maximum (FWHM) of the achieved focal spots is close to λ/3 for all the cases (λ being the operating wavelength), which successfully breaks the diffraction limit. In addition, the designed SOLs are partially insensitive to the incident polarization state, functioning very well for both the linear polarization and circular polarization. The optimization method presented provides a useful design strategy for realizing a multiple sub-diffraction-limit foci field of SOLs. This research can find its potentials in such fields as parallel particle trapping and high-resolution microscopy imaging. Multifocal optical elements are important for such applications as parallel particle trapping and three-dimensional imaging system [1][2][3][4] . For the purpose, much research has been reported both theoretically and experimentally on the realization of focusing the light into several focal spots simultaneously. In most recent studies, multiple foci are mainly obtained by employing two different methods. Through modulating the incident cylindrical vector beam in a 4Pi focusing system, several equidistant multiple spots can be generated along the optical axis 1, 2 . Furthermore, the metalens with longitudinal multiple foci has also been proposed 5,6 . However, these multifocal optical elements cannot precisely control the energy distribution among the realized focal spots, neither the relative positions nor the actual sizes. Our goal is to design the high-quality multifocal lenses owning the required number of foci and the preset energy distribution among them. On the other hand, resolving power is restricted by the Rayleigh diffraction limit 0.61λ/NA (where NA is numerical aperture) for an ideal optical system 7 . Overcoming this resolution barrier can improve the imaging quality, or greatly decrease the size of a single particle that can be manipulated. In recent years, much attention has been paid to utilize the super-oscillation theory 8,9 to design the planar metallic lenses composed of an array of circular nanorings with different widths [10][11][12][13] , and the sub-diffraction-limit focusing performance has been successfully realized 12,[14][15][16] . The achieved lenses are thus named as the super-oscillatory lenses (SOLs). They can create a sub-diffraction-limit hotspot at a distance far beyond the near-field region, thus without the contribution of evanescent waves. In 2012, N. I. Zheludev et al. 
reported an optical microscope showing an imaging resolution close to λ/6 by using a SOL for directly focusing the laser light into a subwavelength spot more than 10 µm away and by precisely tailoring the interference of a large number of beams diffracted from a nanostructured binary amplitude-type mask 17 . Therefore, the design of the nanostructured mask plays a paramount role in developing SOLs for practical applications. According to the available publications, the design theories for a far-field superfocusing SOL have been based on the scalar angular spectrum theory 10,17 , the vectorial angular spectrum (VAS) theory 13,14,18 , or the vectorial Rayleigh-Sommerfeld diffraction integral 15 . However, it is difficult to achieve a precisely controllable high-quality light field with these ordinary methods. In this paper, we suggest a multi-objective and multi-constraint optimization model, aiming to implement SOLs with an arbitrary number of sub-diffraction-limit focal spots along the optical axis and the desired energy distribution between them. The optimizing procedure of the model is implemented in the Matlab programming language based on the genetic algorithm (GA) and the fast Hankel transform algorithm. It can flexibly control the intensity of the electric field immediately behind the lens, allowing the number of foci, as well as their relative positions and sizes, to be arranged. The SOLs achieved with the presented method are verified by rigorous electromagnetic simulation using the finite-difference time-domain (FDTD) numerical computation. Although the design described mainly suits the linearly polarized beam (LPB), it is also applicable to other polarized waves, like the circularly, radially, and azimuthally polarized beams.

Design and optimization procedure Integral representations. The subwavelength optical field pattern can be constructed by employing a SOL consisting of multiple concentric nanorings, through the interference of a massive number of transmitted diffracted beams. Assuming the LPB (electric field polarized along the X direction) illuminates the SOL normally and propagates along the +Z direction, as shown in Fig. 1, then according to the VAS theory in the cylindrical coordinate system, the electric field components of an arbitrary point P(r, ϕ, z) on the observation plane (Z > 0) can be expressed as in Equation (1) 14,19,20 , where q(l) is the longitudinal spatial frequency component and A_0(l) is the angular spectrum of the electric field in the mask; J_n(·) denotes the n-th-order Bessel function 19 . The transversely and longitudinally polarized electric energy densities are calculated by |E_r(r, z)|² = |E_x(r, z)|² and |E_z(r, ϕ, z)|², respectively; thus, the total electric energy density is |E_x|² + |E_z|². If SOLs are illuminated by a circularly polarized beam (CPB), the total electric energy density can be obtained by superposing two orthogonally polarized LPBs 20 , where E_x and E_z are described in Equation (1). In a high numerical aperture (NA) microscopic imaging system, the transversely polarized electric-field component is dominant, while the longitudinal one is always strongly attenuated in the image plane due to the polarization filtering of this imaging system 21,22 . Therefore, we do not consider the longitudinal component, and the total electric energy density profiles of the LPB and CPB turn out to be almost the same, quantified as I(r, z) = |E_x(r, z)|² and I(r, z) = 2|E_x(r, z)|², respectively.
What is described above demonstrates an acceptable agreement with the experimental results 12, 14, 15, 17 .

Optimization model. Figure 2 presents the schematic of a multifocal SOL where a metallic film is etched with a great number of nanorings of specially designed widths. Taking the LPB as an example for clarity, we consider that the total electric energy density is approximated by I(r, z) ≈ |E_x(r, z)|². The constraint model of the GA can be optimized toward the required optimization targets using the three-dimensional (3D) intensity distribution I. We constrain I along two orthogonal directions, including the optical axis and the transverse axes in every focal plane. Then, a constrained linear programming model is extracted from an optimized design of multiple foci with the optical transmittance as the objective function. In order to achieve the required high-quality sub-diffraction-limit multifocal field, which is influenced by many factors such as the focal length, the full-width at half-maximum (FWHM), the surrounding side lobes, and the light uniformity of the focal spots (as shown in Fig. 2), we employ the Matlab programming language based on the GA to design the binary amplitude-type SOL that implements the predefined axial-intensity modulation over a given region. Here, a model of three objectives and three constraints is established to control the multifocal field's prescribed parameters. Hence, a constrained optimization model is built up as Equations (2) and (3), with D_f being the depth of focus, t_i the transmittance value of the i-th annular ring, and N the total number of rings contained in the mask. For the binary amplitude-type annular mask, the contained concentric rings are initially set to be equidistant and each ring can have either unit or zero transmittance, so the binary amplitude transmittance is encoded straightforwardly using the two digits {0, 1}. We built the three-objective and three-constraint model to control the main properties of the 3D intensity distribution behind the SOLs. The first objective function Min.(I_1m) means that the energy surrounding each focal spot along the optical axis should be as much lower than the energy of each focus as possible, that is, it ensures that the targeted focal spots are the peaks of the axial intensity distribution, which is useful for controlling the number of focal spots. The second objective function Min.(I_2m) represents the ratio of the focusing intensities; μ represents the difference of the intensity distribution between foci, and μ = 1 means that the intensity distribution between the focal spots changes little, so that each spot has similar energy, which is beneficial for achieving several homogeneous focal spots. The third objective function Min.(I_3m) controls the sizes of the hotspots in the focal planes to be as small as possible, aiming to decrease the FWHM. On the other hand, the three constraints are used to control the light field parameters that we prescribe before optimizing, such as the intensity of the side lobes and the FWHM. These objectives are related and difficult to control. Thus, we set a specific fluctuation range of the electric intensity distribution at different locations to ensure the required light field. As shown in Fig. 2, the intensity of the other points on the optical axis is set to be below 0.3. A super-oscillation field always exists accompanied by high-energy side lobes, which will impede its widespread application 23 .
Therefore, for the focusing planes perpendicular to the optical axis, the normalized intensity within the range FWHM/2 ≤ r ≤ κ·FWHM/2, the radial width of the transition dark region between the central main lobe and the surrounding side lobes, is supposed to be lower than 30% of the peak intensity of the central lobe. A specific fluctuation range of the side lobe was set to ensure the required light field. If this parameter is unsuitable, it may be difficult to converge to an optimal solution. After analyzing the energy relationship between the side lobe and the central spot, a side lobe factor of 0.3 is chosen in our design. The GA is widely used for such problems due to its powerful parallel and global searching capability 24,25 . A sub-diffraction-limit multifocal field is influenced by many factors, and these factors often conflict with each other. When we reduce the spot sizes, an increase of the sideband intensity always comes along. Therefore, there exists a tradeoff of the light energy between the central foci and their side lobes. A feasible tradeoff is achieved when there are no significant side lobes and, meanwhile, the spot sizes maintain a good uniformity beyond the diffraction limit. The multi-objective optimization problem seeks to minimize the components simultaneously. The problem usually has no unique, perfect solution, but a set of nondominated, alternative solutions, known as the Pareto-optimal set 25 . Multi-objective optimization arises from the need for a strategy to address the multiple design factors of practical problems. As for the GA, a fitness function must be set to make the optimized results and the objectives as close as possible. In our design, objective functions 1~3 in Equation (2) serve as the individual fitness functions when the genetic operation named "selection" is performed in the GA. Intuitively, these three individual fitness functions can be weighted and summed up to formulate a single-objective optimization. Here, we assign a weighted coefficient w_j to each objective function I_j, so that the problem is converted to a single-objective problem with the objective function defined as the weighted sum F = w_1·I_1m + w_2·I_2m + w_3·I_3m. To achieve a compromised high-quality light pattern, suitable weighted coefficients are important; they are set according to the importance of the three objective functions. Here, we presume the weighted coefficients w_1, w_2 and w_3 to be 0.4, 0.4 and 0.2, respectively. The GA is set to hold a population of 500, with a crossover probability of 0.7 and a mutation probability of 0.007. Through numerical calculations, it is found that the required SOLs can be steadily obtained after several hundred iterations by using the above configurations. In addition, a fast Hankel transform algorithm can be applied to dramatically accelerate the calculation speed 26 .

Results and Discussion In the following examples, an illumination wavelength of 532 nm is used in an oil immersion medium (n = 1.515). The diameter of the mask is designed to be 20 μm with a total ring number of 100, so the minimum annular width is 100 nm. 200 iterations are sufficient to ensure convergence and are thus used for each algorithm. SOLs #1~#6 are optimized and listed in Table 1. The proposed scheme has been validated by the 3D FDTD method for a 25 nm-thick aluminum film. According to the optimization procedure, the transmittance functions of the SOLs are achieved according to the different requirements, as shown in Table 1.
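Before examining the optimized designs, the GA set-up described in the preceding section can be summarized in a short sketch. The `objectives` callable is a hypothetical placeholder standing in for the evaluation of I_1m, I_2m and I_3m from Equation (2), which is not reproduced here; the weights and GA parameters are the values quoted above.

```python
import numpy as np

WEIGHTS = (0.4, 0.4, 0.2)  # w1, w2, w3 as quoted in the text

def weighted_fitness(mask, objectives, weights=WEIGHTS):
    """Single-objective fitness F = w1*I_1m + w2*I_2m + w3*I_3m (minimized by the GA)."""
    i1, i2, i3 = objectives(mask)   # placeholder evaluation of the three objectives
    return weights[0] * i1 + weights[1] * i2 + weights[2] * i3

# GA configuration quoted in the text.
GA_CONFIG = dict(pop_size=500, p_crossover=0.7, p_mutation=0.007, n_rings=100)

def mutate(mask, p=GA_CONFIG["p_mutation"], rng=None):
    """Bit-flip mutation on a binary ring-transmittance chromosome (1 = open ring)."""
    rng = rng or np.random.default_rng()
    flips = rng.random(mask.size) < p
    return np.where(flips, 1 - mask, mask)
```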
In order to describe the SOL (which might contain several hundred rings) more compactly, the transmittance value t_i is encoded from the first ring (innermost) to the N-th ring (outermost) by continuously transforming every four successive binary digits into one hexadecimal digit. Taking SOL #2 as an example, the first hexadecimal digit "A" denotes the real transmittance values "1010", where "1" and "0" represent a transparent and an opaque annulus, respectively. Firstly, we consider the design of a SOL that produces two focal spots with equal intensity distribution and spacing between the spots on the Z axis under the illumination of a uniform plane wave. The two focal spots are located at 1 μm and 2 μm away from the output plane of the SOL along the Z axis. A random initial transmittance function is used at the beginning of the iteration. Through the method mentioned before, a convergent solution that matches the requirements can be obtained.

Table 2. Comparison of the designed and simulated results of the focal length f and FWHM for LPB and CPB.

The axial intensity distribution of the diffractive pattern generated by the designed SOL #1 is displayed in Fig. 3. The normalized intensity distributions calculated by the VAS theory and FDTD are compared in the Y-Z plane. The calculated intensity distribution is shown in Fig. 3(a), together with the results of the LPB simulation in Fig. 3(b) and the CPB simulation in Fig. 3(c). The axial intensity distributions are further compared in Fig. 3(d). It can be seen that the electromagnetic simulation results are consistent with the VAS predictions, especially for the main lobe of the focus. It can be clearly seen that two sharp peaks emerge from the low background in the axial intensity distribution, and two sharp foci located at the designed positions are clearly visible. The interval between the adjacent focal spots is about 1 μm, as expected. A good focusing effect is observed for both the LPB and CPB incident lights, which implies a partial polarization independence of the designed SOLs. For SOLs illuminated by the radially or azimuthally polarized beam, the focusing properties become different. Nonetheless, our method as described is applicable to arbitrarily polarized waves, like the radially and azimuthally polarized beams, as long as the definition of the optical field in Equation (1) is correspondingly modified. According to the FDTD simulation results, for a linearly polarized plane wave (as the incident source) with the electric field polarized along the X direction, we can see that, in contrast to our VAS calculation, there exists an |E_y|² component, which exhibits a weak four-lobe intensity. Additionally, the longitudinal field component |E_z|² reveals an obvious two-lobe intensity pattern, as shown in Fig. 4. We usually ignore the slight |E_y|² component, as well as the longitudinal component, which is difficult to measure 21,22 . Thus, |E_x|² is used and successfully predicts the positions and approximate sizes of the achieved foci. The normalized intensity distributions in the transverse focal plane are compared in Fig. 5, which shows that the simulated focal planes agree well with the VAS calculation. However, the |E_x|² component of the LPB calculated by FDTD is not rotationally symmetric, as shown in Fig. 5; it is wider in the X direction than in the Y direction, which can be explained by the more accurate and generalized VAS methods [27][28][29].
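Returning to the hexadecimal ring encoding introduced at the start of this subsection, a minimal sketch of the convention (the tail-padding rule for ring counts not divisible by four is an assumption, not specified in the text):

```python
def rings_to_hex(t):
    """Encode binary ring transmittances (innermost ring first) as hexadecimal,
    four rings per digit, e.g. [1, 0, 1, 0] -> 'A' as in the SOL #2 example."""
    bits = "".join(str(b) for b in t)
    bits = bits.ljust(-(-len(bits) // 4) * 4, "0")  # tail padding rule is an assumption
    return "".join(f"{int(bits[i:i + 4], 2):X}" for i in range(0, len(bits), 4))

def hex_to_rings(h):
    """Decode back to transmittance values (1 = transparent annulus, 0 = opaque)."""
    return [int(b) for digit in h for b in f"{int(digit, 16):04b}"]

assert rings_to_hex([1, 0, 1, 0]) == "A" and hex_to_rings("A") == [1, 0, 1, 0]
```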
The FWHMs of all the focal spots along the Y axis are listed in Table 2, which are all beyond the calculated diffraction limit 0.61λ/NA (0.61λ/ NA 1 = λ/2.471, 0.61λ/NA 2 = λ/2.435); NA 1 and NA 2 represent the numerical aperture of the focal spot 1 (FS 1 ) and focal spot two (FS 2 ), respectively, in the focal planes. In order to show the flexible control over the light field with the optimization model, M is tuned to generate three or more spots along the optical axis. The intensity distribution of SOLs #1~#3 in the Y-Z plane is shown in Fig. 6 for the LPB illumination. For all the three SOLs, the intensities of the foci calculated by the VAS theory are almost the same, as demonstrated in Fig. 6(a)-(c); all the foci are strongly and exactly focused at the preset positions along the optical axis; meanwhile, the on-axis intensity distribution predicted by the VAS theory coincides with the rigorous electromagnetic simulation result by FDTD for these three SOLs. Comparing the focusing characteristics of SOLs #1~#3, the foci number shifts from 2 (for SOL #1) to 5 (for SOL #3). As shown in Fig. 6 For the sake of practical applications, the ratio of the focusing intensity can also be adjusted. Through changing the parameter μ in the second objective function Min.(I 2m ) of the optimization model, we can modulate the intensity ratio of focal spots effectively. As shown in Fig. 7, the intensity distributions of the calculation and simulation results in the Y-Z plane demonstrate that the designed SOLs with the intensity ratios of 0.4:0.6:0.8:1 and 0.5:1:1:0.5 have been realized. Figure 7(b) and (c) show the generation of four focal spots with both different intensity distribution and separation. We measured the FWHMs of all the focal spots along the optical axis; for the SOL #5 with the designed focal length at f = 1, 2, 3 and 4 μm, the spot sizes are λ/4.289, λ/3.091, λ/3.897 and λ/3.706, respectively, all beyond the calculated diffraction limit, i.e. λ/2.471, λ/2.435, λ/2.379 and λ/2.306, respectively. Compared to the single-focusing SOLs, the focusing precision of the designed four-foci SOLs has some deviations from the preset focal lengths and distribution, which is related to the interference between the closely spaced focusing spots. It should be noted that the coordinates of focal points can be chosen arbitrarily, and the light intensity patterns between the foci can be predetermined through the proposed method. The optimization model presented in this paper can be applied for the generation of any desired longitudinal intensity distribution. Conclusions To sum up, we have shown an effective procedure for designing multifocal binary amplitude-type SOLs based on the VAS theory under the normal illumination of LPB or CPB. A GA optimization model has been proposed to control the focal spots' properties and accelerate the computational process with the fast Hankel transform algorithm. Several focal distributions have been built. Meanwhile, a comparison of the VAS theoretical calculations and the FDTD simulation results has been made to confirm the optimization model. The simulated results show that all the designed SOLs by our method agree well with the desired expectations and have good focusing characteristics. Hotspots generated by SOLs #1~#6 show the resolution beyond the diffraction limit. Additionally, the focusing intensity of each focal spot can be tuned easily by changing the parameters of the optimization model. 
Conclusions

To sum up, we have presented an effective procedure for designing multifocal binary amplitude-type SOLs based on the VAS theory under normal illumination by an LPB or CPB. A GA optimization model has been proposed to control the properties of the focal spots, with the computation accelerated by the fast Hankel transform algorithm. Several focal distributions have been realized, and a comparison of the VAS theoretical calculations with the FDTD simulation results confirms the optimization model. The simulated results show that all the SOLs designed by our method agree well with the desired specifications and exhibit good focusing characteristics. The hotspots generated by SOLs #1~#6 show resolution beyond the diffraction limit, and the focusing intensity of each focal spot can be tuned easily by changing the parameters of the optimization model. The optimization design introduced here is an effective and universal procedure that can be extended to study the diffraction of different light contours produced by a binary amplitude-type SOL under various vector beams, such as light tunnels, doughnut-shaped focal patterns, and optical needles. These peculiar focusing patterns may find important applications in optical trapping, particle acceleration, three-dimensional imaging, and fluorescence microscopy.
Case Report: An EGFR-Targeted 4-1BB-agonistic Trimerbody Does Not Induce Hepatotoxicity in Transgenic Mice With Liver Expression of Human EGFR

Agonistic monoclonal antibodies (mAbs) targeting the co-stimulatory receptor 4-1BB are among the most effective immunotherapeutic agents across pre-clinical cancer models. However, the clinical development of full-length 4-1BB agonistic mAbs has been hampered by dose-limiting liver toxicity. We have previously developed an EGFR-targeted 4-1BB-agonistic trimerbody (1D8N/CEGa1) that induces potent anti-tumor immunity without systemic toxicity in immunocompetent mice bearing murine colorectal carcinoma cells expressing human EGFR. Here, we study the impact of human EGFR expression in mouse liver on the toxicity profile of 1D8N/CEGa1. Systemic administration of an IgG-based anti-4-1BB agonist resulted in nonspecific immune stimulation and hepatotoxicity in a liver-specific human EGFR-transgenic immunocompetent mouse, whereas no such immune-related adverse effects were observed in 1D8N/CEGa1-treated mice. Collectively, these data support the role of FcγR interactions in the major off-tumor toxicities associated with IgG-based 4-1BB agonists and further validate the safety profile of EGFR-targeted Fc-less 4-1BB-agonistic trimerbodies in systemic cancer immunotherapy protocols.

INTRODUCTION

The success of immune checkpoint blockade using PD-1/PD-L1 and/or CTLA-4 inhibitors has validated the concept of immunomodulating monoclonal antibodies (mAbs) as a powerful therapeutic strategy, but responses are still limited to a minor fraction of cancer patients (1). Immune cell stimulation by agonistic mAbs acting on co-stimulatory receptors, such as CD40, OX40, and 4-1BB, is a particularly interesting approach, as these receptors are mainly expressed on T cells upon activation (2, 3). 4-1BB (CD137, TNFRSF9) is a member of the tumor necrosis factor receptor superfamily (TNFRSF) that is transiently expressed following activation through the T cell receptor (TCR) (4). To date, a unique ligand for 4-1BB has been identified, 4-1BBL (TNFSF9), which is expressed on the surface of antigen-presenting cells (5). 4-1BBL trimerization leads to 4-1BB receptor clustering and TRAF-mediated activation of the NF-κB and MAPK intracellular signaling cascades, leading to enhanced T cell proliferation and survival (6). However, off-tumor toxicities have been the major impediment to the clinical development of first-generation IgG-based 4-1BB agonistic mAbs. The fully human IgG4 urelumab caused dose-dependent liver toxicity, including two fatalities (7, 8). Additional studies have shown that dose reduction ameliorated liver toxicity but also resulted in limited clinical activity (8). The fully human IgG2 utomilumab displayed a better safety profile but is a relatively less potent 4-1BB agonist (9). Therefore, new strategies are being developed to preserve the anti-tumor effect while avoiding the off-tumor toxicities associated with FcγR interactions (10-12). These approaches aim to confine 4-1BB co-stimulation to the tumor microenvironment.

We have recently described a novel EGFR-targeted Fc-less 4-1BB agonistic trimerbody (1D8N/CEGa1), which is a potent costimulator in vitro and exhibits enhanced tumor penetration and powerful anti-tumor activity in immunocompetent mice bearing gene-modified CT26 colorectal carcinoma cells expressing human EGFR (10).
In this model, the anti-tumor effect of the bispecific trimerbody was dependent on human EGFR expression (13), but the potential toxicity profile was dictated by the endogenous mouse EGFR. In this context, the 1D8N/CEGa1 trimerbody did not induce the systemic cytokine production and hepatotoxicity associated with IgG-based 4-1BB agonists (10). To investigate this aspect further, and given that the anti-EGFR EGa1 VHH single-domain antibody was isolated from a phage-displayed llama VHH library immunized with EGFR-positive human cells (14, 15), we studied here the impact of human EGFR expression in the liver on the 1D8N/CEGa1 toxicity profile in a liver-specific huEGFR-transgenic immunocompetent mouse (16). In this model, systemic administration of an IgG-based anti-4-1BB agonist resulted in nonspecific immune stimulation and liver toxicity, whereas treatment with the EGFR-targeted 4-1BB-agonistic trimerbody lacked these immune-related side effects.

Hepatocyte Isolation and Culture

Hepatocytes were isolated as previously described, following the two-step collagenase perfusion technique and isodensity purification in a Percoll gradient (17). Briefly, livers from 3-month-old mice were perfused with Hanks' balanced salt solution supplemented with 10 mM HEPES and 0.2 mM EGTA for 5 min, followed by a perfusion (10-15 min) with William's E medium containing 10 mM HEPES and 0.03% collagenase I (Worthington). Livers were then minced, and viable hepatocytes were selected by centrifugation in Percoll and seeded in collagen I-coated plates (5 µg/cm²) at a density of 28 x 10³ cells/cm² in Dulbecco's modified Eagle's medium/F-12 (1:1) supplemented with 10% serum.

Expression and Purification of Recombinant Antibodies

The 1D8N/CEGa1 trimerbody was produced in stably transfected HEK293 cells (10) cultured in complete DMEM with 500 µg/mL G418 (all from Life Technologies), and the conditioned medium was purified using the (Twin-)Strep-tag purification system (IBA Lifesciences) connected to an ÄKTA Prime plus system (GE Healthcare). The purified antibody was dialyzed overnight at 4°C against PBS + 150 mM NaCl (pH 7.0), analyzed by SDS-PAGE under reducing conditions, and stored at 4°C. The purified antibody was tested for endotoxin levels with Pierce's limulus amebocyte lysate (LAL) chromogenic endotoxin quantitation kit, following the manufacturer's specifications (Thermo Fisher Scientific); endotoxin levels of purified antibody stocks were lower than 0.25 EU/mL as determined by the LAL test. Purified anti-mouse 4-1BB IgG (clone 3H3) was purchased from BioXCell (cat#BE0239).

ELISA

Purified mouse 4-1BB:hFc chimera (mo4-1BB), mouse EGFR:hFc (moEGFR), and human EGFR:hFc chimera (huEGFR) (all from R&D Systems) were immobilized at 3 µg/mL on Maxisorp ELISA plates (NUNC Brand Products) overnight at 4°C. After washing and blocking with 200 µL PBS 5% BSA (Merck Life Science), 100 µL of purified 3H3 IgG or 1D8N/CEGa1 trimerbody was added and incubated for 1 hour at room temperature. The wells were washed three times with PBS 0.05% Tween-20, and 100 µL of anti-FLAG mAb (clone M2; mIgG1; cat#F1804, Merck Life Science) was added for a 1-hour incubation at room temperature. The plate was washed as above, and 100 µL of HRP-conjugated goat anti-rat IgG or HRP-conjugated goat anti-mouse IgG (both from Merck Life Science) was added to wells previously incubated with 3H3 IgG or 1D8N/CEGa1 trimerbody, respectively. Afterwards, the plate was washed and developed using OPD (Merck Life Science).
Biolayer Interferometry

All biolayer interferometry was performed on an Octet RED96 (Fortebio). To investigate the binding of 1D8N/CEGa1 to huEGFR or moEGFR, 30 nM of huEGFR or moEGFR fused to a human Fc region was immobilized for 20 min onto AHC biosensors (Fortebio) coated with anti-human Fc antibodies, in 20 mM HEPES, 150 mM NaCl, pH 7.4 buffer (HBS). The biosensors were then moved into 20 nM 1D8N/CEGa1 in HBS, and association was measured for 20 min, followed by one hour of dissociation in HBS. To investigate the binding of huEGFR or moEGFR in solution to immobilized 1D8N/CEGa1, biosensors coated with mo4-1BB fused to a human Fc region were prepared using amine-reactive chemistry. Briefly, AR2G biosensors (Fortebio) were activated with s-NHS/EDC, coated with 2 µg mouse 4-1BB per biosensor at pH 6 for 20 min, and quenched with ethanolamine. Then, 10 nM of 1D8N/CEGa1 in HBS was immobilized onto the biosensors for 30 min. Human or mouse EGFR (50 nM in HBS) was then introduced and allowed to associate for 20 min and dissociate for one hour. In both experiments, a reference biosensor coated and loaded with the same ligands, but not exposed to the experimental analyte proteins, was subtracted from the other sensorgrams prior to data analysis. Data were fit to 1:1 binding models using the Octet Data Analysis software (Fortebio). In the case of moEGFR binding to immobilized 1D8N/CEGa1, fitting included only the initial association phase, due to its biphasic binding.

Flow Cytometry

The cell surface expression of EGFR was analyzed on freshly isolated liver cells from C57BL/6 WT and EGFR-tg mice, and on CT26 mock and CT26 huEGFR cells, after incubation for 30 min with the human EGFR-specific chimeric mouse/human IgG1 cetuximab (Merck KGaA) or the purified 1D8N/CEGa1 trimerbody. After washing, cells were treated with appropriate dilutions of phycoerythrin (PE)-conjugated goat anti-human IgG F(ab')2 (Fc specific; cat#109-116-097, Jackson ImmunoResearch), or with anti-FLAG mAb (clone M2) followed by PE-conjugated goat anti-mouse IgG F(ab')2 antibody (cat#115-116-072, Jackson ImmunoResearch). Samples were analyzed with a MACSQuant Analyzer 10 flow cytometer (Miltenyi Biotec). A minimum of 20,000 events was acquired for each sample, and data were evaluated using FCS Express V3 software (De Novo Software).

Toxicity Studies

Eight-week-old C57BL/6 wild-type and ΔEGFR-tg littermates received a weekly i.p. dose of 3H3 IgG or 1D8N/CEGa1 (6 mg/kg) for 3 weeks. Mice were anesthetized and bled on days 0, 7, 14, and 21. To obtain mouse serum, blood was incubated in BD Microtainer SST tubes (BD Biosciences), followed by centrifugation; serum was stored at −20°C until use. Serum levels of alanine aminotransferase (ALT) were determined at day 14 using Reflotron GPT/ALT strips and the Reflotron Plus analyzer (Roche Diagnostics). One week after the last dose of antibodies, mice were euthanized, and the livers and spleens were surgically removed, weighed, and fixed in 10% paraformaldehyde for 48 h. Fixed tissues were then washed and embedded in paraffin. Tissue sections (5 µm) were stained with hematoxylin and eosin, and lymphocyte infiltration in the liver was quantified using ImageJ software.

Histological Studies

Tissue samples were fixed in 10% neutral buffered formalin (4% formaldehyde in solution), paraffin-embedded, cut at 3 µm, mounted on SuperFrost Plus slides, and dried overnight.
For the different staining methods, slides were deparaffinized in xylene and rehydrated through a series of graded ethanols to water. Consecutive sections for the immunohistochemistry reactions were processed on automated immunostaining platforms (Ventana Discovery XT, Roche; AS Link, Dako, Agilent). Antigen retrieval was first performed with the appropriate pH buffer (CC1m, Ventana, Roche; low-pH buffer, Dako, Agilent), and endogenous peroxidase was blocked (3% hydrogen peroxide). Slides were then incubated with the appropriate primary antibody as detailed: rabbit monoclonal anti-EGFR (mouse preferred) (D1P9C, 1/600, Cell Signaling, #71655) and mouse monoclonal anti-huEGFR (EGFR.113, 1/10, Leica, NCL-EGFR). After the primary antibody, slides were incubated with the corresponding horseradish peroxidase-conjugated visualization systems (OmniMap anti-Rabbit, Ventana, Roche; EnVision FLEX+ Mouse Linker, Dako, Agilent). The immunohistochemical reaction was developed using 3,3'-diaminobenzidine tetrahydrochloride (ChromoMap DAB, Ventana, Roche; FLEX DAB, Dako, Agilent), and nuclei were counterstained with Carazzi's hematoxylin. Finally, the slides were dehydrated, cleared, and mounted with a permanent mounting medium for microscopic evaluation. Positive control sections known to be positive for each primary antibody were included in each staining run. Whole slides were acquired with a slide scanner (AxioScan Z1, Zeiss).

Statistical Analysis

Statistical analysis was performed using GraphPad Prism software version 6.0. Data are presented as mean ± SD. Significant differences (P values) were determined by applying a two-tailed, unpaired Student's t test assuming a normal distribution. P values are indicated in the corresponding figures for each experiment.

RESULTS AND DISCUSSION

The 1D8N/CEGa1 Trimerbody Binds to Human EGFR With a Higher Affinity Than to Mouse EGFR

EGa1 is a well-characterized EGFR-specific VHH that was generated from a phage-displayed llama VHH library after immunization and screening with EGFR-positive human cells (14, 18). Biolayer interferometry binding studies were used to compare the binding of the EGa1 VHH to human and mouse EGFR when integrated in a multichain bispecific anti-4-1BB x anti-EGFR trimerbody format (Figure S1) (10). These interactions were investigated in two orientations: either with biosensor-immobilized EGFR and 1D8N/CEGa1 in solution (Figure 1A), or with immobilized 1D8N/CEGa1 and EGFR in solution (Figure 1B). In both orientations, the interaction between 1D8N/CEGa1 and human EGFR (huEGFR) dissociated much more slowly than the interaction between 1D8N/CEGa1 and mouse EGFR (moEGFR); for biosensor-immobilized EGFR and 1D8N/CEGa1 in solution, the interaction half-lives were ~36 hours and ~40 min for human and mouse EGFR, respectively, while for the reversed orientation, the half-lives were ~20 and ~1 min (Figure 1C). The difference in measured dissociation rates between the two orientations probably reflects differences in avidity due to trivalent binding by 1D8N/CEGa1 and bivalent binding by EGFR (fused to a human Fc region). A comparison of the primary sequences of huEGFR and moEGFR showed that the EGa1 epitope, as seen in the 4KRO crystal structure (19), is mostly conserved, with four differing residues around the periphery of the epitope (Figure 1D). This is consistent with the lower affinity of EGa1 for moEGFR determined by these binding studies.
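For a 1:1 interaction, the dissociation half-life quoted above relates to the fitted off-rate as t1/2 = ln 2 / k_d, and a sensorgram can be modeled as dR/dt = k_a·C·(R_max − R) during association and dR/dt = −k_d·R during dissociation. The sketch below illustrates both relations; the rate constants are hypothetical values chosen only to be consistent with the reported half-lives, not the study's fitted constants.

```python
import numpy as np

def half_life_h(kd_per_s: float) -> float:
    """Dissociation half-life in hours for a 1:1 model: t1/2 = ln2 / kd."""
    return np.log(2) / kd_per_s / 3600

# Hypothetical off-rates consistent with the reported ~36 h and ~40 min
# half-lives (these are NOT the fitted constants from the study).
kd_hu, kd_mo = 5.3e-6, 2.9e-4                        # s^-1
print(half_life_h(kd_hu))                            # ~36 h
print(half_life_h(kd_mo) * 60)                       # ~40 min

def sensorgram(ka, kd, conc, t_assoc, t_dissoc, rmax=1.0, dt=1.0):
    """Simulate a simple 1:1 binding sensorgram (association followed by
    dissociation), integrating dR/dt with forward Euler steps."""
    r, trace = 0.0, []
    for _ in np.arange(0, t_assoc, dt):
        r += dt * (ka * conc * (rmax - r) - kd * r)  # association phase
        trace.append(r)
    for _ in np.arange(0, t_dissoc, dt):
        r += dt * (-kd * r)                          # dissociation phase
        trace.append(r)
    return np.array(trace)

# 20 nM analyte, 20 min association, 1 h dissociation (as in the methods)
curve = sensorgram(ka=1e5, kd=kd_hu, conc=20e-9, t_assoc=1200, t_dissoc=3600)
```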
The 1D8N/CEGa1 Trimerbody Shows Negligible Toxicity in Immunocompetent Transgenic Mice Expressing Human EGFR in the Liver

Transgenic Alb-Δ654-1186 EGFR mice (hereafter abbreviated as ΔEGFR-tg) are immunocompetent animals expressing a hepatocyte-specific truncated form of the human EGFR that lacks the intracellular catalytic domain (amino acids 654-1186) (16). Liver paraffin sections from wild-type C57BL/6 (WT) and ΔEGFR-tg mice were stained with moEGFR-specific and huEGFR-specific mAbs.

FIGURE 1 | Biolayer interferometry investigating the binding of 1D8N/CEGa1 to human and mouse EGFR. (A) Human EGFR (huEGFR) and mouse EGFR (moEGFR), both fused to a human Fc region, were immobilized onto biosensors coated with anti-human Fc antibodies prior to the experiment. 20 nM of 1D8N/CEGa1 associated with the biosensors for 20 min, followed by one hour of dissociation. Duplicate biosensors are shown, along with theoretical binding curves for the kinetic rate constants obtained by fitting. (B) 10 nM of 1D8N/CEGa1 was immobilized onto biosensors coated with mouse 4-1BB for 30 min, after which 50 nM of human or mouse EGFR (both fused to a human Fc region) associated for 20 min and dissociated for one hour. Theoretical binding curves are shown; note that fitting to mouse EGFR's association step was limited to the first binding phase, due to its heterogeneous binding. (C) Kinetic rate constants and dissociation constants were obtained by fitting the experimental binding data from panels (A, B) to 1:1 binding models. In both experiments, 1D8N/CEGa1 dissociates more rapidly from moEGFR than from huEGFR. (D) The crystal structure of the EGa1 VHH bound to human EGFR (PDB 4KRO), shown with and without the EGa1 VHH. Residues of EGFR that are conserved between huEGFR and moEGFR are colored white, while similar residues are yellow, dissimilar residues are red, and glycans are green. Differing residues in proximity to EGa1 are labeled with the murine residue in parentheses.

In WT and ΔEGFR-tg mice, hepatocytes showed moEGFR expression over the entire surface of the cytoplasmic membrane, with a strong and uniform intensity in most of the liver lobules (Figure S2). In ΔEGFR-tg mice, hepatocytes showed partial and segmental expression of huEGFR in the cytoplasmic membrane, with a strong intensity distributed in segments or areas of different sizes, sometimes exhibiting a punctiform pattern, especially in centrilobular hepatocytes (Figure 2B). Periportal and midzonal hepatocytes also displayed huEGFR expression, albeit to a lesser extent (Figure 2B). No expression of huEGFR was detected in WT mice (Figure 2A). These findings were further confirmed by flow cytometry, where it was also found that about 25% of freshly isolated primary hepatocytes from ΔEGFR-tg mice expressed significant levels of cell-surface huEGFR (Figure 2C), and that the 1D8N/CEGa1 trimerbody recognizes hepatocytes isolated from ΔEGFR-tg mice more efficiently than those from WT mice (Figure 2C).

We compared the toxicity profile of the 1D8N/CEGa1 trimerbody with that of the well-characterized anti-4-1BB agonistic 3H3 IgG (4) in WT and ΔEGFR-tg mice injected i.p. (6 mg/kg) once a week for 3 weeks and euthanized 1 week later. As shown in Figure 3A, treatment of ΔEGFR-tg mice with 3H3 IgG resulted in significant enlargement of the spleen, as demonstrated by weight (P = 0.0008). In contrast, treatment with 1D8N/CEGa1 did not result in splenomegaly or hepatomegaly (Figure 3A).
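Group comparisons such as the P = 0.0008 spleen-weight difference above rely on the two-tailed, unpaired Student's t test described in the statistical analysis section. A minimal Python equivalent is sketched below; the spleen-weight numbers are invented placeholders for illustration, not data from this study.

```python
from scipy import stats

# Placeholder spleen weights (mg); illustrative only, not study data.
pbs_group = [85, 92, 88, 90, 87]
ab_group = [160, 175, 168, 181, 172]

# Two-tailed, unpaired Student's t test assuming equal variances,
# matching the analysis described for GraphPad Prism.
t_stat, p_value = stats.ttest_ind(pbs_group, ab_group, equal_var=True)
print(f"t = {t_stat:.2f}, two-tailed P = {p_value:.4g}")
```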
The histologic study of the livers of mice treated with 3H3 IgG revealed significant mononuclear cell infiltration, forming periportal cuffs with thickening of the tunica media, as well as infiltration foci associated with the microvasculature, while no significant infiltration was observed in mice treated with 1D8N/CEGa1 (Figure 3B). Indeed, the surface occupied by infiltrating mononuclear cells accounted for around 2.5% of the liver in ΔEGFR-tg mice treated with 3H3 IgG, but only 0.06% in mice treated with 1D8N/CEGa1 (P = 0.0026) or PBS (P = 0.0026) (Figure 3C). Consistent with these results, we observed a 2-fold increase in alanine transaminase (ALT) levels in the serum of WT and ΔEGFR-tg mice treated with 3H3 IgG compared to mice of the same genotype treated with PBS. In contrast, mice treated with 1D8N/CEGa1 showed little or no increase in ALT levels (Figure 3D). The effect of treatment with 3H3 IgG or 1D8N/CEGa1 on serum IFN-γ levels was also compared: 3H3 IgG treatment triggered a significant elevation of IFN-γ at days 7 and 21 in both WT and ΔEGFR-tg mice, whereas 1D8N/CEGa1 induced IFN-γ levels comparable to PBS-treated mice in both groups (Figure 3E).

In summary, we demonstrated that treatment of ΔEGFR-tg mice with the strong 4-1BB-agonistic 3H3 IgG induced a toxicity profile similar to that observed in WT C57BL/6 mice, with significant immune cell infiltration and systemic inflammation, indicating the suitability of the model for studying 4-1BB-related toxicity. In contrast, none of these features were observed in ΔEGFR-tg mice treated with the 1D8N/CEGa1 trimerbody, despite the expression of both huEGFR and moEGFR on the hepatocyte surface, which excludes the possibility that the lower affinity of 1D8N/CEGa1 for moEGFR is responsible for the absence of liver toxicity observed in WT mice. These results further support the role of FcγR interactions in the immunological abnormalities and organ toxicities associated with 4-1BB agonists (20-22) and confirm the safety profile of EGFR-targeted 4-1BB-agonistic trimerbodies in systemic cancer immunotherapy protocols.

DATA AVAILABILITY STATEMENT

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

ETHICS STATEMENT

The animal study was reviewed and approved by the Animal Experimentation Ethics Committee of the Instituto de

AUTHOR CONTRIBUTIONS

MC, JMZ, and LA-V designed and supervised the study. MC, SLH, JM-T, GP-C, PG-G, and AT-G performed the core experiments. MC, GP-C, and JMZ were responsible for the animal experiments. PG-G and JM-T performed the IHC analysis. MC, SLH, JM-T, PMPVBEH, AS, IF, LS, JMZ, and LA-V provided scientific feedback, discussed the data, and wrote the manuscript. All authors contributed to the article and approved the submitted version.
Low-Cost Alternatives for Conventional Tissue Culture Media

Low-cost alternative components of tissue culture media were evaluated for their potential to supply nutrients and to support growing explants of a banana cultivar in vitro, and for their cost effectiveness against conventional high-cost media. The Murashige and Skoog (MS) salts and sucrose were replaced by vermicompost (50 g/L), table sugar (30 g/L), and coconut water (70 mL/L), and agar was replaced by filter paper, sand, and wheat flour. The conventional MS medium supplemented with 30 g/L sucrose and 8 g/L agar was used as the control. A banana cultivar was regenerated on the control and on combinations of alternative components. MS salts in combination with filter paper as a solid matrix were found to be very efficient for shoot induction, multiplication, and root formation, reducing the cost by up to 58.13%. Vermicompost as a nutritive source was found to be very effective in reducing the cost of media per plantlet, and table sugar reduced the cost of the carbon source drastically, by 97%. The use of such alternatives can reduce the cost of media, thereby minimizing the production cost of tissue-cultured plantlets.

Introduction

The demand for high-quality, high-yielding, disease-free planting material has increased significantly over the last two decades with the increasing demand for agricultural, forestry, and horticulture products. Plant tissue culture has emerged as an important biotechnological and commercially viable tool to generate this high-quality, disease-free, and high-yielding planting material rapidly in the laboratory, irrespective of the season. At present, there are around 200 commercial tissue culture laboratories in India with a gross production capacity of about 500 million plantlets per annum. Banana, sugarcane, apple, pineapple, strawberry, gerbera, anthurium, lillium, orchids, bamboo, date palm, teak, and pomegranate are some of the major plants tissue-cultured in India.

Micropropagation is the vegetative propagation of plants under aseptic conditions. It can be used to produce disease-free plants by excluding disease-causing organisms during the propagation cycle. The major advantage of micropropagation is its extremely high multiplication rate; the technique is therefore highly suited to the rapid multiplication of genotypes. An often-cited disadvantage of modern plant tissue culture methods is the relatively high cost involved compared to other methods (Sahu and Sahu, 2013). The need for low-cost plant tissue culture systems applicable to micropropagation has been emphasized to allow the large-scale application of this technology in developing countries.
The use of chemicals such as carbon sources, gelling agents, inorganic and organic supplements, and growth regulators in culture media makes this technique expensive. Sucrose is usually used as the carbon source and agar as the gelling agent, and together they constitute the most expensive components of the culture media. In this study, our objective was to evaluate low-cost alternative media (LCM) for shoot induction, shoot multiplication, root induction, and hardening of banana explants, along with their cost effectiveness.

Materials and Methods

The tissue culture work was carried out in the tissue culture laboratory of Lokmangal College of Agricultural Biotechnology, Solapur, MS, India.

Plant material

A locally available cultivar of banana (Musa cavendish), widely grown by smallholder farmers in Maharashtra state, India, was used as the source material for this study. The plants were maintained in a shade net for use as a source of explants.

Media preparation

Different low-cost ingredients were used for the preparation of the media. Vermicompost along with table sugar (3%) and coconut water (7%) was used as an alternative to the conventional MS salts, whereas filter paper, sand, and wheat flour were used instead of agar as the solidifying agent. Standard MS medium with agar was used as the control for all treatments (Table 1). The prepared media, along with the control, were sterilized in an autoclave at 121°C and 15 psi for 15 min.

Sterilization and preparation of banana explants

All banana suckers were washed in running tap water for 20 minutes. The outer layers were removed from the pseudostems to leave the shoot-tip meristem, which was excised. Explants were kept in 1.5% citric acid for 30 minutes, surface-sterilized in 70% ethanol for 1 minute, and rinsed five times in sterile distilled water. To prevent oxidation of phenolic compounds, the trimmed explants were stored in an antioxidant solution (100 mg ascorbic acid per litre of sterile distilled water) until the explants were taken to the laminar air flow chamber for inoculation. Before culturing, trimmed explants were soaked in a carbendazim solution (5% w/v) for 20 min. The outermost layer of the sucker surface was then removed, and the explants were immersed in 0.5% mercuric chloride for 5 min. The outermost layer of the sucker was removed again and rinsed with 70% ethanol for one minute. A vertical cut was made, and the explants were inoculated in the shoot induction media so as to expose their meristematic region to the media. 6-BAP (4 mg/L) was used as the cytokinin source for shoot induction in all alternative media as well as the control. One explant was cultured in each culture bottle (Fig. 1), and this was replicated 20 times. Cultures were incubated at 26 ± 2°C under 16 h of fluorescent tube light and 8 h of darkness. Observations of morphological appearance were taken on the 20th, 40th, and 60th days.

Shoot multiplication

Shoots of uniform size from the same cultures were sub-cultured on shoot multiplication medium containing 0.5 mg/L IAA. The number of shoots produced per explant and the shoot length were recorded on the 20th, 40th, and 60th days.

Rooting

Healthy, well-established shoots were transferred to rooting media containing 0.5 mg/L IBA, 0.5 mg/L NAA, and 0.05% activated charcoal. The cultures were incubated at 26 ± 2°C with 16 h light and 8 h darkness. The number of days to root initiation was recorded, and the number of roots, root length, and number of leaves were recorded after three weeks.
Acclimatization and transfer of in vitro banana plantlets

The culture bottles were kept open in the shade net for two days for primary acclimatization. The plantlets were then removed, and the roots were washed with sterile water to remove all the media. The plantlets were then planted in coco peat in a mist chamber for hardening for 21 days. The survival percentage during hardening and the number of leaves were recorded to assess the success of the protocol using alternative low-cost media.

Data analysis

The data were analysed using analysis of variance (ANOVA) in Microsoft Excel at the 5% significance level.

Shoot induction

The response of the explants differed across the different types of media and gelling agents (Table 2; Fig. 2). In the control, explant growth was healthy and shoot generation started within 40 days after inoculation (DAI). Explant growth on vermicompost and agar (LCM-1) was also healthy, but the start of shoot generation was delayed beyond 40 DAI. With filter paper as the gelling agent, explants grew well on both MS salts and vermicompost, comparably to the control. In contrast, on media with sand or wheat flour as the gelling agent, explants could not grow and characteristic browning of the tissue was observed, causing these treatments to fail at the outset. There was no significant difference in the number of shoots per explant or shoot length between the control and LCM-2 (Table 3), suggesting the ability of filter paper to support the explant and supply nutrients as well as agar does. Interestingly, shoot growth was not halted, only delayed, in any of the LCM treatments. The other treatments differed significantly from the control.

Root induction

Although the number of roots was lower than in the control, shoots in LCM-2 had longer roots and an equal number of leaves per plant (Table 4). Root length and number of leaves in LCM-1 were not significantly different from the control, indicating its potential for root induction. In LCM-3, root growth was not induced at all.

Primary hardening

The survival percentage of plants in the LCM treatments, though significantly different, was comparable to the control (Table 5; Fig. 3). The number of leaves did not differ significantly among treatments, and plants in all treatments were healthy.

Cost comparison

The cost of the media in LCM-1 and LCM-2 was reduced drastically compared to the control, by 57.15% and 58.13%, respectively (Table 6). The laboratory-grade sucrose often used as the carbon source in plant micropropagation contributes about 34% of the production cost (Demo et al., 2008). Sucrose has been reported as a source of both carbon and energy (Bridgen, 1994), and a 90% reduction in the cost of tissue-cultured banana plants has been reported by replacing it. Kaur et al. (2005) substituted sucrose with table sugar in the plant propagation medium, which reduced the cost of the medium considerably, by 96.8%, similar to the present study. Our findings are in agreement with those of Prakash et al. (2002), who reported a reduction in the cost of the medium by 78 to 87% using common sugar. Regarding the cost of the supporting matrix, filter paper was found to be highly efficient, reducing the cost by almost 98%. Both alternative media were found to be cost effective in contrast with the control.
Although the final number of plantlets obtained with the alternatives was lower, both alternative media could generate more plantlets per unit cost when the cost per plantlet was considered. Alternative media can effectively reduce production costs, and hence research into such alternatives is significant. The use of vermicompost and coconut water as nutrient sources, and of filter paper as a support matrix, was found to be an efficient alternative to the conventional, costly ingredients.
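The 5%-level ANOVA described in the data analysis section can be reproduced outside a spreadsheet. The sketch below runs a one-way ANOVA on hypothetical shoots-per-explant counts for the control and two alternative media; the numbers are invented for illustration and are not the study's data.

```python
from scipy import stats

# Hypothetical shoots-per-explant counts (illustrative only).
control = [6, 7, 5, 6, 7]   # MS salts + agar
lcm1 = [4, 5, 4, 5, 4]      # vermicompost + agar
lcm2 = [6, 6, 5, 7, 6]      # MS salts + filter paper

# One-way ANOVA across the three media at the 5% significance level.
f_stat, p_value = stats.f_oneway(control, lcm1, lcm2)
print(f"F = {f_stat:.2f}, P = {p_value:.4f}",
      "-> significant" if p_value < 0.05 else "-> not significant")
```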
Alternative streamflow-based approach to estimate catchment response time in medium to large catchments: case study in Primary Drainage Region X, South Africa

Event-based estimates of the design flood in ungauged catchments are normally based on a single catchment response time parameter expressed as either the time of concentration (T_C), lag time (T_L) and/or time to peak (T_P). In small, gauged catchments, a simplified convolution process between a single observed hyetograph and hydrograph is generally used to estimate these time parameters. In medium to large heterogeneous, gauged catchments, such a simplification is neither practical nor applicable, given that the variable antecedent soil moisture status resulting from previous rainfall events and spatially non-uniform rainfall hyetographs can result in multi-peaked hydrographs. In ungauged catchments, time parameters are estimated using either empirical or hydraulic methods. In South Africa (SA), unfortunately, the majority of the empirical methods recommended for general use were developed and verified in catchments ≤ 0.45 km² without using any local data. This paper presents the further development and verification of the streamflow-based approach developed by Gericke (2016) to estimate observed T_P values and to derive a regional empirical T_P equation in Primary Drainage Region X, SA. A semi-automated hydrograph analysis tool was developed to extract and analyse complete hydrographs for time parameter estimation using primary streamflow data from 51 flow-gauging sites. The observed T_P values were estimated using three methods: (i) the duration of the total net rise of a multi-peaked hydrograph, (ii) triangular-shaped direct runoff hydrograph approximations, and (iii) linear catchment response functions. The combined use of these methods incorporated the high variability of event-based time parameters, and Method (iii), in conjunction with an ensemble-event approach sampled from the time parameter distributions, should replace the event-based approaches to enable improved calibration of empirical time parameter equations. The conceptual approach used to derive the regional empirical T_P equation should also be adopted when regional equations need to be derived at a national scale in SA.
INTRODUCTION

Deterministic event-based design flood estimation (DFE) methods are commonly used by practitioners in ungauged catchments (Van Vuuren et al., 2012). In the application of these deterministic event-based DFE methods (e.g., rational, standard design flood, lag-routed hydrograph, etc.), it is widely assumed that the peak discharge from a catchment occurs when the duration of rainfall over the catchment equals the time of concentration (T_C), i.e., when the entire catchment is contributing to runoff at the outlet. In applying other deterministic event-based DFE methods, e.g., the synthetic unit hydrograph method, a trial-and-error approach is used to establish the storm duration that results in the highest peak discharge. Thus, irrespective of whether the storm duration is T_C-based or user-defined, an estimate of the catchment response time is necessary to select the critical duration of design rainfall used to estimate the peak discharge with deterministic event-based DFE methods. Apart from T_C, catchment response time can also be expressed using other time parameters, e.g., lag time (T_L) and/or time to peak (T_P). These time parameters are not only a fundamental input to deterministic event-based DFE methods; any errors associated with these time parameter estimates will directly impact peak discharge and volume estimates (McCuen, 2009; Gericke and Smithers, 2014).

In considering observed rainfall hyetographs and streamflow hydrographs in gauged catchments, time parameters (e.g., T_C, T_L and/or T_P) can be defined as the time interval between two interrelated observed time variables, each obtained from a hyetograph (e.g., maximum rainfall intensity, centroid of effective rainfall, and/or the end time of a rainfall event) and/or a hydrograph (e.g., peak discharge, centroid of direct runoff, and/or the inflection point on the recession limb) (McCuen, 2009). In small, gauged catchments, a simplified convolution process is generally used to estimate time parameters. However, this simplification is neither practical nor applicable in medium to large heterogeneous, gauged catchments (Gericke and Smithers, 2014; 2017). Apart from the difficulty of applying a similar convolution process in larger catchments to establish the temporal relationship between a catchment hyetograph (derived from numerous rainfall stations) and the resulting outflow hydrograph, a uniform response to rainfall is assumed; hence, the variable antecedent soil moisture status resulting from previous rainfall events and spatially non-uniform rainfall hyetographs, which can result in multi-peaked hydrographs, are ignored (Gericke and Smithers, 2017). The use of point rainfall data to estimate catchment hyetographs also has several associated problems, e.g., the lack of data at sub-daily timescales, poor time synchronisation between different point rainfall and/or streamflow data sets, and the difficulties experienced when measuring time parameters directly from digitised autographic records (Schmidt and Schulze, 1984). These limitations are further aggravated by the decline of the South African rainfall monitoring network over recent years: the number of operational South African Weather Service (SAWS) rainfall stations has reduced from more than 2 000 in the 1970s to the point where the network is no better than it was as far back as 1920, with currently fewer than 1 000 rainfall stations operational in a specific year (Pitman, 2011).
Internationally, the number of operational rainfall stations is also declining (Lorenz and Kunstmann, 2012). In contrast to rainfall data, streamflow data are generally less readily available internationally, but their quantity and quality enable the direct estimation of catchment response times at medium to large catchment scales; in South Africa (SA) there are approximately 708 flow-gauging sites with more than 20 years of record available (Smithers et al., 2014).

The analyses of hyetograph-hydrograph relationships to obtain time parameters are often performed manually, especially when rainfall-based time variables are required; as a result, such analyses are generally tedious, inconsistent, and subjective. Apart from the automated hyetograph-hydrograph analysis tool recently developed by Allnutt et al. (2020), most of the currently available hydrograph analysis tools (e.g., Arnold et al., 1995; Chapman, 1999; Lim et al., 2005) do not consider both rainfall hyetograph and streamflow hydrograph characteristics with the primary aim of estimating time parameters. In other words, these automated tools were not primarily developed to identify and define time variables for the subsequent estimation of time parameters; they focus rather on the estimation of general hydrograph characteristics, direct runoff, baseflow separation, and recession analyses.

In SA, unfortunately, none of the empirical T_C estimation methods recommended for general use, e.g., the Kerby (1959) and United States Bureau of Reclamation (USBR, 1973) equations, were developed and verified using local data, nor are they applicable to large catchments, given that the calibration catchment areas were limited to 0.45 km² (McCuen et al., 1984). Locally, the empirical T_L estimation methods are limited to the United States Department of Agriculture Soil Conservation Service (USDA SCS, 1985), SCS-SA (Schmidt and Schulze, 1984), and Hydrological Research Unit (HRU; Pullen, 1969) equations. The SCS methodologies are limited to small catchments (A ≤ 30 km²), while the HRU methodology typically applies to A ≤ 5 000 km² (Gericke and Smithers, 2014). Consequently, practitioners commonly apply the T_C and T_L methods outside their bounds, both in terms of areal extent and their original developmental regions, without using any local correction factors. As a result, and in line with the research priorities identified by the National Flood Studies Programme (NFSP; Smithers et al., 2014), Gericke (2016) developed a new approach to estimate observed T_P values using only observed streamflow data, to calibrate and verify empirical T_P equations in a pilot-scale study in four climatologically different regions of SA. Given that both Gericke and Smithers (2017) and Allnutt et al. (2020) confirmed that T_C ≈ T_L ≈ T_P in medium to large catchments, the versatility of the streamflow-based T_P equations to estimate T_C and/or T_L is acknowledged.
In considering the status quo of South African flood hydrology with respect to catchment response time parameters, the aim of this paper is to further develop and verify the streamflow-based approach of Gericke (2016) to estimate observed time to peak (T_Px) values and to derive a regional empirical T_Py equation in Primary Drainage Region X, SA. The specific objectives are to: (i) develop a semi-automated hydrograph analysis tool (HAT) to extract and analyse complete hydrographs for time parameter estimation, based on primary streamflow data from 51 flow-gauging sites; (ii) estimate the observed T_Px values using three methods, i.e., the duration of the total net rise of a multi-peaked hydrograph, triangular-shaped direct runoff hydrograph approximations, and linear catchment response functions; (iii) derive a regional empirical T_Py equation; and (iv) compare the performance of the derived T_Py equation against existing T_Py equation(s) to highlight the limitations of empirical equations when applied beyond the boundaries of their original developmental regions.

The scope of the study is limited to Primary Drainage Region X, given that the 51 flow-gauging stations generally have better and more complete data sets, for which the Department of Water and Sanitation (DWS) has done some stage-discharge extensions. In addition, this paper reports the development of a semi-automated HAT, which will also serve as a future benchmark to inform and support the envisaged development, testing, and verification of a comprehensive (fully automated) hydrograph extraction utility. A summary of the study area is contained in the next section, followed by a description of the methodologies adopted and the results achieved, and finally the discussion and conclusions.

STUDY AREA

South Africa, which is located on the southernmost tip of Africa, is demarcated into 22 primary drainage regions, i.e., A to X (Midgley et al., 1994), which are further delineated into 148 secondary drainage regions, i.e., A1, A2, to X4. As shown in Fig. 1, Primary Drainage Region X covers 31 193 km²; 70% extends across the Mpumalanga Province of SA, while the remainder extends into Eswatini (former Swaziland). Primary Drainage Region X is further delineated into 4 secondary drainage regions, i.e., X1 (11 227 km²), X2 (10 447 km²), X3 (6 322 km²), and X4 (3 197 km²). The 51 gauged catchments under consideration have catchment areas ranging from 6 km² to 21 583 km². The catchment topography is moderately steep, with elevations varying from 112 m to 2 255 m above mean sea level and average catchment slopes between 3.5% and 36.1% (USGS, 2016). The mean annual precipitation (MAP) ranges from 521 mm to 1 325 mm (Lynch, 2004), and the summer rainfall is regarded as highly variable. The flow-gauging stations in each catchment are classified by DWS as either primary (P), secondary (S), or tertiary (T) gauging sites based on: (i) status (open/closed); (ii) location and importance in the overall monitoring network; (iii) data availability, quality, and record length; (iv) type of calibration (standard/extended for above-structure-limit conditions); (v) site survey information available (yes/no); and (vi) flood frequency analyses conducted (yes/no).
METHODOLOGY AND RESULTS

This section contains the methodology adopted to achieve the specific objectives and the associated results.

Development of a semi-automated hydrograph analysis tool

The HAT was developed in the Microsoft Excel/Visual Basic for Applications (VBA) environment and includes semi-automated routines to enable the identification, extraction, and analysis of complete hydrographs for the purpose of time parameter estimation, as detailed in the subsequent sections. The approximation T_C ≈ T_P, as proposed by Gericke (2016), forms the basis of the HAT and rests on the definition that the volume of effective rainfall equals the volume of direct runoff when a hydrograph is separated into direct runoff and baseflow. As shown in Fig. 2, the separation point on the hydrograph is regarded as the start of direct runoff (Q_Dxi), which coincides with the onset of effective rainfall (P_Exi). Hence, the extensive convolution process normally required to estimate time parameters is eliminated, given that the time parameters are estimated directly from the observed streamflow data without the need for rainfall data.

Typically, a complete hydrograph extracted using the HAT will include: (i) the start/end date/time of the flow event, (ii) the observed water level (m), (iii) the observed discharge (m³·s⁻¹) and total volume of runoff (Q_Txi, m³), (iv) the direct runoff discharge (m³·s⁻¹) and total volume of direct runoff (Q_Dxi, m³), (v) the baseflow discharge (m³·s⁻¹) and total volume of baseflow (Q_Bxi, m³), and (vi) the cumulative volume of direct runoff under the hydrograph rising limb (Q_DRi, m³).

Extraction and analysis of flood hydrographs to estimate time parameters

The procedural steps followed in Region X, with the aid of functionalities available in the HAT, include the following (Gericke et al.):

(c) Assessment of the accuracy and relevance of the discharge rating tables (DTs) on the DWS website. In general, all the DTs in the study area had already been quality controlled and extended (as required) by DWS (Flood Studies). However, in the absence of an extended DT (where required), the AMS data set was extended using a 3rd-order polynomial relationship up to 20%. As recommended by Gericke and Smithers (2017), the verification of the extension to +20% considered both the hydrograph shape, especially the peakedness resulting from a steep rising limb relative to the hydrograph base length, and the relationship between individual peak discharge (Q_Pxi) and direct runoff volume (Q_Dxi) pair values. Typically, in such an event, the additional volume of direct runoff (Q_DE) due to the extrapolation is limited to 5%, i.e., Q_DE ≤ 0.05 Q_Dxi.
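Step (c)'s rating-table extension can be sketched as a cubic fit to the gauged stage-discharge pairs, extrapolated by no more than 20% in stage. The snippet below uses synthetic rating points invented purely for illustration, and flags how far each extended point reaches beyond the gauged range.

```python
import numpy as np

# Synthetic stage (m) - discharge (m^3/s) rating pairs (illustrative only).
stage = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
discharge = np.array([2.1, 9.8, 25.0, 50.5, 88.0, 140.0])

# Fit a 3rd-order polynomial to the gauged range, as in step (c).
coeffs = np.polyfit(stage, discharge, deg=3)
rating = np.poly1d(coeffs)

# Extend the rating by at most 20% beyond the highest gauged stage.
h_max = stage.max()
for h in np.linspace(h_max, 1.2 * h_max, 5):
    over = 100 * (h - h_max) / h_max
    print(f"stage {h:.2f} m (+{over:.0f}%): Q = {rating(h):.1f} m^3/s")
```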
(d) Implementation of user-defined truncation level criteria (Q_TR) associated with the record length (N) to extract complete hydrographs. The following truncation level criteria were implemented to ensure that the frequently occurring, lower AMS values, which could potentially result in underestimated time parameters, are excluded: (i) N ≤ 20 years, use the lowest/minimum AMS value; (ii) 20 < N ≤ 60 years, use the 25th-percentile AMS value; and (iii) N > 60 years, use the median AMS value. For example, the median AMS value typically has a return period (T) of 2 years, or an annual exceedance probability (AEP) of 50%. Hence, all complete hydrographs with a peak discharge greater than the selected AMS value, i.e., partial duration series (PDS) values above a certain discharge threshold, were extracted.

(e) Identification and extraction of complete hydrographs (cf. Fig. 2) associated with each AMS event and the applicable truncation level criteria. A total of 4 454 complete hydrographs were extracted and analysed. The record lengths under consideration varied between 13 and 112 years, with an overall average record length of 49 years. The Q_TR criteria were dominated by the minimum AMS (5 catchments) and 25th-percentile AMS (29 catchments) values in 67% of the catchments under consideration. Therefore, at least 75% of all the AMS events were included in the analyses at a catchment level, while it could be argued that 50% or more of the AMS events were discarded in the remaining 17 catchments (33%) where the median AMS criterion was applied. Given that record length is used as the guide for the Q_TR criteria, the process followed is regarded as consistent, both in terms of the process itself and the results obtained. It is also evident that not all the AMS values need to be included in time parameter analyses; as a result, only 2 284 hydrographs were considered in the final analyses.

(f) Separation of complete hydrographs (cf. Fig. 2) into direct runoff and baseflow. The recursive digital filtering method (Eq. 1), as initially proposed by Lyne and Hollick (1979), further developed by Nathan and McMahon (1990), and implemented by Smakhtin and Watkins (1997) in a national-scale study in SA, was used to separate the direct runoff and baseflow. Equation 1 is also the preferred baseflow separation method used by DWS and is included as the default digital filter algorithm in the Hydrological Time-series Data Management System (Hydstra), which is used to manage and maintain the whole DWS meteorological and hydrological database. Given that daily/sub-daily time-step data are more appropriate for time parameter estimation, and the need for consistency and reproducibility, Eq. 1 with default α-parameter values ranging between 0.995 and 0.997 (Smakhtin and Watkins, 1997), and a fixed β-parameter value of 0.5 (Hughes et al., 2003), was used in all the catchments under consideration:

Q_Dxi = α Q_Dx(i−1) + β (1 + α) (Q_Txi − Q_Tx(i−1))    (1)

where Q_Dxi is the filtered direct runoff (m³·s⁻¹) at time step i, subject to Q_Dx ≥ 0 for time i; α and β are the filter parameters; and Q_Txi is the total streamflow (m³·s⁻¹; direct runoff plus baseflow) at time i.
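A minimal Python sketch of the Eq. 1 filter, assuming the standard Lyne-Hollick formulation written above with the quoted α and β values (this is an illustration, not the Hydstra implementation itself):

```python
import numpy as np

def lyne_hollick(q_total, alpha=0.995, beta=0.5):
    """Single forward pass of the recursive digital filter (Eq. 1):
    Q_D[i] = alpha*Q_D[i-1] + beta*(1+alpha)*(Q_T[i] - Q_T[i-1]).
    Direct runoff is clamped to 0 <= Q_D <= Q_T; the upper clamp keeps
    baseflow non-negative (a common convention, assumed here)."""
    q_total = np.asarray(q_total, dtype=float)
    q_direct = np.zeros_like(q_total)
    for i in range(1, len(q_total)):
        qd = (alpha * q_direct[i - 1]
              + beta * (1 + alpha) * (q_total[i] - q_total[i - 1]))
        q_direct[i] = min(max(qd, 0.0), q_total[i])
    baseflow = q_total - q_direct
    return q_direct, baseflow

# Illustrative daily hydrograph (m^3/s); values invented for the example.
q = [5, 5, 40, 120, 80, 45, 25, 15, 9, 6, 5]
qd, qb = lyne_hollick(q, alpha=0.995, beta=0.5)
```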
(g) Estimation of the time parameter values associated with individual hydrograph/flood events using two different approaches: (i) the net rise (duration) of a multi-peaked hydrograph (Eq. 2), and (ii) a triangular-shaped direct runoff hydrograph approximation (Eq. 3) with associated variable hydrograph shape parameters (Eqs 3a-c), as shown in Fig. 3:

T_Pxi = Σ_j t_j    (2)

T_Pxi = x K T_Bxi    (3)

T_Bxi = 2 Q_Dxi / (3600 Q_Pxi)    (3a)

K = Q_DRi / Q_Dxi    (3b)

T_Rcxi = T_Bxi − T_Pxi    (3c)

where T_Bxi is the triangular hydrograph base length (h) for individual hydrograph/flood events, t_j is the duration of the total net rise (excluding the in-between recession limbs) of a multi-peaked hydrograph (h), T_Pxi is the net-rise (duration) or triangular-approximated time to peak (h) for individual hydrographs/flood events, T_Rcxi is the recession time (h) for individual flood events, Q_Dxi is the volume of direct runoff (m³) for individual hydrographs, Q_DRi is the volume of direct runoff (m³) under the rising limb for individual hydrographs, Q_Pxi is the observed peak discharge (m³·s⁻¹) for individual hydrographs, K is the hydrograph shape parameter, N is the sample size, and x is a variable time parameter proportionality ratio: with x = 1, either T_Pxi or T_Px and/or T_Cxi or T_Cx can be estimated, while T_Lxi or T_Lx can be estimated by assuming that T_L = 0.6T_C, i.e., the time from the centroid of effective rainfall to the time of peak discharge.

Equation 2 was adopted from Du Plessis (1984). Given that the complete hydrographs extracted are based on the user-defined truncation level criteria, hydrographs containing multiple peaks, as shown in Fig. 2, are possible. Hence, in applying Eq. 2, hydrographs are regarded as separate events when the start of a successive rising limb is characterised by the total discharge ≈ baseflow discharge; if the total discharge > baseflow discharge, the net-rise calculation continues from the trough after the previous peak. Therefore, Eq. 2 is regarded as the best estimate of the observed T_Pxi values extracted directly from the observed hydrographs.

A scatter plot of the T_Pxi values computed using Eqs 2 and 3 for all the catchments under consideration is shown in Figs 4 and 5. Thus, by using the above approach, as detailed in Step (g), both multi-peaked hydrographs (Eq. 2) and triangular-shaped direct runoff hydrograph approximations (Eq. 3) are included. Ultimately, Eq. 3, which reflects the actual percentage of direct runoff under the rising limb of each individual hydrograph, can also be used in future to extend the unit hydrograph theory to larger catchments; in other words, the variable hydrograph shape parameter (Eq. 3a) can be used instead of the fixed volume of 37.5% normally associated with the conceptual curvilinear unit hydrograph theory.

(h) Estimation of the 'average' catchment response time (T_Px) of all the flood events considered in each catchment by using a linear catchment response function (Eq. 4), i.e., the relationship between individually paired observed peak discharge (Q_Pxi) and direct runoff volume (Q_Dxi) values:

T_Px = (x / 3600) · [ Σ_{i=1}^{N} (Q_Dxi − Q̄_Dx)(Q_Pxi − Q̄_Px) ] / [ Σ_{i=1}^{N} (Q_Pxi − Q̄_Px)² ]    (4)

where T_Px is the 'average' catchment time to peak (h) based on a linear catchment response function, Q_Dxi is the volume of direct runoff (m³) for individual hydrographs, Q̄_Dx is the mean of Q_Dxi (m³), Q_Pxi is the observed peak discharge (m³·s⁻¹) for individual hydrographs, Q̄_Px is the mean of Q_Pxi (m³·s⁻¹), N is the sample size, and x is the variable time parameter proportionality ratio as defined before.

A scatter plot of the average T_Pxi values computed using both Eqs 2 and 3 in comparison with the catchment T_Px values (Eq. 4) for all the catchments under consideration is shown in Fig. 6. A high degree of association is evident, i.e., r² = 0.986 (Eqs 2 vs. 4) and r² = 0.999 (Eqs 3 vs. 4).
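Given paired (Q_Pxi, Q_Dxi) values for the extracted events, both the event-level triangular approximation (Eqs 3, 3a-b) and the catchment-level linear response function (Eq. 4, as reconstructed above) reduce to a few lines of array arithmetic. The sketch below uses invented event pairs and an assumed rising-limb fraction purely for illustration.

```python
import numpy as np

# Invented event pairs for illustration: peak discharge (m^3/s) and
# direct runoff volume (m^3) for N extracted flood hydrographs.
q_p = np.array([120.0, 85.0, 210.0, 60.0, 150.0])
q_d = np.array([2.6e6, 1.7e6, 4.9e6, 1.1e6, 3.2e6])
q_dr = 0.42 * q_d        # volume under the rising limb (assumed fraction)

# Event-level triangular approximation (Eqs 3, 3a-b), with x = 1:
t_b = 2 * q_d / (3600 * q_p)   # base length (h), Eq. 3a
k = q_dr / q_d                 # shape parameter, Eq. 3b
t_pxi = k * t_b                # time to peak per event (h), Eq. 3

# Catchment-level linear response function (Eq. 4), with x = 1:
t_px = (np.sum((q_d - q_d.mean()) * (q_p - q_p.mean()))
        / np.sum((q_p - q_p.mean()) ** 2)) / 3600
print(t_pxi.round(2), round(t_px, 2))
```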
At a catchment level, the averages of Eqs 2 and/or 3 were also comparable to the estimates based on Eq. 4, with average relative differences limited to 13.6% and r² values ranging from 0.97 to 0.99. Hence, the catchment response times based on an assumed linear catchment response function (Eq. 4) provide results comparable to the sample mean of all the individual response times estimated using Eqs 2 and/or 3. The combined use of Eqs 2 and 3 not only incorporates the high variability of event-based time parameters, but the catchment T_Px values (Eq. 4) are also well within the range of other uncertainties inherent to all DFE procedures. Given that Eq. 4 provides the single, average catchment T_Px value required for deterministic event-based DFE, its use in design hydrology and for the calibration of empirical time parameter equations is recommended.

Calibration, verification, and comparison of regional empirical time parameter equations

Stepwise multiple regression analyses were performed on the T_Px values (Eq. 4) and the geomorphological catchment characteristics (e.g., area A, perimeter P, centroid distance L_C, hydraulic length L_H, average catchment slope S, average main watercourse slope S_CH, drainage density D_D, and MAP) included in Table A1 (Appendix) to establish the calibrated T_Py relationship (Eq. 5). Both untransformed and log-transformed data sets of the above predictor variables were considered. In some of the 41 calibration catchments, the transformed predictor variables performed less satisfactorily when included in the multiple regression analyses, while the log-transformations resulted in negative response times. Backward stepwise multiple linear regression analyses with deletion using untransformed data subsequently resulted in the best calibrated T_Py regression (Eq. 5), with catchment area and average catchment slope retained as the independent and statistically significant predictor variables, where T_Py is the estimated time to peak (h), A is the catchment area (km²), and S is the average catchment slope (%).

The goodness-of-fit (GOF) statistics and the correlation matrix applicable to the predictor variables are summarised in Table 1. Typically, the coefficient of multiple correlation (R_i²) and the standard error of the estimate (S_Ey) serve as measures of accuracy, the partial t-tests highlight the statistical significance of the individual predictor variables, and the total F-tests represent the degree of correlation between the T_Px values and the predictor variables. In the correlation matrix, the degree of association between the predictor variables is defined using both the coefficient of determination (r²) and the variance inflation factor (VIF). Standardised residuals were also considered to identify possible outliers. At a 95% confidence level and with 39 degrees of freedom, the critical t-statistic (t_α) is 2.02. Comparing the t-statistic values of each predictor variable in Table 1 with t_α, all t-statistic values exceed t_α, confirming the statistical significance of these predictor variables and supporting their inclusion in Eq. 5. These results are further supported by all P-values being less than the significance level of 0.05.
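The backward-stepwise outcome above, a two-predictor linear regression of the form T_Py = b0 + b1·A + b2·S with its partial t-statistics, can be reproduced with ordinary least squares. The sketch below runs on invented catchment data (not the Table A1 values) and also computes the two-predictor VIF = 1/(1 − r²) referred to in the next paragraph.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Invented catchment descriptors (not Table A1 data): area (km^2),
# average slope (%), and observed catchment T_Px (h).
A = rng.uniform(50, 20000, 41)
S = rng.uniform(3.5, 36.0, 41)
t_px = 5 + 0.004 * A - 0.8 * S + rng.normal(0, 4, 41)

# OLS fit of T_Py = b0 + b1*A + b2*S on untransformed predictors.
X = sm.add_constant(np.column_stack([A, S]))
fit = sm.OLS(t_px, X).fit()
print(fit.params)                            # b0, b1, b2
print(fit.tvalues)                           # compare with t_alpha = 2.02
print(fit.rsquared, np.sqrt(fit.mse_resid))  # R^2 and S_Ey

# VIF between the two predictors: VIF = 1 / (1 - r^2)
r2 = np.corrcoef(A, S)[0, 1] ** 2
print("VIF =", 1 / (1 - r2))
```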
It is evident from the correlation matrix that a low correlation exists between the statistically significant predictor variables, with r² = 0.16, and this is further supported by the VIF = 1.19.Typically, the lowest VIF value that can be achieved equals one (1), while the range 1 < VIF ≤ 3 is associated with an acceptable to moderate correlation between predictor variables (Mediero and Kjeldsen, 2014).Hence, no collinearity exists between A and S, and they are both regarded as independent and statistically significant predictor variables.The inclusion of a slope predictor (S) is also regarded as essential to ensure that the size (A) predictor provides realistic catchment response times. Lastly, the S Ey results (≈ 5.2 hours) in Table 1 must also be clearly understood in the context of the actual travel time associated with the catchment sizes in the study area, as the impact of such an error in the T Py estimates might be critical in smaller catchments, it would be regarded as less significant in a larger catchment.The rejection of the null hypothesis (F > F α ) in Table 1 also confirmed the significant relationship between T Px and the statistically significant predictor variables as included in Eq. 5.In considering the standardised residuals computed in both the calibration and verification catchments, it was evident that ± 92% of the total sample have standardised residuals less than ± 2 (ranging between −1.68 and 1.56), except in the case of the calibration catchment, Catchment X2H005 (−2.22), and the verification catchments, Catchments X2H025 (2.04), X2H026 (2.20) and X2H028 (2.39), respectively.However, the three verification catchments have areas ranging from 6 to 25 km²; hence, these catchments are regarded as 'small catchments' and not necessarily 'medium to large catchments' , which this study focuses on.According to Chatterjee and Simonoff (2013), it is expected of a reliable regression model to have approximately 95% of the standardised residuals between −2 and +2, while standardised residuals ≥ ± 2 should be investigated as potential outliers.The standardised residuals ≥ ± 2 in the four identified catchments are regarded as 'acceptable' , given that T Py is consistent with the regression relationship implied by the other T Px values as included in Fig. 7. The high degree of association, as depicted in Fig. 7, not only confirmed the good correlation between T Px and T Py , but also the usefulness of Eq. 5 to estimate the catchment response time in both the calibration and verification catchments.The overall r 2 value equals 0.95. Given the high T Pxi variability observed at a catchment level, the lower T Pxi values (Eqs 2 and/or 3), which could be associated with rainfall events not covering the whole catchment and centred near the catchment outlet, occur more frequently, and thereby the average value, i.e., the catchment T Px (Eq.4), could be underestimated.On the other hand, the longer T Pxi values have a lower frequency of occurrence and are assumed to be reasonable at medium to large catchment scales as the contribution of the whole catchment to peak discharge seldom occurs as a result of the non-uniform spatial and temporal distribution of rainfall in a catchment.Furthermore, in some catchments (e.g., X2H010, 13-15, 26, 27, and X2H028), the correlation between the Q Pxi −Q Dxi pair values used to derive Eq. 4 is low (r² ≈ 0.1), despite the high agreement (differences ≤ 15%) between Eq. 
4 and the averages of Eqs 2 and/or 3 in these catchments. Therefore, it could be argued that the T Px values (Eq. 4) in the above cases might contribute to less appropriate T Py estimates (Eq. 5) and need to be further investigated or improved by using an ensemble-event approach sampled from the T Pxi distributions.

It is thus evident from the above paragraph that the non-uniform spatial and temporal distribution of rainfall implies that the whole catchment area (A) will seldom contribute to the resulting peak discharge at the catchment outlet. However, A is included in Eq. 5 without being able to account for this spatial and temporal variability. This serves as a further motivation that an ensemble-event approach should be deployed in future to address the uncertainty associated with individual catchment response times and to provide a probabilistic range of acceptable catchment response times at a catchment/regional level, which can ultimately be used to improve the calibration of empirical time parameter equations.

Hence, the high variability of the individual-event observed T Pxi (Eqs 2 and 3) and estimated T Py (Eq. 5) values relative to the catchment T Px (Eq. 4) values in each catchment was further investigated using Eq. 6. The relative catchment response time variability or error at a catchment level is shown in Fig. 8.

T PVar = (T Pxi − T Px)/T Px (and equivalently with T Py in place of T Pxi)   (6)

where: T PVar is the relative catchment response time variability/error [over/underestimation (±)], T Px is the observed catchment response time (Eq. 4, h), T Pxi is the maximum/minimum individual-event catchment response time (Eqs 2 and/or 3, h), and T Py is the estimated catchment response time (Eq. 5, h).

The high T Pxi variability depicted in Fig. 8 is not only associated with an increase in catchment area, given that the variability ranges implied by Eq. 6 do not consistently increase with an increasing catchment area. Thus, it could be argued that such higher variabilities could also be associated with an increase in the spatial and temporal distribution and heterogeneity of other geomorphological catchment characteristics and rainfall as the catchment scale increases. Furthermore, the validity of the GOF results listed in Table 1 is also confirmed by, and evident from, Fig. 8, since the T Py estimates are well within the bounds of the maximum/minimum individual-event observed T Pxi variability in each catchment, except in the verification catchments smaller than 25 km², where the T Py estimates are associated with standardised residuals > ± 2.

In order to compare the performance of the derived T Py equation (Eq. 5) against existing equation(s), the empirical T Py equation (Eq. 7) as originally developed by Gericke (2016) was also tested in the 51 catchments. As a result, a scatter plot of the T Py (Eq. 7) and catchment T Px (Eq. 4) values for both the calibration and verification catchments is shown in Fig. 9 to highlight the limitations when empirical equations are applied beyond their developmental regions.

T Py = f(A, L C, L H, MAP, S; x 1 to x 5)   (7)

where: T Py is the estimated time to peak (h), A is the catchment area (km²), L C is the centroid distance (km), L H is the hydraulic length (km), MAP is the mean annual precipitation (mm), S is the average catchment slope (%), and x 1 to x 5 are calibration coefficients (Gericke, 2016).

The low to moderate degree of association (r² ≤ 0.68), as depicted in Fig. 9, highlighted that Eq.
7 in its current format would not be useful to estimate the catchment response time in most of the catchments under consideration, and thereby confirms that any empirical equation should be used with caution when applied beyond the boundaries of its original developmental regions. In addition, many of the standardised residuals exceeded the benchmark standardised residual value of ± 2. Typically, none of the 51 catchments considered in this study formed part of the catchments used to calibrate and verify Eq. 7. Subsequently, Eq. 5 is the preferred empirical equation to estimate T Py in Primary Drainage Region X. DISCUSSION AND CONCLUSIONS The aim of this study was to further develop and verify the streamflow-based approach of Gericke (2016) in Primary Drainage Region X, SA.By achieving the research aim, observed T Px values were estimated in a practical and objective manner without the need for rainfall data to ultimately derive a regional empirical T Py equation.The development of the HAT enabled the consistent extraction and analyses of complete hydrographs for the purpose of time parameter estimation using Eqs 2, 3, and/or 4. Given the high variability and complexities involved when time parameters are estimated, along with the technical problems encountered with observed streamflow data, e.g., exceedance of DTs, multipeaked hydrographs, etc., a fully-automated version of the HAT is preferred and would typically be required to deploy the proposed methodology at a national scale.Given that the whole DWS meteorological and hydrological database is managed, populated, maintained, and archived using Hydstra, it is recommended that the fully-automated HAT should be based on the current Hydstra tools available.This will not only ensure that the current Hydstra functionalities are optimally utilised, but will also enhance the possibilities of having the HAT built into a web-based version of Hydstra to enable practitioners to run the hydrograph extraction and analyses themselves.As part of the fully-automated HAT to be developed, with specific reference to design hydrology and for the calibration of empirical time parameter equations, the catchment T Px (Eq.4) and an ensemble-event approach sampled from the T Pxi distributions should be applied in future to replace the current event-based approaches to enable the improved calibration of empirical time parameter equations.The conceptual approach used to derive the regional empirical T Py equation (Eq.5) should also be adopted when regional empirical time parameter equations need to be derived at a national scale in SA.However, the application of Eq. 5 should be limited to Primary Drainage Region X, given the known limitations when empirical equations are applied beyond their developmental boundaries.Thus, when attempting to derive any new regional empirical time parameter equation(s) in SA, caution should be practiced by including, as far as possible, only predictor variables which are statistically significant, independent, easy to derive, and commonly available and used in practice.Hence, a balance needs to be achieved between the statistical correctness and userfriendliness of such empirical time parameter equations. 
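To make the calibration workflow described above concrete, the sketch below combines backward elimination on partial t-statistics with a VIF check for multicollinearity. The predictor names and the synthetic sample are hypothetical placeholders, and the routine is only a minimal illustration of the procedure, not the regression software actually used in the study.

```python
# Minimal sketch of backward elimination with t-tests and a VIF screen for
# multicollinearity. Sample values and the synthetic response are placeholders.
import numpy as np

def ols(X, y):
    """Coefficients, standard errors and t-statistics for y = X b."""
    XtX_inv = np.linalg.inv(X.T @ X)
    b = XtX_inv @ X.T @ y
    resid = y - X @ b
    dof = len(y) - X.shape[1]
    se = np.sqrt(np.diag((resid @ resid / dof) * XtX_inv))
    return b, se, b / se

def vif(X):
    """Variance inflation factor of each predictor column."""
    out = []
    for j in range(X.shape[1]):
        A = np.column_stack([np.ones(len(X)), np.delete(X, j, axis=1)])
        coef, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
        resid = X[:, j] - A @ coef
        r2 = 1.0 - resid @ resid / np.sum((X[:, j] - X[:, j].mean()) ** 2)
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

def backward_eliminate(names, X, y, t_crit=2.02):
    keep = list(range(X.shape[1]))
    while True:
        design = np.column_stack([np.ones(len(y)), X[:, keep]])
        _, _, t = ols(design, y)
        t_pred = np.abs(t[1:])                 # skip the intercept's t-statistic
        worst = int(np.argmin(t_pred))
        if t_pred[worst] >= t_crit or len(keep) == 1:
            return [names[i] for i in keep]
        keep.pop(worst)

rng = np.random.default_rng(1)
A = rng.uniform(50, 800, 41)      # catchment area (km^2), hypothetical
S = rng.uniform(2, 15, 41)        # average catchment slope (%), hypothetical
L_C = rng.uniform(5, 60, 41)      # centroid distance (km), hypothetical
T_Px = 0.02 * A + 0.5 * S + rng.normal(0.0, 2.0, 41)   # synthetic response
X = np.column_stack([A, S, L_C])
print("retained predictors:", backward_eliminate(["A", "S", "L_C"], X, T_Px))
print("VIF:", np.round(vif(X), 2))
```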
Very often in hydrology, as in this case study, predictor variables might be statistically significant but, due to a high degree of multi-collinearity, the regression coefficient estimates and P-values in the regression model are likely to be unreliable. For example, it is well known in flood hydrology that A, L C, and L H in combination are useful to describe differences in catchment shape, which subsequently has an impact on the catchment response time. However, these predictor variables are very often associated with a high degree of multi-collinearity, especially L C and L H. The inclusion of both L C and L H, subjected to alternative statistical transformations to result in orthogonal variables, should therefore be considered, especially in catchments characterised by heterogeneous upper and lower catchment slope distributions where large differences between S and S CH exist. Typically, the inclusion of L C ensures that runoff volumes which reach and concentrate at the catchment centroid much quicker (due to a steeper catchment slope in the upper reaches), in conjunction with the shorter L C distances to the catchment outlet, result in the required shorter response times. The opposite is also true; hence, the response of a catchment is most likely to be influenced by a combination of geomorphological catchment characteristics and not by a single catchment characteristic, irrespective of whether such characteristics are statistically independent or not. Furthermore, the combined use of A, L C, and L H is also evident from the hydrological literature applicable to the derivation of time parameter equations, e.g., T C equations (Sabol, 2008), T L equations (Snyder, 1938; Taylor and Schwarz, 1952; Pullen, 1969), and T P equations (Gericke and Smithers, 2016). It is acknowledged that some of these equations were developed many years ago, but they are still widely used in practice with great success.

In the interim, and in the absence of a fully-automated HAT, it is also recommended that the current methodology be gradually expanded to Primary Drainage Regions A and B before deploying it at a national scale. Approximately 110 gauged catchments covering the whole of the Gauteng, Mpumalanga, and Limpopo Provinces are situated in these regions. These three regions not only form a continuous geographical region, but the largest percentage of SA's population also resides here and is frequently subjected to extreme flooding.

The initial steps of the methodology (…, 2023) comprised: (a) Evaluation, preparation, and extraction of primary streamflow data for the period up to 2020/21 from the DWS streamflow database. (b) Identification and extraction of the annual maximum series (AMS) events, i.e., the annual flood peaks at each flow-gauging station within a hydrological year. For example, a continuous record length of 50 years contains 50 AMS events.

Figure 1. Location of the 51 gauged catchments in Primary Drainage Region X
Figure 2. Time parameter relationships in the HAT (after Gericke and Smithers, 2017)
Fig. 4. In comparing Eqs 2 and 3 at a catchment level, the r² value of 0.84 (based on the 2 284 flood hydrographs) not only confirms the relatively high degree of association, but also the usefulness of Eq. 3. Taking into consideration the influence that catchment area has on response times, the degree of association between these individual T Pxi values could decrease with an increase in catchment area. In the case of deterministic event-based DFE, the ultimate goal is to estimate the average catchment T Px by considering the sample mean of the individual responses based on Eqs 2 and 3, respectively. However, these individual responses can also be used to fit distributions for future ensemble-event approaches (Nathan and Ling, 2016).
Figure 5. Frequency distribution histogram of Q DRi values obtained from 2 284 hydrographs
Figure 8. Relative catchment response time variability (Eq. 6) at a catchment level

In Fig. 5, a frequency distribution histogram of the Q DRi values expressed as a percentage of the total direct runoff volume (Q Dxi) is shown. Taking into consideration that 2 284 (51.3%) of the individual flood hydrographs extracted were included in the final analyses, a few flood events could be characterised by either low (0.4%) or high (92.8%) Q DRi values. However, approximately 35% of the Q DRi values are within the 20-40% range. Only 15% of the Q DRi values are within the 30-40% range, highlighting some relevance of the conceptual curvilinear unit hydrograph theory (USDA NRCS, 2010), which assigns 37.5% of the direct runoff volume to the hydrograph rising limb.
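Following on from the rising-limb discussion above, this sketch computes the rising-limb volume fraction and a triangular time to peak for one hypothetical event. The specific triangular form T Pxi = 2 Q DRi/(3600 Q Pxi) and the use of Q DRi/Q Dxi as the variable shape parameter are assumptions inferred from the variable definitions given earlier, not the paper's exact Eqs 3 and 3a.

```python
# Sketch of a triangular rising-limb approximation, assuming the time to peak
# follows T_Pxi = 2*Q_DRi / (3600*Q_Pxi) and that the variable shape parameter
# is the rising-limb fraction Q_DRi/Q_Dxi (both assumptions for illustration).
import numpy as np

def rising_limb_stats(t_s, q_direct):
    """t_s: time stamps (s); q_direct: direct-runoff hydrograph (m^3/s)."""
    t_s, q = np.asarray(t_s, float), np.asarray(q_direct, float)
    i_pk = int(np.argmax(q))
    q_dxi = np.trapz(q, t_s)                            # total direct-runoff volume (m^3)
    q_dri = np.trapz(q[: i_pk + 1], t_s[: i_pk + 1])    # volume under the rising limb
    q_pxi = q[i_pk]
    t_pxi = 2.0 * q_dri / (3600.0 * q_pxi)              # triangular time to peak (h)
    return q_dxi, q_dri / q_dxi, t_pxi

# Hypothetical single-peaked event sampled hourly (peak of 120 m^3/s at hour 6)
t = np.arange(0, 36) * 3600.0
q = np.interp(t / 3600.0, [0, 6, 36], [0.0, 120.0, 0.0])
vol, k_frac, tp = rising_limb_stats(t, q)
print(f"Q_Dxi = {vol:.2e} m^3, rising-limb fraction = {k_frac:.2f}, T_Pxi = {tp:.1f} h")
```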
2024-02-06T18:11:32.197Z
2024-01-30T00:00:00.000
{ "year": 2024, "sha1": "70bc8b40ba5e122beef47aa4e922aadbf7ec08de", "oa_license": "CCBY", "oa_url": "https://watersa.net/article/download/17799/21012", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "0b6f9ce4c4972d68d8688739851ef2688eae95c1", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [] }
59493165
pes2o/s2orc
v3-fos-license
A low-mass protostar's disk-envelope interface: disk-shadowing evidence from ALMA DCO+ observations of VLA1623

Due to instrumental limitations and a lack of disk detections, the structure between the envelope and the rotationally supported disk has been poorly studied. This is now possible with ALMA through observations of CO isotopologues and tracers of freeze-out. Class 0 sources are ideal for such studies given their almost intact envelope and young disk. The structure of the disk-envelope interface of the prototypical Class 0 source VLA1623A, which has a confirmed Keplerian disk, is constrained from ALMA observations of DCO+ 3-2 and C18O 2-1. The physical structure of VLA1623 is obtained from the large-scale SED and continuum radiative transfer. An analytic model using a simple network coupled with radial density and temperature profiles is used as input for a 2D line radiative transfer calculation, for comparison with the ALMA Cycle 0 12m array and Cycle 2 ACA observations of VLA1623. DCO+ emission shows a clumpy structure bordering VLA1623A's Keplerian disk, suggesting a cold ring-like structure at the disk-envelope interface. The radial position of the observed DCO+ peak is reproduced in our model only if the region's temperature is between 11-16 K, lower than expected from models constrained by continuum and SED. Altering the density has little effect on the DCO+ position, but increased density is needed to reproduce the disk traced in C18O. The DCO+ emission around VLA1623A is the product of shadowing of the envelope by the disk. Disk-shadowing causes a drop in the gas temperature outside of the disk on >200 AU scales, encouraging deuterated molecule production. This indicates that the physical structure of the disk-envelope interface differs from that of the rest of the envelope, highlighting the drastic impact that the disk has on the envelope and its temperature structure. The results presented here show that DCO+ is an excellent cold-temperature tracer.

Introduction

Rotationally supported disks have been observed extensively among most protostellar and pre-main sequence evolutionary stages (Li et al. 2014). Recent studies have revealed the existence of such disks in the Class 0 deeply embedded phase (Choi et al. 2010; Tobin et al. 2013; Codella et al. 2014; Lee et al. 2014). In the early stages of star formation, the envelope is not yet dispersed and contains enough material to influence the evolution of the star-disk system. The boundary between the disk and the envelope, known as the disk-envelope interface, must then play a role in the formation process. This region, however, is largely unexplored owing to limitations in resolution and sensitivity, as well as the lack of observed rotationally supported disks, until now. Class 0 sources with confirmed rotationally supported disks grant us the opportunity to study the chemical and physical structure of the disk-envelope interface region, which is crucial for the next step of understanding this region's role in star formation (Sakai et al. 2014). Whilst CO isotopologues are good tracers of rotationally supported disks, they freeze out onto dust grains below the evaporation temperature T ev, usually 20-30 K (Jørgensen et al. 2005). These low temperatures are reached at the edge of the embedded disk (Visser et al. 2009).
As a result, molecular species whose abundance is enhanced at low temperatures are needed to trace the disk-envelope interface. DCO + emission is known to be optically thin, and its abundance is enhanced over a narrow range of temperatures below the CO freeze-out temperature, tracing the so-called CO snowline (e.g., Wootten 1987, Roberts et al. 2003, Mathews et al. 2013). Thus, DCO + is a good candidate molecule to trace the chemical, physical and kinematic structure of the disk-envelope interface.

VLA1623-2417 (hereafter VLA1623) is a triple non-coeval protostellar system located in ρ Ophiuchus at d ∼ 120 pc. VLA1623A is the prototypical Class 0 source and emits predominantly in the (sub)millimeter range (André et al. 1993). Modeling of ALMA Cycle 0 C 18 O observations found that VLA1623A supports a Keplerian disk with a radius of at least 150 AU and a central mass M* of 0.2 M⊙. The ALMA Cycle 0 observations also detected DCO + molecular line emission toward VLA1623A bordering the C 18 O disk. This grants us the opportunity to probe the disk-envelope interface of VLA1623A. In this paper we present the results of our ALMA observations and simple chemical modeling, aiming to understand the physical structure of the boundary between the envelope and the disk in a Class 0 protostar.

Here we present the results and analysis of the DCO + (3-2) (rest frequency: 216.11258 GHz) observations. The DCO + data were calibrated jointly with the continuum, C 18 O (2-1), and 12 CO (2-1) data. Further calibration details and results from the other observed lines are presented elsewhere. The spectral set-up provided a velocity resolution of 0.0847 km s −1. The synthesized beam size for the DCO + images is 0.85″ × 0.65″ with P.A. = 96°. The rms noise of the channel map is 12 mJy beam −1 for a spectral resolution of 0.0847 km s −1, giving a peak S/N = 7. Our ALMA observations provide a maximum scale of 4″ and a field of view (FOV) of 24″, with emission between 4″ and 24″ largely filtered out by the interferometer. The FOV, together with the beam size of 0.85″, constrains the scales on which any analysis of the data can be done.

In addition to the ALMA Cycle 0 observations, we present the DCO + (3-2) results from our ALMA Cycle 2 Atacama Compact Array (ACA) observations carried out on 7 August 2014 (pointing coordinates α = 16:26:26.390, δ = -24:24:30.688). Total observing time was 2 hours. Data calibration was done with J1517-243 and Mars for flux, J1625-2527 and Mars for gain, and J1733-1304 for bandpass. The rms noise of the DCO + channel map is 73 mJy beam −1 for a spectral resolution of 0.021 km s −1, with a synthesized beam of 8.6″ × 4.2″ with P.A. = -76°. These observations, with a mosaicked area of 6, provide the DCO + emission on 4″ to 18″ scales.

Fig. 1 (partial caption). Contours are in steps of 3, 5, 10, 15, 20, 40, 60 and 78σ with σ = 1 mJy beam −1 for the 1.3 mm continuum, and -5, -3, 3, 5, 10, 15, 20 and 25σ with σ = 13 mJy beam −1 km s −1 for C 18 O. Velocity map (moment 1, color scale) for (c) DCO + and (d) C 18 O overlaid with the corresponding integrated intensity map (contours). Contours are in steps of -10, -7, -4, 4, 7, 10 and 11σ with σ = 3 mJy beam −1 km s −1 for DCO +, and the same as in (b) for C 18 O.

ALMA 12-m Array Cycle 0

The detected DCO + emission (Fig. 1) spans a velocity range of 2.8 to 5.2 km s −1 and shows a clumpy structure with two main clumps to the north and south of VLA1623A.
The southern clump emission is stronger and is offset by about 2.5 (300 AU) from VLA1623A and in addition it borders the red-shifted emission of the disk traced in C 18 O. The northern clump slightly overlaps the blue-shifted emission of the C 18 O disk and borders VLA1623B's continuum emission. A couple of clumps with emission between 3 and 10σ are observed near the continuum peaks of VLA1623A & B. No significant emission was detected toward VLA1623W, separated by 10 to the west from VLA1623A, possibly either because of the lack of DCO + or because the emission is too weak and filtered out. The C 18 O line emission tracing the disk was found to be influenced by the outer envelope, with the blue-shifted emission being affected more than the red-shifted emission . We expect the same to hold true for the DCO + emission, thus potentially explaining why the northern blue-shifted clump emission is weaker than that for the southern red-shifted clump ( Fig. 1 a & b). The velocity weighted (moment 1) map of the DCO + emission (Fig. 1c) shows that its velocity gradient is similar to that of C 18 O, with the northern clump being blue-shifted and the southern clump being red-shifted, but with a smaller velocity range. The Position-Velocity (PV) diagrams of DCO + and C 18 O emission ( Fig. 3) are constructed and over-plotted with the best fitting thin disk models, Keplerian and Infall plus Keplerian out to 150 AU, obtained by . It appears as though both line emissions are well described by pure Keplerian rotation out to 300 AU. However the DCO + emission is too weak, peaking at 7σ in the channel map, to carry out further kinematical analysis. The similar velocity signatures of the emission from both species may be due to the disk edge dragging along material from the envelope. The above simple analysis of the observations indicates that the detected DCO + emission may be tracing a ring which borders the C 18 O disk. In addition, DCO + 's velocity gradient and PV diagram suggest that the ring is undergoing Keplerian rotation, this is even more plausible given that the C 18 O (2-1) emission traces a rotationally supported disk with a Keplerian velocity profile ). ALMA ACA Cycle 2 The DCO + (3-2) emission traced with the ACA confirms that the emission is concentrated around VLA1623 (Fig. 2), it does not peak on VLA1623A and instead encircles it, forming a shell-like structure. The ACA detects only weak emission at the position of VLA1623W. The DCO + emission mapped with the ACA shows the same velocity gradient (Fig. 2) as on the small scales, indicating that the kinematic structure is the same throughout. Analysis DCO + is detected at the disk-envelope interface of VLA1623A. Given that DCO + is optically thin and a good probe of temperature and CO freeze-out regions, we model the observed emission aiming to probe the physical and chemical structure of the diskenvelope interface. In this section we describe the model used and the results obtained from the model. DCO + chemical network and model We model the observed DCO + emission using a simple chemical network. Such a network, while it may not account for every possible reaction, is preferred as it gives insight into how the physical parameters of the temperature and density affect the observed emission. The network used is a steady-state analytic model that only takes the basic reactions that lead to DCO + production and destruction into account (Table 1, Fig. 4). 
Since DCO + is formed by the reaction of H 2 D + and CO, the rate-determining reactions in our network are

H 3 + + HD → H 2 D + + H 2,   (1)

with the back reaction

H 2 D + + H 2 → H 3 + + HD,   (2)

where the activation energy due to the difference in zero point energy is ∆E ∼ 220 K. For reaction 5 in our network (Table 1), for simplicity, we adopt the total rate coefficient summing over all three pathways of the reaction. The rate coefficients for a two-body reaction are given by k = α (T/300)^β exp(−γ/T), where T is the gas temperature, while the cosmic-ray ionization rate ζ sets the rate coefficient for cosmic-ray ionization (reaction 1 from Table 1).

Our model takes as input a source density and temperature profile as a function of radius, together with the parameters needed to calculate the rate coefficients. The CO evaporation temperature T ev, desorption density n de and CO abundance X CO are free parameters. The CO abundance is assumed and not calculated. The CO evaporation temperature dictates when CO is in the gas phase (T > T ev) or frozen onto the dust grains (T < T ev). In a similar manner, the desorption density sets the boundary below which the freeze-out timescales are too long (n < n de) compared to the lifetime of the core (Jørgensen et al. 2005). We assume the density profile is equal to the H 2 density n H2 and an HD abundance X HD = 10 −5 with respect to the total hydrogen nuclei density n H = 2n(H 2). The model returns the calculated concentrations as a function of radius. The results can then be input into excitation radiative transfer programs such as RATRAN (Hogerheijde & van der Tak 2000) for further analysis.

Given that CO is one of the parent molecules of DCO +, we study the effect of the CO abundance through the use of different abundance profiles, following the models detailed in Jørgensen et al. (2005) and Yıldız et al. (2010). The possible CO profiles are Constant and Drop abundance (Fig. 5). The former represents a fixed CO abundance throughout the core, whereas the latter, constrained by T ev and n de, is used to account for CO freeze-out in the chemical network. For the constant profile, the abundance is denoted by X 0. For the drop profile, the abundances in the inner, drop and outer regions are denoted by X in, X D and X 0, respectively. Previous studies using multi-line single-dish C 18 O observations have found that the abundance X in is lower than X 0 for a number of sources (Alonso-Albi et al. 2010; Yıldız et al. 2010, 2013). One explanation is that some fraction of the CO ice is transformed into more complex and less volatile carbonaceous species in the cold phase. We thus take this effect into account in our model. A lower abundance of CO, due to freeze-out, allows an increase in the abundance of H 2 D + (Mathews et al. 2013).
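As a concrete illustration of the rate-coefficient form quoted above and of the fractionation it drives, the sketch below evaluates k(T) for reactions (1) and (2) and the H 2 D + /H 3 + ratio they would imply if no other reactions intervened. The α, β, γ values are generic placeholders rather than the coefficients adopted in Table 1, and the full network naturally adds further formation and destruction channels.

```python
# Illustrative evaluation of the two-body rate-coefficient form
# k = alpha*(T/300)^beta*exp(-gamma/T) for the H3+ + HD <-> H2D+ + H2
# bottleneck. The alpha values are placeholders, not the adopted Table 1 values.
import numpy as np

def k_two_body(T, alpha, beta, gamma):
    """Two-body rate coefficient, k = alpha * (T/300)^beta * exp(-gamma/T)."""
    return alpha * (T / 300.0) ** beta * np.exp(-gamma / T)

def h2dp_over_h3p(T, delta_e=220.0):
    """H2D+/H3+ ratio if only reactions (1) and (2) operated (illustrative)."""
    k_fwd = k_two_body(T, 3.5e-10, 0.0, 0.0)        # H3+ + HD -> H2D+ + H2
    k_back = k_two_body(T, 3.5e-10, 0.0, delta_e)   # H2D+ + H2 -> H3+ + HD
    x_hd = 2.0e-5    # n(HD)/n(H2), from X_HD = 1e-5 with respect to n_H = 2 n(H2)
    return (k_fwd / k_back) * x_hd

for T in (10.0, 15.0, 20.0, 30.0):
    print(f"T = {T:4.1f} K  ->  H2D+/H3+ ~ {h2dp_over_h3p(T):.2e}")
```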
The effect of ortho-and para-H 2 (o-H 2 and p-H 2 ) is studied in our model through the inclusion of the ortho-topara ratio (o/p) and the distinction of o-H 2 and p-H 2 in the back reaction of the bottleneck (Eq. 2) in the chemical network. o-H 2 and p-H 2 are only added in the back reaction since it is here where the distinction has a significant effect (Table 1). We set a lower limit on o/p of 10 −3 at low temperatures, as constrained by models and observations (Flower et al. 2006;Faure et al. 2013). The o-H 2 and p-H 2 reactions and their parameters for the rate coefficients are taken from Walmsley et al. (2004). When o-H 2 and p-H 2 are included in the network, a thermal (LTE), upper-or lower-limit o/p ratio can be selected. In LTE, the ortho-to-para ratio is given by where T is the gas temperature. Selecting the upper-limit ratio produces 3 times more o-H 2 than p-H 2 . Since the back reaction with o-H 2 has a lower activation barrier γ than with p-H 2 , the o/p upper-limit implies that H 2 D + is being destroyed faster than generated, leading to a decreased production of DCO + (Fig. A.3), since H 2 D + is a parent molecule of DCO + (Fig. 4). The lowerlimit ratio, on the other hand, implies more p-H 2 which has a higher activation barrier γ for the back reaction, thus H 2 D + is generated faster than it is destroyed in turn increasing the DCO + production (Fig. A.3). As a starting point for our analysis of VLA1623A's DCO + emission, we use the density and temperature profile of VLA1623 obtained by Jørgensen et al. (2002) where 30 K is at Fig. 5. CO abundance profiles used in the model. The vertical dashed lines show the limits for the Drop abundance profile, evaporation temperature T ev and desorption density n de . X 0 denotes the abundance in the constant profile. X in , X D and X 0 are the inner, drop and outer region abundances for the drop profile. ∼1.5" assuming a distance of 120 pc. The profile was obtained by fitting single dish JCMT continuum images and the spectral energy distribution (SED) with continuum radiative transfer modeling using DUSTY resulting in a power law density profile of the form n ∝ r −1.4 . In the single dish continuum observations VLA1623A and B are unresolved and the density profile extends well beyond the ALMA field of view. It is expected, however, that VLA1623A dominates at 870 µm and that VLA1623B does not contribute much to the 450 µm continuum . Thus, the density and temperature profile obtained by Jørgensen et al. (2002) is representative of VLA1623A since VLA1623B is not significantly contributing to the continuum emission or the SED used to constrain the profiles. The results of the analytic chemical network are run through the molecular excitation and line radiative transfer program RA-TRAN to generate line emission maps. Since the structure we are trying to reproduce is ring-like, we calculate the level popu- Notes. See Fig. 5 for definition of X in , X D and X 0 lations with the 1-D version of RATRAN, and then run the level populations with the 2-D ray tracing to form the ring structure. The produced spectral image cubes are convolved with a Gaussian beam with the dimensions of the synthesized beam, continuum subtracted and then an intensity integrated map is generated. Radial profiles are extracted from the resulting images and compared with the observed profiles, which are integrated over the extent of the detected emission. 
We find no significant difference in the peak position between the method used here and running the models through the ALMA simulator with the actual ALMA configuration or our Cycle 0 observations. However, the ALMA simulator does show that the emission ≥4 is indeed filtered out by the Cycle 0 observations, producing somewhat narrower radial profiles. Molecular data for DCO + and C 18 O are obtained from LAMDA C 18 O: Yang et al. 2010; DCO + extrapolated from Flower 1999). As DCO + is optically thin, the comparison mainly focuses on the position of the peak and the integrated intensity profile with respect to radius, the velocity profile has no effect on the resulting model integrated intensity. Thus, we assume a free-fall velocity profile with a central mass of 0.2 M ) and T dust = T gas . The RATRAN output maps are convolved with the observed beam (0.85 × 0.65 , P.A. 96.24 • ) and compared with the observed emission through radial cuts. In addition to comparing DCO + , we also compare C 18 O in order to further constrain the physical structure of the disk-envelope interface. However, for C 18 O we assume a pure Keplerian velocity profile, in agreement with the rotationally supported disk it traces. For both emission lines, we use an inclination of 55 • (90 • = face-on), in accordance with the results obtained by . Modeling results In this section the results of altering the CO abundance, density and temperature profiles in the chemical model are discussed. Figure 6 presents the models, with the light blue region showing the location of the observed DCO + peak emission with respect to VLA1623A's position. Model naming follows the scheme xYz where x is the test number from Table 3, Y is either C for constant or D for drop abundance, and z is the case from Table 2. Chemical properties To probe the chemical conditions, we establish one case for the constant CO abundance with X CO = 10 −4 and six cases for the drop CO abundance with X D ranging from 5×10 −6 to 10 −8 and varying T ev and n de . Parameter ranges were selected based on trends found in previous work (Jørgensen et al. 2005;Yıldız et al. 2010) and adapted to the current observations. Parameters for each case are listed in Table 2. As a zeroth order test, we compare the abundance profiles to the radial position of the observed DCO + peak in the ALMA 12-m array data in order to investigate which chemical conditions best approximate our observations. This comparison neglects the fact that the peak emission radius also depends on the DCO + excitation, which will be taken in Section 4.3 into account. Figure 6 presents the model abundance profiles as functions of radius for different assumed CO abundances and physical structures. In general, the concentrations of CO and HCO + drop with radius whereas those of H 2 D + and DCO + increase with radius due to the lower temperature farther away from the source. Constant CO abundance (Fig. 6, left column) does not appear to produce a DCO + peak within the expected region. Since the radial abundance profile of DCO + is not altered by changing X CO , we focus instead on the drop CO abundance profile. While the drop CO abundance profile cannot alone alter the position of the peak, it produces several trends interesting to note. A kink in the abundance of DCO + forms at T ev for all the cases examined, though in most cases it is relatively small. Altering the abundance in the drop X D changes the shape of the peak but does not significantly alter its position nor its abundance (Fig. A.2). 
Decreasing the X D below 10 −7 , however, causes the abundance of the peak to drop by several orders of magnitude and become similar in magnitude to the kink at T ev , producing two peaks which are not observed (Fig, A.2, case f). Varying T ev and n de , in order to constrain the width of the drop, generates a very narrow peak which increases and drops quickly. To test the effect of the o/p ratio on our chemical network, we vary the ratio from the thermalized value to the upper-and lowerlimit for the drop CO abundance profile. In all cases, we find that setting o/p to the lower-limit does not change the position of the DCO + emission since the peak and bulk of the DCO + concentration for the thermalized o/p is already located in the o/p=10 −3 range (Fig. A.3). In fact, the peak of the modeled DCO + emission starts to decrease as the o/p ratio increases. The lower-limit o/p only alters the inner regions (<100 AU), which the data do not constrain. The upper-limit reduces the overall production of DCO + and pushes the peak outward to larger radii. In conclusion, altering the CO abundance and o/p ratio does not produce the observed results. Physical properties Altering the chemistry of the model does not reproduce the observed DCO + peak position, regardless of the case used. Thus we alter the density and temperature profile in order to find the conditions necessary to reproduce the observed emission. We set up 4 tests, including the original profile from Jørgensen et al. (2002) referred to as Test 1, by increasing or decreasing by a constant factor either the density or temperature profile. Parameters for each test are listed in Table 3. Increasing or decreasing the density profile by one order of magnitude, Tests 2 and 3, does not generate a significant radial shift of the DCO + peak compared with the original profile (Fig. 6, middle row). Altering the density, however, affects the concentration of the peak upwards or downwards with an increase or decrease of the density, respectively. The factor of 1.5 decrease in the temperature profile with cases b and d with X D ≈ 10 −6 − 10 −7 shifts the modeled DCO + peak effortlessly to the observed position (Fig. 6). Decreasing the temperature profile between a factor of 1 to 2 moves the modeled DCO + peak inward (Fig.A.4). For the constant CO abundance, any alteration of the temperature either over-estimates the modeled DCO + abundance or produces a peak too far inward. For the drop CO abundance, a temperature drop less than a factor of 1.5 does not move the peak inward enough, whereas a larger factor moves it too far inward. Cases b and d with X D ≈ 10 −6 − 10 −7 shifts the modeled DCO + peak effortlessly to the observed position ( Fig. 6 bottom row). Changing the CO abundance in the drop (Table 2) does not alter the location of the peak, as expected from the results presented in Sec. 4.2.1. Examining the temperature profile shows that the DCO + emission peaks at a range of 11-16 K. This is lower than the expected 20 K at ∼3 inferred from radiative transfer modeling of the observed continuum data (Jørgensen et al. 2002). We limit the decrease in the temperature profile to not fall below 8 K (Zucconi et al. 2001), however this limit generates no significant change in the outcome of our model since the limit falls near and beyond the edge of the FOV of our observations (Fig. 6, bottom right panel). 
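To make the profile-scaling experiments above concrete, the sketch below scales an assumed power-law temperature profile by constant factors and reports where the 11-16 K window, in which the modeled DCO + abundance peaks, would fall. The T(r) form and normalisation are an illustrative stand-in for the Jørgensen et al. (2002) profile, not the actual model input.

```python
# Scan constant temperature-scaling factors and report where the 11-16 K window
# falls for an assumed power-law envelope temperature profile. Illustrative only.
import numpy as np

r_au = np.logspace(1.5, 3.2, 400)                  # ~30 to ~1600 AU
T0 = 40.0 * (r_au / 100.0) ** -0.4                 # assumed envelope temperature law (K)

for factor in (1.0, 1.5, 2.0):
    T = T0 / factor
    window = r_au[(T >= 11.0) & (T <= 16.0)]
    if window.size:
        print(f"T/{factor:<3}: 11-16 K between ~{window.min():.0f} and {window.max():.0f} AU "
              f"(~{window.min()/120:.1f}-{window.max()/120:.1f} arcsec at 120 pc)")
```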
From the results obtained from altering the chemical and physical conditions in our model, we can deduce that the DCO + emission is located at its observed position due to physical conditions, namely a lowered temperature, and not to special chemical conditions in the disk-envelope interface. Comparison with full chemical network To explore the limitations of the simple network, we compare the results to a full time-dependent deuterated chemical network, based on the RATE06 version of the UMIST Database for Astrochemistry (Woodall et al. 2007) extended with deuterium fractionation reactions (McElroy et al. 2013). Models with only gas phase chemistry and including gas-grain balance (freezeout, thermal desorption, and cosmic-ray-induced photodesorption) are run. The models start with the same initial abundandances (H 2 :HD:CO = 1:3×10 −5 :1×10 −4 ) and the same physical structure as the simple network. We find that the large network confirms both the general trend found in the simple network and also reproduces the peak position and abundance of DCO + at the observed position for abundances extracted at early times (∼10 5 yr). Inspection of the full network shows that in addition to the reactions listed in Table 1, the HCO + + D − − → H + DCO + reaction can become important if D/HD is large. The back reaction has a reaction barrier of ∼800 K, thus leading to fractionation of DCO + (Adams & Smith 1985). In practice, this reaction is only relevant for much lower densities than encountered in the VLA1623A envelope model. Similarly, dissociative recombination of H + 3 with electrons cannot be neglected at low densities. At very early times (< 10 3 yr, depending on density), reactions with H + and D + become more significant than those with H + 3 and H 2 D + . Finally, in full gas-grain models the bulk of the CO is frozen out in the cold outer envelope resulting in very low DCO + concentrations, unless an efficient non-thermal CO desorption process is included. Emission profiles Aiming to better constrain the physical conditions of the diskenvelope interface we compare the models and observations of both DCO + and C 18 O emission computed with RATRAN. For the comparison, the observed southern red-shifted clump in the ALMA 12-m array data for both lines is selected. The reason is two-fold: i) the red-shifted emission of the C 18 O disk suffers less absorption from the outer envelope than the blue-shifted lobe as noted above ; ii) the southern red-shifted clump traced by DCO + is the strongest and most prominent. Similarly, for the model of C 18 O only one lobe of the disk is selected for comparison. Consistent with the abundance plots, the constant CO abundance profiles do not fit the observed DCO + emission in any of the four tests, producing very broad peaks between 3 and 5 away from the source for tests 1, 2 and 3; and a peak at 1 for test 4. In a similar manner, for the drop CO abundance scenario, tests 1, 2 and 3 all produce peaks beyond 3 with varying broadness, and thus do not approximate the observed DCO + emission. This occurs for all examined cases (Fig. A.5). Models 4Db and 4Dd approximate the observed DCO + emission well (Fig. 7, orange and red lines, respectively). These results are in agreement with the conclusions drawn from the concentrations in the analytics chemical network model (See Sec. 4.2.2). Comparing the results of the model with the observed C 18 O emission (Fig. 
8), we find that in all four tests a constant CO abundance over-predicts the extent of C 18 O, with the emission well above our 3σ level extending out to 3 or further, whereas we observe it only out to less than 2 from the central source. When the drop CO abundance is introduced, the source profile of VLA1623 from Jørgensen et al. (2002) overestimates the C 18 O emission in all cases, and generates a second peak at about 3 at half the intensity of the central peak in almost all cases. The width of the drop does not have any significant effect on the modeled emission. The decreased density and decreased temperature profile tests do not fare better, as they again largely overestimate the C 18 O beyond 2 and even produce secondary peaks. Case a (X D = 10 −6 ) for the original profile and decreased density seem to show some promise, however the modeled emission is around 3σ at 1.5 whereas observations at that radius are closer to 1σ. Surprisingly, we find that the increased density profile produces the best results with case d (X D = 10 −7 ), the same as that for DCO + (Fig. 8, red line). Hence we find that the DCO + and C 18 O observed emission are not reproduced by the same physical structure (Figs. 7 and 8). DCO + is well modeled by a temperature profile a factor of 1.5 lower than that needed to model C 18 O, whereas C 18 O is reproduced by a density profile an order of magnitude higher than required for DCO + . The abundance in the drop X D necessary to reproduce the emission is the same for both lines, X D =10 −7 . Large scale emission vs. model We also compare the radial integrated intensity profile of the DCO + ACA observations with the model, both along the plane of the disk and perpendicular to the disk in the outflow direction, since these data probe larger spatial scales than the 12-m array data. For this purpose, the models were run through RATRAN's ray tracer sky again but instead using the cell size of the ACA observations (0.84 ) and then were convolved to match the resolution of the ACA observations. The observed radial profiles are extracted from the red-shifted emission. The profile along the plane of the disk (peak at ∼4 ) is well described by the same model as in the small scale, the drop CO abundance profile for case d, with the temperature decreased by a factor of 1.5 (4Dd, Fig. 9). The temperature decreased by a factor <1.5 does not provide a better estimate. The constant CO abundance again overestimates the amount of DCO + produced in the outer regions. On the other hand, the integrated intensity profile along the outflow direction (peak at distances ≈5 ) is very well described by the model 1C (Fig. 9). This indicates that in regions that are not shadowed by the disk, the production of DCO + is as expected from dust continuum models. Discussion The results of our modeling show that the position of the DCO + emission (∼2.5 = ∼300 AU) is closer to VLA1623A than expected based on the spherically symmetric dust continuum radiative transfer modeling from DUSTY (∼4 = ∼480 AU). The results further show that the observed emission peaks at a dust temperature range of 11-16 K. Comparison of the radiative trans- Fig. 2. Radial profile of the DCO + Cycle 0 observations is shown in light gray. Observed radial profiles are overlaid with models convolved to the resolution of the ACA observations. Gray solid and dashed lines show the 1σ and 3σ levels, respectively, of the ACA emission. 
These results evidence that the DCO + emission along the disk plane is best approximated by model 4Dd, whereas along the outflow is best described by model 1C. fer results from RATRAN to the C 18 O emission from the disk suggest that the disk has a density higher by one order of magnitude than the emitting structure traced by DCO + . A possible explanation would be that VLA1623A is a Very Low Luminosity Object (VeLLO, Young et al. 2004, Dunham et al. 2008) undergoing episodic accretion and just coming out of the quiescent phase. This is highly unlikely, however, since Johnstone et al. (2013) find that the timescale for dust and gas to heat up after an accretion burst is short, on the order of hours to weeks. VLA1623A has been reported of having a bolometric luminosity between 0.4-2 L from early observations to more recent work (André et al. 1993;Froebrich 2005;Chen et al. 2013), in contrast to the expected luminosity of 10 −2 L for VeLLOs. Hence VLA1623A has not recently come out of the quiescent accretion phase and the location of the observed DCO + emission can not be attributed to the relic of a previous phase of decreased accretion. A more plausible explanation to the position and temperature of the region containing the DCO + emission is disk shielding. DCO + is observed to border VLA1623A's C 18 O disk in our present data. This could shield the outer parts of the disk from heating by the central protostar, thus moving in the freeze-out zone of CO, enhancing the production of DCO + closer to the protostar. This scenario is further supported by the result of our simple chemical model that the C 18 O disk is more dense than the region of the DCO + emission and than expected from Jørgensen et al. (2002)'s envelope density profile of VLA1623. We test the possibility of the disk-shielding scenario with radiative 2D disk plus envelope models using radiative transfer methods as in Harsono et al. (2013) (Fig. 10). For the models, we assume a central protostellar source of 1 L Chen et al. 2013), an envelope with a mass of 1 M (André et al. 1993;Froebrich 2005), with the addition of a disk mass of 0.02 M and radius of 180 AU . The outflow cavity is assumed to have an opening angle of 30 • . A thin disk with a scale height of 0.1 AU is adopted. Disk flaring is not included since we have no information on the flaring of VLA1623A's disk and the thin disk model approximates the C 18 O kinematics well. Two values for the centrifugal radius R c , 200 AU and 50 AU, are chosen. Within the centrifugal radius the velocity structure of the disk is Keplerian. We find that even for such a thin disk the temperature along the plane of the disk is lower than for the envelope at the same radius (Fig. 10, top and middle panels), thus moving the CO freeze-out zone closer to the protostar along the edge of the disk than in other regions of the core. For either centrifugal radius the temperature beyond 200 AU drops well below 20 K. Finally, we test whether the presence of a disk makes a difference in the location of the CO freeze out region. Figure 10 bottom panel, shows the model with the same conditions as in Figure 10 middle panel, but without the disk. This shows that omitting the disk causes the CO freeze out region to move outward to ∼400 AU, 150 AU further out than the models including a disk. The temperature conditions required to reproduce the observed DCO + emission with our simple chemical model are therefore in agreement with the results obtained from the 2D radiative transfer disk plus envelope model. 
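A back-of-the-envelope version of the disk/no-disk comparison above can be made by taking a midplane temperature profile, applying the ~1.5 shadowing reduction found earlier, and locating where the temperature crosses a 20 K CO freeze-out threshold. The power-law T(r) and its normalisation below are assumptions for illustration only; the quantitative result in the text comes from the full 2D radiative transfer models.

```python
# Locate the CO freeze-out radius for an assumed unshadowed midplane temperature
# profile and for the same profile reduced by the ~1.5 disk-shadowing factor.
import numpy as np

r_au = np.linspace(50, 800, 2000)
T_envelope = 40.0 * (r_au / 100.0) ** -0.4          # assumed unshadowed midplane T (K)
T_shadowed = T_envelope / 1.5                        # crude disk-shadowing reduction

def freezeout_radius(T, t_ev=20.0):
    below = np.where(T < t_ev)[0]
    return r_au[below[0]] if below.size else np.nan

print(f"CO freeze-out radius, no disk:  ~{freezeout_radius(T_envelope):.0f} AU")
print(f"CO freeze-out radius, shadowed: ~{freezeout_radius(T_shadowed):.0f} AU")
```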
These results strongly support the scenario in which a disk can shield the regions at its edge from heating by the protostar. This shielding causes the CO freeze-out region to move inward toward the edge of the disk, bringing molecules whose abundances are enhanced at low temperatures, such as DCO +, closer to the central protostar along the plane of the disk. However, the rest of the envelope is largely unaffected by the disk, and a shell of molecules such as DCO + forms at a radius predicted by spherically symmetric models. The ACA observations of DCO + toward VLA1623A provide further evidence for this effect. These results show that the DCO + emission peaks closer to the source along the disk plane than along the outflow direction.

Conclusions

This work presents the results and analysis of ALMA Cycle 0 Early Science Band 6 observations of DCO + (3-2) in the extended configuration toward VLA1623A, probing subarcsecond scales, as well as Cycle 2 ACA data probing larger scales. A simple chemical network was set up, taking ortho- and para-H 2 into account in the rate-determining reactions. The density and temperature profile of VLA1623A was obtained from fitting the SED and dust continuum data with radiative transfer modeling using DUSTY (Jørgensen et al. 2002). Our simple chemical model coupled with VLA1623A's physical structure served as input for line radiative transfer calculations with RATRAN with 2-D ray-tracing. The CO abundance, density and temperature profiles were altered to study the effect of each parameter on the location of the observed DCO + peak. The results of our observations and analysis can thus be summarized:

1. DCO + is observed to border the C 18 O disk around VLA1623A. Both emission lines show similar velocity gradients (blue-shifted to the north and red-shifted to the south), with DCO + emission between 2.8 and 5.2 km s −1. The PV diagrams of C 18 O and DCO + suggest that both emission lines are well described by Keplerian rotation. However, the DCO + emission is weak, thus no further kinematical analysis was carried out, and whether the disk extends out to 300 AU cannot be confirmed.

2. Using a simple chemical network with the inclusion of ortho- and para-H 2, as well as non-LTE line radiative transfer, we model the observed DCO + emission. We find that using a constant CO abundance predicts a DCO + peak at around 4″, twice as far out as observed, irrespective of the adopted o/p ratio and density profile. Our model results show that a drop CO abundance with the temperature profile decreased by a factor of 1.5 generates a peak at the same position as the observed emission. Thus, the observed DCO + peak is closer to VLA1623A, and at a lower temperature (11-16 K), than expected from a spherically symmetric physical structure constrained by continuum data and the source SED.

3. The observed DCO + and C 18 O emission are not described by the same physical structure. In our model, the C 18 O emission is well reproduced by a drop CO abundance with the density profile increased by one order of magnitude at ≤1″ (≤120 AU) radii, in comparison with the density profile needed to reproduce the observed DCO + emission. A constant CO abundance and a decreased temperature profile over-predict the extent of the C 18 O emission.

4. Disk-shielding is the best possible explanation for the observed DCO + emission toward VLA1623A.
Disk-shielding causes the inward shift of the CO freeze-out region along the plane of the disk, lowering the dust temperature to <20 K, gen-erating a ring of molecules whose abundance is enhanced by low temperatures such as DCO + . The rest of the envelope is largely unaffected by the disk, thus the CO freeze-out shell predicted by spherically symmetric radiative models is expected to be located further out. This prediction is confirmed by our recent ALMA Cycle 2 ACA observations, which show that the DCO + emission along the outflow axis lies at larger radii, ∼5 , consistent with constant CO abundance models without any alteration to Jørgensen et al. (2002)'s temperature and density profile of VLA1623. 5. The disk-envelope interface in VLA1623A is shown to have a broken transition in density and temperature, with the impact generated by the presence of a disk being observable from small to large scales. Our observations and modeling results for VLA1623A show the disk-envelope interface to have different physical conditions than other parts of the envelope. Our results also highlight the drastic impact that the disk has on the temperature structure at ∼100 AU along the plane of the disk, with the effect being observable even at large scales. We suggest performing further observations to determine whether the unequal physical and chemical conditions observed in the disk-envelope interface of VLA1623A is a common phenomenon in protostellar systems with rotationally supported disks or a special condition of the present source.
2015-05-28T17:18:42.000Z
2015-05-28T00:00:00.000
{ "year": 2015, "sha1": "8c93cb77f74288e30e9cc9efa519baec1b0fce47", "oa_license": null, "oa_url": "https://www.aanda.org/articles/aa/pdf/2015/07/aa25118-14.pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "8c93cb77f74288e30e9cc9efa519baec1b0fce47", "s2fieldsofstudy": [ "Physics", "Geology" ], "extfieldsofstudy": [ "Physics" ] }
118383524
pes2o/s2orc
v3-fos-license
Messaging to Extra-Terrestrial Intelligence Throughout the entire history of terrestrial civilization, only four projects involving transmitting of interstellar radio messages (IRMs) have yet been fully developed and realized. Nevertheless, we should understand a simple thing -- if all civilizations in the Universe are only recipients, and not message-sending civilizations, than no SETI searches make any sense. We present the theory and methodology of composing and transmitting of future IRMs. Type of polarization? 5. Power of radiation received? 6. How to demodulate the detected signals? 7. How to decode the received information? This list can be adapted to aid in decisions regarding transmissions from Earth of our own radio messages to possible extraterrestrial civilizations. Transmission of interstellar radio messages (IRMs) is essentially a new kind of human activity, involving radiation of coherent signals from the Earth into space, addressed to other reasoning beings. Humans have always peered at the sky, in the hope of finding there intelligences beyond our own. METI thus implies a special and purposeful transmission. We can thus replace the terms connected with a search for radio signals, with terms associated with the transmission of same. In a more general treatment, a transformation from SETI to METI can occur as a transition from the science of merely separating those messages that already exist in the nature from artificial ones -namely Their reasonable radio signals -to the art of creating messages that do not exist in nature -namely our deliberate radio signals directed toward Them. It would seem that there are two more new measurements in METI-space than there were in the case of SETI. Thus, the METI search space is a 9-dimensional one. We are compelled to consider such questions as "Why is it necessary to transmit and what we shall gain from doing so?" and "Is it dangerous to transmit messages to ETI?". In view of these two additional questions, we suggest that the proposed METI Institute should embrace the following 9-dimensional space of questions for consideration: We shall try to give answers to all of the above questions. It is important to note that any such answers will be not final, but only preliminary in nature. As we have already emphasized, METI is a new, emerging human activity, and nothing that it implies is yet settled. Therefore, readers have a rare opportunity to join in discussions leading to a new scientific endeavor. 1) Where to send interstellar radio messages? It has become much easier to answer this question since 1995, when an outstanding discovery was made. Swiss astronomer Michael Mayor and graduate student Didier Queloz announced in that year the detection of the first planet orbiting another Sun-like star, 51 Pegasus. Subsequent discoveries of well over 100 other exoplanets have made it clear that planets are ordinary celestial objects, as widespread as stars and galaxies. In our Galaxy alone, with on the order of 100 billion stars, 1% of them are stars of solar or nearly solar types. Here, among this remarkable billion, it is plausible to select stars to which our interstellar radio messages can be addressees. We do not propose restricting our targets to only these stars, but they should be our main goal, defined by our present understanding, recognizing that the question of other life sites is not yet settled, and that there remains an opportunity for further creativity and research. 
Our present list of requirements for candidate stars includes the following characteristics:
• Main-sequence stars;
• Constant luminosity;
• Age in the range of 4 to 7 billion years;
• Single stars of spectral classes close to that of the Sun are preferable;
• Position in the sky close to "preferable directions" -- near the ecliptic plane, in the direction of remarkable astronomical objects, toward the center or the anti-center of the Galaxy, etc.;
• It is desirable that we fall in the direction of remarkable astronomical objects as viewed from There, so that They might find us in the course of Their usual astronomical observations;
• In the case of targets representing known planetary systems, it is desirable that the orbits of these exoplanets have low eccentricity, as such planetary systems are more stable, and there are no significant temperature fluctuations interfering with the origin of life;
• It is desirable to choose stars inside the "Belt of Life" -- that "hothouse" area of our Galaxy where, because the speeds of motion of the stars and of the spiral arms coincide, conditions for the origin and long-term development of life are believed to be optimal.
In due course, in the process of accumulating knowledge about the Cosmos, other criteria, and other locations than the stars addressed here, may emerge. For now, we propose concentrating on the above criteria.
2) When to send IRMs to the selected star? Questions of time synchronization between our transmission and Their searches (or, in the case of SETI, between Their transmission and our searches) are very important. By Peter Makovetsky's estimation, as reported in his book "Look in the Root" ("Science" Publishing House, Moscow, 1979), competent synchronization allows us to increase the probability of establishing radio contact by a factor of tens. One possible method is to bind the moments of transmission ("Here") and searching ("There") to some well-known universal event which is observable everywhere in our Galaxy. For example, we could synchronize to the moment of maximum intensity of such explosions as Novae or Supernovae. Proceeding from simple geometrical relations, Makovetsky calculated "schedules" for some neighboring stars for the case in which we and They carry out searches coordinated to a Nova in the constellation Cygnus, observed on Earth on August 29, 1975. Using modern, large optical telescopes, it is now possible to register the moments of Supernova flashes in neighboring galaxies. These can also be used for time synchronization of messaging and searching in deep space.
3) At what wavelength? The frequency band in which it is necessary to transmit IRMs coincides with the band which has earlier been proved most suitable for SETI -- from 20 cm down to 1 cm, where the greatest range of radio communication is achieved. We define the energy potential of a space radio link as the product of the power of the transmitter and the gains of the transmitting and receiving antennas, divided by the noise temperature of the receiving system. At the current state of development of our terrestrial technology, this relation is maximal in the centimetric band. We do not dismiss the possibility that, in due course, as space communications develop, suitable energy potentials will be reached at infrared or optical wavelengths. Should that occur, our notions about the optimum wavelength will of course change. Exact values of wavelength may even take on "magic" values.
For example, 6.72 cm = 21 cm / π would be known to all technological civilizations as the ratio of two universal constants, one physical (the radio emission line of interstellar neutral hydrogen) and the other mathematical.
4) What polarization to use? The polarization integrity of a radiated signal is one possible indicator of artificial origin. In addition, by using polarization modulation, the direction of rotation of circular polarization, or the orientation of the plane of linear polarization, can be varied discretely or continuously as a means of encoding an intelligent message.
5) What should be the energy of the transmitted radio signal? In the case of determining appropriate levels of power for dedicated transmitters specifically designed for continuous and systematic METI transmissions, estimates are readily computed. As for the somewhat different question of conducting METI now, using those instruments which currently exist or will become available in the foreseeable future, the more important issue is not transmitter power, but rather the realistic data rates at which meaningful information can be transmitted. The summary below gives computed data rates for METI experiments using the three most powerful transmitting facilities currently available. In these calculations, we assume the distance over which it is necessary to send our message is on the order of 70 light years, and we further assume that Their receiving system has an antenna with an effective aperture of 1 million square meters. A project to deploy just such a large radio astronomical antenna, the Square Kilometer Array (SKA), is now under development on Earth, and could be constructed within the next decade.
6) What modulation to apply? After more than 45 years of nearly continuous searches for intelligent signals from other civilizations, the overwhelming majority of studies employ surprisingly similar detection algorithms. It is accepted practice to apply digital spectral analysis, with the number of parallel analysis channels reaching hundreds of millions, and even several billions. For example, in its targeted-search Project "Phoenix", the SETI Institute used a digital spectral analyzer of two million channels, with bin widths on the order of ~1 Hz. That allowed them to analyze, in real time, a bandwidth on the order of 2 MHz, and on the order of 2 GHz in off-line mode! Having considered what exactly the optimum receiver should look for, not only in searching for radio signals from Other civilizations, but also in terms of such signals as we might transmit to ETIs, we come to the conclusion that the modulation should have a clear spectral signature, allowing decoding with minimal ambiguity by means of the above-mentioned parallel spectral analyzers. One such modulation format, well known and widely used on Earth, is frequency modulation (FM).
7) What are the optimum structure and method of encoding a transmitted message? Having suggested that a radio message should be synthesized on the basis of self-evident and physically proven spectral constituents, we now propose the following structure (Table 1). We identify three types of single-valued frequency function: "Constant", "Continuous", and "Discrete." A radio message to ETI could thus employ a three-section structure incorporating three specific languages, which we can call "the language of nature", "the language of emotions", and "the language of logic".
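As a purely illustrative aside (not part of the original proposal), the three-section idea can be sketched in a few lines of R: a constant-frequency sounding tone, an analog section whose frequency varies slowly like a melody, and a digital section using frequency shift keying. Every numerical value below (sampling rate, tone frequencies, bit content) is an arbitrary choice made only for this illustration.
fs <- 8000                                   # sampling rate in Hz (arbitrary)
t  <- seq(0, 2 - 1/fs, by = 1/fs)            # two seconds per section
s1 <- sin(2 * pi * 1000 * t)                 # section 1: monochromatic sounding tone
f2 <- 900 + 200 * sin(2 * pi * 0.5 * t)      # section 2: slowly varying, melody-like frequency
s2 <- sin(2 * pi * cumsum(f2) / fs)          # analog frequency modulation via phase accumulation
bits <- c(1, 0, 1, 1, 0, 0, 1, 0)            # section 3: an arbitrary digital dataflow
f3 <- ifelse(rep(bits, each = length(t) / length(bits)) == 1, 1100, 900)
s3 <- sin(2 * pi * cumsum(f3) / fs)          # frequency-shift keying of the bits
signal <- c(s1, s2, s3)                      # the complete three-section message
A spectrogram of such a signal makes the constant/continuous/discrete distinction of the proposed structure immediately visible, which is precisely the property that should help the receiving side to separate the three sections.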
In Table 1, we draw a pertinent analogy with a suggested triune structure of human thinking, wherein we distinguish three components: intuitive, emotional, and logical. The first part of such a three-part radio message is designed by radio engineers and represents a coherent signal, for example an elementary monochromatic CW tone or periodic LFM (linear frequency modulation). It is possible to adjust its frequency for variable Doppler correction, such that our transmitted signal will be observed by the aliens as one of constant frequency. We suggest that ETI will intuitively understand the significance of a sounding signal thus sent. The second part of the message is created by people versed in the arts -- composers, artists, architects -- and consists of analog variations of frequency, representing our emotional world and our artistic conceptions. An elementary example is classical musical melodies. The third part of our message consists of discrete frequency shift keying, a digital dataflow, representing our logical constructions -- algorithms, theories, cumulative knowledge about us and about the world around us. In the line "Analysis", our expectations are displayed in terms of how such signals will be investigated "There", on the reception side (or "Here", in the case of success of terrestrial SETI). The first message part is optimized for astrophysical analysis, with the purpose of revealing the effects of the interstellar environment and supporting diagnostics of the propagation channel. The second part is analyzed by art critics; the third, by linguists, logicians, and other scientists.
8) Why should we transmit interstellar radio messages? Here, we step onto the shaky ground of "fuzzy and imprecise" reasoning and assumptions. A strict proof of the necessity and practicality of METI is of course impossible. Emotional and ethical reasons of a messianic and altruistic nature, such as "to bring to Aliens a long-awaited message that they are not alone in the Universe", are convincing and inspirational to only a few. Nevertheless, we should understand a simple thing: if all civilizations in the Universe are only recipients, and not message-sending civilizations, then no SETI searches make any sense…
9) Is it dangerous to engage in METI? We can refer to a fear of transmitting from Earth as METI-phobia. It has its roots in fears expressed right after the transmission of the first interstellar radio message, from Arecibo in 1974. The Nobel laureate Martin Ryle, a prominent radio astronomer, publicly proposed prohibiting any attempts at messaging from Earth to prospective extraterrestrial civilizations. Our understanding of this problem starts from a certain "double standard": many vocal and impressionable people are afraid of a super-powerful and super-aggressive "Something" from which there is no salvation; yet that same "Something" has either long ago found us already, or will in any case soon find us, from the radio emission of the tens of powerful military radars in the USA and Russia which have formed the basis of their national ballistic-missile warning systems, operating continuously, day and night, since the sixties of the last century. Thus, even in the case of civilizations as primitive and power-limited as ours, detection over prodigious distances can already be assumed.
Realized METI projects
Throughout the entire history of our civilization, only four projects involving the transmission of interstellar radio messages (IRMs) have yet been fully developed and realized.
In Table 2, these four projects are ordered by the dates of their first transmitting sessions (in total, as can be concluded from the table, only 16 such sessions have ever taken place). The symbols T and E here represent the total transmit duration in minutes, and the radiated energy in megajoules, of each of the four METI projects conducted to date. The contents of the first "Cosmic Call" message included a lexicon, as well as data about the "Cosmic Call" project and its participants. In structure, Cosmic Call 1 closely paralleled the Arecibo Message. The size of this "Encyclopedia" was 370967 bits. In 2001 the "Teen Age Message" [3] was sent to 6 Sun-like stars. This was the first and, unfortunately, so far the only time the three-section structure described above has been applied: a monochromatic sounding signal was first radiated, then the analog information (music), and finally a digital message was transmitted. As a source for the analog portion of the transmission, quasi-monochromatic signals with a low level of overtones from the "Theremin" electric musical instrument were included. Such a signal greatly facilitates detection and perception over interstellar distances. The digital part of the message consisted of 28 binary, Arecibo-like images with a total size of 648220 bits. In 2003 the IRM "Cosmic Call 2" [4] was sent to 5 Sun-like stars. This was the first international IRM, and fragments of all three previous radio messages were included in it. We consider that all future messages from the Earth should have precisely such international content. Table 3 summarizes the expected times at which the Arecibo and the three Evpatoria messages sent to date will arrive at their corresponding target stars. The second column of Table 3 predicts the time when the era of the "Great Silence of the Universe" can potentially end for those at the receiving side of the communications link, in the optimistic event that They are there, and given the "happy case" that they should happen to find these particular intelligent signals from our terrestrial Civilization.
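To make the data-rate discussion under question 5 concrete, the following back-of-the-envelope R sketch applies the link-budget relation described above: the received power divided by the noise spectral density and by the required energy per bit. Apart from the 70-light-year distance and the 10^6 square-meter receiving aperture assumed in the text, every number here (transmitter power and gain, system noise temperature, required Eb/N0) is an assumption chosen only for illustration, not a value taken from the realized projects.
k_B   <- 1.380649e-23                          # Boltzmann constant, J/K
d     <- 70 * 9.4607e15                        # 70 light years expressed in metres
P_tx  <- 1e6                                   # assumed transmitter power, W
G_tx  <- 1e7                                   # assumed transmitting antenna gain (about 70 dBi)
A_rx  <- 1e6                                   # receiving effective aperture, m^2 (SKA-class)
T_sys <- 20                                    # assumed receiving system noise temperature, K
EbN0  <- 10                                    # assumed required energy-per-bit to noise-density ratio
P_rx  <- P_tx * G_tx * A_rx / (4 * pi * d^2)   # received power, W
rate  <- P_rx / (k_B * T_sys * EbN0)           # achievable data rate, bit/s
rate
Under these assumptions the estimate comes out at a few hundred bits per second; with different, possibly more realistic, parameter choices the result can easily shift by an order of magnitude in either direction.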
2019-04-14T03:15:35.567Z
2006-10-05T00:00:00.000
{ "year": 2006, "sha1": "17da57a99a1de30464f7ad5c8c0f75a40f3ca106", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "514218b2a22633044ef2ccc3c29f238f435705d0", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
248496444
pes2o/s2orc
v3-fos-license
Is gravitational entanglement evidence for the quantization of spacetime?
Experiments witnessing the entanglement between two particles interacting only via the gravitational field have been proposed as a test of whether gravity must be quantized. In the language of quantum information, a non-quantum gravitational force would be modeled by local operations with classical communication (LOCC), which cannot generate entanglement in an initially unentangled state. This idea is criticized as too constraining on possible alternatives to quantum gravity. We present a parametrized model for the gravitational interaction of quantum matter on a classical spacetime, inspired by the de Broglie-Bohm formulation of quantum mechanics, which results in entanglement and thereby provides an explicit counterexample to the claim that only a quantized gravitational field possesses this capability.
Introduction
Contemporary physics, at its most fundamental level, is in a somewhat peculiar situation, relying on two separate and apparently incompatible theories. On one hand, there is matter, described by the Fock space states of interacting quantum fields or, more specifically, the Standard Model with its 12 elementary fermions and its SU(3) × SU(2) × U(1) gauge symmetry, from which nonrelativistic, quantum mechanical behavior follows, at least in principle. On the other hand, there is spacetime, a Lorentzian 4-manifold which provides the metric and differential structures with which dynamical laws for matter can be defined and whose curvature is determined by the matter distribution via Einstein's equations. The quest for "quantum gravity", in its broadest meaning, refers to the goal of finding some common mathematical framework which, in the appropriate limits of observed physical phenomena, can embed the predictions of both quantum and gravitational physics. By virtue of the largely different mathematical structures, the prevailing belief is that this must be achieved by some sort of "quantization" of gravity, for instance in the sense of promoting some objects in the theory of general relativity (the metric, curvature, connection, volume, or area elements, ...) to a Hilbert space structure, although the precise meaning remains obscure and differs from model to model. Contrariwise, one may pose the question of what would need to change about the formalism of quantum physics in order to be compatible with the principles of general relativity (a "gravitization of quantum mechanics" in the words of Penrose [1]), such that quantum matter could be consistently described on a (classical) spacetime manifold, including its backreaction on spacetime. Leaving aside (important) mathematical details, the dynamics of quantum fields on a curved spacetime can be formulated as a well-defined theory [2,3]. The more fundamental challenge is the opposite question: how does one model the effect of quantum matter on spacetime curvature? The most straightforward approach to model this coupling of quantum matter to classical gravity is via the semiclassical Einstein equations [4,5],
R_{\mu\nu} - \frac{1}{2} R\, g_{\mu\nu} = \frac{8\pi G}{c^4} \langle \Psi | \hat{T}_{\mu\nu} | \Psi \rangle , \qquad (1)
where the left-hand side is the Einstein tensor, constructed from the scalar and tensor curvatures R and R_{\mu\nu} as well as the metric g_{\mu\nu}, and the right-hand side contains the expectation value of the stress-energy operator in the quantum state Ψ. Its consistency as a fundamental model has been the subject of discussions [6,7,8,9,10], although with no conclusive result.
In nonrelativistic situations, it can be understood [11] as resulting in a wave function dependent Newtonian gravitational potential for two particles of masses m 1 and m 2 , where · denotes the expectation value in the two-particle state with spatial wave function Ψ (t, r 1 , r 2 ) and is the self-gravitational potential of the i-th particle. This self-gravitational attraction of even a single particle has previously been considered as a route towards experimentally testing semiclassical gravity [12,13,14,15]. Recently, experiments have been proposed [16,17] which would be looking for the difference between the first two mutual interaction terms in the semiclassical potential (2) as compared to the potential expected from perturbatively quantized gravity: Quantized gravity then predicts an entangled two-particle state, whereas the semiclassical model would leave an initially separable state unentangled. The claim made by the proponents of these tests, however, goes beyond distinguishing the two potentials (2) and (4), stating instead that it is "impossible" for two particles to develop entanglement from a classical field [18] and that "anything capable of entangling two quantum systems and satisfying locality (plus a few other assumptions) must itself be quantum." [19] Therefore, so the idea, experimental evidence for entanglement would not only rule out the potential (2) but any semiclassical model for gravity. This idea has been criticized by Hall and Reginatto [20] who point out that entanglement is strictly prohibited only for the specific type of classical interactions as introduced by Koopman [21] and show that in the hybrid model of quantum-classical ensembles [22] entanglement can increase. Similarly, Pal et al. [23] show that entanglement can occur between two initially unentangled qubits through a third qubit whose reduced density matrix remains diagonal for the whole experiment, if there is some initial entanglement already present in the entire three-qubit system. Here, we present a different model as a counterexample to the claim that gravitational entanglement is evidence against semiclassical theories; one that makes use of the trajectory 1 as an additional "hidden" variable in the de Broglie-Bohm theory, and is closer in spirit to the mean-field approach based on the semiclassical Einstein equations (1). Contrary to the mean-field approach, a dependence of the gravitational field on these hidden variables allows it to possess information about the outcome of measurements beyond the classical information encoded in expectation values. Furthermore, it allows for the definition of hybrid models in which the gravitational potential depends on both the trajectories and the wave function, and which thereby interpolate between the maximally entangling model with a point particle source and the non-entangling semiclassical potential (2). The structure of this paper is as follows: In the next section 2, we present explicitly how the Bohmian trajectories can be used to source a Newtonian gravitational potential with the correct classical limit, and predicting entanglement between two gravitationally interacting particles. We review how, in the de Broglie-Bohm picture, entanglement of localized particles can be understood as arising from additional local fields on physical 3-space as the effect of conditional potentials [25,26], and we discuss the implications for semiclassical gravity. 
Section 3 provides an example how this approach can be generalized to a class of models, with the mean-field potential (2) and the trajectory based model from section 2 as limiting cases. In section 4 we explicitly calculate the spin-entanglement witnesses [16,27] proposed for nonclassicality tests of gravity, and show how they would confirm entanglement for the semiclassical models presented. The discussion section 5 reviews the implications of our results for the interpretation of experimental tests of gravitationally induced entanglement and addresses important limitations. Gravity sourced along Bohmian trajectories Consider a number of quantum particles with nonrelativistic energies, whose gravitational interaction we would like to describe within the frameworks of both the de Broglie-Bohm approach to quantum theory and classical general relativity. Gravity, according to general relativity, is modeled by the curvature of a classical spacetime manifold-which in nonrelativistic situations is fully determined by the Newtonian potential to good approximation. In addition to classical spacetime and the wave function, we introduce particle coordinates q i , which in the de Broglie-Bohm theory satisfy a guiding equation, depending on the N-particle wave function Ψ . Having these trajectories q i (t) at our disposal, enables us to use them as point-particle sources for a Newtonian gravitational interaction. In the Schrödinger equation for N = 2 particles we then introduce the potential such that the two dynamical equations form a coupled system. Let us study the consequences of this potential, starting with its classical limit. We notice that the Schrödinger equation yields the usual Ehrenfest theorem which results in the same classical equations of motion for the expectation values as the quantum potential (4). Whereas for the quantum potential a single term determines the motion of both particles, for the potential (6) the first term determines the motion of the first particle in the gravitational field of the second, the second term the motion of the second particle in the field of the first. The final term γ 0 in equation (6) is a function of only the Bohmian trajectories q i and does not contribute to the classical limit. It is necessary in order to maintain consistency with the experimentally confirmed gravitational phase shift [28,29,30]. For the further analysis, we turn to the local formulation of the de Broglie-Bohm theory [25,26], defined via the conditional wave functions We consider only one spatial dimension for the subsequent discussion, with the generalization to three dimensions being straightforward. The guiding equations (5) then can be written in terms of these conditional wave functions, implying that the evolution for the i-th particle depends only on the conditional wave function ψ i . The evolution of ψ i itself, however, depends nontrivially on both the other particle coordinates and the remaining conditional wave functions, thereby allowing for the quantum mechanical entanglement. This is expressed through the n-th order entanglement potential fields Π (n) The conditional wave functions then satisfy the Schrödinger equations with the time dependent effective potentials The potential fields Π (n) i depend, of course, on the conditional wave functions. For a solution consistent with the full 2-particle Schrödinger equation, each potential field must itself obey a partial differential equation depending on higher order entanglement fields [26]. 
The locality of the Schrödinger equation (11) comes at the price of this dependency on an infinite number of entanglement fields, which can be truncated at some order n for an approximate treatment, capturing entanglement up to a certain degree. For our purpose, this approach has its main advantage in how it explicitly reveals the entanglement. For any given solution Ψ of the full 2-particle Schrödinger equation, one can calculate the four potential fields of first and second order and treat them as given external potentials. Equations (11) together with the guiding equations (9) then describes a system of equations which is coupled only through the dependence of the effective potentials (12) on the trajectories q i (t). Of course, knowing the full solution Ψ the solutions for the ψ i and q i follow immediately. In this sense, this system is of little use for the purpose of solving the dynamical equations. It can, however, be useful in order to analyze the dynamical properties of the system. The trajectories described by equation (9) generally diverge from each other in a way determined by the spreading of the wave function. In a classical situation, where the wave function remains sharply peaked over the relevant time period, the trajectories, therefore, remain close to the classical trajectories u i (t) = x i , which solve the classical equations of motion Approximating q i (t) ≈ u i (t) in equations (12) rather than using the guiding equation, one obtains from (11) two fully decoupled Schrödinger equations for the conditional wave functions ψ i . As ususal, to lowest semiclassical order the effective potentials result in a phase after time τ. Consider an intially separable wave function, such that and, choosing γ 0 (q 1 , q 2 ) = Gm 1 m 2 |q 1 − q 2 | −1 as well as assuming Gaussian wave functions of widths σ i peaked at x 1 = u 1 and x 2 = u 2 , respectively, with the entanglement potentials These entanglement potentials only result in a trajectory-independent phase, whereas the gravitational potential yields a phase for each conditional wave function, where we assume a constant separation ∆ u = |u 1 (t) − u 2 (t)|. This identical phase of ψ 1 and ψ 2 can be interpreted as a phase of the 2-particle wave function Ψ . Note that it is identical to the phase predicted from the quantum potential (4) and used in reference [16], if the function γ 0 is chosen as the positive gravitational energy between the two Bohmian particle locations, i. e. if the first two terms in equations (12) cancel. For other choices it differs, specifically by a factor of two when choosing γ 0 ≡ 0. Thus far, we only considered classical states with a well defined trajectory. Adressing the experimental situation [16] of a superposition of two classical trajectories for each of the two particles, u ± 1,2 , we have the initially separable wave function with α and β symmetric functions sharply peaked around zero. Then the conditional wave functions, which generally depend on the other particle's trajectory, are the trajectory independent both describing a superposition of two classical trajectories. For a given potential V eff 1 , each of the two trajectories in ψ 1 acquires its own phase, and accordingly for ψ 2 . However, the effective potentials V eff i explicitly depend on the other particle's trajectory. 
Therefore, each of the four possible combinations (u ± 1 , u ± 2 ) acquires a different phase depending on the specific combination of trajectories, resulting in the same phases φ ++ grav , φ +− grav , φ −+ grav , and φ −− grav as predicted from the quantum potential (4). We conclude that the potentials (6) and (4) make the same predictions for both the classical limit and the gravitational phase shift; in other words, they agree with respect to experimentally tested gravitational phenomena, including the yet untested gravitational entanglement [16,17]. Nonetheless, the de Broglie-Bohm inspired model has a semiclassical interpretation in which curvature of a classical spacetime is sourced by the trajectories q i (t) of the particles. One may argue that this is pure semantics and that a model that makes the same physical predictions is, for all practical purposes, equivalent to quantized gravity. This is why, in the next section 3, we opt for a hybrid potential that interpolates between the potentials (2) and (4) by integrating the modulus-squared of the wave function only about a radius R around the particle coordinates q i . The local description provides us with an intuitive understanding of the origins of entanglement. In the limit of weak entanglement, starting with initially separable states, the entanglement resulting from the potential fields Π (n) i is negligible compared to that resulting from the interaction potential. A typical quantum potential V (x 1 , x 2 ) results in an effective potential V eff 1 (x) = V (x, q 2 ) for the first particle, depending on the second particle's trajectory, and vice versa. The particles become entangled due to this dependency on the other trajectory. It is then evident, that the same entanglement can be achieved if instead of the 2-particle interaction V (x 1 , x 2 ), the 2-particle wave function Ψ experiences a potential that already has an explicit dependence on the trajectories, such as our potential (6). Mean-field trajectory hybrid model In the previous section we presented a model, based on the de Broglie-Bohm trajectories, in which gravity can be understood as curvature of a single classical spacetime, despite inducing entanglement between two particles. This model presents a counterexample against taking such gravitational entanglement as evidence for a quantized gravitational field. Nonetheless, one may be tempted to argue that this model should be considered a quantum theory in some sense of the word, that it does not have a consistent relativistic generalization, or disqualify it as a legitimate counterexample for some other reason. In order to make the relation to mean-field semiclassical gravity more visible, we introduce a hybrid potential that interpolates between the potentials (2) and (6) by integrating the modulus-squared of the wave function only about a radius R around the particle coordinates q i . It, therefore, describes an entire class of models, parametrized by R, which include mean-field semiclassical gravity as a limiting case. We show how the entanglement between the particles decreases as R grows larger, reaching no entanglement only in the limit R → ∞ of the semiclassical Einstein equations. 
For introducing our semiclassical model, we begin with the most general Nparticle Schrödinger equation where ∇ i is the gradient with respect to the coordinate r i , and the potential V , besides an explicit dependence on time and position coordinates, has both a functional dependence on the wave function and depends on the Bohmian particle coordinates q i (t) which are determined by the guiding equation (5). Equations (22) and (5) form a coupled nonlinear system, which can in principle be solved for both the wave function solutions Ψ and the particle trajectories q i . We specify the potential to take the following form, depending on the parameter R: a regularization function f reg : and γ R a, for now, arbitrary function of only the q i with γ R → 0 for R → ∞. P i is simply the marginal probability distribution for the i-th particle. χ is the characteristic function for a sphere of radius R, limiting the integration to a spherical region around the particle positions; the functions N i ensure normalization. V R has no explicit time dependence but is implicitly time dependent through the coordinates and wave function. Additional linear potentials can be straightforwardly added to the Schrödinger equation (22). The regularization function f reg is required, such that no divergent self-interaction terms appear in the limit R → 0; its precise form is irrelevant for the further discussion, as self-gravitational effects will be neglected. Evidently, the potential V R mimics the behavior of the Bohmian potential (6)and thus the quantum potential (4)-in the limit R → 0. In this limit, the wave function dependence of V R vanishes. In the limit R → ∞, on the other hand, V R turns into the semiclassical potential (2), in which case it no longer depends on the Bohmian trajectories q i . Having both the semiclassical, nonlinear coupling to the wave function and the Bohmian trajectories at our disposal, we can source the gravitational potential by a mass distribution associated with |Ψ | 2 , as in the semiclassical model, with the distinction that only that part of the wave function contributes which lies within a radius R of the actual particle position, as determined by q i . Classical limit The Schrödinger equation (22) yields the usual equations of motion for the position expectation values. Note that the function γ R does not appear. In the limit R → 0, the potential limits to equation (6) and the classical equations of motion follow as derived in section 2. For finite R we introduce the characteristic function χ of the complement of the sphere of radius R such that χ + χ ≡ 1. We then have Note that inside the sphere of radius R, where χ ≡ 1, the self-force integral (25d) vanishes due to the antisymmetry under r ↔ r ′ . For quasi-classical particles, localized around q i ≈ r i with a spatial extent far smaller than R, we have P j (t, r) ≈ δ (r − q j ). The integrals via χ outside the radius R then yield negligible contributions, ∆ a i j ≈ 0 ≈ a self i . Furthermore, N j (t, R) ≈ 1, and we find the classical Newtonian equations of motion Two equal mass particles in a double Stern-Gerlach experiment For the further discussion, we consider the double Stern-Gerlach experiment proposed by Bose et al. [16] and depicted in figure 1. We focus on the special case of two particles of equal mass m. 
Ignoring the self-gravitational terms, that affect only each particle at its site but not both together, we find We consider a situation where the quasi-classical trajectories for two spin-1 2 particles are split in a magnetic field gradient over a short time period τ a and recombined after some free flight time τ. Assuming τ ≫ τ a , we can neglect the gravitational effects during the acceleration period, and only consider the four classical trajectories u Due to the gravitational attraction, and to lowest order semiclassical approximation, the particles acquire spin-dependent phases obtained by integrating the potential V R along the classical trajectories (cf. the Appendix for a detailed discussion): These phases still depend nonlinearly on the solution Ψ of the Schrödinger equation. As long as the gravitational effects are weak, however, the wave function in equation (28) can be approximated by the solution of the free Schrödinger equation [31]. For the proper choice of parameters-m σ 2 ≫h τ, where σ is the initial width of the wave function-we can also ignore the free spreading and, therefore, consider only the time independent wave function Ψ (r 1 , r 2 ) representing the superposition of all four possible spin combinations. In the case where the particles follow quasi-classical trajectories, the Bohmian trajectories follow closely, q i (t) ≈ u s i i (t). The time dependence can then be transformed away or omitted entirely by choosing v = 0, resulting in with the constant Γ = Gm 2 τ/h of the dimension of a length. We ignore the phase contribution φ s 1 s 2 γ of γ R for now, choose length units in which the width of wave packets around the trajectories is of order unity, and consider a Gaussian wave function in cylindrical coordinates, (x, y, z) = (x, r cos θ , r sin θ ). Each of the four contributions to Ψ is spherically symmetric with respect to the corresponding trajectory. We have and the normalization where N 1 (R) = N 2 (R) = N(R) regardless of the trajectory. Defining and writing ∆ u s 1 s 2 = u s 1 1 − u s 2 2 , using Q(p, q) = Q(q, p) = Q(−p, −q), we find Considering the four spin combinations independently, we find a global phase Φ R = φ ++ R = φ −− R , as well as the relative phases and their average The difference 2φ ∆ R = φ + R − φ − R is always nonzero and can, therefore, be tuned to take any value by adjusting the prefactor Γ . Figure 2 shows the phases as a function of R for different relative distances ∆ x and δ x with respect to the wave function width σ . The additional phase contribution from a nonzero γ R is Assuming that γ R (q 1 , q 2 ) = γ R (|q 1 − q 2 |) is a function of relative distance only, we have φ ++ γ = φ −− γ contributing only to the global phase Φ R . The relevant phase contributions then are Small R expansion In the limit R → 0, both the integral from −R to R and the normalization function N(R) tend to zero like R 3 . Three-fold application of l'Hôpital's rule yields the phases We can generalize this to an expansion around small R ≪ 1, by approximating up to and including O R 5 : Therefore, For arbitrary functions q(x), f (x), g(x), we have to cubic order in R: and hence, with q(x) = Q(x, x + δ x), The phases are then To lowest order, we again obtain the phases (38). As expected, this is the phase obtained from the Bohmian potential (6), as discussed in section 2. 
Without the function γ R , it is twice the phase expected from quantum gravity [16], although it can be easily amended to recover the quantum result in the limit R → 0 by choosing γ R (ξ ) → Gm 2 I 0 (ξ ) in this limit. Since the prediction of entanglement is based solely on these phases, we expect to be able to witness the same entanglement as for quantum gravity. The maximum amount of entanglement is independent of the choice of γ R , only requiring an appropriate rescaling of the parameter Γ via the particle mass m and the flight time τ. Large R expansion In the limit R → ∞, using the asymptotic expansion of the error function, we find N(R) → 1 and In order to expand the solution for large R we notice that approximately where only the ξ -dependent part of J R (ξ ) contributes to the phases (34). With and hence, assuming φ ± γ → 0 sufficiently fast, Although the phase average vanishes exponentially for large R, it yields nonzero values for any finite value of R. Witnesses for spin entanglement With the phases derived in the previous section, we can now turn towards the task of witnessing the gravitationally induced entanglement experimentally. Considering the double Stern-Gerlach experiment described in section 3.2 and depicted in figure 1, we first notice that the entanglement in the position degree of freedom gets transferred to the spin wave function of the whole system, which after passing through the interferometer and factoring out the global phase Φ R reads with the phases φ ± R given by equation (34). From the state (52) we can calculate expectation values for chosen spin observables and their correlations. Bose et al. [16] propose to witness spin entanglement via the function where σ (i) x,y,z denote the Pauli matrices acting on the spin of the i-th particle. Writing explicitly in the basis {|↑↑ , |↑↓ , |↓↑ , |↓↓ } and with the state (52), this witness function evaluates to This equation shows the dependence on the parameter R via the phase φ ± R . Having learned, both from the plots in figure 3 and the explicit form (51), that in the limit R → ∞ of mean-field semiclassical gravity one has φ + ∞ = −φ − ∞ , with the symmetry of the cosine, one can see immediately that the witness function can only take values 0 ≤ W ≤ 1, depending on the phase difference φ ∆ ∞ . It is also straightforward to show, that W ≤ 1 for any separable spin wave function, implying that it witnesses entanglement for any value W > 1. Since we can freely fix the phase difference φ ∆ R by adjusting the constant Γ , equation (54) suggests that for any combination of phases with an average phase φ Σ R > 0 entanglement witnessing values of W > 1 are possible, which is the case for any finite R. This is confirmed by the plot in figure 3, where the value of the entanglement witness is shown for different choices of the constant Γ for small R. For large R, we can approximate and expand around φ ∆ ∞ = −π/2 in order to obtain Fig. 4 Change of special cases: W 3 = W G ( 3π 2 ); small and W 4 = W G ( π 2 ); maximally entangled cases of the system, by the values of R with respect to the chosen numbers of Γ ; 0.5,1,2 and R max = 0.5 with the help of equation (57) which exceeds unity for any φ ∆ ∞ > −π/2. For large ∆ x = 2δ x, we find approximately φ ∆ ∞ ≈ −Γ /(3δ x), i. e. we can achieve entanglement for 2Γ ≈ 3πδ x. However, in order to obtain an observable phase, the distances ∆ x = 2δ x, and thus also Γ , must grow exponentially with R 2 . 
Hence, we can claim that our class of deterministic models with a classical gravitational interaction predicts entanglement for an experimental setup such as the considered one, except in the strict limit R → ∞, although the entanglement decreases rapidly with increasing R. A more general treatment of spin entanglement witnesses has been presented by Guff et al. [27]. Generalizing the state (52) to one can introduce a class of witness functions W G (θ ), parameterized by θ and defined via the projection on the state |θ = 1 √ 2 |ψ(0, 0) + e iθ |ψ(π, π) : with I denoting the identity operator. This witness is scaled differently from W above, with negative values indicating entanglement. Evaluating the expectation values in the state (52), one finds A large area of the two-dimensional parameter space spanned by the phases φ + R and φ − R can be covered with only the witnesses W 3 = W G ( 3π 2 ) and W 4 = W G ( π 2 ) [27], which can be seen as witness functions optimized for the detection of small entanglement and maximally entangled states of the system, respectively. Plots of these witnesses for different values of the parameter R are shown in figure 4, again confirming that entanglement can be observed for finite values of R. Because we can always choose parameters m and τ such that φ ∆ R is a multiple of π, W G (θ ) can then always take negative values unless φ Σ R is also a multiple of π. Specifically, no entanglement can be observed in the limit R → ∞ where φ Σ R → 0, whereas for large but finite R one finds 0 < φ Σ R ≪ π and negative values of W G (θ ) are possible at least in principle, if decoherence effects can be kept small. Discussion We presented a class of models for nonrelativistic quantum systems on a classical spacetime, where the curvature of spacetime is sourced in a semiclassical fashion, depending on both the wave function and the particle trajectory in the sense of de Broglie-Bohm theory. Except for the limiting case R → ∞, where our models yield the pure mean-field semiclassical gravity model based on the semiclassical Einstein equations (1), all models in this class have the capability to generate entanglement between two particles. We are not making any claim for these models to be a realistic representation of how gravity works in the regime of nonrelativistic quantum systems. However, they show as a proof of principle that there can be models that i.) result in entanglement between two particles, ii.) allow for an interpretation as the nonrelativistic limit of a theory for quantum matter on a classical spacetime, and iii.) are physically inequivalent to standard quantum mechanics, i. e. the nonrelativistic limit of perturbative quantum gravity in analogy to the limit of quantum electrodynamics to the Coulomb potential. In this regard, our models provide explicit counterexamples to the arguments that experimental evidence of entanglement would prove the necessity to quantize the gravitational field. Note that by "quantized" we mean the impossibility to describe spacetime as a classical Lorentzian 4-manifold. One could, of course, adopt a different notion of quantumness in which entanglement is a defining feature; this, however, would render the argument that entanglement provides evidence for quantization tautological. An important caveat concerns the function γ R we added to the potentials (6) and (23a). 
Without this function, the semiclassical interpretation of the gravitational field as being sourced by the mass m distributed with the modulus squared of the wave function over a radius R around the Bohmian positions is evident, and it provides the desired counterexample. If, however, R is below the size of superpositions for which the gravitational phase shift [28,30,29] has been observed, a nonzero γ R is required for consistency with these observations. The interpretation of the gravitational force as a consequence of a single, classical spacetime then becomes less convincing. Note also that the conclusion from our discussion here is not that any of the theorems regarding entanglement via classical and nonclassical channels, e. g. in reference [32], are incorrect. Rather our models show that for the purpose of exploring semiclassical alternatives to quantum gravity, the assumptions underlying these theorems could be too constraining. In this context, it is interesting to have a closer look at the one assumption stated explicitly by Marletto and Vedral [19], namely the locality assumption "that the two objects to be entangled should not be interacting directly, but only locally, at their respective locations, with the mediator." [19] The mediator of the gravitational interaction is spacetime curvature, with which the wave function interacts entirely locally via the Newtonian potential, exactly like in standard quantum mechanics. There is, however, a different nonlocal element in our models-as it must be in order to account for the violation of Bell's inequalities-which is the dependence of the source of spacetime curvature not only on the local value of the wave function (as in the mean-field approach) but also on the Bohmian trajectories. In this context, it is important that the models defined in sections 2 and 3 are nonrelativistic, and perfectly consistent as such. The attempt to find a relativistic version is met with difficulties, as the particle coordinates-or their field theoretic complements-in the de Broglie-Bohm theory do not conserve energy [33]. Nonetheless, the proposed experimental tests for entanglement are only formulated nonrelativistically themselves. To this effect, one must keep in mind that not only are the semiclassical Einstein equations equally inconsistent if not endowed with some objective collapse mechanism [8]; even the quantum potential (4) is the nonrelativistic limit of a theory (perturbative quantum gravity) known to be non-renormalizable at high energies. The question whether for a given nonrelativistic potential there is any complete and consistent relativistic theory that limits to said potential should, therefore, be considered an open one for both semiclassical and quantized gravity. Experiments in physics ultimately serve two purposes. On one hand, they can increase our trust in the established theoretical frameworks by confirming their predictions. An experimental confirmation of gravitational entanglement could considerably increase our confidence in perturbative quantum gravity as the low-energy limit of whatever the correct quantum theory of gravity may be. A failure to demonstrate entanglement, by contrast, would create serious doubt about traditional approaches. In this sense, tests for gravitationally induced entanglement are an invaluable tool. On the other hand, experiments can never give proof of any particular model. What they can do instead, is rule out certain elements from a set of plausible alternative models. 
If one puts the bar for theories describing the gravitational interaction of quantum systems as high as only allowing fully consistent relativistic models into this set of possibilities, one is effectively ruling out elements from an empty set-and all experiments seem equally useless. In order to arrive at meaningful statements about what is truly known empirically about quantum gravity, one should allow for candidate models to possess limitations, and carefully distinguish inconsistency in a strict mathematical sense from mere incompleteness for which it cannot be conclusively ruled out that there might be a mathematically consistent way-as implausible as it may appear-towards a full theory. Our point of view is that there are models that fall into the latter category and predict entanglement through interaction with a classical spacetime. As a consequence of the quantum equilibrium hypothesis [34], which states that the Bohmian trajectories q are distributed with |Ψ (0,x)| 2 , one expects them to remain close to the classical ones and we can approximate u ≈ q. Further assume that within each Σ t the potential changes sufficiently slowly with position and wave function, i. e. there is an ε 2 > 0 such that ∀t ∈ [0,T ], ∀x ∈ Σ t : |δV (t,x)| < ε 2 . (68) The L 2 norm of δΨ then satisfies, using the Cauchy-Schwarz inequality, ∂ t δΨ 2 2 = 2 δΨ 2 |∂ t δΨ 2 | = d N x δΨ * (t,x) ∂ ∂t δΨ (t,x) + δΨ (t,x) ∂ ∂t δΨ * (t,x) and we find ∀t ∈ [0,T ] : Hence, Ψ is well approximated by e iS(t) Ψ f for at least some time T , as long as Ψ f is sufficiently constrained around the classical trajectory u(t) and the potential depends slowly on position and reacts sufficiently slowly to changes in the wave function. If the system under consideration is initially in a superposition state Ψ (0,x) =Ψ f (0,x) = ∑ α j Ψ j (0,x), and the Schrödinger equation (59) is linear, then the previous considerations hold for every branch Ψ j independently, which acquire independent phases S j (t). This is no longer true for a nonlinear potential
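As a purely numerical illustration of the witnessing procedure discussed above, the following R sketch evaluates a spin-entanglement witness on the post-interferometer spin state with relative phases phi_plus and phi_minus. The witness form |⟨σx ⊗ σz⟩ + ⟨σy ⊗ σy⟩|, with values above 1 signalling entanglement, is the one commonly attributed to Bose et al. [16]; since the corresponding equations are not reproduced here, treat that form, and the chosen phase values, as assumptions of the sketch.
sx <- matrix(c(0, 1, 1, 0), 2, 2)              # Pauli matrices
sy <- matrix(c(0, 1i, -1i, 0), 2, 2)
sz <- diag(c(1, -1))
witness <- function(phi_plus, phi_minus) {
  # state (1/2)(|uu> + e^{i phi_plus}|ud> + e^{i phi_minus}|du> + |dd>)
  psi <- c(1, exp(1i * phi_plus), exp(1i * phi_minus), 1) / 2
  ev  <- function(A, B) Re(sum(Conj(psi) * (kronecker(A, B) %*% psi)))
  abs(ev(sx, sz) + ev(sy, sy))
}
witness(pi / 2, -pi / 2)   # phases of equal size and opposite sign: exactly 1, no entanglement witnessed
witness(0, pi)             # phases with a nonzero average: 2, well above the separable bound
This reproduces the qualitative statement made above: phases of equal magnitude and opposite sign, as in the mean-field limit of large R, never exceed the separable bound, whereas a nonzero phase average can.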
2022-05-03T06:47:28.828Z
2022-05-02T00:00:00.000
{ "year": 2022, "sha1": "a279f4b91a4310a38fb743bdf68d7bfb5acea935", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s10701-022-00619-0.pdf", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "eab586fa0b0a9b7249365a20dd93f8c5054da05f", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
250731389
pes2o/s2orc
v3-fos-license
A competitive family to the Beta and Kumaraswamy generators: Properties, Regressions and Applications
We define two new flexible families of continuous distributions to fit real data by compounding the Marshall-Olkin class and the power series distribution. These families are very competitive with the popular beta and Kumaraswamy generators. Their densities have linear representations in terms of exponentiated densities. In fact, as the main properties of thirty-five exponentiated distributions are well known, we can easily obtain several properties of about three hundred and fifty distributions using the references of this article and five special cases of the power series distribution. We provide a package implemented in R software that shows numerically the precision of one of the linear representations. This package is useful to calculate numerical values for some statistical measures of the generated distributions. We estimate the parameters by maximum likelihood. We define a regression based on one of the two families. The usefulness of a generated distribution and the associated regression is proved empirically.
The cdf H(z) and survival function H̄(z) = 1 − H(z) of the MO class with baseline G(z; τ) are H(z) = G(z; τ) / [1 − ᾱ Ḡ(z; τ)] and H̄(z) = α Ḡ(z; τ) / [1 − ᾱ Ḡ(z; τ)], respectively, where ᾱ = 1 − α and Ḡ(z; τ) = 1 − G(z; τ). Five important distributions are special cases of the power series distribution (4) for N, with C(θ) = Σ_{n≥1} a_n θ^n: the zero-truncated Poisson, logarithmic, negative binomial, geometric and zero-truncated binomial distributions. The cdf of X = max{Z_1, ..., Z_N} conditional on N = n is H(x)^n, and then the unconditional cdf of X follows from (4) as F_X(x) = C(θ H(x)) / C(θ) (5). The conditional cdf of Y = min{Z_1, ..., Z_N} under N = n is 1 − H̄(y)^n, and then the unconditional cdf of Y follows from (4) as F_Y(y) = 1 − C(θ H̄(y)) / C(θ) (6). Equations (5) and (6) define two Marshall-Olkin Power Series-G (MOPS-G) families under baseline G. They provide a strong motivation for explaining the failure time of any mechanism formed by an unknown number N of identical and independent (parallel or serial) components. The densities of X and Y are obtained by differentiating (5) and (6). We emphasize that these equations can generate many MOPS models. For each baseline G, we can generate ten (2 × 5) associated models from the five discrete distributions in Equation (4). For α = 1, we have the Power Series-G (PS-G) classes under baseline G. The minimum (Y) and maximum (X) statistics can be applied in several series and parallel systems with identical components and have many industrial and biological applications. In parallel systems, the random variable Y models the time at which the first component fails, while X models the breakdown time of the whole system. A dual interpretation can be given for systems with serial components. These random variables are also very useful in oncology. For example, suppose we are studying the recurrence of a certain type of cancerous tumor in an individual after undergoing some kind of treatment. Then the time for the first cell to activate and produce cancer cells can be modeled by the generated distribution of Y, while the disease manifestation (if it occurs only after an unknown number of factors have been active) can be modeled by the generated distribution of X.
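As an illustrative check of this construction, and independent of the MarshallOlkinPSG package described later, the following minimal R sketch simulates X = max{Z_1, ..., Z_N} with Marshall-Olkin Weibull components and a zero-truncated Poisson N (for which C(t) = e^t − 1) and compares the empirical cdf with C(θ H(x))/C(θ). The parameter values are the ones quoted later for the MOTPW density (α = 1.20, θ = 1.50, β = 1.33, λ = 2); the evaluation point x0 is arbitrary.
set.seed(123)
alpha <- 1.2; theta <- 1.5; beta <- 1.33; lambda <- 2
H <- function(x) {                       # Marshall-Olkin Weibull cdf
  G <- 1 - exp(-(lambda * x)^beta)
  G / (1 - (1 - alpha) * (1 - G))
}
rMOW <- function(n) {                    # sampling by inversion of H
  u <- runif(n)
  G <- u * alpha / (1 - u * (1 - alpha))
  (-log(1 - G))^(1 / beta) / lambda
}
rztpois <- function(n, th) {             # zero-truncated Poisson sampler
  x <- rpois(n, th)
  while (any(x == 0)) x[x == 0] <- rpois(sum(x == 0), th)
  x
}
N <- rztpois(5e4, theta)
X <- vapply(N, function(n) max(rMOW(n)), numeric(1))
x0 <- 0.6
c(empirical  = mean(X <= x0),
  analytical = (exp(theta * H(x0)) - 1) / (exp(theta) - 1))
The two numbers agree up to Monte Carlo error, which supports the compounding construction for this special case.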
Four new distributions based on the MOPS construction are introduced for illustrative purposes in Section Four special models.We derive linear representations for the densities of X and Y in Section Expansions.A package in R is presented in Section Numerical evaluation to calculate numerically several mathematical properties for the generated distributions based on the linear representations.General structural properties for the two families are addressed in Section Properties.In Section Estimation, we estimate the parameters for one of the families.We introduce in Section Regression the Marshall-Olkin Truncated Poisson Weibull regression defined from one of the families.In Section Two simulation studies, some simulations examine the accuracy of the maximum likelihood estimates (MLEs) and the quantile residuals (qrs).Two applications prove the utility of our finding in Section Applications.Finally, we offer concluding remarks in Section Conclusions. FOUR SPECIAL MODELS First, consider the zero-truncated Poisson in (4).The cdfs of the Marshall-Olkin Zero-Truncated Poisson-G (MOTP-G) distributions are determined from Equations ( 5) and ( 6) as and The Weibull cdf with scale parameter λ > 0 and shape parameter β > 0 is (for x ≥ 0) Then, the cdf and survival function of the MO-Weibull (MOW) distribution are respectively.By inserting the last two formulae in Equations ( 7) and ( 8) and differentiating the resulting expressions, we obtain the MOTP-Weibull (MOTPW) densities and respectively, where u = u(x) = (λx) β in f X (x) and u = u(y) = (λy) β in f Y (y). Second, consider the geometric distribution in (4).The cdfs of the Marshall-Olkin Geometric-G (MOG-G) classes follow from Equations ( 5) and ( 6) and The Burr XII (BXII) cdf is (for x > 0) where β > 0 and λ > 0 are shape parameters.For λ = 1 and β = 1 in Equation ( 13), we have the log-logistic (LL) and Lomax distributions, respectively.Hence, the cdf and survival function of the Marshall-Olkin Burr XII (MOBXII) distribution are and respectively. By inserting the last two formulae in Equations ( 11) and ( 12) and differentiating the resulting expressions with respect to x and y, respectively, we obtain the MOG-Burr XII (MOGBXII) densities and For the MOTPW and MOGBXII distributions (to the maximum X) referred to (9) and ( 14), some plots of the densities and cumulative functions are displayed in Figures 1 and 2, respectively.The various forms of the densities indicate more flexibility than the parent distributions. We can note increasing, decreasing, and unimodal shapes for the hrf of the MOTPW distribution in Figure 3. Also, we see a slightly different hrf with increasing, decreasing and increasing shape. We now obtain the density of Y when α > 1.By changing the denominator in Equation ( 20), we have Applying expansion (17) in the last equation . .and k = 0, 1, . ..).Using the binomial theorem, we can rewrite F Y (y) as where π i+k≥1 (y) is the exp-G density with power parameter i + k ≥ 1. Equations ( 18), ( 19), ( 21) and ( 22) are the main results of this section.These linear representations have great utility for deriving structural properties of the maximum X and minimum Y from well-known exp-G properties.More than thirty five exp-G models have been studied so far and then it is possible to construct at least three hundred fifty (70 × 5) MOPS-G models with properties determined from those exp-G properties.We can use statistical platforms with ten terms to have precise results. 
NUMERICAL EVALUATION
In order to evaluate the analytical results presented in the previous sections, a package was implemented using the R programming language (R Core Team 2022). The MarshallOlkinPSG package was constructed in a generic way, that is, its most important functions allow generalizations for any baseline G distribution, and even allow the user to specify a zero-truncated PS distribution. The library code can be obtained from GitHub at https://github.com/prdm0/MarshallOlkinPSG. On the library's website (see https://prdm0.github.io/MarshallOlkinPSG) it is possible to find more information on the implemented functions through the documentation and usage examples. To install the package hosted and maintained on GitHub, it is necessary to first install the remotes library. With that prerequisite met, the package MarshallOlkinPSG can be installed as:
# Install the remotes package:
# install.packages("remotes")
remotes::install_github("prdm0/MarshallOlkinPSG", force = TRUE)
The function eq_19() implements Equation (19) and compares it, for example, with the exact MOTPW density in Equation (9). To facilitate the comparison, the function pdf_theorical() implements this density function. By doing help(eq_19) it is possible to access an example of comparison of the two equations. Note that Equation (19) approximates (9) very well when finite sums are taken in applied problems. In other words, the results achieved by the function eq_19() approximate very well those from pdf_theorical(). The function eq_19() also allows any baseline cdf G(x) as an argument. The function eval_plot_moptw() allows validating Equation (19) numerically by means of plots. The true parameters for the MOTPW density are: α = 1.20, θ = 1.50, β = 1.33, and λ = 2. In addition, we require just a few terms in the sums to obtain a reasonable level of precision, as shown in the plots in Figure 5, where six or eight terms provide very accurate approximations.
PROPERTIES
We now provide some mathematical properties of T_s that can be easily utilized in the linear representations of the previous section to find the corresponding properties of X and Y. The nth ordinary moment of T_s can be written as an integral in terms of Q_G(u; τ) = G^{-1}(u; τ), the qf of G. Explicit expressions for several exp-G moments can be determined from (23). The nth incomplete moment of T_s follows from the previous algebra, where the integral can be calculated for the great majority of G distributions. The first incomplete moment m_1(y) is the most important case of (24) to find mean deviations and Lorenz and Bonferroni curves. The moment generating function (mgf) of T_s follows in the same way; the mgfs of exp-G distributions can be determined from Equation (25).
The log-likelihood function for ψ from a random sample x_1, ..., x_n of X is given in Equation (27); a similar development can be conducted for the random variable Y defined from Equation (6) for any baseline G. We can find the MLE ψ̂ by maximizing Equation (27) using the MaxBFGS sub-routine (Ox program), the optim function (R), or PROC NLMIXED (SAS). The AdequacyModel package can also maximize (27) using the PSO (particle swarm optimization) approach, in addition to the quasi-Newton BFGS, Nelder-Mead and simulated-annealing methods, to maximize the log-likelihood function, and it does not require initial values. Details are available at Marinho et al.
(2019) and https://github.com/prdm0/AdequacyModel.These scripts can be executed for a wide range of initial values and may lead to more than one maximum.However, in these cases, we consider the MLEs corresponding to the largest value of the maximum log-likelihood.There are sufficient conditions for the existence of these estimates such as compactness of the parameter space and the concavity of the log-likelihood function, but they can exist even when the conditions are not satisfied.In general, there is no explicit solution for the estimates from maximizing ( 27), but we can establish theoretical conditions on their existence and uniqueness for very special models by examining the ranges of the score components. In a similar manner, we can construct many other regressions based on other MOPS-G distributions defined from Equations ( 5) and ( 6). The log-likelihood function for the vector ψ = (α, θ, η T 1 , η T 2 ) T from the MOTPW regression can be reduced to We obtain the MOTPW distribution for λ i = λ and β i = β. Let ψ be the MLE of ψ.Equation ( 29) can also be maximized using the gamlss regression framework (Stasinopoulos & Rigby 2008) in R. GAUSS M. CORDEIRO et al. A COMPETITIVE FAMILY TO THE BETA AND KUMARASWAMY G TWO SIMULATION STUDIES We perform two simulation studies.The first one examines the accuracy of the MLEs of the parameter estimates in the MOTPW distribution.The second one does the same for the MOTPW regression. The MOTPW distribution First, we evaluate the precision of the estimates in the MOTPW distribution based on 1,000 Monte Carlo simulations using the R software.The simulation procedure follows as: The inverse function Q(u) = F -1 (u) comes from ( 7) Generate u ∼ U(0, 1) and obtain the values x = Q(u) of the MOTPW distribution. For each scenario and value of n, one thousand samples are generated from the MOTPW regression fitted to each generated data set.The quantities reported in Table II are in good agreement with the asymptotic results for the MLEs. Residual analysis We investigate the quantile residuals (qrs) to verity the adequacy of the response distribution to determine outliers in the MOTPW regression.The same approach can be adopted to many other regressions defined from the distributions in ( 5) and ( 6).The qrs are given by (Dunn & Smyth 1996) where Φ(•) is the normal cdf and λ i and β i are defined in Equation ( 28).We consider the same scenarios for the simulations in Section Two Simulation Studies.For each fitted regression, the qrs are calculated from Equation (31).Figures 6, 7, 8, and 9 display QQ plots which show that the empirical distribution of these residuals is close to the standard normal distribution. APPLICATIONS The beta Weibull (BW) and Kumaraswamy Weibull (KwW) distributions have been widely used to fit real data in the last ten years or so.We compare the MOTPW distribution with the BW and KwW distributions since all of them have four parameters.The BW density pioneered by Lee et al. (2007) is where all parameters are positive.The KwW density introduced by Cordeiro & de Castro (2011) has the form where all parameters are positive. Application 1: Hourly dollar wage data The first application refers to hourly dollar wages for n = 534 US workers.These data are obtained from the SemiPar package (Wand et al. 
2005).Table III lists the estimates, standard errors (SEs) in parentheses, and three classical statistics.The lowest values of these measures reveal that the MOTPW is the best model.Next, the likelihood ratio (LR) statistic for comparing the MOTPW and TPW models is 6.159 (p-value < 0.013) which supports the wider distribution. Figure 10a shows the histogram and the estimated MOTPW density.Figure 10b provides the empirical function and estimated MOTPW cdf, thus revealing that this distribution is appropriate for these data.We note that the co-variable d i1 is significant and d i2 is not.So, there is a real difference between normal and chemical diabetes groups in relation to relative weight and no difference between normal and overt diabetes groups to relative weight.The same findings can be seen in Figure 12. The LR statistic to compare the MOTPW and TPW regressions is w = 4.590 (p-value=0.032) that indicates that the fist regression is superior to the second regression to these data in terms of model fitting. The plot of the residuals reported in Figure 11a does not detect outliers and departures from the general assumptions.The worm plot (Buuren & Fredriks 2001) of the residuals in Figure 11b and the QQ plot displayed in Figure 11c show the adequacy of the MOTPW regression for the current data. A graphical comparison from the estimated cdfs in Figure 12 also supports the regression analysis. CONCLUSIONS We define two flexible Marshall-Olkin-Power-Series (MOPS) families of continuous distributions which can be very useful to fit real data.They are obtained by combining the Marshall-Olkin class (Marshall & Olkin 1997) and the power series distribution.Hundreds of continuous distributions can be easily formulated from the two families.We discuss some special distributions and maximum likelihood estimation.We introduce the Marshall-Olkin Truncated Poisson Weibull regression associated with one of the families.Some mathematical properties of these families are presented.We provide a package implemented in R software which can be used to determine numerically some mathematical properties for any distribution in the new families.The utility of the proposed models is proved empirically in two applications. Figure 3 .Figure 4 . Figure 3. Plots of the hrf of the MOTPW model. 20 GAUSS M. CORDEIRO et al.A COMPETITIVE FAMILY TO THE BETA AND KUMARASWAMY G An Acad Bras Cienc (2022) 94(2) e20201972 7 | 20 GAUSS M. CORDEIRO et al.A COMPETITIVE FAMILY TO THE BETA AND KUMARASWAMY G By differentiating F Y (y), the density of Y can be expressed as Figure 5 . Figure 5. Numerical evaluation of (19) with finite sums, where N N N and K K K denote the upper limits of terms in the related sums with the running indices n n n and k k k, respectively. Table I . Simulation results for the MOTPW distribution. Table II . Simulation results for the MOTPW regression. Table IV . Measures for diabetes data. Table V . Results for diabetes data.
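As a companion to the estimation, simulation and application sections, the following R sketch shows one way to maximize a MOTPW log-likelihood with optim and to form the likelihood-ratio statistic used in the applications; it reuses the illustrative dmotpw() and rmotpw() helpers sketched earlier, works on the log scale to keep all parameters positive, and uses arbitrary starting values, so it is not the authors' code.

negloglik <- function(par, x) {
  alpha  <- exp(par[1]); theta <- exp(par[2])   # log scale keeps all parameters positive
  lambda <- exp(par[3]); beta  <- exp(par[4])
  -sum(log(pmax(dmotpw(x, alpha, theta, lambda, beta), 1e-300)))
}

set.seed(1)
x   <- rmotpw(500, alpha = 1.20, theta = 1.50, lambda = 2, beta = 1.33)
fit <- optim(log(c(1, 1, 1, 1)), negloglik, x = x, method = "BFGS", hessian = TRUE)
exp(fit$par)                      # approximate MLEs of (alpha, theta, lambda, beta)
sqrt(diag(solve(fit$hessian)))    # rough standard errors on the log scale

# Likelihood-ratio statistic for nested fits (e.g. MOTPW vs. TPW with alpha = 1):
# w <- 2 * (loglik_full - loglik_reduced);  pchisq(w, df = 1, lower.tail = FALSE)

Running the fit from several starting values, as recommended above, guards against convergence to a local maximum of the log-likelihood.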
Combined effects of cigarette smoking, gene polymorphisms and methylations of tumor suppressor genes on non small cell lung cancer: a hospital-based case-control study in China Background Cigarette smoking is the most established risk factor, and genetic variants and/or gene promoter methylations are also considered to play an essential role in development of lung cancer, but the pathogenesis of lung cancer is still unclear. Methods We collected the data of 150 cases and 150 age-matched and sex-matched controls on a Hospital-Based Case-Control Study in China. Face to face interviews were conducted using a standardized questionnaire. Gene polymorphism and methylation status were measured by RFLP-PCR and MSP, respectively. Logistic regressive model was used to estimate the odds ratios (OR) for different levels of exposure. Results After adjusted age and other potential confounding factors, smoking was still main risk factor and significantly increased 3.70-fold greater risk of NSCLC as compared with nonsmokers, and the ORs across increasing levels of pack years were 1, 3.54, 3.65 and 7.76, which the general dose-response trend was confirmed. Our striking findings were that the risk increased 5.16, 8.28 and 4.10-fold, respectively, for NSCLC with promoter hypermethylation of the p16, DAPK or RARβ gene in smokers with CYP1A1 variants, and the higher risk significantly increased in smokers with null GSTM1 and the OR was 17.84 for NSCLC with p16 promoter hypermethylation, 17.41 for DAPK, and 8.18 for RARβ in smokers with null GSTM1 compared with controls (all p < 0.01). Conclusion Our study suggests the strong combined effects of cigarette smoke, CYP1A1 and GSTM1 Polymorphisms, hypermethylations of p16, DAPK and RARβ promoters in NSCLC, implying complex pathogenesis of NSCLC should be given top priority in future research. Background Lung cancer kills over one million people each year all over the world, and it is a major public health problem as the leading cause of cancer death in men and second leading cause in women [1]. The two major forms of lung cancer are non-small cell lung cancer (NSCLC, about 85% of all lung cancer) which includes squamous cell carcinoma, adenocarcinoma and large cell carcinoma, and small-cell lung cancer (SCLC, about 15%) [2]. Lung cancer mortality has increased rapidly during recent years in Asian countries as the use of tobacco products is increasing [3]. About 80-90% of lung cancers are attributable to cigarette smoking, and an estimated 20% of all lung cancers are caused by a combination of environmental and/or genetic factors [4], but inter-individual differences in carcinogen metabolism may play an essential role in the initiation and progression of this environmental cancer and affect individual susceptibility to lung cancer [5,6]. Cigarette tobacco contains a variety of carcinogens, such as polycyclic aromatic hydrocarbons(PAHs), N-nitrosoamines, and aromatic heterocyclic amines [7]. PAHs are metabolized to reactive DNA binding diols epoxides by phase I (e.g. CYP1A1) and detoxified by phase II (e.g. GSTM1) before targeting DNA. It is possible that individual variations in metabolic activities in each phase or both phases of metabolism coordinately modulate the clearance of DNA [8]. Many studies have reported that polymorphism in CYP1A1 as well as in GSTM1, or combination effect of both, have been associated with different types of cancer risk including human lung cancer [9]. 
It is now recognized that not only genetic mechanisms, such as gross chromosomal alterations or single nucleotide mutations, but also aberrant DNA methylation provides one or both of the two hits postulated in Knudson's two hit hypothesis for the inactivation of tumor suppressor genes. Many studies have indicated that aberrant methylation of the promoter causes transcriptional silencing of some important suppressor genes, such as cell cycle gene p16, apoptosis gene DAPK, cell differentiation and proliferation gene RARb, DNA repair gene MGMT, and this has been implicated in the carcinogenic process in human lung cancer [4]. Furthermore, methylation has been described as an early event in lung tumorigenesis and variation in methylation status has been associated with cigarette smoke exposure [10,11]. In addition, only a relative small study has examined the relationship between polymorphisms in XRCC1, GSTM1, GSTP1, NQO1, and MPO and aberrant methylation of p16, RARb and MGMT in lung cancer [6]. Those result suggested that GSTP1 and NQO1 variations increased the risk of MGMT methylation, and the possibility of p16 and RARb methylations was increased for XRCC1 and MPO gene polymorphisms, indicating the interactions between gene polymorphisms and aberrant methylation of tumor suppress genes. Above facts led us hypothesize that major metabolic enzyme gene genetic polymorphisms and environmental factors, such as cigarette smoking and diet habits, may interact during the hypermethylations of tumor suppressor gene (TSG) promoters in the carcinogenesis of NSCLC. So, the present study have mainly investigated the association between cigarette smoking, polymorphisms of CYP1A1 and GSTM1 genes, hypermethylations of p16, DAPK and RARb gene promoters in NSCLC. were selected from patients newly diagnosed with diseases other than cancer and chronic respiratory diseases or from individuals receiving routine medical examinations at the same hospitals. There were no significant difference of mean age between cases and controls (59.81 ± 9.18 vs 59.91 ± 8.71 years). There were 125 males and 25 females in cases or controls group. This study was approved by the Ethical Committee of Anhui Medical University and conducted in accordance with the recommendations outlined in the Declaration of Helsinki, and all subjects provided written informed consent. Exposure to environmental factors Trained interviewers used a structured questionnaire to interview each subject face to face when the subjects agreed to take part in this study and underwent medical examination. The questionnaire mainly included questions on demographic factors, smoking history (duration and daily consumption of cigarettes), consumption of alcohol, tea drinking and dietary factors (i.e. intake of peppery and/or fruit), family history of cancer in first relatives (i.e., parents, siblings and offspring), and clinical features of lung cancer and complete medical history. Smoking habit was defined as smoking more than 1 cigarette a day for at least 1 year, or more than 360 cigarettes a year. Pack years were calculated by multiplying the number of packs of cigarettes smoked a day by the number of years the person had smoked. Alcohol habit was defined as drinking more than twice a week, consumption of more than 50 ml of heavy liquor or 500 ml of beer on each occasion. Tea habit was defined as drinking tea at least one time a day for at least 1 year. The servings of peppery or fruit was defined to intake more than twice a week. 
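The exposure definitions above translate directly into derived variables; the short R sketch below uses hypothetical column names (cigs_per_day, years_smoked) and illustrative pack-year cut points, since the category boundaries used in Table 3 are not restated here.

d$pack_years <- (d$cigs_per_day / 20) * d$years_smoked        # 20 cigarettes per pack
d$smoker     <- as.integer(d$cigs_per_day >= 1 & d$years_smoked >= 1)
d$py_group   <- cut(d$pack_years, breaks = c(-Inf, 0, 20, 40, Inf),
                    labels = c("never", "low", "medium", "high"))   # hypothetical cut points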
DNA extraction and genotyping Cases and controls were asked to provide 5 ml peripheral venous blood. This was separated in two aliquots of 1 ml serum and in two aliquots of buffy coats and stored at -20°C. Genomic DNA was extracted from the buffy coats using QIA Gen Blood Kit according to the manufacture's instructions (Qiagen Methylation status of the promoter region of the p16, DAPK and/or RARb was determined by MSP described by Zochbauer-Muller et al. [12]. Two sets of primers were designed, one specific for DNA methylated at the promoter region of each gene and the other specific for unmethylated DNA (Table 2). Amplification was carried out on ABI 9600 Thermal Cycler. Data analysis To determine the association between each of the test genes and lung cancer, the homozygous (AA or aa genotype) and heterozygous (Aa genotype) states of the variants were first analyzed as categorical variables, and then reanalyzed as dichotomized variables grouped by the risk genotype (i.e., 0 for the wild type homozygous, and 1 for the other genotypes combined). To evaluate the effects of combined genotypes, environmental factors either together or separately, subjects were categorized into homozygous wild type, and possession of one or more of the risk genotypes (heterozygous + homozygous for the variant). Compared with the wild type genotype, the odds ratio (OR) and 95% confidence interval (CI) of the various genotypes was calculated for lung cancer risks in univariate analysis model. Multivariate logistic regression was conducted to estimate the relationship between smoking, polymorphisms of metabolic enzyme genes and methylation inactivate of tumor suppressor genes in NSCLC after adjusted the potential confound factors. SAS software (version 9.1; SAS Institute, Inc.) was used for statistical analysis, using the x 2 and Fisher's exact test for differences between groups and t tests between means. All tests were two-sided, and a p value of <0.05 for any test or models was considered statistically significant. Results The ORs of major risk factors among cased and controls are shown in Table 3. After adjusting for potential confounders, there were no significant differences between the cases and controls in alcohol habit, tea habit, dust exposure (≥1 month/year), toxin exposure (≥1 month/ year), and the family history of lung cancer among first relatives of patients. Genotype frequencies for CYP1A1 and GSTM1 are calculated, which these distributions are consistent with the Hardy-Weinberg equilibrium model. In the control group, the allele frequency for MspI was 0.30 (a), whereas that for lung cancer group was 0.29. A non-significant difference was observed between cases and controls. In addition, 53% of controls and 63% of cases were homozygous for null variant allele of GSTM1. No significant associations between the variants of CYP1A1 or GSTM1 and lung cancer. However, significant associations were also found between lung cancer and the follow variables: smoking habit, pack years, peppery (servings, > 2 times/week), and fruit (servings, > 2 times/week) ( Table 3). Table 1 Summary of primer sequences, annealing temperatures and PCR product sizes used for CYP1A1 (MspI) and GSTM1 Gene Primer°C bp Reverse 5'-GAAGAGCCAAGGACAGGTA-3' This study confirmed smoking was the main risk factor of lung cancer, and increased 3.70 times greater risk of NSCLC compared with nonsmoker. 
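The adjusted odds ratios described in the Data analysis section can be obtained with an unconditional logistic model; the sketch below uses a hypothetical data frame d with 0/1 case status and the dichotomized genotype and lifestyle variables, so all variable names are illustrative rather than taken from the study dataset.

fit <- glm(case ~ smoker + cyp1a1_variant + gstm1_null + age + sex + fruit + peppery,
           family = binomial, data = d)
round(exp(cbind(OR = coef(fit), confint.default(fit))), 2)   # adjusted ORs with Wald 95% CIs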
Further, the OR of NSCLC increased with higher categories of total smoking pack year, from 3.54 in the second category to 7.76 in the fourth category ( Table 3). ORs of the three higher categories were all statistically significant. After adjustment for the potential confounding factors in the multivariate analysis models, ORs in each category of smoking pack years increased, and CIs became wider, but the general dose-response trend was maintained (Table 3). Interestingly, we found the preventive effects of peppery or fruit servings on lung cancer, and OR was 0.35 (95%CI, 0.16-0.76) and 0.16 (95%CI, 0.06-0.43), respectively. This study suggested non-significant association of variants of CYP1A1 and GSTM1 with NSCLC alone or in combination. However, the risk increased about 4-fold in smokers with CYP1A1 variants as compared with CYP1A1 wild homozygous non-smokers and 7-fold when smokers having null GSTM1 were compared with power GSTM1 non-smokers. These results can imply the interactions of smoking and the genetic variants of CYP1A1 and GSTM1 in NSCLC (Table 4). We used MSP to determine the frequency of methylation of p16, DAPK and RARb in 150 resected NSCLCs, which was 48.67%, 58.67% and 60.00%, respectively. In the corresponding nonmalignant lung tissues, it was seen at low frequencies for p16 (9.93%), DAPK (9.93%) and RARb (17.02%). Those indicated the significant difference between lung cancer tissues and nonmalignant lung tissue in methylations of three genes. In addition, we found that at least one of these three genes had methylation in 85.33% of the tumors; 26% of the tumors had only one gene methylated, 36.67% of the tumors had two genes methylated and 22.67% of the tumors had three genes methylated. A statistically significant corrrlation was found for the methylation status between p16 and DAPK (p = 0.0006), whereas the methylation status of the other genes was independent when compared with each other. Although no association was apparent among the CYP1A1 or GSTM1 polymorphisms and p16, DAPK or RARb promoter methylation, GSTM1 null genotype was significantly associated with at least one methylation among p16, DAPK and RARb genes (OR, 1.67; 95% CI, 1.01-2.77) (no data shown). Table 5 presents OR estimates for smoking habits, pack years, diet habits, family history of lung cancer, and polymorphisms of CYP1A1 and GSTM1 as compared with controls according to the cases with or without promoter hypermethylation of the p16, DAPK or RARb gene. Obviously, smoking habits increased the risk of NSCLC with promoter hypermethylation of the p16, DAPK or RARb, which OR is 4.56, 3.83, 3.11, respectively. As the amount of pack years increased, the risk of NSCLC with promoter hypermethylation of the p16, DAPK or RARb gene was greater, indicating a graded positive association between both. The results may also imply the interaction between cigarette smoking and promoter hypermethylation of the p16, DAPK or RARb gene in NSCLC. In addition, a possible association was found between null GSTM1 and NSCLC with promoter hypermethylation of the DAPK or RARb gene, implying effect of GSTM1 polymorphism on the aberrant methylations of TSG in lung cancer. Of note, higher consumption of fruit was associated with lower risk of NSCLC with or without promoter hypermethylation of the p16, DAPK or RARb gene (no data shown) ( Table 5). 
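The graded dose-response across pack-year categories reported above can be checked with a simple test for trend in proportions; the counts below are hypothetical and only illustrate the call.

cases  <- c(20, 35, 40, 55)   # hypothetical NSCLC cases per increasing pack-year category
totals <- c(95, 70, 70, 80)   # hypothetical cases + controls per category
prop.trend.test(cases, totals, score = 1:4)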
Based on above results, Table 6 considers the interaction between smoking habits, polymorphisms of CYP1A1 and GSTM1 variants in NSCLC with or without promoter hypermethylations of the p16, DAPK or RARb gene as compared with controls. We didn't found the interaction between CYP1A1 polymorphisms and GSTM1 variant in NSCLC with or without promoter hypermethylation of the p16, DAPK or RARb gene. Nevertheless, as compared with controls, the risk increased 5.16, 8.28 and 4.10-fold, respectively, for NSCLC with promoter hypermethylation of the p16, DAPK or RARb gene in smokers with CYP1A1 variants (Aa+aa). Strikingly, the risk strongly increased in smokers with null GSTM1, and the OR was 17.84 for NSCLC with p16 promoter hypermethylation, 17.41 for DAPK, and 8.18 for RARb in smokers with null GSTM1 compared with controls. In contrast, the smokers with null GSTM1 have lower risk for NSCLC without TSG promoter hypermethylation. To a certain extent, these results are in agreement with a previous multiplicative model for risk combination between smoking habits and metabolic enzyme gene polymorphisms analyzed when the cases were not stratified by TSG methylation status. These results may further confirm the interactions Table 6 Interactions between cigarette smoking and the genetic variants of CYP1A1 and GSTM1 in non-small cell lung cancer with or without promoter hypermethylations of the p16, DAPK and RARb genes between smoking, genetic variant of CYP1A1 and GSTM1, and promoter hypermethylation of the p16, DAPK or RARb gene in NSCLC (Table 6). Discussion Many epidemiologic studies have demonstrated cigarette smoking is the major risk factor of lung cancer [13][14][15], with a obvious dose-response relationship [16]. Our findings (OR = 3.70, p < 0.01) supported these results unquestionably. There are more than 4000 chemical materials in cigarette smoking, and approximately 200 may be carcinogens, such as aromatic hydrocarbons, which have proved to cause lung carcinogenesis, and increasing mortality from lung cancer is closely associated with the consumption of tobacco [14]. Although the majority of lung cancer patients are smokers, only 10-15% of all smokers will develop the disease [17], indicating environmental or genetic determinants in disease initiation, promotion and progression. Since many carcinogens require metabolic activation via phase I enzymes to enable to react with cellular macromolecules or metabolic detoxification via phase II enzymes to enable to eliminate from body, inter-individual differences in carcinogen metabolism may play a key role in environmental cancers [4,6]. The most frequently studied phase I and II enzymes include CYP1A1 and GSTM1. Studies from Japanese populations first found an association between CYP1A1 and polymorphisms and risk of lung cancer, with reports of >2-fold increased risk [18]. In a pooled analysis using data from 22 studies, a significant 2.4-fold increased in risk was observed in individuals carrying the MspI variant [19]. In addition, GSTM1 occurs in the null form in~50% of the Caucasian population. One of the first meta-analyses showed a modest increase in lung cancer among carriers of the GSTM1null genotype (OR = 1.13, 95%CI 1.04-1.25) [20]. 
The most recent and large meta-analysis [9] of Chinese population found that lung cancer risk for CYP1A1 variant was 1.34-fold (95%CI 1.08-1.67, p = 0.008) compared with the wild-type homozygous genotype, and the risk for the GSTM1 null genotype was 1.54-fold (95%CI 1.31-1.80, p < 0.001) as compared with the GSTM1 present genotype. A recent pooled analysis also suggested that genetic polymorphisms in CYP1A1 and GSTM1 are associated with lung cancer risk among Asian populations [3]. Few studies have researched gene-gene interactions in lung cancer. An early study from Japan [18] reported the combined effects of CYP1A1 MspI genotype and deficient GSTM1 in lung cancer (OR = 16.00), but only at a low-dose level of cigarette smoking. Also, another analysis indicated a possible interaction between the CYP1A1*2A allele and GSTM1 deletion on lung cancer risk in Caucasians [21]. However, as other studies have reported conflicting results for CYP1A1 and GSTM1 polymorphisms in lung cancer [4,6], our study found neither significant risk of lung cancer for CYP1A1 variants or GSTM1 null genotypes nor possible combination effects of CYP1A1 and GSTM1 polymorphisms in the development of lung cancer. The majority of epidemiological studies on the effects of low penetrant genes in cancer etiology have considered main effects single nucleotide polymorphisms, or gene-environment interactions and rarely gene-gene interactions, mainly duo to the lack of statistical power [22]. Most observed associations between cancer and low penetrant gene variants have been weak or very weak [21]. However, penetrance of a gene variant depends on events such as the interaction with external exposures, with the internal environment or with other factors (e.g., gene promoter methylation). In the present study, the significant interaction between cigarette smoking and CY1A1 or GSTM1 variants is consistent with the results of previous pooled analysis that the stronger association between the CYP1A1 MspI or GSTM1 null and lung cancer was found among smokers [22], but a non significant elevated risk of interaction between GSTM1 null genotype and lung cancer was reported among Asian by Benhamou and co-workers [23]. Cigarette smoking is known to be causally related to BPDE-DNA adducts that is elevated in the lung tissue of smokers with GSTM1 null genotype, which was found to induce mutations in the hotspot codons of the p53 gene [3,24]. Thus, we speculated that the interaction between CYP1A1 or GSTM1 polymorphisms and lung cancer is related to polycyclic aromatic hydrocarbons exposure derived from smoking because polycyclic aromatic hydrocarbons are primarily metabolized by CYP1A1 and GSTM1. The greater effects observed among smokers support the smoking-related etiology of lung cancer in Chinese population. It is now recognized that not only the inherited variation in DNA sequence (e.g. gene mutations) but also the epigenetic events, such as aberrant DNA methylatoin, both play an essential role in the origination and development of lung cancer. The most widely studied epigenetic event in relation to lung cancer included the promoter hypermethylation of p16, DAPK or RARb gene [4,6]. Our findings reported the percentage for p16, DAPK or RARb methylated was the 48.67%, 58.67% and 60.00% in the tumor tissues of patients with lung cancer, respectively. 
Those results were separately a little greater than other findings that p16 is methylated in~25-41% of NSCLC, DAPK in 16-44% and RARb in 40-43% [25,26], which the differences may mainly result from ethnic variants. The study examined the relationship between polymorphisms in CYP1A1 and GSTM1 and aberrant methylation of p16, DAPK and RARb in lung cancer. It is the first to found GSTM1 null was associated with at least one methylation of p16, DAPK and RARb gene promoters (OR = 1.67, 95% CI 1.01-2.77), supporting interaction between metabolic enzyme gene polymorphisms and hypermethylation of tumor suppressor genes in development of NSCLC [27,28]. Also, data from our unconditional logistic models is the first to show that tobacco smoke play dominant roles in NSCLC with hypermethylation of p16, DAPK or RARb promoter, but not without hypermethylation of those gene promoters. As the amount of cigarette smoking increased, the risk of NSCLC with p16, DAPK or RARb promoter hypermethylation increased. To our knowledge, we have first reported the interactions between smoking and polymorphisms of CYP1A1 and GSTM1 gene were significantly modified by hypermethylation of p16, DAPK or RARb promoter in NSCLC, indicating the combined effects of smoking, CYP1A1, GSTM1, p16, DAPK and RARb gene on development of NSCLC. The findings suggest that smoking related biological pathways leading to the development of lung cancer involve not only hypermethylations of p16, DAPK and RARb promoters but also genetic polymorphisms of CYP1A1 and GSTM1 genes. Although it is unclear that environmental factors underlie the targeting of specific gene promoters for hypermethylation, the characterization of gene-environment interaction and epigenetic influences in carcinogenesis is of great importance for preventive measures such as the setting of exposure threshold values, public health campaigns and chemopreventive approaches. Those all need to be further confirmed and thoroughly studied in different populations. This study has some strengths and limitations. This is first study on the interaction between cigarette smoking and the polymorphisms of CYP1A1 or GSTM1 for NSCLC with hypermethylations of p16, DAPK and RARb promoters, which carefully controlled for important confounding factors. The selective bias was mostly controlled by the design of a hospital-based case-control study. As other case-control studies, this study raises concern about recall bias and residual confounding. Of course, the major difficult is still the inability to separate exposures to factors prior to clinical onset from exposures to factors after clinical onset. In conclusion, this study confirmed that cigarette smoking is significantly associated with higher risk of NSCLC having hypermethylation of p16, DAPK or RARb promoter, and a general dose-response trend was confirmed. A striking finding was that the interactions between smoking and polymorphism of CYP1A1 or GSTM1 gene increased significantly greater risk of NSCLC with hypermethylation of p16, DAPK or RARb promoter, suggesting complex pathogenesis of NSCLC should be given top priority in future research.
Phenotypic analysis of peripheral B cell populations during Mycobacterium tuberculosis infection and disease Background Mycobacterium tuberculosis (Mtb) remains an unresolved threat resulting in great annual loss of life. The role of B cells during the protective immunity to Mtb is still unclear. B cells have been described as effector cells in addition to their role as antibody producing cells during disease. Here we aim to identify and characterize the frequency of peripheral B-cell subpopulations during active Tuberculosis and over treatment response. Analysis were done for both class switched (CS) and non-class switched (NCS) phenotypes. Methods We recruited participants with active untreated pulmonary Tuberculosis, other lung diseases and healthy community controls. All groups were followed up for one week from recruitment and the TB cases till the end of treatment (month 6). Results Peripheral blood samples were collected, stained with monoclonal antibodies to CD19+ cells, Immunoglobulin (Ig) M, plasma cells (CD 138+), marker of memory (CD27+), immune activation (CD23+) and acquired on a flow cytometer. Circulating Marginal zone B cells (CD19+IgM+CD23−CD27+) and memory phenotypes are able to distinguish between TB diagnosis and end of treatment. The frequency of mature B cells from TB cases are lower than that of other-lung diseases at diagnosis. A subpopulation of activated memory B cells (CD19+IgM+CD23+CD27+) cells are present at the end of TB treatment. Conclusions This study identified distinctive B cell subpopulations present during active TB disease and other lung disease conditions. These cell populations warrants further examination in larger studies as it may be informative as cell markers or as effectors/regulators in TB disease or TB treatment response. Background Tuberculosis (TB), remains an unresolved threat that is responsible for great mortality and morbidity in humans. Its causative agent, Mycobacterium tuberculosis (Mtb), was ultimately responsible for 9 million newly reported cases and 1.5 million deaths during 2013 [1]. Although great progress has been made on T cell based tuberculosis research, it is imperative that new avenues have to be explored and that previously underappreciated cell types are re-evaluated for their roles during the tuberculosis infection with the expectation of bringing an end to the epidemic. It is commonly accepted that B cell and antibody-mediated responses confers protection against extracellular pathogens and that the regulation and control of intracellular organisms are through cellular immune mechanisms. There is increasing evidence that demonstrate B cells functioning as mediators (in both effector and regulatory roles) of immunity outside of their classically designated profession as the facilitators of humoral immunity. B cell activation by Toll-like receptor (TLR) antigens or whole organisms (like BCG or Mtb) can lead to a range of outcomes to the host, either by producing antibody, secreting cytokines (including interleukin (IL)-6, IL-10, and interferon (IFN)-gamma) or presenting antigen to naïve T cells [2][3][4][5]. B cell responses are beneficial to the host during infections and damaging during autoimmune disease. Conversely, B cells have the capacity to limit the hosts defence (inflammatory response) against pathogens and shield against autoimmune pathologies. 
This demonstrates that B cells can have distinct roles as drivers and regulators of immunity depending on the functional properties they gain following receptor activation and differentiation. Although ongoing studies and literature supports the functional role of B cells during TB [6], the respective change in the frequency of the circulating B cell repertoire during active Mtb infection remains a topic for discussion as some studies report either a significant decrease [7] or increase [8] of peripheral blood B cell populations in actively infected patients. Immuno-phenotyping has proven to be a very useful tool in the identification, monitoring and management of various clinical diseases [9][10][11]. Although recent publications have sought to develop in depth multicolour flow cytometric panels for the accurate delineation of various lymphocyte populations and subpopulations (including B cells) during immunodeficiencies [12,13], very few studies exist that specifically assess immunephenotypic change during active Mycobacterium tuberculosis infection [7,14]. Little is known about the immune-phenotypic change of the B cell lineage during active Mtb as current literature largely focuses on the general B cell presence (primarily looking at CD19 + B cells only) [7,14], rather than on an in-depth analysis of various populations and subpopulations. This results in a lack of knowledge pertaining to changes in B cell populations implicated in effector roles such as circulating memory B cells or plasma populations. It also does not elucidate the current activation state of B cells nor the expression of surface molecules, thus highlighting the need for further investigation regarding this matter. In this brief preliminary report, a total of 96 participant samples spanning three groups (tuberculosisactive infection; 52 samples, other-lung disease; 24 samples, and healthy community controls; 20 samples) and various time points relating to treatment were used to assess the B cell repertoire in detail with the hope of identifying unique phenotypic differences between the groups that could suffice as biomarkers of disease. The primary contribution of this data would be to map the phenotypic distribution of B cells between these groups with a vast range, as it would include phenotypes for both IgM + and IgM − B cells. The actual isotype linked to the IgM − phenotypes have not been determined for this study. Patients This study was done in the Western Cape Province of South Africa where a 2003 report showed that the TB detection rate was 678/100,000 population [15]. Previous publications have defined the characteristics associated with the economically depressed and disadvantaged metropolitan population of homogeneous ethnicity primarily found in the Western Cape Province, presenting high incidence rates of Mtb and transmission [16,17]. The study participants included all attendees of the Infectious Diseases Clinics at Tygerberg Hospital, all community members of surrounding areas including Ravensmead, Uitsig, Adriaanse, Elsiesriver and local health care clinics. A total of 96 HIV negative participants, spanning three groups, were recruited for this study. All the study participants were also negative for Hep B. 
Fifty-two had active drug susceptible tuberculosis disease (on standard TB treatment regime) (TB disease status was confirmed by two separate positive sputum smear tests and a PCR for DNA of bacteria of the Mtb complex, by utilising the GeneXpert platform), 20 were healthy community controls (Mtb culture negative, Quantiferon test positive, therefor assuming latently infected with Mtb) and the third group consisted of 24 other-lung disease (OLD) patients. These OLD patients were all TB and HIV negative and presented with at least one of the following: a) febrile illness with chest symptoms, b) radiographic evidence of viral or bacterial pneumonia, c) bronchiectasis with acute exacerbation, or d) acute exacerbation of asthma or COPD (chronic obstructive pulmonary disease). The OLD patients were not on any steroid treatment at the time on recruitment. Table 1 summarizes the demographic data of study participants. B cell phenotyping Peripheral blood samples from the two control groups and patients were collected on various scheduled visits which included diagnoses, day 7 on treatment and at week 24 (end of TB treatment). White blood cells were obtained by subjecting each sample to a red blood cell lysing step using BD FACSlyse solution (BD Bioscience Pharmingen -San Jose, CA, USA). Leukocytes were stained following a standard procedure with anti-human CD3 (APC/Cy7, clone HIT3a), anti-human CD4 (PerCP/ Cy5.5, clone OKT4), anti-human CD8a (Brilliant Violet 510, clone RPA-T8), anti-human CD19 (PE/Cy7, clone HIB19), anti-human CD23 (FITC, clone EBVCS-5), antihuman CD27 (PE, clone M-T271), anti-human CD138 (APC, clone DL-101) and anti-human IgM (Brilliant Violet, clone MHM-88). All antibodies were purchased from BioLegend (San Diego, California, United States of America). A total of 100,000 lymphocytes/sample were acquired on a FACS Canto II (BD Biosciences). All post acquisition analysis was done with FlowJo Software v10 (Tree Star Inc.) and the frequencies of the parent populations determined (Fig. 1). The assessed phenotypes ( Fig. 1 showing the gating strategy using an treated TB case) were defined as follows: (1) Statistical analysis Differences in the frequency of B cell subsets between the groups were analysed using the non-parametric analysis with a Mann-Whitney correction and performed by Dr Justin Harvey (Stellenbosch University). All analysis were performed with the Statistica 12 software (Statsoft, Ohio, USA). Circulating marginal zone B cells and memory phenotypes distinguish between TB diagnosis and end of treatment We firstly aimed to identify phenotypes that were significantly different at diagnosis and the end of treatment (week 24). In Fig. 2 it is seen that the CD27 high memory B cells (CD19 + IgM + CD27 ++ ), p = 0.02880, and plasmablasts B cells (CD19 + IgM + CD138 + CD27 + ), p = 0.00389, populations were significantly higher at diagnosis as when compared to levels at the end of treatment. Sebina et al. [18] observed that UK donors who previously lived in, or visited areas denoted as highly TB-endemic had higher frequencies of memory B cells in their peripheral blood as compared to their counterparts who have not. This study also reported that BCG elicited the production of long-lived mycobacteria-specific memory B cells [18], which supports the notion of high observed frequencies of memory B cells at diagnosis as BCG vaccination is a common practice at birth in South Africa. 
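A minimal R sketch of the non-parametric comparisons described in the Statistical analysis section, using a hypothetical per-participant data frame b with columns group, visit and subset frequencies (all names illustrative, not the study dataset):

dx  <- subset(b, group == "TB" & visit == "diagnosis")$memory_freq
eot <- subset(b, group == "TB" & visit == "week24")$memory_freq
wilcox.test(dx, eot)                                   # diagnosis vs. end of treatment

wilcox.test(mz_freq ~ group,                           # TB vs. other lung disease at diagnosis
            data = subset(b, visit == "diagnosis" & group %in% c("TB", "OLD")))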
It has been shown that the maintenance of memory B cells are dependent on the presence of antigen, and that they are lost within 10-12 weeks following its absence [19]. Although this is opposite to other publications stating the longevity of B cells [18,[20][21][22], it corresponds to the observed result of lower memory B cell frequencies at the end of treatment. The high numbers of TB-specific (or BCG-specific, or mycobacteriaspecific) memory B cells might be maintained in this population due to the continual exposure to antigen (either M.tb exposure in this high prevalence setting, or environmental or non-tuberculous mycobacterial exposure). It is reasonable that the decrease of memory B cells towards the end of treatment don't relate to a complete loss of memory B cells, but rather represents an overall decrease in line with the reduction of bacterial burden. Figure 3 shows that memory based B cell phenotypes were significant in the CS cohort as well with memory B cells (CD19 + IgM − CD27 + ); p = 0.01398, plasmablasts B cells (CD19 + IgM − CD138 + CD27 + ); p = 0.00968 and plasmablasts with memory phenotype a (CD19 + IgM − CD138 + CD27 ++ ); p = 0.03616. This continual significance in both NCS and CS phenotypes strengthens their importance as distinguishing factors. Circulating Marginal zone B cells (CD19 + CD27 + CD23 − ) were also significant in both NCS (p = 0.04680) and CS (p = 0.02138) analyses and between groups over time (Fig. 4). Together these results warrants further research into these phenotypes as potential biomarkers for treatment response. Circulating marginal zone-and Mature B cells can distinguish TB from other-lung diseases at diagnosis In the attempt to identify phenotypes that were unique to tuberculosis when compared to other-lung based diseases, two results showed to be significant. The first was NCS marginal zone (MZ) B cells (CD19 + IgM + CD27 + CD23 − ) with p = 0.02092 and secondly CS mature B cells were significant with p = 0.00026 (Fig. 4). With both of these phenotypes significantly lower in peripheral circulation during active TB disease (especially the CS mature B cells), it raises the question whether TB actively suppresses the B cell repertoire during disease. The NCS mature B cell repertoire does not recover to baseline levels during the first week of treatment as one would expect (in line with chemotherapeutic treatment alleviating bacterial burden). These findings support the notion that there is a possible underlying mechanism exploited by Mtb that could be crucial to the management of the infection as both MZ and mature B cells are implicated in effector functions of the adaptive immune system, as seen with the overexpression of programmed death 1 (PD-1) on lymphocyte frequencies during active TB infection [23]. Class switched and non-class switched mature B cells distinguish between tuberculosis, other-lung based diseases and healthy controls Class switched mature B cells from participants with active TB are present in significantly different levels when compared to other-lung diseases (p = 0.043711) and the healthy control group (p = 0.024488) (Fig. 5). Mature B cells were not only able to distinguish between these groups in the class switched category, but also in the non-class switched category where the difference between TB and OLD was highly significant (p = 0.000016) and between TB and the healthy control group where p = 0.025939 (not shown)). 
An interesting observation is that the class switched mature B cells from TB is present at higher frequencies when compared to the OLD group (Fig. 5), but that the inverse is observed with non-class switched mature B cells where they are present in significantly lower levels when compared to the OLD group (not shown). This would suggest that a strong preference is displayed towards (Fig. 6). This result identifies another combination of markers that could be used to distinguish TB from other lung based diseases and should be further investigated as a measure in the early diagnosis of TB. Conclusion This pilot study identified unique variations in the B cell repertoire during active tuberculosis infection when compared to healthy controls, other-lung based diseases and over the course of TB treatment. The first observation of memory-based phenotypes being the major distinguishers between diagnosis and end of treatment in both class switched and non-class switched phenotypes holds promise as markers for treatment response. The second important finding of this study is that circulating marginal zone B cells could not only distinguish between TB diagnosis and the end of treatment, but also has significantly different frequencies when compared to other-lung based diseases making it a candidate as biomarker for not only treatment response, but distinguishing active TB disease from other-lung based diseases. The observation that NCS and CS mature B cells could best distinguish between TB and the two control groups (healthy controls and other lung diseases) at diagnosis, but that their respective peripheral frequencies are present at an inverse level. Taken together, these results show that mainly B cell phenotypes implicated in activation and subsequent effector functions are influenced by TB and warrants further research to confirm their potential as biomarkers for TB disease and treatment response. These
Epigenetic modulation of the drug resistance genes MGMT, ABCB1 and ABCG2 in glioblastoma multiforme Background Resistance of the highly aggressive glioblastoma multiforme (GBM) to drug therapy is a major clinical problem resulting in a poor patient’s prognosis. Beside promoter methylation of the O 6 -methylguanine-DNA-methyltransferase (MGMT) gene the efflux transporters ABCB1 and ABCG2 have been suggested as pivotal factors contributing to drug resistance, but the methylation of ABCB1 and ABCG2 has not been assessed before in GBM. Methods Therefore, we evaluated the proportion and prognostic significance of promoter methylation of MGMT, ABCB1 and ABCG2 in 64 GBM patient samples using pyrosequencing technology. Further, the single nucleotide polymorphisms MGMT C-56 T (rs16906252), ABCB1 C3435T (rs1045642) and ABCG2 C421A (rs2231142) were determined using the restriction fragment length polymorphism method (RFLP). To study a correlation between promoter methylation and gene expression, we analyzed MGMT, ABCB1 and ABCG2 expression in 20 glioblastoma and 7 non-neoplastic brain samples. Results Despite a significantly increased MGMT and ABCB1 promoter methylation in GBM tissue, multivariate regression analysis revealed no significant association between overall survival of glioblastoma patients and MGMT or ABCB1 promoter methylation. However, a significant negative correlation between promoter methylation and expression could be identified for MGMT but not for ABCB1 and ABCG2. Furthermore, MGMT promoter methylation was significantly associated with the genotypes of the MGMT C-56 T polymorphism showing a higher methylation level in the T allele bearing GBM. Conclusions In summary, the data of this study confirm the previous published relation of MGMT promoter methylation and gene expression, but argue for no pivotal role of MGMT, ABCB1 and ABCG2 promoter methylation in GBM patients’ survival. Background Glioblastoma multiforme (GBM) is still the most frequent primary brain tumor in adults and is characterized by a highly aggressive phenotype [1]. Despite advances in therapy, glioblastoma remains associated with poor prognosis and an overall survival time of about 1 year [2]. A major underlying factor is resistance to different chemotherapeutics. Several chromosomal, genetic and epigenetic alterations were identified in GBM [3], but the clinical value of the most glioma-associated molecular aberrations remained unclear [4]. However, a significant prognostic impact could be shown for the O 6 -methylguanine-DNA-methyltransferase (MGMT). The MGMT functions as a DNA repair enzyme, which repairs alkylating lesions of the DNA by removing mutagenic adducts from the O6 position of guanine, e.g. caused by the chemotherapeutic agent temozolomide [5]. Hence, it confers drug resistance and the therapeutic response to alkylating agents is improved in tumor cells expressing low levels of MGMT [5]. Furthermore, MGMT promoter methylation was demonstrated to result in decreased MGMT expression and correlates with a survival benefit in glioblastoma patients treated with alkylating chemotherapeutics such as temozolomide [6]. Expression and activity of the efflux transporters ABCB1 and ABCG2 have also been suggested as pivotal factors contributing to drug resistance by increasing the efflux of chemotherapeutic compounds in the setting termed "multidrug resistance". 
These ATP-binding cassette transporters (ABC transporters) belong to a superfamily of membrane pumps that use ATP hydrolysis to efflux various endogenous compounds and drugs outside the cell. ABCB1 was shown to be expressed both in low-grade glioma and high-grade glioma such as glioblastoma [7] and ABCG2 was found to be expressed in glioma stem cells as well as in endothelial cells of the large vessels of glioma tissue [5]. For both ABCB1 and ABCG2 an inverse correlation between the methylation status of Cytidine phosphate Guanosine (CpG) sites at the promoter region and the transporter expression was demonstrated [8,9]. Furthermore, ABCB1 promoter methylation is associated with the ABCB1 C3435T polymorphism which again influences the ABCB1 expression [10]. Similarly, for ABCG2 an association of the ABCG2 C421A polymorphism with both the transport function and expression of the efflux transporter was shown [11,12]. ABCB1 and ABCG2 promoter methylation have not been assessed in glioblastoma patients before. We therefore investigated promoter methylation of ABCB1 and ABCG2 in 64 glioblastoma patients using the pyrosequencing technology, which allows unequivocal quantification of the methylation status, and used MGMT promoter methylation as positive control. In our study we found a significantly increased MGMT and ABCB1 promoter methylation in GBM tissue but couldn't demonstrate any association of MGMT, ABCB1 or ABCG2 promoter methylation with overall survival of glioblastoma patients in multivariate Cox models adjusted for potential risk factors (gender and age) and stratified on the variable therapy (temozolomide vs. no temozolomide). However, we found a significant negative correlation between MGMT promoter methylation and MGMT expression and a significant association between MGMT methylation and the MGMT C-56 T polymorphism. Patient samples Malignant glioblastoma samples (n = 64) were obtained from patients who had undergone tumor resection at the Clinic of Neurosurgery of the University of Greifswald, Germany. Tumor samples were collected between 2003 and 2009 from patients with newly diagnosed glioblastoma who had received no antitumoral therapy before sample collection. Additionally, relapses of 17 of these patients were collected. For investigation of methylation status, fresh frozen human glioblastoma tissue samples (n = 4) and paraffin-embedded glioblastoma sections (n = 60) were analyzed by pyrosequencing, which is described as a highly reproducible method for quantification of MGMT methylation in both formalin-fixed paraffinembedded and fresh frozen samples [13,14]. Samples from 11 of the 64 GBM patients have been available for mRNA expression analysis and 9 further GBMs have been added to investigate the mRNA expression in a total of 20 GBM patients. All tumor samples were histologically classified by a neuropathologist at the Department of Pathology of the University of Greifswald according to the WHO criteria of tumors of the nervous system using formalin-fixed, paraffin-embedded specimens. Clinico-pathological features of the analyzed patients are summarized in Table 1. All investigations described in this study were approved by the Ethics Committee of the University of Greifswald, Germany. DNA Isolation Genomic DNA (gDNA) was isolated from fresh frozen tumor samples or formalin-fixed, paraffin-embedded glioblastoma sections using the NucleoSpin® Tissue Kit (Macherey-Nagel, Düren, Germany) according to the manufacturer's instructions. 
2-5 slices à 5 μm of the formalin-fixed, paraffin-embedded glioma tissue sections were used per sample. Concentrations of the isolated genomic DNA were determined using a NanoDrop 1000 Spectrophotometer (PEQLAB, Erlangen, Germany). Bisulfite Treatment and PCR Amplification For evaluation of the promoter methylation status of MGMT, ABCB1 and ABCG2 1800 ng of the isolated gDNA per sample were bisulfite treated using the EpiTect® Bisulfite Kit (Qiagen, Hilden, Germany) following the manufacturer's instructions. The bisulfite treated DNA was subjected to PCR amplification of the specific promoter regions of MGMT, ABCB1 and ABCG2 gene by the use of primer sets designed to amplify sequences containing CpG sites to be investigated ( Table 2). The detailed conditions for the PCR amplification of the promoter region of interest are summarized in the Additional file 1 with the Figures S1-S3. Pyrosequencing for promoter methylation analysis Pyrosequencing analysis was performed on the PSQ™ 96MA System (Biotage, Uppsala, Sweden). Methylation of target CpGs was assessed by determining the ratio of cytosine to thymine incorporated during pyrosequencing. Cytosine incorporation indicated a methylated CpG and thymine incorporation an unmethylated CpG. Quantification of the methylation status was performed using the provided software from PSQ™ 96MA System (Biotage, Uppsala, Sweden). Five CpG methylation sites were investigated for MGMT promoter methylation, two for ABCB1 promoter methylation and three for ABCG2 promoter methylation. The average percentage methylation of the different CpG sites of each gene promoter was calculated and used in all analyses. During the establishing process of the methylation assays, the analytical sensitivity and quantitative accuracy of the three methylation assays have been assessed. We correlated the methylation results for the first CpG site of ABCB1 (Additional file 1: Table S1A), ABCG2 (Additional file 1: Table S1B) and MGMT (Additional file 1: Table S1C) methylation assays of three independent measurements. These same 19 samples measured in triplicates determined a high quantitative accuracy of the assays with high significant (*** p < 0.001) Spearman correlation coefficients between 0.88 and 0.99 (Additional file 1: Tables S1A-C). Methylation-specific PCR (MSP) 1.8 μg DNA has been bisulfite-converted using the EpiTect® Bisulfite Kit (Qiagen, Hilden, Germany). 2 μl of the bisulfite-converted DNA was amplified in a PCR consisting of 20 pmol of primers (Eurofins MWG Operon, Ebersberg, Germany), 1.25 mM MgCl 2 , 10x Reaction buffer, 1.5 units Taq-Polymerase and 200 μM dNTPs (all Invitrogen, Karlsruhe, Germany). The thermal cycling conditions used were as follows: 95°C for 10 min, and 40 cycles of 95°C for 45 sec, 52°C for 50 sec, 72°C for 1 min with a final extension of 72°C for 10 min. Two μl of the amplified first-round product was used for second round of amplification with 20 pmol of primers (Eurofins MWG Operon, Ebersberg, Germany), 1.25 mM MgCl 2 , 10x Reaction buffer, 1.5 units Taq-Polymerase and 200 μM dNTPs (all Invitrogen, Karlsruhe, Germany). The following thermal cycling conditions were followed: 95°C for 10 min, and 20 cycles of 95°C for 45 sec, 65°C for 25 sec, 72°C for 30 sec with a final extension of 72°C for 10 min. The amplified products were run on a 2% agarose gel with an expected size of 81 bp for methylated product and 93 bp for an unmethylated product. 
We analyzed the agarose gel bands using the KODAK Gel Logic 200 Imaging System (Eastman Kodak Company, Rochester, NY, USA) (Additional file 1: Figure S8). Our corresponding pyrosequencing results for MGMT are included in Additional file 1: Table S2. To validate the performance of the MSP conditions chosen, methylated and unmethylated standard samples provided from the EpiTect PCR Control Set (Qiagen, Hilden, Germany) have been used as controls which showed the expected bands only in either the methylated or unmethylated PCR (Additional file 1: Figure S8). However, beside U87MG glioblastoma cells as a methylated reference [15] and LN18 glioblastoma cells, we chose a spectrum of differently methylated GBM samples of the pyrosequencing analysis: two strong, two middle and two unmethylated GBM samples for assay comparison. Even though it is difficult to directly compare the qualitative method of MSP with the quantitative method of pyrosequencing, it is still visible, that those three glioblastoma samples (GBM1, GBM3, and GBM6) with the most intensive methylated bands in MSP show in addition to U87MG cells the three highest methylation percentages in the pyrosequencing analysis (28.2%, 61.21%, and 74.74%), indicating more or less comparable results of both methylation detection methods. Quantitative Real-Time PCR Total RNA was isolated from 20 human fresh frozen glioblastoma samples and 7 normal brain tissue samples (frontal/temporal lobes) using the PeqGold RNAPure™ reagent protocol (Peqlab Biotechnologie, Erlangen, Germany), which allows (based on the guanidinisothiocyanat) the dissociation of cells and inactivation of RNases and other enzymes at the same time. The provider of RNAPure guarantees optimal purity and high rates of yields of non-degraded RNA. Subsequently, RNA was measured photometrically at the wavelength of 260 nm using the Nano Drop™ 1000 Spectrophotometer from PEQLAB (Erlangen) to get information about the purity. 1 μl of each sample was applied. Beside the concentration of the RNA, indicated in μg/μl, the purity ratios 260/280 and 260/230 were determined. It was proven, that the purity ratio (260/280) of our samples accounts for 1.8 to 2.0 (2.2 for the ratio 260/230). RNA was further always placed on ice to avoid degradation and long-time storing of the RNA was performed at −80°C. 500 ng of total RNA were used for cDNA synthesis with the High Capacity cDNA Reverse Transcription Kit (Applied Biosystems, Foster City, CA) in a 20 μl reaction volume. Real-time PCR was performed with 10 ng final concentration of cDNA using the ABI Prism 7900 Sequence Detection System (Applied Biosystems, Foster City, CA). cDNA was amplified using Assays on Demand for MGMT (Hs01037698_m1), ABCB1 (Hs00184491_m1), and ABCG2 (Hs01053790_m1), all conjugated with fluorochrome 5-carboxyfluorescein (FAM), and 18S rRNA (Predeveloped TaqMan Assay Reagent, catalog no.: 4319413E, Applied Biosystems, Foster City, CA) conjugated with fluorochrome VIC (Applied Biosystems). Applied Biosystems guarantee maximum and equivalent amplification efficiency as well as specificity of all TaqMan® Assayson-Demand Gene Expression Products (Application Note, Applied Biosystems: Amplification Efficiency of TaqMan® Assays-On-Demand™ Gene Expression Products). Further, only assays with exon junction spanning probes were selected in order to avoid amplification of contaminating genomic DNA. 
The analysis of the amplification efficiencies of our used PCR assays by measuring a serial dilution of selected cDNA showed a PCR efficiency of about 90% for all assays (Additional file 1: Figure S4A-F) allowing us to analyze the expression of our target genes by the ΔΔC T -method. Thus, quantification was performed with the comparative ΔΔC T -method. For the analysis of the quantitative RT-PCRs using the delta Ct-method we set the expression value of each GBM sample against the mean expression value of all analyzed control brain samples. Thus, the target gene expression in the GBM samples represents a multiple of the target expression in the control brain. In addition to 18S rRNA we further analyzed the gene expression of TBP and GAPDH to validate their suitability as housekeeping genes in our samples. Using commercially available GAPDH and TBP assays (Applied Biosystems), we determined a similar distribution of values in 10 non-malignant brains, 97 GBM samples and 21 astrocytomas validating the expression measurements of MGMT, ABCB1 and ABCG2 based on normalization to the 18S rRNA content of our samples, as seen in the Additional file 1: Figure S7. Analysis of genetic variants All patients were screened for MGMT C-56 T (rs16906252), ABCB1 C3435T (rs1045642) and ABCG2 C421A (rs2231142) gene polymorphisms using the polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP) method using the primers listed in Table 3. The detailed conditions for PCR-RFLP are described in the Additional file 1. mRNA expression of the markers CD133, GFAP and PECAM in glioblastoma samples To assess the content of tumor cells and endothelial cells we decided to measure GFAP as a marker of astrocytic cells, CD133 as marker for glioblastoma stem-like cells and PECAM (CD31) as endothelial marker in the glioblastoma and non-malignant brain tissue. The CD133, GFAP and PECAM expression in non-malignant brain, glioblastomas (GBM) and the glioblastoma cell line LN18 is shown in Additional file 1: Figure S6.1. The expression of CD133 is significantly elevated in GBMs compared to non-malignant brain samples, showing that glioma stem-like cells are probably more common in the tumors than in healthy brain. These findings support that most of the cells analysed in our GBM samples represent tumor cells [16]. Besides, GFAP and PECAM expression greatly vary between the glioblastoma samples, but are not significantly different to the non-malignant brain, indicating a similar number of astrocytes and especially endothelial cells in the tumor tissue. Thus, our findings of an altered methylation status in GBM compared to non-malignant brain are mostly based on tumor cells instead of endothelial cells. Furthermore, we correlated the expression data of GFAP, CD133 and PECAM with MGMT, ABCB1 or ABCG2 expression. MGMT, ABCB1 and ABCG2 did not significantly correlate with either GFAP, CD133 or PECAM gene expression (Additional file 1: Figures S6.2, S6.3 and S6.4) except the slight, but significant correlation of ABCG2 and PECAM (Spearman's r = 0.494, p = 0.037, Additional file 1: Figure S6.4C), which may be due to the known localization of ABCG2 in endothelial cells of the blood-brain and the blood-tumor barrier. 
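The comparative ΔΔCT calculation described above (normalization to 18S rRNA and referencing each GBM sample to the mean of the non-malignant brain samples) amounts to the following arithmetic; the CT values in this sketch are invented placeholders used only to illustrate the method.

```python
import numpy as np

# Hypothetical raw C_T values (target gene and 18S rRNA) -- not study data.
ct_target_controls = np.array([29.5, 30.1, 29.8])   # non-malignant brains
ct_18s_controls    = np.array([12.0, 12.2, 11.9])
ct_target_gbm      = 33.0                            # one GBM sample
ct_18s_gbm         = 12.1

# delta Ct = Ct(target) - Ct(reference), per sample
dct_controls = ct_target_controls - ct_18s_controls
dct_gbm      = ct_target_gbm - ct_18s_gbm

# delta-delta Ct relative to the mean of the control brains
ddct = dct_gbm - dct_controls.mean()

# fold change of the GBM sample relative to control brain (2^-ddCt)
fold_change = 2.0 ** (-ddct)
print("fold change vs. control brain: %.2f" % fold_change)
```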
Nevertheless, an exact comparison to or quantification of the tumor cell content in relation to other cell types in the glioblastoma tissue does not seem possible since each individual tumor cell can hold a different pattern of gene expression and thus our expression analysis gives an insight into the tumor in its entirety but not into the individual cells that form the whole tumor mass. Statistical analysis Methylation data were analyzed using the statistical programs SAS V 9.1 (SAS Institute Inc., Cary, NC, USA) and STATA (Intercooled Stata/SE 10.1). Frequencies were calculated for categorical data. Metric data were described using median and interquartile range as well as minimum and maximum values. Spearman correlation, Mann Whitney U test (comparison of two groups), Kruskal Wallis test (comparison of > 2 groups) and Fisher's exact test were used for bivariate comparisons. A p-value of <0.05 was considered to indicate statistical significance. The multivariate Cox proportional hazard regression analysis was used to examine the association between the patient's overall survival and mean methylation of ABCB1, ABCG2 and MGMT, respectively, adjusted for potential risk factors including gender and age at diagnosis. The duration of a patient's overall survival (OS) was defined as the time from the first tumor detection until death or the end of the study (30.6.2009). Patients who were alive at the end of the study were included as censored data into the model. The variable "therapy" (with temozolomide vs. without temozolomide) did not fulfil the assumptions of proportionality and was excluded from the a priori defined model. This variable was used as strata variable instead. All predictors were dummy coded. Hazard ratios and 95% confidence intervals were estimated. In a sensitivity analysis we included (1.) mean percentage of methylation over the respective methylation sites as continuous variable and (2.) every single methylation site separately as continuous variable into the model. Furthermore two different Cox models were analyzed for patients treated with or without temozolomide, respectively. Clinico-pathological features of the analyzed patients The study population comprised 64 patients with glioblastoma multiforme WHO°IV (GBM). For the correlation of the methylation degrees between primary tumors and relapses, 17 relapses of primary glioblastoma multiforme WHO°IV tumors were analyzed in comparison to the respective primary tumor. Clinico-pathological features of all analyzed patients are summarized in Table 1 Methylation status, expression level and overall survival of glioblastoma patients Several studies predict MGMT promoter methylation as an important prognostic factor for clinical outcome of glioblastoma patients treated with temozolomide [6,17]. Therefore, we analyzed five CpG sites in the MGMT promoter, of which four CpG sites have been already investigated in a previous cutting-edge publication in the field [18]. Because MGMT methylation was suggested as a pivotal prognostic factor for OS of glioma patients who were treated with temozolomide [6,17], we established Cox models for all glioblastoma patients, patients treated with temozolomide as well as patients without temozolomide application, respectively. 
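A stratified Cox model of the kind described (mean methylation as a continuous predictor, adjusted for gender and age, stratified on temozolomide therapy) could be set up as in the following sketch using the lifelines package; the data frame, column names and values are hypothetical and indicate only the structure of such an analysis, not the authors' actual SAS/STATA code.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical survival table -- column names and values are illustrative only.
df = pd.DataFrame({
    "os_months":        [14, 9, 22, 6, 18, 11, 30, 7, 16, 25],   # overall survival
    "death":            [1, 1, 0, 1, 1, 1, 0, 1, 1, 1],          # 0 = censored
    "mgmt_methylation": [5.2, 28.1, 61.0, 3.9, 34.7, 12.4, 70.3, 8.8, 22.5, 48.0],
    "age":              [61, 55, 48, 70, 66, 59, 45, 72, 63, 52],
    "male":             [1, 0, 1, 1, 0, 1, 0, 1, 0, 1],
    "temozolomide":     [1, 1, 0, 0, 1, 1, 1, 0, 0, 1],          # stratum variable
})

cph = CoxPHFitter()
# Therapy violated proportional hazards, so it enters as a stratum, not a covariate.
cph.fit(df, duration_col="os_months", event_col="death", strata=["temozolomide"])
cph.print_summary()   # hazard ratios and 95% confidence intervals
```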
Continuous Cox models for the entire glioblastoma patient cohort (with and without temozolomide treated patients together), for the patients treated with temozolomide and for the patients treated without temozolomide did not show any significant overall survival difference dependent on the MGMT methylation level (Table 4). Also Dunn and colleagues used for their studies the method of pyrosequencing, but showed MGMT methylation as an independent prognostic factor associated with prolonged OS [19]. Thus, we analyzed the association of MGMT methylation and OS by dividing the MGMT methylation levels in the subgroups according to Dunn and colleagues by using our cut-off of 5.72% (mean normal brain ± 2 s.d.; first group: methylation level >5.72% -<20%; second group: methylation level >20% -<35%; third group: methylation level >35%, Additional file 1: Figure S5) [19]. However, this analysis displayed no significant difference in OS between the subgroups as well (Kruskal Wallis test, p = 0.9948). Because it is known, that MGMT methylation and expression are tightly linked [20] in the way that MGMT methylation leads to loss of MGMT expression [21], we analyzed this association in a subgroup of 20 GBM patients for which MGMT expression levels have been available. A significant negative correlation between MGMT methylation and expression could be identified (Spearman's rank correlation coefficient: -0.474; p = 0.035; Figure 1A), indicating the downregulation of MGMT expression by methylation [21]. Furthermore, a highly significant elevated MGMT methylation has been detected for 64 GBM patient samples compared to 7 healthy brain samples (Mann Whitney test p < 0.001; Figure 1B). Since ABCB1 represents a multidrug resistance factor in several malignancies, including glioma [7], we additionally investigated the influence of ABCB1 promoter methylation on patients' outcome by using a new established pyrosequencing assay to detect the methylation degree in the ABCB1 promoter. The analysis of the methylation status involved two CpG sites located in the CpG island of the ABCB1 promoter and showed a broad interindividual range in the methylation level in our patient cohort with a median of 27.3% (minimum 1.3%, maximum 85.4%). To investigate whether both CpG sites of the ABCB1 promoter for each person are methylated in the same extent, correlation analysis was performed demonstrating a high correlation of methylation of the two investigated CpG sites (Spearman's rank correlation coefficient: 0.782, p-value <0.001). In relation to the OS of all glioblastoma patients and patients treated with temozolomide no significant association of the ABCB1 methylation status could be detected in a continuous, multivariate Cox model (Table 5). In a cohort of 20 GBM patients, for which ABCB1 expression levels have been available, also no significant correlation between ABCB1 methylation and expression has been detected (Spearman's rank correlation coefficient: 0.242, p = 0.304; Figure 1C). However, the ABCB1 methylation measured in 64 GBM patients was significantly higher than in the controls (Mann Whitney test p = 0.007; Figure 1D), suggesting a different epigenetic regulation in glioblastomas than in healthy brain. A further resistance factor suggested to be relevant in glioma is the efflux transporter ABCG2 [5]. 
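The bivariate statistics used throughout this section (Spearman rank correlations between promoter methylation and expression, Mann-Whitney comparisons of tumor versus normal brain) can be reproduced with scipy as sketched below; the arrays are made-up placeholders showing only how such values would be obtained.

```python
import numpy as np
from scipy import stats

# Hypothetical paired values for a subgroup of patients (not study data).
methylation_pct = np.array([4.1, 12.5, 27.3, 39.8, 55.0, 61.2])   # % methylation
expression_rel  = np.array([3.2,  2.5,  1.4,  0.9,  0.6,  0.4])   # relative mRNA

rho, p_corr = stats.spearmanr(methylation_pct, expression_rel)
print("Spearman rho = %.3f, p = %.3f" % (rho, p_corr))   # negative rho expected

# Group comparison: methylation in tumors vs. non-malignant brain.
methylation_gbm     = np.array([4.1, 12.5, 27.3, 39.8, 55.0, 61.2])
methylation_control = np.array([1.0, 1.8, 2.3, 3.1])

u, p_group = stats.mannwhitneyu(methylation_gbm, methylation_control,
                                alternative="two-sided")
print("Mann-Whitney U = %.1f, p = %.3f" % (u, p_group))
```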
For determination of the ABCG2 promoter methylation a novel pyrosequencing assay was established by our group to analyze three CpG sites that have been previously determined in other tumor entities using methylation specific quantitative PCR and bisulfite genomic sequencing [22,23]. The median ABCG2 promoter methylation status was 30.28% with a broad interindividual range (Min. 3.63%, Max. 83.57%). But for each patient the three investigated ABCG2 CpG sites show a very high correlation in their methylation degree: CpG site 1 and site 2 with a Spearman's rank correlation coefficient of 0.972 (p-value <0.0001), CpG site 1 and site 3 with a Spearman's rank correlation coefficient of 0.953 (p-value <0.0001) and CpG site 2 and site 3 with a Spearman's rank correlation coefficient of 0.970 (p-value <0.0001). In continuous multivariate Cox models for all glioblastoma patients (patients treated with and without temozolomide) no trend for a survival benefit has been detected (Table 6). Furthermore, no correlation of ABCG2 methylation and expression could be identified in a group of 20 GBM patients (Spearman's rank correlation coefficient: -0.170, p = 0.474; Figure 1E) and no significant difference in ABCG2 methylation of GBMs and normal brain has been measured (Mann Whitney test p = 0.051; Figure 1F). As expected, through all multivariate analyses for both the entire glioblastoma cohort and patients treated with temozolomide a significant worse OS for older patients could be identified. Association of the promoter methylation degree with the analyzed Single Nucleotide Polymorphisms (SNPs) Because of a strong association with the MGMT methylation in glioblastoma [24] and further tumors like colorectal carcinoma [25,26], pleural mesothelioma [27], and lung cancer [28], the MGMT C-56 T polymorphism was included in our study. The frequency of the MGMT -56C and -56 T allele was 87.5% and 12.5% in our cohort, respectively, and its distribution was in Hardy-Weinberg equilibrium (p = 0.521). As hypothesized the MGMT promoter methylation degree of the analyzed glioblastoma samples was significantly correlated with the genotypes of the MGMT C-56 T polymorphism (Figure 2A; Wilcoxon test, p-value = 0.02), showing a higher methylation level in patients with the T allele. Regarding the analyzed ABCG2 SNP the frequency of the ABCG2 421C and 421A allele was 89% and 11%, respectively, which is in Hardy-Weinberg equilibrium (p = 0.957) and the frequencies of the ABCB1 alleles 3435C and 3435 T were 38% and 62% in our patient population, respectively, being in Hardy-Weinberg equilibrium with a borderline p-value (p = 0.0503), too. Though the transport function and expression of ABCG2 is known to be influenced by the ABCG2 C421A polymorphism [11,12] and the C3435T polymorphism in exon 26 seems to modulate the expression of ABCB1 [29], we could not determine an association between the different genotypes of the ABCG2 C421A polymorphism and the ABCG2 promoter methylation ( Figure 2C) or for the ABCB1 methylation status and the ABCB1 C3435T polymorphism ( Figure 2B). Correlation of the methylation degrees between primary tumor and relapse To compare the consistency of the methylation degrees before and after treatment, the promoter methylation has been analyzed in 17 primary tumors and relapses of the same patients. 
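The Hardy-Weinberg checks quoted for the three polymorphisms follow from the genotype counts via a simple chi-square test, as in the sketch below; the genotype counts shown are hypothetical and do not correspond to the 64 patients.

```python
from scipy.stats import chi2

def hardy_weinberg_p(n_AA, n_Aa, n_aa):
    """Chi-square test (1 df) of Hardy-Weinberg equilibrium from genotype counts."""
    n = n_AA + n_Aa + n_aa
    p = (2 * n_AA + n_Aa) / (2.0 * n)          # frequency of allele A
    q = 1.0 - p
    expected = [p * p * n, 2 * p * q * n, q * q * n]
    observed = [n_AA, n_Aa, n_aa]
    chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    return chi2.sf(chi_sq, 1)                  # survival function, 1 degree of freedom

# Hypothetical genotype counts for a C/A polymorphism in 64 patients.
print("HWE p-value: %.3f" % hardy_weinberg_p(n_AA=51, n_Aa=12, n_aa=1))
```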
The mean ABCG2 methylation degree of the primary tumors was significantly correlated to the relapses of the respective patients (Spearman's rank correlation coefficient: 0.804, p-value <0.001; Figure 3C) indicating a stable ABCG2 promoter methylation level before and after treatment. While the mean MGMT methylation degree of the primary tumors showed at least a trend to be correlated to the relapses of the same patients (Spearman's rank correlation coefficient: 0.42, p-value = 0.09; Figure 3A), for the ABCB1 methylation status a correlation between primary tumors and relapses was not evident ( Figure 3B). Relationship of the promoter methylation degree with the age at diagnosis and the gender Using bivariate analyses, no significant association with the age at diagnosis or the gender has been detected for MGMT methylation, ABCG2 methylation or ABCB1 methylation (data not shown). Discussion Understanding molecular factors relevant for drug resistance of glioblastoma multiforme is pivotal for the development of personalized therapeutic approaches to this highly aggressive tumor. In several studies the role of MGMT methylation as molecular marker for overall survival of glioma patients treated with alkylating agents is discussed [6,17,30,31]. Beside MGMT, the drug efflux transporters ABCB1 and ABCG2 are thought to affect survival of glioma patients due to their role in drug resistance [32,33]. In particular, temozolomide-mediated cytotoxicity is modulated by ABCB1 expression [34]. However, in contrast to MGMT methylation no data have existed for ABCB1 and ABCG2 promoter methylation in glioblastoma tissue until now. Thus, we focused on establishing new pyrosequencing assays for the analysis of the methylation status of the ABCB1 and the ABCG2 promoter in a collective of 64 glioblastoma patients using MGMT promoter methylation as reference. Methylation status was analyzed using pyrosequencing because it allows a highly reproducible quantification of the methylation degree at each individual CpG site and enables rapid parallel processing of a large number of samples [13]. A pivotal role plays the design of the sequencing primer and the pyrosequencing program to minimize the risk of assaying DNA that was not fully converted during bisulfite treatment [13]. However, because pyrosequencing is based on a PCR, which amplifies the bisulfite treated DNA across different epialleles, and the pyrosequencing displays DNA methylation as an average methylation level at each individual CpG position, it is not possible to provide methylation information on an epiallelic level. Thus, results of pyrosequencing should always be interpreted with caution regarding an epiallelic influence. Compared to pyrosequencing MSP is susceptible to false-positive and false-negative results because of mosaic methylation patterns with variable grade of methylation at the primer positions [13], especially when nested primers are used for clinical samples with small amounts of poor quality DNA like FFPE samples [13,35], which represented the largest proportion of analyzed GBM samples in this study. In addition, Dunn and colleagues described pyrosequencing as suitable method for FFPE samples [19] as well as our fourth tested CpG site of MGMT promoter has been shown as prognostic relevant, while MSP and SQ-MSP for MGMT methylation detection have not been in a Cox model of a recent study [14] and authors recommended pyrosequencing for MGMT methylation analyses in high-throughput settings [36]. 
In general, in previous studies the role of MGMT methylation as molecular marker for overall survival of glioblastoma patients is highly discussed between authors who detected [6,17] or did not find an impact on overall survival [30,31]. We also investigated the previously by Esteller and colleagues published predicting CpG sites [18] but we could not determine a significantly different overall survival of GBM patients (with or without temozolomide treatment) in dependence on their MGMT promoter methylation status. Because this result is contradictory to prior publications about MGMT methylation as an independent prognostic factor [6,17,19], we additionally investigated different aspects of the MGMT promoter methylation to prove the reliability of our methylation data. Thus, we performed a correlation analysis of MGMT promoter methylation and MGMT expression in a subgroup of 20 GBM patients for which MGMT mRNA expression data have been available. A significant negative correlation between MGMT promoter methylation and MGMT expression was seen as already predicted by previous studies [20,21]. Furthermore, we found a highly significant elevation of MGMT promoter methylation in GBMs compared to normal brain. In agreement with a previous study [19] we also detected only a marginal MGMT promoter methylation in non-neoplastic brain samples and a significantly increased MGMT promoter methylation in our GBM. Moreover, we investigated the MGMT C-56 T SNP, because it is located in the enhancer region of the MGMT gene only 18 bp downstream from the analyzed MGMT CpG site. A significantly higher MGMT promoter methylation in carriers of the T allele has been described recently in glioblastoma [24], diffuse large B-cell lymphoma [37], colorectal carcinoma [25,26], pleural mesothelioma [27], and lung cancer [28]. In our patient cohort we could confirm a significant higher MGMT methylation level in patients with the T allele than in C-56C wildtype patients underlining a precise measurement of the MGMT promoter methylation level in our study. Further, this would also imply that patients with the T allele show a minor MGMT expression and thus should have a better response to temozolomide. Contrary to this, we could not find any relation of the C-56 T MGMT polymorphism to overall survival of our patient cohort, again arguing against a fundamental role of MGMT in the prognosis of glioblastoma patients as seen by our MGMT promoter methylation analysis. In addition to MGMT, we studied ABCB1 promoter methylation because ABCB1 is significantly expressed in glioma and discussed as a potential resistance factor [7]. Additionally, for acute lymphocytic leukaemia the methylation of ABCB1 was associated with a trend toward a better OS [38], while in patients with bronchioloalveolar carcinoma no correlation between ABCB1 methylation status and patients' OS was observed [39]. To date, no study analyzing ABCB1 promoter methylation and its relation to ABCB1 expression and OS of glioblastoma patients is reported. Our new established pyrosequencing assay showed a high correlation of the methylation degree of both analyzed CpG sites with each other similar to the ABCG2 methylation assay. Despite a significantly higher ABCB1 methylation in GBM samples of our cohort, the ABCB1 methylation level was not associated with the OS of GBM patients and was not significantly related to the ABCB1 expression. 
Similarly, an ABCB1 promoter hypermethylation was shown in MCF-7 human breast cancer cells [40] and in human prostate cancer compared with benign prostate hypertrophy [41]. Moreover, a significantly higher methylation ratio for the ABCB1 promoter in gastric cancer samples than for non-neoplastic mucosa has been reported [42]. The prostate cancer study detected a significant correlation of ABCB1 promoter hypermethylation with worse clinicopathological features [41]. However, both published ABCB1 methylation studies did not analyze any association with patient's overall survival. A further drug resistance gene we decided to analyze was ABCG2, because this efflux transporter was found to be expressed in glioma stem cells as well as in endothelial cells of the large vessels of glioma tissue. Similarly to ABCB1, ABCG2 could mediate chemotherapeutic resistance by the efflux of cytostatics [5]. In addition, an inverse correlation between promoter methylation of ABCG2 and its expression in lung cancer and multiple myeloma has been determined [9,22]. To establish a pyrosequencing assay for ABCG2 we used a study of Turner and colleagues [22] as reference in order to analyze the same CpG sites in the ABCG2 promoter, because methylation of these CpG sites was shown to be associated with ABCG2 expression in multiple myeloma. Moreover, a recent study investigated the same CpG sites of the ABCG2 promoter showing differences in methylation levels between three renal carcinoma cell lines [23]. In our study, a positive correlation of the ABCG2 methylation level in primary tumor and relapse of the same patient was observed, showing a consistent ABCG2 methylation status before and after treatment with radio-and chemotherapy. Interestingly, no association between ABCG2 promoter methylation and ABCG2 expression or overall survival was seen. The missing effect of ABCG2 methylation on GBM patients' survival could be explained by the fact that temozolomide, which is the most applied cytostatic for patients with GBM, is not a substrate of ABCG2 [43], and thus modulation of ABCG2 expression should not affect the therapy and survival of GBM patients. Furthermore, for each pyrosequencing assay we assessed a limited number of CpGs (five CpGs for MGMT; two CpGs for ABCB1; three CpGs for ABCG2). Thus, there could be the possibility that CpG sites of the methylation assays, which have not been tested in this study, could have a prognostic value for the GBM patients. However, we interrogated CpG sites, which have been tested in parts before in other publications, as the MGMT CpG sites [6,14,18] and the ABCG2 CpG sites [22] or have been specifically described as prognostic relevant such as our investigated CpG site 4 of the MGMT assay [14]. Furthermore, previous authors investigated a comparable number of CpG sites for MGMT [14]. Nevertheless, it may be useful to test also a larger number of CpG sites for the ABCB1 and ABCG2 assays in the future, e.g. using a Human-Methylation450 (HM-450 K) BeadChip [44]. Conclusions In summary, our study represents a combined investigation of promoter methylation and gene polymorphisms of the pivotal drug resistance genes MGMT, ABCB1 and ABCG2 in glioblastoma multiforme. Our data argue against any relevant impact of MGMT, ABCB1 or ABCG2 promoter methylation on overall survival of glioblastoma patients. 
However, we could detect a significant negative correlation between MGMT promoter methylation and MGMT expression, a markedly elevated MGMT and ABCB1 promoter methylation in glioblastoma specimens and a significant correlation between MGMT methylation and the MGMT C-56 T polymorphism. Additional file Additional file 1: PCR amplification of promoter regions of interest. Figure S1. Illustration of the MGMT promoter sequence analyzed by pyrosequencing for determination of the methylation status. Figure S2. Illustration of the ABCB1 promoter sequence analyzed by pyrosequencing for determination of the methylation status. Figure S3. Illustration of the ABCG2 promoter sequence analyzed by pyrosequencing for determination of the methylation status. PCR-RFLP amplification details. Figure S4A-F. Real-Time PCR efficiencies. Figure S5. Grading of MGMT methylation levels according to Dunn et al., 2009. Tables S1A-C. Quantitative accuracy of methylation assays. Figure S7. Comparison of housekeeping genes. Figure S8 and Table S2. Data of Methylation-specific PCR (MSP) for MGMT according to Hegi et al., 2005. SV, and WH performed data analysis. MCO, SBM, KW, IC and HKK wrote or contributed to the writing of the manuscript. All authors read and approved the final manuscript.
2017-06-21T02:29:47.791Z
2013-12-31T00:00:00.000
{ "year": 2013, "sha1": "f4816c4159d95bcb470bc0c2c9a51b26050fe9cb", "oa_license": "CCBY", "oa_url": "https://bmccancer.biomedcentral.com/track/pdf/10.1186/1471-2407-13-617", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f4816c4159d95bcb470bc0c2c9a51b26050fe9cb", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
118475486
pes2o/s2orc
v3-fos-license
Dynamic probes of quantum spin chains with the Dzyaloshinskii-Moriya interaction We consider the spin-1/2 anisotropic XY chain in a transverse (z) field with the Dzyaloshinskii-Moriya interaction directed along z-axis in spin space to examine the effect of the Dzyaloshinskii-Moriya interaction on the zz, xx and yy dynamic structure factors. Using the Jordan-Wigner fermionization approach we analytically calculate the dynamic transverse spin structure factor. It is governed by a two-fermion excitation continuum. We analyze the effect of the Dzyaloshinskii-Moriya interaction on the two-fermion excitation continuum. Other dynamic structure factors which are governed by many-fermion excitations are calculated numerically. We discuss how the Dzyaloshinskii-Moriya interaction manifests itself in the dynamic properties of the quantum spin chain at various fields and temperatures. The analysis of the effect of the Dzyaloshinskii-Moriya interaction on the dynamic quantities of quantum spin chains was reported in several papers [22,23]. In particular in Refs. [22,23] such analysis was performed using the symmetry arguments and field-theoretical methods for the isotropic Heisenberg (XXX) chain. In our study we restrict ourselves to a simpler model, i.e. anisotropic XY chain, the dynamic properties of which are amenable to detailed analytical and numerical analysis. We should also note that Cs 2 CoCl 4 provides a new example of spin- 1 2 XY chain [24] that may renew interest in the calculation of observable quantities for such a chain [25,26,27]. To be specific, we consider N spins one-half governed by the following Hamiltonian H = n J x s x n s x n+1 + J y s y n s y n+1 + n D s x n s y n+1 − s y n s x n+1 + n Ωs z n . (1.1) Here J x , J y are the anisotropic XY exchange interaction constants, D is the z-component of the Dzyaloshinskii-Moriya interaction and Ω is the transverse magnetic field. Such model was introduced in Refs. [28,29]. Moreover, the case of isotropic XY interaction J x = J y (in this case the Dzyaloshinskii-Moriya interaction can be eliminated from the Hamiltonian by a spin axes rotation) was analyzed in some detail [34,35]. The performed analysis is based on the Jordan-Wigner transformation, with J = 1 2 (J x + J y ) and γ = 1 2 (J x − J y ). In our calculations we consider both periodic and open boundary conditions. Of course, in the thermodynamic limit the results for bulk characteristics should be insensitive to the boundary conditions implied. In the former case, i.e. when s α N +1 = s α 1 , the bilinear Fermi form (1.3) is periodic or antiperiodic depending on whether the number of fermions is odd or even. After the Fourier transformation, with u κ = sgn (γ sin κ) It should be noted here that only the energy spectrum Λ κ (1.8) but not the coefficients of the Bogolyubov transformation u κ , v κ (1.6) depends on D. Using (1.8) one immediately finds that the energy spectrum is gapless when γ 2 ≤ D 2 and Ω 2 ≤ J 2 + D 2 − γ 2 or when γ 2 > D 2 and Ω 2 = J 2 . In our numerical calculations we use open boundary conditions [36]. with A nm = Ωδ nm + 1 2 (J + iD) δ m,n+1 + 1 2 (J − iD) δ m,n−1 , B nm = 1 2 γ (δ m,n+1 − δ m,n−1 ). We are not aware of a general analytical solution for this problem. For the two particular cases, namely, the anisotropic XY chain without field and the Ising chain in a transverse field such solutions can be found e.g. in Ref. [37]. In our study we solve Eqs. (1.10) numerically for chains of about a few hundred sites. 
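The Hamiltonian (1.1) and the one-fermion spectrum (1.8) referred to above appear to have lost their layout during text extraction. The LaTeX below restates Eq. (1.1) as it can be read from the garbled fragment; the explicit expression written for Λ_κ is a reconstruction chosen to be consistent with the gapless conditions quoted in the text and should be checked against the original paper.

```latex
% Eq. (1.1): anisotropic XY chain with a z-axis Dzyaloshinskii-Moriya term
% in a transverse field (as read from the extracted text)
H = \sum_{n}\left( J_{x} s^{x}_{n} s^{x}_{n+1} + J_{y} s^{y}_{n} s^{y}_{n+1} \right)
  + \sum_{n} D \left( s^{x}_{n} s^{y}_{n+1} - s^{y}_{n} s^{x}_{n+1} \right)
  + \sum_{n} \Omega\, s^{z}_{n},
\qquad J = \tfrac{1}{2}(J_x + J_y), \quad \gamma = \tfrac{1}{2}(J_x - J_y).

% Reconstructed one-fermion spectrum (Eq. (1.8)); assumed form, consistent with
% the quoted gapless conditions gamma^2 <= D^2, Omega^2 <= J^2 + D^2 - gamma^2:
\Lambda_{\kappa} = D \sin\kappa
  + \sqrt{\left(\Omega + J\cos\kappa\right)^{2} + \gamma^{2}\sin^{2}\kappa}.
```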
The relation between the spin model (1.1) and the noninteracting Jordan-Wigner fermions (1.7) is a key step in the statistical mechanics calculations for one-dimensional spin-1 2 XY systems. In our study of the dynamic properties we focus on the dynamic spin structure factors The dynamic structure factors are of considerable importance since they are directly comparable with inelastic neutron scattering experiments of some quasi-one-dimensional substances. The dynamic transverse spin structure factor S zz (κ, ω) can be easily evaluated analytically (Section 2). The transverse dynamics is governed exclusively by a two-fermion excitation continuum the properties of which in the case of XY chain without the Dzyaloshinskii-Moriya interaction were discussed earlier [38]. Another quantity which is also governed by the two-fermion excitation continuum is the dynamic dimer structure factor [39,40,41]. Therefore, the effect of the Dzyaloshinskii-Moriya interaction on the two-fermion excitation continuum deserves a separate discussion (Section 3). The xx and yy dynamic structure factors are computed numerically (Section 4). We compare and contrast the properties of different dynamic structure factors at different values of the transverse field and temperature emphasizing the effect of the Dzyaloshinskii-Moriya interactions. We end up with conclusions (Section 5). Some preliminary results of this study were announced in the conference paper [42]. Dynamics of transverse spin correlations We start with the zz dynamic structure factor of the model ( after standard calculations using the Wick-Bloch-de Dominicis theorem we arrive at where n κ = (exp (βΛ κ ) + 1) −1 is the Fermi function. This result agrees with the corresponding formula for the transverse dynamic susceptibility χ zz (κ, ω) derived in Ref. [32]. Introducing the function the dynamic structure factor S zz (κ, ω) (2.1) can be expressed as follows In the limit of isotropic XY interaction γ = 0 Eq. (2.3) yields the result obtained earlier [34]. In the limit This coincides with the expression obtained earlier in Ref. [38]. We notice that Eq. (2.4) is more generally valid for the case γ 2 > D 2 (when Λ κ > 0) and T = 0. In the case D 2 > γ 2 and T = 0 or in the most general case of arbitrary D and T > 0 (as well as in the case D = 0 but T > 0 which was not considered in Ref. [38]) the zz dynamic structure factor exhibits new qualitative features in comparison with the analysis reported in Ref. [38]. Again the zz dynamic structure factor is governed exclusively by two-fermion excitations as can be seen from Eq. (2.3), however, for D 2 > γ 2 , T = 0 or for T > 0 all three δ-functions in Eq. (2.3) may come into play. Two-fermion excitation continua The zz dynamic structure factor (2.3) can be rewritten in the form In accordance with (3.1) we distinguish three two-fermion excitation continua (they correspond to j = 1, 2, 3 in Eq. (3.1)) which govern S zz (κ, ω). The gray-scale plots of S We remark that although we were not able to find a simple analytical form of the most important lines in the κ-ω plane characterizing the two-fermion excitation continua for a general case of the spin chain (1.1) it is easy to determine these functions numerically for any set of parameters using MAPLE or/and FORTRAN codes. We also recall that in the simplest case of the isotropic XY model in a transverse field (γ = 0, D = 0) the corresponding results have been derived analytically [43]. 
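The defining expression for the dynamic structure factors introduced above seems to have been dropped during extraction; assuming the usual convention for spin chains, it would read as follows (this is the standard textbook definition, not copied from the paper).

```latex
% Standard definition of the dynamic spin structure factor of a spin chain
% (assumed convention; the equation itself is missing from the extracted text)
S^{\alpha\alpha}(\kappa,\omega) =
  \sum_{n} e^{-i\kappa n} \int_{-\infty}^{+\infty} \mathrm{d}t\; e^{i\omega t}
  \left( \langle s^{\alpha}_{j}(t)\, s^{\alpha}_{j+n}\rangle
       - \langle s^{\alpha}_{j}\rangle \langle s^{\alpha}_{j+n}\rangle \right),
\qquad \alpha = x, y, z.
```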
However, in the case γ = 0, D = 0 the analytical results have been reported only in the limiting cases Ω = 0 or γ = 1 [38]. Of course, the case γ = 0, D = 0 considered in the present paper is even more complicated. In what follows we take a typical set of parameters J = 1, γ = 0.5, D = 1, Ω = 0.5. We begin with the case of infinite temperature T → ∞ (β = 0). The two-fermion dynamic structure factor may have nonzero values in the plane wave-vector κ -frequency ω if the equation has at least one solution κ ⋆ 1 , −π ≤ κ ⋆ 1 < π. In Fig. 6 (panels a (j = 1), b (j = 2), c (j = 3)) we show the regions in the κ-ω plane in which equation (3.5) has four solutions (dark-gray regions), two solutions (gray regions) or has no solutions (white regions). The bounding lines of the regions in which equation (3.5) has solutions constitute the upper (ω (j) u (κ)) and the lower (ω u (κ)) boundaries of the two-fermion excitation continuum, respectively; moreover, for some regions of the wave-vector κ the lower boundaries ω (j) l (κ) may be equal to zero. Alternatively, we may find the upper and the lower boundaries of the twofermion excitation continuum seeking for the maximal and minimal values of We have found that ω l (κ) = 0 occur for the values of κ 1 , κ ⋆ 1 , which satisfy the equation Moreover, Eq. (3.7) also holds along the boundary ω(κ) between the gray and the dark-gray regions in the upper (j = 1) and the middle (j = 2) panels in the left column in Fig. 6. On the other hand, the quantities . Thus, the mentioned boundary lines in the panels in the left column in Fig. 6 are the lines of van Hove singularities akin to the density of states effect in one dimension. Further, we have found that for almost all cases ∂ 2 ∂κ1 2 E (j) (κ 1 , κ) = 0 for the values of κ 1 which satisfy (3.5) with ω = ω (j) s (κ) that obviously implies a familiar square-root divergence when ω approaches ω We illustrate different types of singularities in Fig. 7. In particular, in Fig. 7a one can see the square-root divergencies (3.9), whereas in Fig. 7b aside from the square-root divergences (3.9) (solid and dotted curves) one can also see the dependence (3.10) (dashed curve). We notice that the ǫ − 2 3 singularity remains for other values of D and is also present when D → 0. For J = 1, γ = 0.5, D = 0, Ω = 0.5 it occurs at κ ≈ 1.68213734 while ω approaches ω (2) s (κ ≈ 1.68213734) = 0. Interestingly, this fact could not be detected in the earlier study on the zz dynamics in the anisotropic XY chain in a transverse field without the Dzyaloshinskii-Moriya interaction [38] since that study refers to the zero-temperature case when only the continuum j = 1 manifests itself in the transverse spin dynamics (see discussion after Eq. (2.4)). We also notice that the observation of the ǫ − 2 3 singularity may be difficult because of the fact that this peculiarity takes place only at one specific value of κ (in contrast to the ǫ − 1 2 singularity). However, for the values of wave-vector in the vicinity of this specific value one easily distinguishes a reminiscence of the ǫ − 2 3 singularity (see the dotted curve in Fig. 7b). Finally we note that for some of the lines characterizing the two-fermion excitation continua we can give simple analytical expressions. Thus, for j = 1 the maximum/minimum of E (j) (κ 1 , κ) occurs at κ 1 = 0 and κ 1 = −π and hence the corresponding boundary lines are given by E (1) (0, κ) and E (1) (−π, κ). 
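As noted above, the continuum boundaries are easy to obtain numerically by scanning κ1 for the extrema of E(j)(κ1, κ). The Python sketch below does this for an assumed j = 1 branch built from two one-fermion energies with total momentum κ, using the reconstructed spectrum from the previous section; both the form of Λ_κ and the pair combination are illustrative assumptions, not expressions taken from the paper.

```python
import numpy as np

# Parameters used in the paper's figures.
J, gamma, D, Omega = 1.0, 0.5, 1.0, 0.5

def spectrum(k):
    """Assumed one-fermion spectrum Lambda_k (see the reconstruction above)."""
    return D * np.sin(k) + np.sqrt((Omega + J * np.cos(k)) ** 2
                                   + (gamma * np.sin(k)) ** 2)

def branch_j1(k1, kappa):
    """Assumed j = 1 combination: two fermions with momenta kappa/2 +/- k1,
    i.e. total momentum transfer kappa (illustrative parametrization only)."""
    return spectrum(kappa / 2 + k1) + spectrum(kappa / 2 - k1)

def continuum_bounds(kappa, n_scan=4001):
    """Lower and upper boundary of the assumed branch by brute-force scan over k1."""
    k1 = np.linspace(-np.pi, np.pi, n_scan)
    e = branch_j1(k1, kappa)
    return e.min(), e.max()

for kappa in (0.5, np.pi / 2, np.pi):
    lo, hi = continuum_bounds(kappa)
    print("kappa = %5.3f :  omega_l = %.4f,  omega_u = %.4f" % (kappa, lo, hi))
```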
We did not find simple analytical expressions for the boundary lines between the white and the dark-gray regions and for the nonzero lower boundary (see panel a in Fig. 6). For j = 3 the upper boundary is given by E (3) (0, κ) and E (3) (−π, κ). Finally, we emphasize a role of B (j) -functions (3.2) which are responsible for the specific features of the dynamic transverse spin structure factor S zz (κ, ω). The functions B (j) (κ 1 , κ) modify and add some additional structure to S zz (κ, ω) in the κ-ω plane (compare Fig. 4 and Figs. 5, 6d, 6e, 6f referring to the low-temperature limit and Fig. 3c and Figs. 6a, 6b, 6c referring to the high-temperature limit). In particular, the function B (2) (κ 1 , κ) removes the soft modes at κ = ± (κ + − κ − ) but not at κ = 0 from S zz (κ, ω) (see Fig. 4b). Furthermore, comparing Figs. 4a, 4c and 5a, 5c one sees that van Hove singularities along the lines ω = E (1) (0, κ), ω = E (1) (−π, κ) (panels a) and along the lines To summarize this Section, the two-fermion dynamic structure factors have a nonzero value only in a restricted area of the κ-ω plane (two-fermion excitation continua) and may exhibit the van Hove singularities not only with exponent 1 2 but also with exponent 2 3 . Moreover, at zero temperature the two-fermion dynamic structure factors may exhibit jumps at which their values abruptly increase by a finite value. xx and yy dynamic structure factors In the present Section we calculate the xx and yy dynamic structure factors. For numerical calculations it is convenient to rewrite (1.11) in the following form [36]). In contrast to transverse dynamic structure factor, the xx and yy dynamic structure factors are essentially more complicated quantities within the Jordan-Wigner method. Really, owing to a nonlocal relation between the x and y spin components and Fermi operators (1.2) the xx and yy time-dependent spin correlation functions are expressed through many-particle correlation functions of noninteracting Jordan-Wigner fermions. Let us now discuss the obtained numerical results. First of all we note that both dynamic structure factors S xx (κ, ω) and S yy (κ, ω) show similar behavior for the taken value of γ = 0.5; obviously they become identical in the isotropic limit γ = 0. We start with the dynamic structure factors at low temperatures. As can be seen in Figs. 8 and 9 (and Figs. 12a and 13a) these dynamic structure factors show several washed-out excitation branches which are roughly in correspondence with the characteristic lines of the two-fermion excitation continua (compare three dynamic structure factors in the panel a and the panels b and c in Fig. 14; note that these quantities are shown for κ that varies from −π to 3π). Thus, although S xx (κ, ω) and S yy (κ, ω) are many-particle quantities within the Jordan-Wigner picture and hence they are not restricted to some region in the κ-ω plane, their values outside the two-fermion continua are rather small. This observation (i.e. two-particle features dominate many-particle dynamic quantities S xx (κ, ω) and S yy (κ, ω) at low temperatures) agrees with our previous studies on isotropic XY chains [36,34] (see also Ref. [38]). The constant frequency scans for several values of the wave-vector displayed in Figs. 10,11 show the redistribution of spectral weight S xx (κ, ω) and S yy (κ, ω) as the Dzyaloshinskii-Moriya interaction D increases. We note that these frequency profiles exhibit one or several peaks that may be relatively sharp or broad. 
The Dzyaloshinskii-Moriya interaction affects the positions of the peaks, their shapes and even their number (see, for example, the dependences S xx (0, ω) vs ω and S yy (0, ω) vs ω displayed in Figs. 10a and 11a). Constant frequency (and wave-vector) scans can be obtained for quasi-one-dimensional compounds by neutron scattering or resonance techniques and our findings may be useful in explaining the experimental data for the corresponding materials. As can be seen from our results, the Dzyaloshinskii-Moriya interaction clearly manifests itself in the frequency or wave-vector profiles of the dynamic structure factors that can be used in determining the magnitude of this interaction. It should be remarked that the dynamic structure factors of quantum spin chains are often examined within the framework of a bosonization approach [44,23]. Note, however, that field-theoretical approaches do not apply to small length scales and short time scales when the discreteness of the lattice becomes important. Therefore, since these methods can describe only the low-energy physics, the high-frequency features nicely seen in Figs. 8 and 9 cannot be reproduced by these theories. As temperature increases, the low-temperature structure gradually disappears and the dynamic structure factors S xx (κ, ω) and S yy (κ, ω) become κ-independent in the high-temperature limit (see Figs. 12 and 13). This is in agreement with earlier studies in the infinite temperature limit [31]. To summarize, xx and yy dynamic structure factors at low temperatures are not restricted to the twofermion excitation continua and have (small) nonzero values outside these continua. They exhibit several washed-out excitations concentrated along the characteristic lines of the two-fermion excitation continua. The Dzyaloshinskii-Moriya interaction manifests itself in the constant frequency/wave-vector scans influencing the detailed structure of such profiles. In the high-temperature limit xx and yy dynamic structure factors become κ-independent. Conclusions In this paper, we have obtained the detailed dynamic structure factors S αα (κ, ω), α = x, y, z of the spin- The two-fermion dynamic quantities have nonzero values in a restricted region of the κ-ω plane; they may exhibit van Hove singularities (not only with exponent 1 2 but also with 2 3 ); moreover, they may exhibit finite jumps at zero temperature. The xx and yy dynamic structure factors involve many-fermion excitations. However, the two-fermion excitations dominate their low-temperature behavior: at low temperatures these quantities show several washed-out excitation branches which correspond to specific lines of the two-fermion excitation continua. The Dzyaloshinskii-Moriya interaction clearly manifests itself in the frequency/wave-vector profiles which makes it possible to determine the magnitude of this interaction by measuring the dynamic structure factors. Dynamical structure factors can be measured by neutron scattering. Another experimental technique which yields dynamic quantities is electron spin resonance (ESR). If a static magnetic field along z axis and the electromagnetic wave polarized in α ⊥ z direction are applied to the spin-1 2 anisotropic XY chain with the Dzyaloshinskii-Moriya interaction, the experimentally measurable absorption intensity is given by FIG. 6. 
Towards the properties of the two-fermion excitation continua; J = 1, γ = 0.5, D = 1, Ω = 0.5. Left panels: infinite-temperature limit T → ∞ (β = 0); right panels: zero-temperature limit T = 0 (β → ∞). j = 1 (panels a and d), j = 2 (panels b and e), j = 3 (panels c and f).
2007-12-20T10:33:17.000Z
2006-06-06T00:00:00.000
{ "year": 2007, "sha1": "b208f1c0e74e8400931aee386ba6d8524960d3df", "oa_license": null, "oa_url": "http://arxiv.org/pdf/0712.3361", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "b208f1c0e74e8400931aee386ba6d8524960d3df", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
54503088
pes2o/s2orc
v3-fos-license
Nd:YAG laser in art works restoration Laser cleaning of works of art has a number of advantages over traditional restoration techniques. In this article, the technique used and the physical mechanisms that explain the ablation of the pollutants are described. The results obtained in the cleaning of marble and alabaster statues are presented, as well as in oil-painting restoration; in this last case the Nd:YAG laser is used with successful results. INTRODUCTION The earliest record of the use of lasers in art conservation dates from the beginning of the 1970s (1). From the preliminary research of Asmus (2) up to the present, laser cleaning has been applied to marble and limestone, textiles, tapestries, leather, pottery, colored glass, bronze and aluminum, etc. In all cases the cleaning consists of removing the superficial encrustations with minimum damage to the base material (3). ADVANTAGES OF LASER CLEANING The abrasive, chemical and manual-mechanical cleaning techniques cause environmental pollution and alter the surface profile of the art work (3). Compared with other techniques, and owing to the versatility, accurate control and minimal environmental impact of the laser, laser cleaning permits a selective elimination of dirt without mechanical contact with the surface, preserves the surface relief, and avoids any continued action after the cleaning has finished. Main physical mechanisms In previous work (2) the cleaning mechanisms were related to the characteristics of the laser. Watkins et al. (3) proposed mechanisms involving selective evaporation, photo- and thermal decomposition, and ablation by shock wave in the Q-switched regime. For paintings, excimer lasers have been used (4, 5); this article reports the application of an Nd:YAG laser using an ultraviolet wavelength. Radiation power, pulse length and wavelength make the Nd:YAG laser the most widely used, especially for the cleaning of all kinds of stone. Table I shows the lasers used in this work and their most important features, as well as their specific applications in art cleaning. The samples were treated and analyzed by techniques such as optical microscopy with image digitalization (NEOPHOT-21 microscope) and color photographs. CARRARA MARBLE AND ALABASTER The laser used possesses a high pulse power, which permits large areas to be cleaned with a single pulse. Using free generation (1 ms pulses), the cleaning mechanism is the selective vaporization of the encrustations. For Carrara marble the absorbance of the pollutant layer was 0.6. This strong difference in absorption between the pollutant and the surface at 1,064 nm (0.6 versus 0.2) enables the selective evaporation of the dirt without damaging the surface (3). In the case of alabaster, two pieces were treated: an ornamental vase covered with a thick layer of soot, and a bust of Dante. The high energy and large beam diameter of the laser permitted the bust to be cleaned in a 4 h work session.
Oil paintings Fotakis (4) reported excimer laser cleaning of oil paintings. In the present work two experiments were performed: removal of the grime layer from the back of an old portrait, and removal of the varnish layer and dirt from the painted surface. The first was carried out on an anonymous 18th-century portrait of the Mexican Guadalupe Virgin. The first step in the restoration process is to eliminate the grime layer that covers the back of the canvas before relining it. The cleaning was accomplished without any damage, at a speed higher than that of traditional methods and without using chemicals, thus contributing to the preservation of the portrait (Fig. 1). The second experiment consisted of cleaning the painted surface and was carried out on a Mexican painting from the beginning of the century. The most appropriate regime was achieved at 266 nm with a pulse energy of 100 mJ. At this wavelength the cleaning process relies on non-thermal photoablation of the contaminants owing to their strong absorption. By changing the energy density, the amount of material removed can be controlled. However, the generalization of Nd laser cleaning of paintings may lead to complications similar to the difficulties encountered with excimer lasers (6). REFLECTANCE SPECTROPHOTOMETRY OF THE SURFACE Over the whole measured range (500-2500 nm) the reflectance of the treated surface was higher than that of the untreated area (Fig. 2). These results confirm that the cleaning process can be controlled by real-time feedback reflectance measurements (5). CONCLUSIONS -The cleaning of Carrara marble soiled by environmental pollution and of alabaster soiled by soot was carried out successfully with an Nd laser. -A layer of grime was removed from the back of the canvas of a painting using the Nd laser in Q-switched mode. -The varnish layer and the dirt adhering to the painting can be removed by performing the treatment at the 266 nm wavelength without damaging the art work. Laser cleaning of art works improves the restoration process, since it decreases the damage to the art work and increases the speed of the process. Table II summarizes the lasers, samples and results of the cleaning.
2018-12-02T12:29:22.978Z
2010-01-01T00:00:00.000
{ "year": 1998, "sha1": "40186a4c6d21890d962647716b165186e171a39a", "oa_license": "CCBY", "oa_url": "https://revistademetalurgia.revistas.csic.es/index.php/revistademetalurgia/article/download/668/680/685", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "40186a4c6d21890d962647716b165186e171a39a", "s2fieldsofstudy": [ "Art" ], "extfieldsofstudy": [] }
251562772
pes2o/s2orc
v3-fos-license
Intracranial Calcifications in Systemic Lupus Erythematosus We present an unusual case of a 37-year-old woman diagnosed with systemic lupus erythematosus presenting with right-sided weakness and altered mentation. On computed tomography and magnetic resonance imaging, marked intracranial calcifications were seen. These localized calcifications are speculated to be secondary to the necrotic focus of repeated episodes of vessel inflammation. However, the pathogenesis of cerebral calcifications is largely unknown. Introduction Neurological involvement in systemic lupus erythematosus (SLE) may present either as central nervous system (CNS) or neuropsychiatric symptoms such as cognitive impairment, psychosis, depression, stroke, seizure, movement disorder, or peripheral neuropathy [1]. The most commonly found abnormalities in neuroimaging are cerebral atrophy, infarction, or intracerebral hamartomas [2]. Calcifications of the cortex, basal ganglia, and cerebellum have been reported in very few cases, making it an unusual presentation in SLE. The underlying mechanism of these calcifications in SLE is unknown but may be dystrophic following microinfarctions due to primary vascular damage and ongoing venous inflammation. In the brain, 5.6% of cases had vascular involvement while histopathological findings were suggestive of non-inflammatory vasculopathy with secondary infarcts [3]. The basal ganglia are the most common site for localized calcifications, with lesser involvement of other regions such as the thalamus and cerebellum [4]. Case Presentation A 37-year-old woman was admitted to the hospital with complaints of fever, right-sided weakness of two weeks duration, inability to swallow, and altered confusional state for the past three days. She also complained of facial rash which was exacerbated by sun exposure, hair loss, and inflammatory small joint pains. On examination, she had a typical butterfly-shaped malar rash and non-scarring alopecia. The neurological examination revealed positive Babinski sign, rigidity, and hyperreflexia in both right upper and lower limb (power of 3/5) while the power on the left side was 5/5 according to the medical research council (MRC) scale. She also had gait ataxia with dysarthria. However, there were no meningeal signs or peripheral neuropathy. She had no history of bluish discoloration of fingers, skin tightening, dryness of mouth and eyes, seizures, or psychosis. There was also no history of prior CNS infections or any cranial irradiation. The patient was diagnosed with SLE, six months prior to the current presentation based on the Systemic Lupus International Collaborating Clinics (SLICC 2012) criteria (three clinical and one immunological). Hence, she was taking 5 mg of prednisolone along with 200 mg of hydroxychloroquine daily for SLE. Laboratory investigations ( Table 1) revealed a high titer for antinuclear antibodies (1:320 dilution by enzyme-linked immunosorbent assay (ELISA) and speckled pattern on indirect immunofluorescence test). The anti-double-stranded deoxyribonucleic acid (anti-dsDNA) titers by ELISA were, however, intermediate (65 IU/mL; Normal range: <30 IU/mL). Complement levels (C3: 102 U/mL; normal: 80-120 and C4 38 mg/dl; normal: 15-45 mg/dL) were within normal limits, excluding lupus flare. The antiphospholipid (aPL), anticardiolipin (aCL) antibodies, and lupus anticoagulant (LA) were negative. The complete blood count, liver, kidney functions, and serum electrolytes were within normal limits. 
There was no evidence of proteinuria on urine analysis. She had normal blood calcium, phosphorus, and vitamin D levels. Thyroid, parathyroid function tests, and 24-hour urinary calcium levels were also within normal limits. Plain computed tomography (CT) head revealed calcifications in subcortical white matter, bilateral corona radiata, periventricular region, bilateral basal ganglia, brain stem, dentate nuclei, and cerebellar folia ( Figure 1). The same findings were confirmed in non-contrast magnetic resonance imaging (MRI) brain, which was suggestive of hyperintense basal ganglia and dentate nuclei on T1-weighted images ( Figure 2). Similar blooming was seen in bilateral basal ganglia, dentate nuclei, cerebellar folia, and subcortical and juxtacortical white matter in bilateral cerebral hemispheres. In addition to this, MRI also showed multifocal subacute infarcts in the left thalamus, right internal capsule, anterior body of corpus callosum, periventricular white matter of left temporal lobe, left cerebellar hemisphere, and left cerebral peduncle. Considering the acute neurological event and her increased disease activity score, her dose of steroids was hiked up to 1 mg/kg/day. The patient had considerable improvement in the weakness of the right upper and lower limb (power 5/5) on day seven of follow-up. Discussion SLE is a chronic autoimmune disorder with up to 75% predilection for the nervous system; however, these may not be recognized due to the diverse and varied presentations [6]. Bilateral symmetric intracranial calcification is known to occur in several conditions like hypoparathyroidism, Fahr's disease, anoxic encephalopathy, and idiopathic ferrocalcinosis [7]. Certain hereditary diseases (Cockayne's syndrome, Albright's osteodystrophy, Down's syndrome), intoxications (lead and carbon monoxide), and CNS infections (tuberculous meningitis, herpes, or measles encephalitis) have also been associated with basal ganglia calcification [8]. Our patient did not have any endocrine or electrolyte abnormalities or a family history of any of these diseases. Hence, the above observations suggest that these intracranial calcifications in our patient likely resulted from vasculopathy of SLE. Limited cases of CNS calcifications in SLE have been described in the literature. For the diagnosis of calcification, CT and MRI brain imaging are both valuable radiological techniques. Irrespective of the etiology, calcified deposits display a uniform distribution and this may be attributed to the selective exposure of certain areas of the brain for the deposition of calcium. Since the calcium concentration seems to be higher in the basal ganglia compared to other areas of the brain, it is the most frequent site for localization [9]. Only a few cases of calcifications have been reported in the cerebellum, white matter, and cortex [10,11]. Matsumoto et al. reported a case of intracerebral calcifications and depicted that the calcifications were around the venous vessels in the core of the necrotic area. Their study suggested that the neurotoxic factors exuded from venous vessels result in calcifications [12]. Raymond et al. revealed the hypothesis of primary immunological vascular damage, which triggers microinfarctions with posterior dystrophic calcification [4]. Several SLE patients with calcium deposition have never had neurological manifestations, and no major disparities have been observed in the neurological picture in SLE patients with or without cerebral calcification. 
It is therefore possible that calcium deposition has no direct role in the clinical expression of CNS lupus. The pathogenesis of these findings remains unclear and no hypothesis has been confirmed; the calcifications may result from repeated episodes of venous inflammation with focal immunological demyelination. A probable mechanism is the leakage of proteins with neurotoxic and pro-calcification properties from venous vessels. Conclusions Massive intracranial calcification is rarely seen in SLE, and an association between the neurological events and calcification in specific areas of the brain could not be established in this case or in any of the prior case reports. The mechanism of intracranial calcification in SLE remains unclear, but it should be borne in mind that marked intracranial calcification can be observed in various rheumatological disorders such as SLE, systemic sclerosis, and dermatomyositis. Additional Information Disclosures Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
Energy-Efficient Cluster Head Selection via Quantum Approximate Optimization : This paper proposes an energy-efficient cluster head selection method in the wireless ad hoc network by using a hybrid quantum-classical approach. The wireless ad hoc network is divided into several clusters via cluster head selection, and the performance of the network topology depends on the distribution of these clusters. For an energy-efficient network topology, none of the selected cluster heads should be neighbors. In addition, all the selected cluster heads should have high energy-consumption efficiency. Accordingly, an energy-efficient cluster head selection policy can be defined as a maximum weight independent set (MWIS) formulation. The cluster head selection policy formulated with MWIS is solved by using the quantum approximate optimization algorithm (QAOA), which is a hybrid quantum-classical algorithm. The accuracy of the proposed energy-efficient cluster head selection via QAOA is verified via simulations. Introduction The present era is a turbulent period of technological advancement towards the noisy intermediate-scale quantum (NISQ) era and the 6G era [1,2]. With the development of NISQ devices, various fields, such as quantum communication, quantum machine learning, and quantum optimization, are evolving as components of 6G. In the field of quantum optimization, in particular, various studies have been conducted, based on technologies such as quantum adiabatic algorithm (QAA), variational quantum eigensolver (VQE), and quantum approximate optimization algorithm (QAOA) [3][4][5]. Among them, the QAOA, a special case of bounded-error quantum polynomial time (BQP) algorithm, can be applied to various fields because of its simple structure [6,7]. The structure of QAOA is divided into a parameterized quantum circuit part and a classical optimization part that determines the optimal parameters. Heuristic methods are mainly used in the classical optimization part, and control of hyperparameters is necessary to find the optimal parameter. Therefore, QAOA does not always guarantee quantum supremacy; however, it has the flexibility to adapt to variations of the target problem [8,9]. The main target problem of QAOA research is the maximum cut problem, and various studies have been conducted to empirically optimize this problem [10][11][12][13][14][15]. In addition, QAOA application studies have also been actively conducted [16,17]. Along with these various research attempts, QAOA is expected to be useful as a quantum heuristic optimizer in the near future. The wireless ad hoc network is an important research topic, even in the 6G era, because of its relevance to the internet of things (IoT) and autonomous driving [18][19][20][21][22]. In a wireless ad hoc network using limited resources, it is advantageous to construct a hierarchical network topology through Background For many decades, quantum computing researchers have aimed to find quantum algorithms that can surpass classical algorithms. Various attempts have been made, and a few quantum algorithms with quantum supremacy over classical algorithms in certain cases have been identified [5,27,28]. In the NISQ era, the discovery of potential quantum algorithms has been accelerated because of the evolved quantum processors and interactions with machine learning techniques [1]. 
In particular, QAOA is one of the lightest and most flexible hybrid quantum-classical optimization algorithms in the NISQ era and is suitable for application to various graph-based systems [5,29]. QAOA can control the resource (quantum operations and times) usage-performance trade-off by adjusting the circuit depth, so it can be used in various environments. If the proper quantum hardware is supported, QAOA is also suitable in sensor networks or embedded systems with limited resources and capabilities. Therefore, before discussing a hybrid quantum-classical approach to energy-efficient cluster head selection in limited systems, this section describes the QAOA and a graph-based MWIS formulation. Quantum Approximate Optimization Algorithm (QAOA) QAOA is a hybrid quantum-classical optimization algorithm that uses a parameterized quantum circuit composed of unitary operators [5,30,31]. In addition, the QAOA is a type of quantum heuristic algorithm which is known to perform well in some combinatorial optimization problems. The first step in QAOA is to map the objective function f (x) of the problem consisting of binary bit strings to the Hamiltonian H P , as follows: where H P is a problem Hamiltonian. The problem Hamiltonian H P can be divided into the objective Hamiltonian H O and constraint Hamiltonian H C , as follows: where ρ ∈ R + is a constant coefficient. The Hamiltonians H O and H C represent the objective and constraint, respectively, of the problem. The mixing Hamiltonian H M , a transverse-field Hamiltonian, is defined as follows: where σ x j is a Pauli-X operator applied on the jth qubit. The Pauli-X operator acts similar to the classical NOT operator, that is, it acts as a bit-flip. To construct a quantum circuit, H P and H M are converted into unitary operators, as follows: where γ and β are parameters and where U P (γ) and U M (β) are usually called the problem operator and mixing operator, respectively. In the QAOA circuit, the initial state |s can be a uniform superposition state, as follows: where n ∈ Z + . If the depth of a quantum circuit is defined as p ∈ Z + , the 2p parameters are represented as follows: Therefore, the parameterized state |γ, β , which is generated in a quantum circuit, is as follows: The expectation value of f (x) for the solution samples obtained via repeated measurements on |γ, β is as follows: The optimal parameters γ op and β op can be obtained from the classical optimization loop; therefore, the optimal solution can be computed from (9) via γ op and β op . A schematic of the QAOA is shown in Figure 1. The green part represents the classical optimization loop, for example, the stochastic gradient descent (SGD). Indeed, the optimization of parameters in the classical optimization loop part has a significant impact on the performance of the QAOA. The p value also has a significant impact on the performance of QAOA, and the proper p value is different depending on the problem. When the p value increases, the number of unitary operators increases and, thus, the number of quantum gates in the circuit increases. In other words, the accuracy of the computation can be increased but the gate noise increases as well. Therefore, in QAOA, proper design of the problem Hamiltonian, proper optimization of parameters, and proper setting of the p value are all important. Maximum Weight Independent Set (MWIS) Let us consider a weighted graph G = (V, E) with |V| = n nodes and |E| = m edges. Assume that the weight w j is at node v j ∈ V, where 1 ≤ j ≤ n. 
The independent set can be constructed by selecting only nonadjacent nodes. Among the possible independent sets, the MWIS has the largest sum of weights. In Figure 2b,c, 2 cases represent the following independent sets V 1 ⊂ V and V 2 ⊂ V, respectively: The sums of the weights of each set are as follows: There is no independent set having a sum of weights greater than 58; thus, V 1 is MWIS. Expanding from the example, the generalized formulation of the MWIS is as follows: max : where The MWIS formulation has been applied to various research fields, such as communication, machine learning, and computer vision [32][33][34][35][36]. Although the MWIS is non-deterministic polynomial-time (NP) hard, which requires an approximate solution, it is useful for modeling complex and large-scale structures. Energy-Efficient Cluster Head Selection Clustering is an essential technique for organizing networks more efficiently. In particular, the communication between clusters in a wireless ad hoc network which has a distributed control structure is performed by the cluster heads [26]. Therefore, it is very important to set the cluster head selection policy according to the purpose of the wireless ad hoc network. This section describes the clustering method via cluster head selection for an energy-efficient wireless ad hoc network. Clustering Wireless Ad Hoc Network The wireless ad hoc network is a multi-hop system of self-organizing wireless nodes that can communicate with each other without additional infrastructure [37,38]. Let us consider a wireless ad hoc network, as shown in Figure 3. Figure 3a,b represents the flat ad hoc network topology before clustering and the hierarchical ad hoc network topology after clustering, respectively. The numbers on the nodes indicate the weight of each node. In real-world applications, the weight can be a numerical representation of the feature; for example, the efficiency of energy-consumption, level of security, or robustness [26]. In the graphs covered in this paper, the weight of each node represents the efficiency of energy-consumption. When transmitting the same data, less energy is used and more stable transmission is possible on a node that has high efficiency of energy-consumption. Therefore, it is good to use nodes that have high efficiency of energy-consumption in wireless ad hoc network communication. In the wireless ad hoc network, a hierarchical structure such as that shown in Figure 3b has more advantages than a nonhierarchical structure such as that in Figure 3a for limited resource utilization [23,37]. In the hierarchical structure, all nodes are classified into cluster head, gateway, and normal nodes. The general clustering process in which the roles of the nodes are determined as follows: (i) The cluster head nodes are determined sequentially according to the policy. (ii) The clusters are created by grouping the cluster head nodes and adjacent nodes. (iii) Except for the cluster head node in each cluster, the nodes required for communication with other clusters are determined as gateway nodes. (iv) The remaining nodes that are not required for communication with other clusters are determined as normal nodes. Through this process, 10 clusters are identified in Figure 3b, marked with dotted lines. The completely separated clusters are marked with green dotted lines, and the others are marked with red dotted lines. The network topology can be completely changed according to the cluster head selection policy. 
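To make the MWIS-based head selection concrete, the short sketch below enumerates the independent sets of a small weighted graph and returns the one with the largest total weight; the five-node graph, its weights, and its edges are hypothetical stand-ins rather than the topologies of Figure 2 or Figure 3.

```python
from itertools import combinations

# Hypothetical 5-node weighted graph: weights model energy-consumption
# efficiency, edges model radio adjacency (not the paper's example graphs).
weights = {0: 9, 1: 4, 2: 7, 3: 5, 4: 8}
edges = {(0, 1), (1, 2), (2, 3), (3, 4), (1, 4)}

def is_independent(nodes, edges):
    """True if no two of the given nodes share an edge."""
    return all((a, b) not in edges and (b, a) not in edges
               for a, b in combinations(nodes, 2))

def max_weight_independent_set(weights, edges):
    """Brute-force MWIS: fine for toy graphs, exponential in general."""
    best_set, best_weight = set(), 0.0
    nodes = list(weights)
    for r in range(1, len(nodes) + 1):
        for subset in combinations(nodes, r):
            if is_independent(subset, edges):
                total = sum(weights[v] for v in subset)
                if total > best_weight:
                    best_set, best_weight = set(subset), total
    return best_set, best_weight

heads, total = max_weight_independent_set(weights, edges)
print(f"cluster heads: {sorted(heads)}, total weight: {total}")
# The selected nodes are pairwise non-adjacent, so each can act as a cluster
# head; every remaining node attaches to an adjacent head or gateway.
```

The exhaustive search above corresponds to the brute-force reference optimum used later in Section 4; heuristic alternatives such as the greedy algorithm trade optimality for speed. Which nodes end up as heads, and hence the resulting topology, changes whenever the weights, the adjacency, or the selection rule change.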
In other words, the numbers and distributions of clusters depend on the cluster head selection policy. Therefore, the stability and performance of the network also depend on the cluster head selection. Cluster Head Selection Policy using MWIS As mentioned in Section 3.1, it is recommended to select nodes with high energy-consumption efficiency as cluster heads, for stable data transmission. In order to construct an energy-efficient network topology, the cluster heads should not be neighbors. Therefore, for an energy-efficient and stable network, the cluster head selection policy can be formulated with MWIS. The proposed cluster head selection policy is as follows: where v k = 1, if the kth node is selected as cluster head, 0, otherwise, E k,l = 1, if there is an edge between the kth node and the lth node, 0, otherwise. Note that v j ∈ {1, 0} is a binary decision variable of the jth node, V is the set of binary decision variables of all nodes in the network topology, w j ∈ R + is the energy-consumption efficiency of the jth node, and E is the adjacency matrix. The cluster head selection policy formulated with MWIS can be implemented using heuristic approaches, such as the greedy algorithm [39]. For example, the clusters in Figure 3b are the implementation results of the cluster head selection policy using MWIS. Clustering via the MWIS-based cluster head selection policy has several advantages over minimum ID clustering and maximum degree clustering [26]. One of the advantages of clustering via MWIS-based cluster head selection policy is that it produces fewer completely separated clusters. This can reduce the communication time and energy consumption when transmitting data between cluster heads. Energy-Efficient Cluster Head Selection via QAOA The cluster head selection policy proposed in Section 3.2 can be implemented via QAOA, a hybrid quantum-classical optimization algorithm, by proper Hamiltonian design [5,12]. In other words, the problem Hamiltonian, which represents the proposed policy, can be designed, and based on this, a QAOA circuit can be implemented. This approach with the QAOA circuit can have an advantage in speed over the classical MWIS-based clustering algorithm by rapid quantum computation by the principles of superposition and entanglement. In addition, the performance benefits can be also expected if advanced quantum hardware is supported in the future. Theoretically, QAOA increases the approximation quality corresponding to performance by increasing the circuit depth [5]. In the advanced quantum hardware that is free from the effects of gate noise, the circuit depth can be increased to a greater extent; thus, it can be possible to obtain a more accurate solution for the MWIS-formulated cluster head selection policy. Hamiltonian Design By (2), the problem Hamiltonian is designed by dividing into the objective Hamiltonian and constraint Hamiltonian. To match the optimization directions of the objective Hamiltonian and constraint Hamiltonian, both Hamiltonians are designed to be minimized rather than maximized. The mixing Hamiltonian is redefined with the symbols of (18). The objective Hamiltonian. Suppose that there is a Boolean function, f 1 (x), as follows: To obtain a Boolean Hamiltonian H 1 mapped from f 1 (x), the following equation can be constructed for a single qubit: where I is the identity operator, σ z is the Pauli-Z operator, and A and B are constant coefficients. 
For the same input, the expectation value of H 1 should be adjusted to the same value as the output of f 1 (x). Therefore, the system of equations for obtaining the values of A and B, according to Table 1, is as follows: The values of the constant coefficients A and B are obtained from (23) as 1/2 and −1/2, respectively. Therefore, H 1 mapped from f 1 (x) can be defined as follows, from (22): According to (24), the objective function (18) of the energy-efficient cluster head selection policy is mapped to the following Hamiltonian H O * : where σ z j is the Pauli-Z operator applied on the jth node. Because H O * should be maximized, the objective Hamiltonian H O that should be minimized is as follows: The constraint Hamiltonian. According to the constraint (19), all cases between two nodes are shown in Figure 4. The black nodes represent the cluster heads, and in the energy-efficient cluster head selection policy, it is a prohibition condition that two cluster heads are directly connected via an edge. The constraint function C(k, l) that extends this prohibition condition to the entire network topology can be defined as follows: Note that ∧ represents a Boolean AND operator and that k > l is a condition to avoid duplication of E k,l and E l,k , indicating the same edge. The definition of a Boolean AND function f 2 (x 1 , x 2 ) is as follows: To obtain a Boolean Hamiltonian H 2 mapped from f 2 (x 1 , x 2 ), the following equation can be constructed: where C, D, E, and F are constant coefficients and σ z 1 and σ z 2 are Pauli-Z operators applied on the first and second nodes, respectively. According to Table 2, configured to obtain the Hamiltonian H 2 that has the same expectation value as the output of f 2 (x 1 , x 2 ), the following system of equations can be constructed: The values of the constant coefficients C, D, E, and F are obtained from (30) as 1/4, −1/4, −1/4, and 1/4, respectively. Therefore, H 2 mapped from f 2 (x 1 , x 2 ) can be defined as follows, from (29): According to (31), the constraint function (27) of the energy-efficient cluster head selection policy is mapped to the following Hamiltonian H C * that should be minimized: In (32), σ z k and σ z l are the Pauli-Z operators applied on the kth and lth nodes, respectively. By removing the constant term of (32), the simplified version of the constraint Hamiltonian H C that should be minimized is obtained as follows: The problem Hamiltonian. From the definition of H O in (26) and H C in (33), the problem Hamiltonian H P that should be minimized is defined as follows: Note that ρ ∈ R + is a constant coefficient called the penalty rate. ρ determines the proportion of H C compared to H O , which determines the optimal value of H P . Circuit Implementation From the problem Hamiltonian H P in (34), which defines the energy-efficient cluster head selection policy, and the mixing Hamiltonian H M in (35), which provides various states via bit-flip, the unitary operators for constructing the quantum circuit are defined as follows: Note that U P (γ), U O (γ), U C (γ), and U M (β) are called the problem operator, objective operator, constraint operator, and mixing operator, respectively. In addition, γ and β are the 2p parameters of the QAOA circuit defined in (7). Note that the forms of Equations (37)-(39) are in consideration of the rotation-z (RZ) and rotation-x (RX) gates.
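Before turning to the gate definitions, it may help to write out the classical value that the problem Hamiltonian assigns to a measured bitstring, since this is the quantity the outer optimization loop evaluates on every sample. The sketch below follows H O, H C, and H P as derived above (constant terms dropped from H C are ignored, as they do not change the minimizer); the five-node instance reuses the hypothetical graph from the earlier sketch, and the penalty rate ρ = 20 is likewise an illustrative choice.

```python
import numpy as np

# Same hypothetical 5-node instance as before: node weights w_j and the
# symmetric adjacency matrix E; rho is the penalty rate of H_P.
w = np.array([9.0, 4.0, 7.0, 5.0, 8.0])
E = np.array([[0, 1, 0, 0, 0],
              [1, 0, 1, 0, 1],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 1, 0, 1, 0]])
rho = 20.0  # large enough that violating the adjacency constraint never pays off

def problem_cost(bits, w, E, rho):
    """Classical value of the problem Hamiltonian for a computational-basis state.

    bits[j] = 1 if node j is selected as a cluster head (qubit measured as 1).
    Up to the constant dropped from H_C, the expectation of H_P reduces to
        -sum_j w_j x_j  +  rho * sum_{k>l} E[k, l] x_k x_l,
    i.e. the negated total head weight plus a penalty for adjacent heads.
    """
    x = np.asarray(bits, dtype=float)
    objective = -float(np.dot(w, x))
    penalty = 0.0
    for k in range(len(x)):
        for l in range(k):
            penalty += E[k, l] * x[k] * x[l]
    return objective + rho * penalty

# Scoring two measurement outcomes: an independent set and a violating one.
print(problem_cost([1, 0, 1, 0, 1], w, E, rho))  # feasible: -(9 + 7 + 8) = -24.0
print(problem_cost([1, 1, 0, 0, 0], w, E, rho))  # nodes 0 and 1 adjacent: -13 + rho = 7.0
```

Minimizing this cost over sampled bitstrings is exactly the role of the classical loop in Figure 1; the quantum circuit only changes how promising bitstrings are proposed.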
The RZ gate RZ(θ), representing a single-qubit rotation about the z-axis, and the RX gate RX(θ), representing a single-qubit rotation about the x-axis, are defined as follows: According to Equation (37), U O (γ) can be implemented using the RZ gates. Moreover, U C (γ) can be implemented as a combination of the RZ gates and controlled-not (CNOT) gates, according to Equation (38). Therefore, U P (γ) can be implemented with U O (γ) and U C (γ), according to Equation (36). Subsequently, U M (β) can be implemented using the RX gates by (39). The proposed QAOA circuit for energy-efficient cluster head selection is shown in Figure 5. This circuit is an example applied to a simple network topology with 5 nodes and the following adjacency matrix E . Each node corresponds to each qubit, and the initial state is set to a uniform superposition state by the Hadamard gates. After passing the quantum gates that represent the unitary operators from the initial state, the measurement is performed on the created parameterized state. This sample circuit represents QAOA with depth p = 1; the parameters of unitary operators are expressed as γ 1 and β 1 . As p increases, the number of parameters increases, as in (7). As shown in Figure 5, U O (γ 1 ) requires as many RZ gates as the number of nodes. U C (γ 1 ) requires as many RZ gates and CNOT gates as 3 times and 2 times the number of edges, respectively. U M (β 1 ) requires as many RX gates as the number of nodes. The total number of RZ gates, CNOT gates, and RX gates required for the QAOA p circuit, which has a circuit depth of p, is p times greater than that of the QAOA 1 circuit. Figure 5. Proposed QAOA 1 circuit for MWIS with 5 qubits: The qubit q j corresponds to jth node. γ 1 and β 1 are circuit parameters, and w j is the weight of jth node. Each qubit is initialized to the uniform superposition state via the Hadamard gate. The objective operator U O (γ 1 ) is implemented using the RZ(γ 1 w j ) gates. The constraint operator U C (γ 1 ) is implemented as a combination of the RZ(−E k,l γ 1 ρ(w k + w l )/2) gates, CNOT gates, and RZ(E k,l γ 1 ρ(w k + w l )/2) gates, where k > l. In (42), only E 2,1 (= E 1,2 ) and E 5,3 (= E 3,5 ) are 1, so U C (γ 1 ) is implemented only between q 1 and q 2 and between q 3 and q 5 . The mixing operator U M (β 1 ) is implemented using the RX(2β 1 ) gates. At the end of the QAOA 1 circuit, measurements are performed. Simulation Results This section discusses the QAOA simulation results for the energy-efficient cluster head selection policy formulated using MWIS. The simulation is performed on the regular graphs that are suitable for wireless ad hoc networks. QAOA is one of the simple quantum algorithms that intuitively express the state with qubit rotation via the quantum gate [16,40]. This intuitive and simple process has something in common with the classical greedy algorithm. Therefore, the greedy algorithm, which is also useful for MWIS-based cluster head selection, is used as a comparison algorithm [26]. Simulation Method A set of 10-node 3-regular weighted graphs and a set of 10-node 5-regular weighted graphs were randomly generated with 1000 instances each. The range of the node weight representing the energy-consumption efficiency was from 1 to 10. The optimal solution of each graph was found by the brute-force search so that the following approximation ratio δ could be computed for each graph [41]. 
where f app is the solution obtained via the approximation algorithm, f op is the optimal solution obtained via brute-force search, H P is the problem Hamiltonian in (34), and γ and β are the parameters of the QAOA circuit. For each instance, the greedy algorithm and QAOA with depth p = 1, 2, · · · , 10 were evaluated in terms of the δ obtained from 1000 measurements. The simulation was performed using TensorFlow Quantum, and the Adam optimizer was used to optimize the parameters of the QAOA circuit [42,43]. Simulation Analysis The simulation aimed to find a valid circuit depth p for which the QAOA could outperform the greedy algorithm in MWIS-based clustering. Simulation on 3-Regular Weighted Graphs As shown in Figure 6a and Table A1, when focusing on the mean of δ on the 3-regular weighted graphs, it is observed that QAOA p defeats the greedy algorithm when p ≥ 4. Moreover, QAOA p has a larger minimum of δ than the greedy algorithm, except for QAOA 1 , QAOA 4 , and QAOA 5 . Based on the mean and minimum of δ, it is confirmed that QAOA p outperforms the greedy algorithm at p ≥ 6. In Figure 7a, most of QAOA p shows a better distribution than that of the greedy algorithm expressed in dark gray. In particular, QAOA p shows an overwhelming performance over the greedy algorithm at p = 9, 10 expressed in blue and dark blue, respectively. In Table A1, it can be numerically confirmed that the optimal solution ratio of QAOA p is 85% at p = 9, 10, which outperforms 68% of the greedy algorithm. Simulation on 5-Regular Weighted Graphs As shown in Figure 6b and Table A2, when focusing on the mean of δ on the 5-regular weighted graphs, it is observed that QAOA p defeats the greedy algorithm when p ≥ 3. Moreover, QAOA p has a larger minimum δ than the greedy algorithm, except for QAOA 1 , QAOA 2 , and QAOA 3 . Based on the mean and minimum of δ, it is confirmed that QAOA p outperforms the greedy algorithm at p ≥ 4. In Figure 7b, most of QAOA p shows a better distribution than the greedy algorithm expressed in dark gray. Similar to the 3-regular graphs case, QAOA p shows an overwhelming performance over the greedy algorithm at p = 9, 10 expressed in blue and dark blue, respectively. In Table A2, QAOA 9 and QAOA 10 show the optimal solution ratios of 69% and 69.9%, respectively. These are overwhelming accuracies when compared to that of the greedy algorithm with an optimal solution ratio of 41%. Summary and Further Discussion On the 3-regular weighted graphs and 5-regular weighted graphs, performance analysis was performed for QAOA p (1 ≤ p ≤ 10) and the greedy algorithm. As p increases, the mean of δ tends to increase, the standard deviation of δ tends to decrease, and the optimal solution ratio tends to increase. In other words, it is experimentally proven that the MWIS-based clustering accuracy via QAOA p increases as p increases. In particular, QAOA 9 and QAOA 10 show overwhelming accuracies over the greedy algorithm. However, it is not a good choice to increase p recklessly. Because each time p increases by 1, the number of parameters in the circuit increases by 2, increasing the time spent on parameter optimization and increasing the risk of falling into a local optimum. Therefore, finding the appropriate p, which depends on the graph type of the network, is the key to clustering via QAOA p . As a topic of further discussion, there is an interesting point about the measurement of QAOA p . 
The best solution for each instance can be defined as the value closest to or the same as the optimal solution, from among the outputs of the measurements. Therefore, the minimum number of measurements required to obtain the best solution for each instance can be computed by dividing the number of total measurements by the number of best solutions. N(p), which is the mean of the minimum number of measurements required to obtain the best solution for all instances at the circuit depth p, is as shown in Figure 8. On both 3-regular weighted and 5-regular weighted graphs, N(p) shows a tendency to decrease as p increases. Considering that the optimal solution ratio tends to increase as the circuit depth p increases, at large p, the possibility that the best solution and optimal solution are the same is high. Therefore, at large p, the optimal solution can be obtained even with a small number of measurements. This shows the potential for an efficient design of QAOA p by reducing the number of measurements at large p. Conclusions and Future Work This paper proposed an energy-efficient clustering method with a hybrid quantum-classical approach. First, the cluster head selection policy was modeled via an MWIS formulation. Subsequently, the objective and constraint of the modeled policy were mapped to a designed objective Hamiltonian and constraint Hamiltonian, respectively. Based on the designed Hamiltonians, the QAOA p circuit that implemented an energy-efficient cluster head selection policy was proposed. According to the simulation results, the proposed QAOA p outperformed the greedy algorithm at p ≥ 6 on the 3-regular weighted graphs and p ≥ 4 on the 5-regular weighted graphs. In particular, QAOA 9 and QAOA 10 showed the highest performance. Finally, it was experimentally proven that the accuracy of the cluster head selection via QAOA p tended to increase as p increased. One of the future research directions will focus on improving the performance and efficiency by optimizing the gate configuration of the constraint operator part. A larger number of nodes with a large degree would lengthen the circuit of the constraint operator part. As a solution, a parallel gate configuration of the circuit was considered. By separating the set of nodes into subsets via graph preprocessing, the circuit of the constraint operator part could be divided into sub-circuits. Subsequently, a parallel gate configuration could be created in which the CNOT gates on the sub-circuits ran at the maximum simultaneously. This optimization of the gate configuration could certainly shorten the circuit. Therefore, one of our next works would be a concrete implementation of the parallel gate configuration. Another future research direction will focus on data-intensive performance evaluation with various types of real quantum computers or better testbeds. Using a superconducting quantum computer, a photonic quantum computer, a trapped ion quantum computer, or a better testbed, various clustering algorithms will be compared with our MWIS-based clustering via QAOA. Considering the scalability, the experiment will be performed with more nodes. An analysis of the energy-consumption according to the number of gates and qubits for each type of quantum computer will also be performed to quantify the required energy. In other words, another of our next works would be to conduct more realistic performance evaluations for real-world implementation. Author Contributions: J.C. 
was the main researcher who initiated and organized the research reported in the paper, and all authors, including S.O. and J.K., were responsible for analyzing the simulation results and writing the paper. All authors have read and agreed to the published version of the manuscript. Funding: This research was supported by NRF-Korea (2019M3E4A1080391 and 2019M3E3A1084054). Acknowledgments: J. Kim is the corresponding author of this paper. Conflicts of Interest: The authors declare no conflict of interest. Appendix A. Details on Simulation Results This appendix provides the supplementary data tables for the simulation results discussed in Section 4.
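As a companion to those tables, the sketch below shows how the per-instance approximation ratio δ and the measurement statistic N(p) discussed in Section 4 can be computed from raw shot counts. Because the displayed definition of δ did not survive extraction here, the ratio f_app/f_op of the best sampled cost to the brute-force optimum is used as a plausible reading of the surrounding definitions; the toy histogram and cost function are hypothetical.

```python
import numpy as np

def qaoa_stats(counts, cost_fn, optimal_cost):
    """Per-instance statistics from raw QAOA measurement counts.

    counts       : dict mapping measured bitstrings (tuples of 0/1) to their
                   frequencies, e.g. the 1000 shots per instance in Section 4.
    cost_fn      : classical evaluation of the problem Hamiltonian H_P.
    optimal_cost : brute-force optimum f_op for the same instance.

    Returns (delta, n_p), where delta is taken as f_app / f_op with f_app the
    best sampled cost (an assumed convention), and n_p is the total number of
    shots divided by the number of shots hitting the best sampled solution,
    as described in the further-discussion paragraph of Section 4.
    """
    costs = {b: cost_fn(b) for b in counts}
    best_cost = min(costs.values())          # minimization problem
    delta = best_cost / optimal_cost
    shots = sum(counts.values())
    best_hits = sum(c for b, c in counts.items() if costs[b] == best_cost)
    return delta, shots / best_hits

# Hypothetical shot histogram for a 3-node toy instance (one edge, between nodes 0 and 1).
toy_counts = {(1, 0, 1): 520, (1, 0, 0): 300, (0, 1, 0): 180}
toy_cost = lambda b: -(9 * b[0] + 4 * b[1] + 7 * b[2]) + 20 * b[0] * b[1]
delta, n_p = qaoa_stats(toy_counts, toy_cost, optimal_cost=-16.0)
print(delta, n_p)  # 1.0 and 1000/520, i.e. roughly 1.92 measurements per best solution
```

When δ rises with the circuit depth p while N(p) falls, as reported above, a deeper circuit both finds better cluster heads and finds them with fewer measurements.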
Semi-Active Suspension Control Design via Bayesian Optimization : The fine tuning of semi-active suspension control systems for road vehicles is usually a costly and burdensome task, needing control expertise and many hours of professional driving. In this paper, we propose a data-driven tuning method enabling the automatic calibration of the parameters of the suspension controller using a small number of experiments and exploiting Bayesian Optimization tools. The effectiveness of the proposed approach is validated on a commercial multi-body simulator. As a side contribution, the approach is shown to be robust with respect to variations of the testing conditions. INTRODUCTION End-of-Line (EoL) calibration of semi-active suspension systems for road vehicles is known to be a costly and heavy task, usually needing a close collaboration among professional drivers, control system experts and test engineers, as well as many hours of driving. For this reason, in Milliken et al. [1995], the tuning of the damper was defined as a black art. A popular option to make EoL calibration an easier and more efficient task is to rely upon Hardware-in-the-Loop (HiL) simulators, where the driver seats in the car cockpit and drives within a virtual environment, while the variations of the road profile are emulated by a robotic arm, see Schuette and Waeltermann [2005]. Another alternative to real-world tests is the so-called 4-poster test system, where the car is placed on four moving platforms that emulate the road excitation, Chindamo et al. [2017]. Despite their major advantages in terms of costs and repeatability of the tests, such procedures are clearly limited by the lab environment and still require domain experts to perform a suitable calibration. For the above reasons, in this paper we propose a datadriven calibration method, in which the problem is formulated as a sequential decision making task, see Barto et al. [1989], allowing the test engineer to perform fully automatic tuning of the suspension system. The input of the algorithm will only be the data measured by on-board sensors, while the control parameters will be returned as outputs with no need of human intervention, by minimizing a pre-defined performance-oriented objective function (e.g., addressing comfort or road handling) via Bayesian Optimization, see Snoek et al. [2012], Klein et al. [2016]. We will argue that, using the latter tool, only a limited amount of driving time is required, thus making real experimental tests cost-competitive with respect to HiL and other indoor alternatives. This would not be true for other global optimization approaches, e.g., Particle Swarm or Genetic Algorithms, since they require a high number of experiments, as discussed in Alkhatib et al. [2004]. For more details on Bayesian Optimization, see the surveys in Shahriari et al. [2015], Frazier [2018]. To summarize, the novelty of this work consists of a fully automatic calibration protocol of a fixed structure suspension controller for a road vehicle. The protocol defines the working conditions which the test must follow, the definition of an objective function corresponding to the major key performance index and a smart sampling strategy based on Bayesian Optimization, suggesting the calibration parameters which shall be tested to rapidly converge to the optimum. The remainder of this paper is as follows. 
In Section 2, the calibration problem is formally stated for a parametric semi-active suspension controller and a specific layout is selected to illustrate the performance of the proposed algorithm. The automatic tuning algorithm based on Bayesian Optimization is illustrated in detail in Section 3. Simulation results, including a sensitivity analysis with respect to small variations of the testing conditions, are reported in Section 4. The paper is ended by some concluding remarks. PROBLEM STATEMENT Suspensions represent a key technology in road vehicles for both comfort and performance, see Savaresi et al. [2010]. A suspension device typically consists of a spring and a damper connected between the wheel and corner of the vehicle. In semi-active suspensions, the damping coefficient can be adjusted in real-time. This allows to combine the superior high-frequency filtering of a lowdamping configuration and the good vehicle stability of a stiff damper. A semi-active damper is a strictly passive system (i.e., no energy is fed into the system) characterized by a controllability region defined in the I and III quadrants of the Force -Speed plane, as in Figure 1. This region defines the boundaries of the minimum and maximum damping force F d , which the device can exert as a function of the elongation rate ∆ż. The damping force is therefore given by where Φ is a non-decreasing function which shapes the damping characteristics (intrinsic to the physical hardware) and C ref is the control signal which denotes the damping coefficient defined in the range [0, 1] (interpolating between minimum and maximum). Ideally, Φ should be linear in ∆ż, however for an actual physical device the damping characteristics looks more similar to the function shown in Figure 1. The objective of the controller consists in choosing the damping coefficient achieving the optimal comfort at any given time. The most popular control algorithm in the semi-active suspension control literature is the so-called skyhook, see Karnopp [1995], which is here presented in its variant described by: whereż c represents the vertical speed of the chassis. The performance of the control algorithm thus depends on the control parameters (C, K), where the former specifies a nominal damping ratio, whilst the latter the sensitivity to the road excitation. Notice that if K = 0, the damper behaves as a passive suspension. As the control law is formulated in (2), the parameters (C, K) are adimensional. The aim of this paper is to present a calibration procedure which allows to find the optimal values of the pair (C, K) from on-board measurements only, with limited experimental effort. In general, the proposed methodology could be applied to any parametric control law other than (2). The Considered Key Performance Index The main purpose of a suspension system is filtering vibrations due to the road excitation. The quality of the ride depends on the bandwidth where these vibrations occur, and there exists a major trade-off in the calibration of a suspension system with respect to its filtering capabilities. In Figure 2, the spectrum of the vertical accelerations, relative to an experiment on a test road, is displayed for three configurations: passive low-damping, passive highdamping and semi-active damping. The low-damping configuration achieves the best highfrequency filtering effect, but is poorly stable around the body resonance (around 2 Hz). 
On the contrary, the highdamping shows a good damping effect, but to the detriment of high-frequency vibrations. The skyhook logic, when accurately calibrated, can achieve the best compromise between the two extreme passive configurations. The manner in which vibrations affect comfort is dependent on the vibration frequency spectrum. In ISO 2631ISO -1:1997, an evaluation of human exposure to whole-body vibrations is presented. The standard defines a complex key performance index consisting of tri-axial (longitudinal, lateral and vertical) and rotational (roll, pitch and yaw) accelerations; each component is weighted to take into account the range of frequencies which can be perceived by the human body. Eventually, the aforementioned terms are combined in a weighted summation whose result is the final ISO Index, defined as: where In the above expressions, A i is the i th acceleration (longitudinal, lateral, vertical and rotational), W i is the frequency weight associated to that term, J i the root mean squared of A w i and J the ultimate condensed ISO Index. The reader is referred to ISO 2631ISO -1:1997 for the exact expressions of the frequency weights W i and gains k i . The quality of a given calibration of the suspension controller will be evaluated from now on after an experiment over a test road as specified by (3). THE PROPOSED DATA-DRIVEN APPROACH This section introduces the proposed Bayesian Automatic Calibration (BAC) protocol for a parametric suspension control system like the skyhook controller of (2). Experimental Protocol The experiments shall be performed on a single test road representative of the dynamics of interest. This is a critical point, since the result of the optimization strongly depends on the road excitation; in other words, the optimal configuration for a driving cycle on a highway and off-road are expected to differ significantly. Hence, the choice of the road profile has an important impact on the outcome of the optimization and shall be carefully evaluated in the design phase. In particular, each experiment must be performed over the same road segment at constant velocity in order to ensure the same frequency excitation. Performance Evaluation The evaluation of each experiment is assessed at the end of the experiment itself, based on the ISO Index sensed on the driver seat via an inertial measurement unit (IMU). It is important to remark that any other index based on sensors data could be employed to target a different objective. The objective of the calibration consists in finding the optimal configuration for the control parameters, which minimizes the ISO Index (thus maximizing the riding comfort) in the least possible number of iterations. Methodology An effective technique to solve these problems would be to formulate an optimization problem including the model of the vehicle and the information about the road profile. However, models are always only an idealization of the real system. For this reason, we will follow a different, purely data-driven rationale, consisting of two main ingredients: a surrogate function to estimate the objective function from data, and an acquisition function to sample the next observation, see Bemporad [2019]. The surrogate function shall be ideally model-free or nonparametric, unless one desires to constrain the objective function to a certain class (e.g., quadratic). 
In Bayesian Optimization (BO), the objective function is assumed to be drawn from a Gaussian Process (GP), where each observation is a random variable normally distributed and jointly Gaussian with one another. In particular, each pair of observations is bonded by a covariance function encoding the belief that closer points shall have higher correlation than those far apart; typical choices are the Gaussian and Màtern kernels. Hence, given a set of observations, it is possible to fit a GP whose posterior mean is the expected estimation of the objective function, and its variance represents a confidence interval for the estimation. This means that, for any given set of control parameters, it is possible to compute the estimated ISO Index and a goodness indicator of this estimate. Given this posterior model, there are several options for the acquisition function which aims at sampling the next observation where there is a higher probability to find a minimum, according to the posterior mean and variance. The most popular algorithms are Expected Improvement and Upper Confidence Bound which can be found declined in different variations to tackle the so-called explorationexploitation trade-off. Algorithm A pseudocode summarizing this procedure is shown in Algorithm 1. The initialization is performed by picking at random (or any best guess policy) a set of control parameters (C, K), which are evaluated by performing an experiment and then computing the ISO Index defined in (3). The objective function is then estimated by fitting a Gaussian Process (GP). Eventually, the next set of control parameters are chosen by maximizing the expected improvement (EI), which identifies the region in the parameters space, where it is most likely to find the minimum according to such an acquisition function; the parameters are then evaluated by performing another experiment and the ISO Index is computed and updated accordingly. This class of optimization problems does not rely upon a stopping condition since they are based on the assumption that the evaluation of an experiment is expensive and, thus, there shall be a budget cap to the maximum number of experiments to be performed. Hence, the procedure in Algorithm 1 is repeated until the allowed maximum number of iterations is reached. SIMULATION RESULTS In this section, different data-driven calibration strategies are compared: i) Grid Search, ii) Random Search and iii) Bayesian Automatic Calibration (BAC). These strategies are the best candidates to solve the global optimization problem in the scenario described here, where the evaluation of an experiment is an expensive task. Grid Search represents the current calibration paradigm where the engineer tests each possible combination of the control parameters; it is the least efficient strategy which requires many experiments if the parameters space is fitted on a thin grid. Random Search is a standard benchmark Fig. 3. The luxury sedan vehicle employed in the simulations of Section 4. in literature, where the parameters are sampled at random from a uniform distribution in the parameters space; it is more efficient than Grid Search, but the optimum is not guaranteed to be found. BAC is the candidate framework proposed in this paper introduced in Section 3. Simulation Setup The experimental simulation of the proposed framework is performed on a full-fledged commercial vehicle simulator which can model complex multi-body dynamics. 
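To make Algorithm 1 concrete, the sketch below implements the BAC sampling loop with a Gaussian Process surrogate and an Expected Improvement acquisition over the two skyhook parameters (C, K). The run_experiment function, the parameter bounds, the synthetic objective, and the 30-evaluation budget are illustrative stand-ins for the multi-body simulator and the ISO Index of (3); they are not the authors' implementation.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
BOUNDS = np.array([[0.0, 1.0], [0.0, 50.0]])      # search space for (C, K)

def run_experiment(params):
    """Stand-in for one pass over the test road with skyhook gains (C, K).

    In the real protocol this would command the semi-active dampers, record
    the IMU accelerations at the driver seat and return the ISO Index of (3);
    here a smooth synthetic bowl with mild noise is used instead.
    """
    c, k = params
    return (c - 0.6) ** 2 + 4e-4 * (k - 8.0) ** 2 + 0.01 * rng.standard_normal()

def expected_improvement(X, gp, best_y, xi=0.01):
    """EI for minimization: large where the GP predicts a low mean or high variance."""
    mu, sigma = gp.predict(X, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (best_y - mu - xi) / sigma
    return (best_y - mu - xi) * norm.cdf(z) + sigma * norm.pdf(z)

# --- BAC loop (Algorithm 1): a few random seeds, then GP + EI proposals ---
X = rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1], size=(5, 2))
y = np.array([run_experiment(x) for x in X])
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for _ in range(25):                                # budget cap on experiments
    gp.fit(X, y)
    cand = rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1], size=(2000, 2))
    x_next = cand[np.argmax(expected_improvement(cand, gp, y.min()))]
    X = np.vstack([X, x_next])
    y = np.append(y, run_experiment(x_next))       # one more test-road pass

print("best (C, K):", X[np.argmin(y)], "ISO Index:", y.min())
```

In practice, run_experiment would be replaced by an actual test-road pass (or a HiL run) returning the measured ISO Index, and the random candidate set used to maximize the acquisition could be replaced by a gradient-based inner optimizer.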
The target vehicle is the luxury sedan illustrated in Figure 3, where the main parameters of interest are: • sprung mass: 1370 Kg • unsprung mass: 160 Kg • spring stiffness: 153 N/mm The vehicle is modeled as a MIMO system whose inputs are the damping forces at the four corners and the outputs are the accelerations the vehicle is subject to (used for the ISO Index evaluation), the elongation rates ∆ż and the vertical corner velocitiesż c (used for feedback in control). The control system is implemented in a Matlab/Simulink environment following Equations (1)-(2). The candidate road profile is representative of a rough pavement with low-frequency valley, exciting a broad range of frequencies, traveled at the constant speed of 80 km/h. Main Results The utilization of a simulation environment made it possible to find the actual true minimum of the objective function via Gradient Descent. This is useful for benchmarking the different global optimization strategies. The Grid Search strategy consisted in testing C in [0, 1] with step 0.05, and K in [0, 50] with step 1; therefore, the grid accounts for 1071 cells, corresponding to the equivalent number of experiments to be performed in order to find the optimal value. A projection of the ISO Index evaluated at each point of the grid is shown in Figure 4, where the green cross represents the true optimum computed via Gradient Descent. In Random Search, C and K are drawn from uniform distributions, in [0, 1] and [0, 50], respectively. The maximum number of iterations has been set to 30; after performing 30 experiments, the one whose ISO Index is smaller is chosen as the best calibration. The proposed BAC strategy can identify the region where the optimum is located after only 30 experiments. The chronicle of the observations follows Algorithm 1, and it can be interpreted as follows: at first, pairs at the boundaries of the parameters space are evaluated to learn the approximate shape of the objective function; then, the observations are sampled in the neighborhood of the region where the optimum is most likely to be located. Figure 5 shows the ISO Index estimated after 30 iterations; the observations are represented by yellow dots and are arranged close to the estimated minimum. The algorithm assigns an indicator for the goodness of the estimation, indicated by σ in Algorithm 1, shown in Figure 6: the points where the objective function has been observed are assigned a low uncertainty (ideally this should be zero for a deterministic process), whereas areas far apart from an observation are more uncertain. The main feature of BAC consists in an efficient sampling strategy, which does not consider the areas where the minimum is not likely to be found, even though the estimation of the actual objective function is poor. For the comparison to be fair, since Random Search and BAC are stochastic methodologies, 100 Monte Carlo simulations have been considered. The best candidate calibration (for each simulation) is reported in Figure 7. BAC is present in two configurations: one where each simulation consists of 30 iterations, and another consisting of 100. Random Search has the highest variance, whereas BAC achieves a more stable solution whose deviation from the optimum is narrower as the number of iterations grows. The results are summarized in Table 1 where, among the 100 runs, the median and worst-case calibration are considered, this latter to avoid a lucky strike due to randomness. 
The reason why the results for BAC after 30 iterations (K = 20.91) is far from the optimal value (K = 8.49) is that it lies in a region where the gradient is quasi zero and, therefore, more iterations would be needed to converge towards the true optimum. If the number of iterations was translated to actual time, assuming 120 s for each experiment and 8 working-hours per day, the Grid Search approach would require 4 full days of work, whereas BAC would cut that down to 3 hours. Sensitivity to Road Perturbations The results presented in the previous section were obtained by simulating a perfectly repeatable experiment, where two iterations with the same parameters lead to the same value for the cost function J. In reality, this is seldom the case since it is impossible to perform such an experiment with very high accuracy, and a slightly different trajectory or velocity may bias the objective function. It is therefore of utmost importance for the proposed methodology to be robust with respect to (small) perturbations of the experiment. Therefore, in the remainder of this section, the Bayesian framework will be tested in a scenario where the nominal road profiler(t) is perturbed as in r(t) =r(t) + ξ(t), where ξ is a frequency weighted white noise We will not consider here the case where the frequency composition of the road profile is changed, as it would be equivalent to traveling on a totally different road; further details on this matter can be found in ISO 8608:2016 [E] and Agostinacchio et al. [2014]. Figure 8 shows a detail of three realizations of the perturbed road profile which shall resemble three different experiments over the same road segment in a real world scenario; the perturbation on each iteration is independent from the previous. The performance of the proposed algorithm in this perturbed setup is shown in Figure 9. Although the estimation of the actual objective function is worse than the previous (nominal) case in Figure 5, the sampling strategy is still able to find control parameters within a neighborhood of the true optimum. In general, a higher number of experiments are needed to reach the optimum, since the GP must estimate the variance of an observation which is no longer deterministic. The other methods discussed in Section 4, Grid Search and Random Search, are not robust against these perturbation per se, although it may be possible to estimate a confidence interval to the detriment of a dramatic increase in the number of evaluations. CONCLUSIONS This paper presented a novel approach for the automatic calibration of a fixed structure parametric suspension control system. It is shown that the proposed BAC protocol, based on Bayesian Optimization, requires (one to two orders of magnitude) less iterations than Grid Search to reach the region of convergence where the optimum lies. Future research will be devoted to an experimental validation of the proposed approach.
Enabling Self-Directed Academic and Personal Wellbeing Through Cognitive Education Background The international crisis of declining learner wellbeing exacerbated by the COVID-19 pandemic with its devastating effects on physical health and wellbeing, impels the prioritization of initiatives for specifically enabling academic and personal wellbeing among school learners to ensure autonomous functioning and flourishing in academic and daily life. Research emphasizes the role of self-directed action in fostering wellbeing. However, there is limited research evidence of how self-directed action among school learners could be advanced. Aim We explore the effectiveness of an intervention initiative that exposes teachers to foregrounding Cognitive Education – the explicit and purposeful teaching of thinking skills and dispositions to learners that would advance self-regulated action - to establish the latent potential of the intervention for assisting learners to develop self-regulating abilities that progressively inspires increased self-directed action. Method We illuminate the qualitative outcomes of an exploratory pilot study with a heterogeneous group of willing in-service teachers from two public primary schools (n = 12), one private primary school (n = 3), and one pre-school (n = 2) in South Africa who received exposure to an 80-h intervention that comprised seven study units. The article delineates the experiences of the teachers concerning their participation in the intervention as reflected in their written reflections, as well as their perceptions about the value of the intervention probed with semi-structured one-on-one interviews after completion of the intervention. Results The findings revealed that exposure to the intervention holds benefits for equipping teachers with teaching strategies to create classroom conditions that nurture the development of thinking skills and dispositions that are important for self-regulating, and ultimately self-directing academic and personal wellbeing. Conclusion Cognitive Education is a form of strengths-based education that can play an indispensable role in enabling self-directed academic and personal wellbeing among school learners. INTRODUCTION Enabling self-directed academic and personal wellbeing among learners at school needs to be established as an education priority (Katja et al., 2002;Konu et al., 2002) nationally and internationally to set up a positive foundation for autonomous functioning and flourishing in adulthood (Suldo et al., 2006;Seligman, 2011;Eryilmaz, 2012;Fomina et al., 2020). Due to a continuous decline in child health and wellbeing globally (Neves and Hillman, 2018;Riva et al., 2020;Paulson et al., 2021), a focused interest in wellbeing has surfaced on the policy agendas of many nations (Shirley, 2020). This is plausible considering that international research across eight countries indicates that a third of the school leavers entering university screen positive for emotional wellbeing problems such as depression and anxiety (Auerbach et al., 2018). With specific reference to South Africa, the country where the research was conducted, the decline in the mental health and wellbeing of school leavers has become serious (Eloff and Graham, 2020), with 11.2% of school leavers entering university experiencing emotional and mental wellbeing problems (Bantjes et al., 2016). 
Particularly, since the emergence of the coronavirus pandemic across the world an increased focus on academic and personal wellbeing has emerged due to the unfavorable and devastating effects of the coronavirus pandemic on among others, mental and physical health and wellbeing (Abbas et al., , 2021aAqeel et al., 2021;Khazaie et al., 2021;Lebni et al., 2021;Liu et al., 2021;Maqsood et al., 2021;Su et al., 2021;Wang et al., 2021). The devastating effects compound existing academic and personal wellbeing difficulties Aqeel et al., 2021;Maqsood et al., 2021). Some of the devastating effects that are of relevance for the article are activated by many unforeseen changes to learners' daily life routines, such as the closure of educational institutions and the switch to virtual education and online learning , as well as social distancing (Aqeel et al., 2021;Su et al., 2021;Wang et al., 2021). The ability to self-direct thinking, feelings, mood, and functioning in dealing with unforeseen changes and situations that could affect ones' wellbeing is accentuated in the literature (Brockett, 2006;Ouweneel et al., 2011;Villavicencio and Bernardo, 2013). Nevertheless, there still seems to be little emphasis on how to support school learners in developing strengths concerning the thinking skills and dispositions they require to become self-directed (Booyse, 2016;Harrington, 2018). Besides, Kazachikhina (2019) confirms that little attention is paid to explicitly encouraging self-directed learning. Learners for example lack metacognitive skills to self-regulate learning and find it difficult to reflect on and direct their learning to ensure progress (Fashant et al., 2020). This problem is compounded by teachers who still appear to be using teaching approaches that no longer serve the self-directed roles learners will have to play in the 21st century (Jansen, 2012;Pretorius, 2014;Eyre, 2016;Lotz, 2016). Patrinos (2020) adds that learners need to possess skills and dispositions to direct and manage their learning progress, establish worthwhile relationships, enjoy a successful and high-quality life, and contribute to a meaningful and reliable society. It is important for teachers, as role models to learners, to be knowledgeable about teaching approaches that are effective to transform learners from dependent to self-directed (Taylor, 2011;Booyse, 2016;Herlo, 2017;Kazachikhina, 2019), as the authors believe it is unlikely that learners will automatically become self-directed. Teachers are obliged to embrace the urgency to reform their teaching approaches to purposefully foreground intellectual and emotional learning (the development of thinking skills and dispositions) that would support the enabling of self-directed academic and personal wellbeing (Klaus, 2015;Bailey, 2016;Obied and Gad, 2017;Uribe-Enciso et al., 2017;Coberley-Holt and Elufiede, 2019). To this end, the authors postulate that a classroom environment that foregrounds Cognitive Education could dispose learners to conditions that aim to purposefully/intentionally capacitate learners to acquire the thinking skills and dispositions to become self-directed, autonomous thinkers, and lifelong learners (Anderson, 2010;Moonsamy, 2014). In so doing, the authors contend that the capacity for guaranteeing that the learners can flourish and lead productive and satisfying lives will be built. 
Problem Statement
Although the connectedness between self-directed learning, academic and personal wellbeing, and Cognitive Education could be regarded as logical, Sebotsa et al. (2019) contend that the development of self-directed learning, in particular, seems to be absent in many South African schools. Additionally, Nasri (2017) asserts that although a plethora of research documents the role of learners in the context of self-directed learning, there appears to be a lack of research that probes the teacher's role in the context of developing self-directed learning. This, according to Sebotsa et al. (2019), could be linked to, among others, teachers themselves not being self-directed, and therefore not enhancing self-directed learning in their classrooms. Geared toward a possible solution for the mentioned problems, this article aims to answer the following research question: How might an intervention in Cognitive Education support in-service teachers in enabling self-regulated academic and personal wellbeing among school learners?

To debate the stance for a cognitive approach to education, the authors organized the article as follows. Firstly, a literature review addresses the following objectives, namely to (i) deconstruct the components of academic and personal wellbeing; (ii) deconstruct the components of self-regulated and self-directed learning; (iii) establish the association between self-directed learning and academic and personal wellbeing; and (iv) delineate the contribution of Cognitive Education toward enabling self-directed academic and personal wellbeing. The literature review is followed by an exposition of the research materials and methods used to explore the contribution of a Cognitive Education intervention toward enabling self-regulated academic and personal wellbeing. Thereafter, a presentation of the research results and a comprehensive discussion of the research findings follow. A conclusion that outlines the contributions of the research rounds off the article.

LITERATURE REVIEW
The literature review presents a succinct overview of the main concepts that stood central to the research reported in the article.

Deconstructing the Components of Academic and Personal Wellbeing
Academic wellbeing among learners could be viewed from a positive and a negative angle. Firstly, in a positive sense, academic wellbeing is linked to school engagement, which refers to displaying energy at schoolwork, experiencing schoolwork as meaningful, being immersed, engaged, and involved in schoolwork, and having an achievement-goal orientation (Huppert and So, 2009; Lewis et al., 2011; Ouweneel et al., 2011; Tuominen-Soini et al., 2012; Salmela-Aro and Upadyaya, 2014; Wang and Degol, 2014; Wang and Fredricks, 2014; Wang et al., 2015; Rimpelä et al., 2020; Tuominen et al., 2020). Secondly, in a negative sense, academic discontent is related to school burnout, which testifies to weariness toward schoolwork, being pessimistic about the meaning of school, and a feeling of inadequacy concerning progress and performance (Salmela-Aro and Upadyaya, 2014; Rimpelä et al., 2020). Greater concerns about learners' academic wellbeing have been voiced since the COVID-19 pandemic, as virtual education replaced contact education. Despite it holding promise for academic wellbeing, Abbas et al. (2019) contend that virtual education also poses threats to academic wellbeing.
In a negative sense, unfamiliarity with virtual education and a lack of proper access to systems that offer e-learning might contribute to anxiety, frustration, stress, and feelings of incapacity that could lead to academic discontent, feelings of dissatisfaction, and academic ineffectiveness. Also, learners might require time to build the self-confidence to feel comfortable with the shift to virtual learning, which might affect emotional wellbeing. The disruption of the normal school routine, with less face-to-face interaction and more engagement with networking websites and social media, could distract focus from academic work (Maqsood et al., 2021), leading to a decline in academic performance (Aqeel et al., 2021). Conversely, if learners know how to engage adequately with virtual learning, networking websites, and social media, their academic wellbeing could be enhanced as they become more immersed in learning experiences through the possibility of interactional communication, collaboration with others, the sharing of information, and receiving support from others during learning (Abbas et al., 2021b).

Personal wellbeing constitutes psychological, emotional, and social aspects (Gräbel, 2017), which, according to Lebni et al. (2021), collectively hold significance for mental wellbeing. Lebni et al. (2021) view mental wellbeing as foundational to upholding productivity and accomplishment in society. They therefore accentuate the importance of individuals who can reflect on their thinking, feelings, moods, and functioning in daily life, can cope with the demands of life, and can contribute to their communities. Lebni et al. (2021) in particular emphasize the importance of mental wellbeing to ensure physical health, manage stress, establish relationships with others, and make healthy lifestyle choices. This close association between physical health and wellbeing, emotional wellbeing, and social wellbeing is also embraced by Pouresmaeil et al. (2019). Emotional or hedonic aspects of wellbeing involve satisfaction with one's own life, as well as striving toward positive emotions (resilience, self-motivation, self-esteem, self-efficacy, passion, curiosity, pleasure, enjoyment, and enthusiasm) (Ryan and Deci, 2001; Schimmack and Diener, 2003; Wang et al., 2007; Seligman, 2011; Fomina et al., 2020). Dealing with negative or dysfunctional emotions (stress, depression, anxiety, aggression, and procrastination) (Park et al., 2012; Hardy et al., 2013; Firoozabadi et al., 2018; Zhao et al., 2019) is cardinal in ensuring emotional wellbeing. Reflecting on the foregoing description of academic wellbeing, it seems fair to conclude that experiencing positive emotions and academic wellbeing are strongly linked.

Social wellbeing relates to holding positive attitudes toward others (Keyes, 1998; Ryff and Singer, 2006) and displaying a strong social connectedness (Olsson et al., 2013) in one's environment. A lack of social wellbeing could manifest as avoidance behavior, social isolation, sadness, and self-doubt (Saeri et al., 2017). More than ever before, a concern for the social wellbeing of learners needs to be accentuated because of the social isolation and distancing that have been activated by the COVID-19 pandemic (Aqeel et al., 2021; Su et al., 2021; Wang et al., 2021). Social isolation and distancing could especially compound feelings of loneliness.
In particular, the effects of social isolation on the academic and personal wellbeing of a learner growing up in an abusive family could discourage emotional wellbeing due to increased feelings of frustration, stress, and anxiety (Aqeel et al., 2021), which, in the opinion of the authors, should not be underestimated. The research of Fattahi et al. (2020) in particular prioritizes the need for psychosocial wellbeing above education needs. Khazaie et al. (2021) and Lebni et al. (2021) in particular alert to the fact that the maladaptive and addictive use of the internet, which plays a prominent role in the virtual learning environment initiated by the COVID-19 pandemic, could contribute to the manifestation of psychosocial wellbeing problems. Additionally, a sedentary lifestyle could become the standard way of living, affecting physical health and wellbeing. Abbas et al. (2021b) contend that the internet makes it possible to virtually make social contact with friends and family and aids in obtaining useful health-related information to alleviate the stress associated with the increased fear of COVID-19 being life-threatening. However, many internet sites propel negativity that can cause emotional stress, anxiety, and tension (Su et al., 2021), therefore impeding emotional wellbeing. Also, the lack of face-to-face contact could complicate the development of the social skills and dispositions needed to establish relationships in the real world (Khazaie et al., 2021). Considering the foregoing descriptions, one could, in essence, conclude that both academic and personal wellbeing evolve around the core elements of wellbeing identified by Seligman (2011, p. 16), namely, positive emotions, positive engagement, positive relationships, finding meaning in and making meaning of life situations, and accomplishment (achieving something successfully).

Deconstructing the Components of Self-Regulated and Self-Directed Learning
Pouresmaeil et al. (2019) convey the 21st-century health goals of the World Health Organization for the improvement of wellbeing among the young, which encompass the acceptance of greater social obligation and, in particular, personal responsibility for living, perseverance to cope with stress, and establishing meaningful relationships. In the authors' opinion, achieving the mentioned goals requires the ability to increasingly self-direct behavior and action toward ensuring wellbeing. In helping young people to enable academic and personal wellbeing, self-directed learning appears to be beneficial, as it capacitates learners to autonomously take control over their intellectual/cognitive, motivational, emotional/affective, and environmental/contextual situatedness across changing circumstances and contexts (Sandhu and Zarabi, 2018) to protect their wellbeing (Schimmack and Diener, 2003; Karademas, 2006; Ryan and Deci, 2011; Moreira et al., 2015). To this end, Knowles (1975), a pioneer in the field of self-directed learning, defines self-directed learning as a process in which learners independently diagnose their intellectual and emotional learning needs, identify and formulate learning goals, gather resources to support learning, select and implement learning strategies, and evaluate learning outcomes.
Capacitating learners to become self-directed learners who are able to take ownership of and responsibility for their learning (Long, 1989; Garrison, 1997) requires the development of core critical thinking skills such as analysis, evaluation, making inferences, explanation, interpretation, and reflection, as well as the self-regulation processes that lie at the core of being able to self-direct one's actions and behavior (Bailey, 2016; Uribe-Enciso et al., 2017; Coberley-Holt and Elufiede, 2019). Moreover, the development of dispositions such as perseverance, curiosity, inquisitiveness, empathy, integrity, humility, fairness, open-mindedness, a questioning attitude, and systematic working ways needs to be nurtured, as these are viewed as important characteristics of a self-directed learner (Seligman, 2011; Guglielmino, 2013; Barrett, 2014; Obied and Gad, 2017).

Promoting self-regulation in a classroom is multi-dimensional in nature and focuses on the application of strategies related to each of the following key components that need to be self-managed and regulated during learning: (i) cognition (conceptual knowledge, knowledge about learning strategies and their application, critical thinking, and problem-solving skills) (Kellenberg et al., 2017; Schunk and Greene, 2017); (ii) metacognition (observing, reflecting, and thinking about one's understanding and efforts to complete tasks, as well as possible adaptations of the learning process) (Kellenberg et al., 2017; Escorcia and Gimenes, 2020); (iii) motivation (regulating the desire to engage in learning and achieve goals, and one's beliefs about one's success) (Kellenberg et al., 2017; Palfreyman and Benson, 2019); (iv) emotion/affect (regulating one's feelings and emotions about engaging in learning) (Kellenberg et al., 2017; Schunk and Greene, 2017); and (v) context/environment (managing the optimal use of resources for learning and the surroundings where learning takes place) (Escorcia and Gimenes, 2020). Primarily, the self-regulation process applied to each of the aforementioned components involves metacognitive action to plan, monitor, and evaluate strategies to ensure successful learning (the cognitive component), as well as the managing of motivation levels, emotions, and environmental constraints that might obstruct successful learning.

Fostering the ability to self-regulate cognition, metacognition, motivation, emotion/affect, and the context/environment in the context of classroom learning requires support and supervision from teachers. Teachers need to model desirable strategies to self-regulate behavior to learners and create conditions for learners to practice self-regulation, which eventually lays the foundation for a more autonomous, self-directed, and unsupervised ability (Herlo, 2017; Kazachikhina, 2019; Oates, 2019) to self-manage cognition, metacognition, motivation, emotion/affect, and the context/environment during learning (Hammond and Collins, 1991; Brookfield, 1993; Caffarella, 1993; Zimmerman, 2000; Du Toit-Brits, 2018; Sandhu and Zarabi, 2018). In the opinion of Pandolpho (2018), self-directed learners are in charge of their learning, experience a greater sense of belonging, feel more respected as the authors of their own stories, and take ownership of the achievements/victories and failures that occur on their learning journeys.
Conley (2014, p. 1020) continues, reporting that the enabling of ownership during learning enhances persistence, motivation and engagement, goal-orientation and self-direction, self-efficacy and self-confidence, as well as metacognition and self-monitoring behavior. Building on the arguments of Conley (2014) and Pandolpho (2018), the authors believe that promoting self-directed ownership during learning is foundational to enabling academic and personal wellbeing.

A large amount of overlap seems to exist between self-directed learning and self-regulated learning. For this research report, which emphasizes the nurturing of self-directed learning through Cognitive Education, the authors present the following pointers to illuminate the relationship between the two concepts. The roots of self-regulated learning are found in school learning with children and adolescents, while self-directed learning is rooted in adult education and education outside school (Cosnefroy and Carré, 2014). Nevertheless, the authors argue that the pressing need for prioritizing learner wellbeing, securing a more autonomous workforce, and ensuring lifelong learning in the 21st century underscores the urgency to employ the school curriculum as a driver for the enabling of self-directed learning too.

Flowing from the brief background introduction to self-regulated and self-directed learning, one concludes that self-regulated learning and self-directed learning involve active, independent, and goal-directed learning for which purposeful mental actions, processes, and decisions are required, and that both comprise an element of student control and ownership (Cosnefroy and Carré, 2014). Some of the important differences between the two concepts, as observed by Carré and Cosnefroy (2011) and Cosnefroy and Carré (2014), that are relevant for the article involve the following: in the context of self-directed learning, learning tasks are always defined independently by the learner, thus implying self-regulation ability and self-determination. In contrast, self-regulated learning often involves tasks generated by the teacher, signifying that self-regulation could also be controlled externally, by a teacher for example. Self-regulated learning can involve learner self-determination but never fully implies self-directed learning. For this reason, self-regulation should be viewed on a continuum from low self-regulation (external teacher control is evident) to high self-regulation (learner choices and decisions play a determining role during learning). Self-directed learning and self-regulated learning both involve the ability to self-regulate decisions about cognition, metacognition, motivation, emotion/affect, and the context/environment in the context of classroom learning, with self-directed learning focusing exclusively on independent learner-initiated decisions and self-regulated learning on a combination of teacher-controlled decisions and learner-initiated decisions.

Establishing the Association Between Self-Directed Learning and Academic and Personal Wellbeing
Becoming self-directed in reflecting about, observing, and adapting one's cognitive, motivational, emotional/affective, and contextual/environmental efforts and decisions during learning could assist in enabling academic and personal (psychological, emotional, and social) wellbeing (Ouweneel et al., 2011; Villavicencio and Bernardo, 2013).
In the opinion of the authors, directing the self (motivation and emotions) during the learning process would be beneficial toward fostering emotional and psychological wellbeing. Being able to self-direct the cognitive dimension of the learning process implies defining a task, setting goals to achieve, selecting strategies to achieve the goals (which might often involve working with others), as well as managing the actual task performance and making adaptations if necessary. Cognitive engagement contributes to learners becoming immersed, engaged, and involved in achieving learning goals, which could, among others, boost performance, meaningful learning, social connectedness, success, self-efficacy, and pleasure: features of academic, psychological, emotional, and social wellbeing. Autonomously managing the context/environment in which learning takes place is likely to ensure the optimal use of resources that could contribute to the elimination of cognitive, motivational, and emotional obstacles to successful learning, thus contributing to elevating academic and personal wellbeing.

The Contribution of Cognitive Education Toward Enabling Self-Directed Academic and Personal Wellbeing
To apply a cognitive approach to teaching, teachers must have a better understanding of the processes required to adapt their teaching practices to enhance the cognitive potential (thinking skills and dispositions) of learners, which would benefit the learners' ability to become effective at supervised self-regulated learning; this is considered a prerequisite for becoming autonomous and self-directed. The theoretical conceptualization of Cognitive Education hinges on three pillars, namely, teaching FOR, OF, and ABOUT thinking (Anderson, 2010). Teaching FOR thinking involves the creation of school-wide and classroom conditions that support the development of thinking skills and dispositions that are also important for enabling self-directed academic and personal wellbeing. Teaching OF thinking accentuates the explicit teaching and modeling of thinking skills and dispositions to learners that would encourage involvement in supervised self-regulated learning so as to become self-directed learners in the future. Educators who focus on the teaching OF thinking guide learners on how to become effective self-regulated thinkers who will be prepared to take on and overcome challenges that threaten their academic and personal wellbeing throughout their lives (Pajares, 2001; Booysen et al., 2017). A strong focus is placed on "how" subject content is taught. Cognitive Education assumes a constructionist (Mezirow, 1997), transformative (McGonigal, 2005; Herlo, 2017), and experiential approach (Jensen, 2005) to teaching and learning, where teaching and learning are inquiry-based and anchored in real-world problems, and learners build their academic capability, guided by teachers, to become progressively independent, critical, and confident participants who can self-direct the learning process (Wegerif, 2013; Green and Murris, 2014). Different strategies that promote inquiry-based learning could be utilized, such as questioning (Green and Murris, 2014), problem-based learning (Dostál, 2015), didactic play (Bodrova and Leong, 2012), the use of stories (Van Aswegen, 2015), De Bono's six thinking hats (De Bono, 1992), cooperative learning (Weidner, 2003), dialogic education (Alexander and Wolfe, 2008), argumentation (Van den Berg, 2010), and Thinking Maps (Hyerle and Alper, 2011; Hyerle, 2014).
As part of teaching ABOUT thinking, teachers help learners to become aware of, and apply, the metacognitive thinking processes involved in self-regulating behavior, namely planning, monitoring, and evaluating learning, thus emphasizing the role of self-reflection during learning (Anderson, 2010). By applying self-regulation processes, learners become acquainted with regulating and eliminating the motivational, affective, and behavioral processes, as well as the conditions in their environment, that might obstruct academic success and wellbeing (Moonsamy, 2014). In a nutshell, Cognitive Education is characterized by instructional processes that enable learners to assume responsibility for regulating their academic and personal wellbeing during learning with the support of the teachers. Learners gradually learn to take complete ownership of modulating emotions, thoughts, behaviors, and the environment to maximize effective outcomes without support (Williams et al., 2008), consequently coming to be regarded as self-directed learners.

For advancing self-directed learning, Cognitive Education emphasizes the development of critical thinking skills such as analysis, evaluation, making inferences, explanation, and interpretation, as well as the metacognitive skill to self-regulate (Bailey, 2016; Coberley-Holt and Elufiede, 2019). Additionally, Cognitive Education supports the development of dispositions such as perseverance, curiosity, inquisitiveness, questioning, and systematic working ways, which are viewed as important characteristics of a self-directed learner (Guglielmino, 2013; Barrett, 2014; Obied and Gad, 2017). Both the critical thinking skills and the dispositions are cornerstones for achieving academic and personal wellbeing. Besides, the progressive development of self-directedness and autonomy facilitated during Cognitive Education permits learners to experience positive emotions and heightened interest and engagement in activities, prepares learners to identify purpose or meaning in their work, and helps them establish positive relationships with peers and develop greater self-determination, vitality, resilience, optimism, and self-esteem that would magnify personal wellbeing and feed into greater success academically (Huppert and So, 2009; Seligman, 2011; Teal et al., 2015; Pandolpho, 2018). Emanating from the foregoing discussion, the authors postulate that self-directed learning is enabled by strengthening learners' ability to progressively advance at self-regulating the cognitive, metacognitive, motivational, emotional/affective, and contextual/environmental determinants that play a role in learning. The more skilled and proficient learners become at demonstrating self-regulating behavior in teacher-supervised environments, the more favorable the chances are of their being prepared to become unsupervised, self-directed learners who can autonomously reflect on and make decisions about their functioning in, and dealing with, various school- and life-related situations.

The Cognitive Education Intervention
Initially, the intervention was predominantly developed to equip teachers with knowledge and skills that would provide all learners with an opportunity to experience teaching and learning that would enable them to acquire the thinking skills and dispositions to become self-regulated lifelong learners and problem-solvers in the 21st century. However, on completion of the data analysis, the authors uncovered the prospects that Cognitive Education also holds for inspiring self-directed academic and personal wellbeing.
Apart from the role of social media in addressing the health and wellbeing challenges arising from the COVID-19 pandemic (Liu et al., 2021), the Cognitive Education intervention is an initiative that places the focus on the role of education in promoting desirable behaviors directed at elevating wellbeing (Azadi et al., 2021) and contributes toward sustainable efforts that could bolster and strengthen learners' self-directed behavior to affect their wellbeing (Paulson et al., 2021). The design and implementation of the intervention were underpinned by the pillars of Cognitive Education, namely teaching FOR, OF, and ABOUT thinking (see Table 1). The heart of the intervention encompassed the modeling of the following teaching strategies to the in-service teacher participants to develop the thinking skills and dispositions that promote self-regulated action, namely, Thinking Maps (Hyerle, 2014), De Bono's thinking hats (Evans and Carolan, 2014), Habits of Mind (Costa, 2009; Anderson, 2010), cooperative learning (Booysen and Grosser, 2014), the Q-Matrix (Wiederhold and Kagan, 2007), problem-based learning (Hmelo-Silver, 2004), and Bloom's revised taxonomy (Krathwohl, 2002). The application of all the strategies initially involves a teacher-regulated environment to encourage the development of thinking skills and dispositions to employ during learning that could be beneficial for ensuring the planning, monitoring, and evaluation of conditions associated with the cognitive component of self-regulated learning. Gradually, as learners develop more confidence in applying the strategies independently, it is hoped that the teacher-directed learning environment will be replaced with a learner-regulated environment that allows learners to independently apply the acquired thinking skills and dispositions to plan, monitor, and evaluate their learning. Although each of the strategies presents several strengths and weaknesses for enabling self-directed learning, the strengths of each strategy for enabling self-directed academic and personal wellbeing will be singled out.

Thinking Maps involve the visual application of eight important cognitive processes that are required for effective self-directed learning and that ensure positive engagement and the making of meaning across any subject field (Hyerle, 2014). Each Thinking Map represents a different cognitive/thinking process. These processes are: defining in context (to label or to define); describing qualities, properties, characteristics, or attributes; comparing and contrasting (looking for similarities and differences); classifying, categorizing, and grouping; identifying part-whole relationships; sequencing and ordering; identifying cause and effect relationships; and identifying analogies (simile, metaphor) (Hyerle, 2014). Learners learn how to independently select and construct appropriate Thinking Maps during learning, which, according to the authors, encourages academic wellbeing by promoting autonomous and self-directed engagement during learning. Learners who become successful at independently selecting and applying the thinking processes encapsulated in the Thinking Maps could achieve greater success in their academic work, which could impact their self-esteem and feelings of self-efficacy, as well as raise the levels of enjoyment experienced during learning, thus contributing to their feelings of emotional wellbeing.
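For orientation, the following minimal sketch pairs each of the eight cognitive processes listed above with the Thinking Map conventionally associated with it in Hyerle's published scheme; the map names are supplied here for illustration and do not appear in the article itself.

```python
# The eight Thinking Maps and the cognitive process each one visualizes.
# Map names follow Hyerle's scheme; they are added here for illustration
# and are not drawn from the article above.
THINKING_MAPS = {
    "Circle Map": "defining in context (to label or to define)",
    "Bubble Map": "describing qualities, properties, characteristics, or attributes",
    "Double Bubble Map": "comparing and contrasting (similarities and differences)",
    "Tree Map": "classifying, categorizing, and grouping",
    "Brace Map": "identifying part-whole relationships",
    "Flow Map": "sequencing and ordering",
    "Multi-Flow Map": "identifying cause and effect relationships",
    "Bridge Map": "identifying analogies (simile, metaphor)",
}

# Print the scheme as a quick reference for lesson planning.
for map_name, process in THINKING_MAPS.items():
    print(f"{map_name}: {process}")
```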
Through the use of purposeful questioning, the six thinking hats strategy enhances the flexible use of different modes of thinking (factual, evaluation, critical thinking, creative thinking, synthesis, and argumentation) to self-direct positive learning engagement. The different modes of thinking are connected to a specific color hat, and learners practice the various modes of thinking by switching to the different colored hats during teaching (De Bono, 1992; Evans and Carolan, 2014).

TABLE 1 | Structure of the Cognitive Education intervention and its relevance for enabling self-regulated and self-directed learning.

Relevance: Sensitizing teachers to recognize the importance of Cognitive Education across the school curriculum for promoting the skills and dispositions learners require to become self-regulated and self-directed learners.

Study unit 3: Cognitive development processes. Outcomes: (i) identify and classify the processes and characteristics of cognitive development, from toddlers to adolescents to adults; (ii) recognize how the characteristics of cognitive development influence instructional design in the classroom. Relevance: Making teachers aware of age-related cognitive demands when planning instruction that aims to enhance the skills and dispositions learners require to become self-regulated and self-directed.

Study unit 4: A mediated learning approach to advance Cognitive Education. Outcomes: (i) understand and apply the theoretical principles of mediated learning during teaching to advance cognitive development; (ii) compare the application of a mediated learning approach with traditional transmission and reception teaching. Relevance: Providing teachers with a theoretical framework consisting of twelve criteria for embedding their teaching and creating learning activities that would ensure the development of the skills and dispositions learners require to become self-regulated and self-directed.

Study unit 5: The thinking school and the thinking classroom. Outcomes: (i) determine ways to create a "Thinking School" and distinguish factors that can hamper the journey in becoming a "Thinking School"; (ii) manage the implementation of a thinking approach across classrooms in schools and colleges; (iii) clarify the role of the teacher in establishing a "Thinking Classroom"; (iv) identify and eliminate factors that can hamper effective thinking and learning in the classroom and at home. Relevance: Teachers are provided with practical suggestions of how to create a classroom climate and an environment that invites the development of the skills and dispositions that self-regulated and self-directed learners require.

Study unit 6: Approaches/strategies/activities to teach thinking skills and dispositions. Outcomes: (i) understand, apply, and infuse a variety of teaching approaches/strategies into ongoing teaching and learning activities to enable learners to acquire learning content at the different cognitive levels of Bloom's revised taxonomy, as envisaged in the objectives of the CAPS curriculum (strategies modeled to the teachers: De Bono's six thinking hats, the Q-Matrix, problem-based learning, Thinking Maps, cooperative learning, Habits of Mind, and Bloom's revised taxonomy); (ii) evaluate the effectiveness of a specific teaching strategy/activity to advance skills and dispositions. Relevance: This unit comprised the practical part of the intervention and constituted the part on which the research reported in this article focused. Seven teaching strategies that develop the skills and dispositions required of a self-regulated and self-directed learner were modeled to the teachers. As part of the practical component of the intervention, the teachers applied the various strategies in their classrooms, after which data were collected to establish the merits and demerits of the strategies to advance the development of skills and dispositions required for enabling self-directed learning.

Study unit 7: Cognitive principles and assessment. Outcomes: (i) understand the principles of Bloom's revised taxonomy for teaching, learning, and assessment to allow learners the opportunity to become cognitively engaged. Relevance: Teachers were guided in recognizing the merits of Bloom's Taxonomy not only for directing assessment but also for directing teaching that would advance the development of the skills and dispositions required for self-regulated and self-directed learning.

The six hats strategy makes it possible for learners, gradually and through self-questioning, to further their immersion and engagement in discovering depth in their thinking about subject content, which could advance a better understanding of information that is likely to impact academic achievement favorably and, in turn, elevate feelings of academic wellbeing and stimulate positive emotions related to self-efficacy, and pleasure and enjoyment in learning.

The Habits of Mind strategy (Costa, 2009; Costa and Kallick, 2009) plays an important role in the development of important intellectual dispositions/attitudes and positive emotions whilst learners are engaged in learning, thinking, and decision making. Habits of Mind refers to mindsets or mental and emotional moods that enhance the quality of task completion, decision making, and problem-solving in any context, thus being beneficial toward academic and personal wellbeing. According to Costa and Kallick (2009), the Habits of Mind can be clustered into five groups, all of which aim to further self-directed action, namely: (i) resilient, which involves being able and willing to persist, work, and communicate with accuracy and precision.
Resilient behavior capacitates one not to easily get overwhelmed, weary, and pessimistic when faced with personal crises and academic challenges, but, without support, to navigate toward gathering resources to overcome crises and challenges, therefore possibly advancing psychological wellbeing by inspiring feelings of adequacy and personal growth; (ii) resourceful: being resourceful involves being creative, flexible, innovative, and open-minded in self-governing the elimination of obstacles that obstruct academic and personal wellbeing, which will likely contribute to feelings of self-determination and autonomy as attributes of psychological wellbeing, and self-efficacy as an attribute of emotional wellbeing; (iii) reasoning, which comprises the ability and preparedness to engage in metacognitive, self-reflective, and self-regulated action that is foundational to self-directing the planning, monitoring, and evaluation of the behavior and decision making required to promote academic and personal wellbeing; (iv) reflective, which refers to the unsupervised ability to eagerly discover humor, react with wonderment and awe, and remain open to learning, which could consequently boost energy, curiosity, and enthusiasm as attributes of academic wellbeing, and meaningfulness as an attribute of emotional wellbeing; and (v) responsible, which includes a keenness to ensure the quality of one's work by avoiding impulsiveness, a desire to be empathetic and understanding, and openness to collaboration and taking calculated risks. Academic success might increase from a responsible disposition toward one's work, subsequently advancing academic wellbeing. Social wellbeing could flourish by encouraging collaboration that promotes social inclusion and connectedness. Finally, emotional and psychological wellbeing could thrive when risk-taking leads to goal achievement that bolsters self-efficacy and personal success, respectively.

Cooperative learning plays a role in developing the social dimension of personal wellbeing, or the nurturing of positive relationships during learning. Social interaction creates opportunities for learners to learn how to engage in the autonomous cognitive processing of information that cultivates academic engagement, the development of self-confidence in one's independent efforts, receiving emotional support from peers, experiencing a sense of belonging, and being part of opportunities to share, evaluate, and communicate information with clarity and precision (Booysen et al., 2017). A better understanding of information due to active and collaborative engagement during learning, which could be considered an outflow of cooperative learning, could in all likelihood contribute to greater academic accomplishment that could effectuate academic wellbeing. Academic wellbeing, in turn, could advance feelings of personal success, autonomy, self-esteem, and self-efficacy as facets of psychological and emotional wellbeing. Additionally, engaging in social learning allows learners to experience social inclusion, acceptance, and connectedness, as well as the acquisition of important dispositions such as empathy, humility, and open-mindedness (Johnson and Johnson, 2006; Booysen et al., 2017).

Problem-based teaching is learner-centered teaching and learning, in which learners autonomously learn about a subject by doing independent problem-solving in collaboration with others.
The goals of problem-based teaching are to help the learners develop flexible knowledge, effective problem-solving skills, the ability to self-direct learning, effective collaboration skills, and intrinsic motivation (Hmelo-Silver, 2004, p. 235). Besides, learners develop skills and dispositions to critically and respectfully engage with others in meaningful dialogue about various knowledge claims and communicate their views with clarity and precision (Costa and Kallick, 2009). The authors maintain that problem-based teaching could therefore contribute to presenting learning opportunities through which the aforementioned skills and dispositions would likely contribute to qualities of academic as well as psychological, social, and emotional wellbeing. Some of these qualities refer to experiencing learning as meaningful, autonomous decision making, mastery of knowledge, social connectedness, positive attitudes toward working with others, and the recognition of one's contribution that reinforces self-esteem.

The Question Matrix (Q-Matrix) encourages learners to think and act critically about the information they are processing by varying the questions posed to learners, thereupon creating opportunities for deeper meaning-making and understanding of information that may be beneficial to supporting academic wellbeing. Developing a questioning attitude signals self-determined involvement in mastering learning material, which hopefully contributes to experiencing learning as a meaningful and purposeful building block to foster academic and psychological wellbeing. Literal questions that expect learners to identify facts taken from information are posed by using question stems from the matrix (Wiederhold and Kagan, 2007).

The intervention guided the teachers to connect the theory behind the cognitive process levels of Bloom's revised taxonomy to the teaching of specific subject content (Ormell, 2019), thus moving beyond using the taxonomy as a theoretical tool to guide the assessment of teaching activity. Teachers are steered to let the cognitive levels in the taxonomy become the driving force of teaching, so that learners are empowered to acquire depth of thinking about subject content before the assessment of thinking (Booysen et al., 2017), in all probability empowering learners to achieve improved academic performance. The authors believe that acquiring a greater depth of thinking about subject content could build up to improved academic success and mastery of subject content, testifying to academic wellbeing. Academic wellbeing in turn could further feelings of self-esteem and personal success, contributing to individually improved emotional and physical wellbeing.

Presentation of the Intervention
The North-West University, South Africa, and the South African Council for Educators accredited the intervention at Level 6 of the National Qualifications Framework (Level 6 is equal to obtaining National Diplomas and Advanced Certificates), for which teachers received 25 continuous professional development points and a certificate endorsed by the North-West University on the successful completion of the intervention. The intervention comprised a 40-h facilitated theoretical component that consisted of seven study units, each with a self-directed, practical performance-based assignment (seven assignments in total that included 40 h of practical work) that had to be completed and passed with at least 50%. A prescribed textbook edited by Green
(2014) supplemented the intervention material contained in a comprehensive study guide. The practical assignments expected the teachers to apply what they had acquired during the theory sessions in their classrooms and to submit evidence thereof for assessment purposes. Table 1 summarizes the material covered during the intervention and clarifies the relevance of the various study units for enabling self-regulated and self-directed learning. On a rotation basis, six cognitive education specialists were responsible for facilitating the intervention content to the teacher participants. Lectures were presented to the teacher participants by employing the strategies that were included in the intervention material. In other words, the facilitators modeled to the teachers what is expected of a teacher in the classroom who is serious about adopting a cognitive approach to teaching. Also, reflective questioning was used to encourage the teachers to think deeper about the information presented to them, and to prompt them to scrutinize their answers to the questions posed to them during the facilitation sessions for clarity, depth, relevance, and completeness. Collaboration stood central to the implementation of the intervention. Teachers were often requested to work in groups or pairs, as teachers had to be sensitized to the importance of the social nature of learning for the development of the thinking skills and dispositions required for self-directed learning. Although the intervention is specifically aimed at enabling self-directed learning among learners in a classroom, the intervention exposed the teachers to a training opportunity that also focused on the development of their ability to self-direct their learning in preparing for the facilitation sessions and in making decisions regarding the application of the information acquired during the facilitation sessions to their practical assignments.

Research Methods and Data Collection Instruments
The research comprised qualitative, phenomenological research that gauged participants' immediate experiences (Leedy and Ormrod, 2013) after the intervention, using 1-h individual, semi-structured, tape-recorded interviews. Semi-structured interviews allowed the teachers to reflect in an unstructured way on questions that were phrased with a specific purpose (Prior, 2020). After the completion of each study unit, participants were requested to write reflections detailing the benefits that the training material held for enhancing the quality of their teaching practices.

Research Participants
The authors made use of non-probability sampling and approached in-service teachers who would be willing to take part in the intervention. A heterogeneous group of willing in-service, experienced and inexperienced, White and Colored (Coloreds are a multiracial ethnic group native to southern Africa) female teachers from two public primary schools (n = 12), one private primary school (n = 3), and one pre-school (n = 2) in South Africa formed part of the intervention training. None of the teachers had previous exposure to training in Cognitive Education. The participant numbers were limited due to the intensive nature of the intervention, and to ensure more reliable findings concerning the effectiveness of an intervention (Mouton, 2009).

Rigor
To ensure the rigor of the data analysis and the findings of a qualitative study, the authors considered criteria for credibility, dependability, confirmability, and transferability (Lincoln and Guba, 1985).
The authors ensured credibility by obtaining data saturation and providing a thick description of what transpired from the data. To uphold credibility, dependability, confirmability, and inter-rater reliability, all three authors were independently involved in the open coding, axial coding, and identification of themes and sub-themes for specific sections of the data on a rotation basis to make comparisons for agreement. The use of existing codes from the literature focused and guided the coding process and, on the account of the authors, contributed to reducing disagreement about the selection of codes. Similarly, all authors were involved in identifying and verifying the trends that emanated from the written reflection data to ensure that interpretations were based on empirically grounded data and not personal insights, thus discouraging researcher bias (Lincoln and Guba, 1985; Creswell, 2009). The authors presented detail about the biographical variables and context of the participants to allow judgments about transferability to be made by researchers who wish to duplicate the research in other contexts with participants who have a similar background.

Data Analysis Procedure
A deductive and an inductive thematic content analysis approach was used to analyze the interview data. The voice-recorded data were transcribed verbatim, and the verbatim data were scrutinized to obtain impressions of depth in the data, followed by open coding of segments of the data, thus looking for concepts and ideas in the participants' responses that answered the interview questions. For this purpose, the authors worked deductively, as they identified existing codes from the literature review about Cognitive Education and self-directed academic and personal wellbeing (Nieuwenhuis, 2016) that were brought into connection with the verbatim data. The authors, however, remained open to discovering unexpected and interesting codes inductively from the data that might reflect new insights and enrich the set of deductively identified codes. The existing codes from the literature that guided the coding process comprised the following aspects, namely evidence of (i) the types of thinking abilities or thinking skills displayed by learners; (ii) the qualities of the teachers' teaching and the classroom environment; (iii) the dispositions, attitudes, and values displayed by the learners; (iv) teachers' attitudes and beliefs about their role during teaching; and (v) the role that learners play during teaching. An unexpected and surprising code that the authors did not anticipate related to the enhanced emotional wellbeing of the teachers, discussed as Theme 4 in the section "Results."

After the open coding, axial codes were created by listing all the open codes and grouping similar and recurring open codes under a suitable label. Axial coding made it possible to uncover explicit links between the data. The process was iterative and relied on a constant comparison of the various axial codes. Similar or related axial codes were color-coded, which provided the authors with an indication of possible core emergent themes: patterns in the data that came up repeatedly (Merriam, 2009). Within each of the themes, sub-themes that shared the same focus as the theme, but emphasized a specific element concerning the theme, were constructed.
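To make the coding workflow concrete, the following minimal Python sketch illustrates how deductively derived codes from the literature can be combined with inductively discovered codes and grouped into axial codes; the code labels, segments, and axial mapping below are hypothetical illustrations (loosely based on the quoted responses) and are not the authors' actual coding frame.

```python
from collections import defaultdict

# Deductive codes derived from the literature review (labels are illustrative only).
DEDUCTIVE_CODES = {
    "thinking_skills",        # thinking abilities displayed by learners
    "classroom_environment",  # qualities of teaching and the classroom
    "learner_dispositions",   # dispositions, attitudes, and values of learners
    "teacher_beliefs",        # teachers' attitudes and beliefs about their role
    "learner_role",           # the role learners play during teaching
}

# Open coding: each transcript segment is tagged with one or more codes.
# A code outside DEDUCTIVE_CODES (e.g., "teacher_wellbeing") counts as inductive.
open_coded_segments = [
    ("P5: learners should also give their input", ["learner_role"]),
    ("P13: my self-confidence improved a lot", ["teacher_wellbeing"]),
    ("P17: think about their own thinking", ["thinking_skills"]),
]

# Axial coding: group similar, recurring open codes under a broader label
# (this mapping is hypothetical, not taken from the study).
AXIAL_MAP = {
    "learner_role": "learner involvement",
    "thinking_skills": "deeper levels of thinking",
    "teacher_wellbeing": "effects of the intervention on teachers",
}

themes = defaultdict(list)
for segment, codes in open_coded_segments:
    for code in codes:
        origin = "deductive" if code in DEDUCTIVE_CODES else "inductive"
        themes[AXIAL_MAP.get(code, "uncategorized")].append((segment, origin))

# Recurring axial codes point to candidate themes for constant comparison.
for theme, tagged_segments in themes.items():
    print(f"{theme}: {len(tagged_segments)} segment(s), e.g. {tagged_segments[0]}")
```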
The themes and sub-themes that emerged from the data are highlighted in the section "Results" of the article, and appropriate verbatim extracts from the data are included to illustrate and substantiate the themes (Prior, 2020, p. 548). The data obtained with the written reflections were wide-ranging, which complicated the determination of themes. Consequently, the authors decided to quantify major trends that reflected predominantly positive or negative opinions in the data (Villez, 2014; Nieuwenhuis, 2016) concerning the three questions posed to the participants. The trends enabled the authors to spot the benefits of the intervention on which future implementation could be built, as well as the needs and expectations voiced by the participants after the intervention that could inform adaptations to the future implementation of the intervention.

Ethical Clearance
Ethical clearance was obtained from the university where the research was conducted. Informed consent was obtained from all the participants before the research commenced, where they confirmed that they understood what the research was about, why they were selected, and what their involvement would entail. Participation in the research was anonymous and voluntary, and participants were assured that their responses would be treated confidentially.

RESULTS
Of the thirteen interview questions posed to the teacher participants, responses obtained for three of the questions related to structural and logistical matters and are not included in the section "Results." The responses obtained for the remaining questions that directly align with the focus of the article could be clustered according to five main themes and their related sub-themes. The questions posed to the participants purposefully did not emphasize the role of Cognitive Education in developing self-regulated or self-directed academic and personal wellbeing, so as to steer clear of guiding participant responses toward what the authors hoped to derive from the participants' perceptions.

Themes Extracted From the Interview Data
The authors postulate that Cognitive Education could be regarded as a key to encouraging self-directed academic and personal wellbeing. For this reason, it was important to establish whether the responses of the teachers to the different interview questions supported the authors' reasoning.

Theme 1: The Participants' Understanding of Cognitive Education After the Intervention
The understanding of the teachers pointed to the development of specific thinking skills, which was considered as a sub-theme to qualify the understanding of the teacher participants. The following are examples of the most relevant responses to support the deductions made by the authors.

The Development of Thinking Skills
From the responses, it was encouraging that the understanding of all the participants revealed that they understood Cognitive Education to involve the development of thinking skills to promote independent, critical, and self-reflective thinking that is important for self-directed learning and daily life problem-solving. Teaching should therefore involve more than just the acquisition of factual knowledge. The intervention motivated learners to think for themselves (P: 6; P: 12). The learners acquired more than just knowledge, they acquired different thinking skills and processes to apply... in real life (P: 8), processes that they can use to solve problems (P: 10; P: 12), processes they use daily (P: 10).
Also, the intervention enabled learners to master skills to think, communicate, [and develop] social skills rather than knowledge only (P: 17), as well as promote critical thinking (P: 13). Cognitive Education encourages independent and creative thinking and reduces rote learning: Cognitive education is where the child's thinking needs to be developed and it is good if the child can think on his own and give his own ideas for what he should do (P: 11). Cognitive Education makes it possible for learners to think further, to be able to apply thinking to daily lives. Cognitive Education focuses not only [on] knowledge or rote learning (P: 11). Reflecting on the responses, the authors carefully conclude that the understanding of the teachers testifies to the possibilities that Cognitive Education holds for developing the thinking skills that are required to facilitate self-directed learning.

Theme 2: Understanding the Effect/Influence of Cognitive Education
After the intervention, all the teachers' understanding of the effect of Cognitive Education pointed to the acquisition of thinking skills, in particular, skills such as creative, analytical, reflective, and evaluative thinking that testify to deeper levels of thinking. The authors argued that the effect of Cognitive Education on deep-level thinking could be reported as a sub-theme concerning the teachers' understanding of the effect/influence of Cognitive Education.

Cognitive Education Promotes Deeper Levels of Thinking
Some of the most relevant responses cited the following: Deeper levels of thinking refer to more than just the acquisition of facts; it refers to encourag[ing] learners to think deeper and to apply facts. It is preparing learners for life (P: 5), to know how to make choices (P: 6), and to be quick to find solutions to problems (P: 8). Another sub-theme identified in the data addresses the beneficial role that Cognitive Education seems to play in the development of critical and creative thinking.

Cognitive Education Promotes the Development of Critical and Creative Thinking
Skills that drive critical and creative thinking processes, such as analysis, evaluation, and reflection, likely benefit from Cognitive Education. The teacher responses confirmed that Cognitive Education enables metacognition, namely thinking about [one's] own thinking, [how] to analyze and to reflect, to have insight in [one's] own thinking processes. Learners learn how to evaluate it [knowledge], and then reflect on it [knowledge] (P: 17). Cognitive Education makes it possible that [learners] can think outside the box and think higher and think beyond what they really think (P: 17), and not think in just one direction (P: 10). Cognitive Education has ways and means to help children to use thinking skills in creative ways to achieve academic success, to perform better, to think for themselves (P: 12), and to start thinking differently to what other people do (P: 14). In the view of the authors, the teachers' understanding suggests that Cognitive Education could advance self-directed learning by activating deeper levels of thinking and stimulating the development of critical and creative thinking abilities that hold value not only in an academic context but also for dealing with challenges and personal crises in real-life situations, consequently building capacity to enable academic and personal wellbeing.
Theme 3: The Effect of the Intervention on Teachers' Attitudes and Beliefs About Teaching and Education
From the perceptions of all the participants, the authors deduced one important message that could be viewed as a sub-theme, namely that the intervention enabled a more flexible and differentiated teaching approach that allowed greater learner involvement during teaching.

Promoting Flexible and Differentiated Teaching and Education
The strongest evidence supporting the messages that attest to promising possibilities for enabling self-directed learning geared toward academic and personal wellbeing includes the following examples. Cognitive Education increases learner involvement: I always thought it was the teacher that needs to do all the talking and learners should listen. Everything should actually revolve around learners and not only the teacher talking. Learners should also give their input (P: 5). Greater learner involvement also seems to contribute to engaging learners in thinking activities: [the intervention] allow[s] the children to think creatively and think about their thinking (P: 16). Additionally, teachers feel they have acquired strategies to accommodate different age groups during their teaching: I learned how to use [teaching] strategies at the level of young children. The way I present my lessons is more challenging. I have realized that you can teach thinking to young learners (P: 8). Cognitive Education also makes it possible to cater for different ability groupings: [I] focused more on three groups in my class, academic strong, average, and learners with learning problems (P: 11). Teachers seem to have become more thoughtful about their teaching practices: [I] think about [my] own teaching practice and its relevance, we as teachers should change, and ways are available to look at teaching and learning differently (P: 10). Teachers realized that with different teaching strategies at one's disposal, teaching can be presented in different ways to assist all learners to be more successful academically: I think more about my daily planning and how to treat different learners to perform better (P: 12); and [how] to use a variety of methods to teach all children (P: 16). In comparison to the present curriculum according to which the teachers plan their teaching, one response testified that the Cognitive Education intervention allows more flexible teaching: I see how fixed our curriculum is. It is not flexible at all. I also see how teaching is all about the content. The direct approach. It's all outcomes-based with results. It really is just all results-driven and I have learned so much from this course. I knew it was wrong but just to hear from professionals how wrong it actually is, changed my attitude (P: 13). Applying a differentiated and flexible approach to teaching offers extended opportunities to all learners that could stimulate increased involvement as well as increase motivation and enjoyment during learning, subsequently presenting a stronger foundation on which academic success can be built and emotional wellbeing reinforced.

Theme 4: The Effects of the Intervention on Teachers
Overall, the intervention appeared to have positive effects on all the teachers who took part, and they reported increased competence, self-confidence, self-efficacy, and motivation after completing the intervention. The aforementioned attributes attest that teachers' emotional wellbeing was fueled by the application of the teaching strategies they acquired through the intervention.
Some of the most significant responses presented the following evidence as part of a first sub-theme. The Cognitive Education Intervention Enhances Teachers' Emotional Wellbeing The empowering effect of Cognitive Education for increased teacher competence, self-confidence, self-efficacy, and motivation seems to have some positive outcomes for the teaching practices of the teachers: I feel I'm much more equipped to teach . . . in the sense of leading a child with that what teaching for, of, and about thinking [is]. So I think my competence has changed, and I have more self-esteem in class (P: 7); my increased ability and self-confidence empowers me, and my self-confidence improved a lot (P: 13). The empowering effect experienced by the teachers may have a beneficial outcome for learners too concerning elevated interest in learning and optimizing potential: The children have become more confident in the way they relate to my teaching. They [the learners] find the lessons more interesting, as I am able to make the lessons more interesting. I can present a lesson in different ways. Previously I used one strategy in a lesson. I am more competent and it places me on another level (P: 8). I can add value to learners who struggle to reach their potential; and help them to optimize their potential (P: 10). It [the intervention] made me a better teacher (P: 11). Deemer (2004) contends that if teachers experience a greater sense of confidence, motivation, and efficacy during teaching, they provide more effective classroom instruction, resulting in increased learner motivation and academic performance and success, and in the view of the authors, consequently strengthening learner academic and personal wellbeing. Some participants made mention of the fact that their undergraduate training did not equip them with the strategies that they were taught during the intervention, and therefore recommend the intervention as important for in-service teachers: We were never taught these strategies at varsity. New teachers too who do not know all the concepts can benefit (P: 3). This course is a must for in-service teachers to develop themselves (P: 8). Teachers do not know all these strategies to make teaching more interesting. Our training [undergraduate training] did not equip us with all these tools (P: 14). Theme 5: The Effects of the Intervention on Teachers' Classroom Practice and Learner Development Following the teachers' responses, the teaching strategies acquired during the intervention in all likelihood hold benefits for teaching practice and the learners' involvement in the classroom. The benefits toward teaching practice pointed to an important sub-theme extracted from the data, namely being able to create quality teaching and learning environments that support learner engagement and learner development during learning. Creating Quality Teaching and Learning Environments That Support Engagement During Learning The most remarkable responses for the effects of the intervention on teachers' classroom practice and learner development captured the following information: Learner engagement and enthusiastic involvement during learning were fostered. Learners also tend to be more disciplined and pay better attention during teaching, take part, and stay involved during teaching. The following responses were cited by the teachers: Children have become more engaged in learning. Learners have definitely become more involved. Children who were wandering off during teaching are now more focused.
Learners are more involved and contribute in class (P: 3). In particular, learners who were seemingly uninvolved during teaching tend to start to take part more (P: 14). One participant observed that learner discipline and [their] listening skills have improved (P: 11). Greater engagement seems to promote more attentive and focused learning: It is as if learners are more awake (P: 5), very excited and curious (P: 8), find the lessons [in class] more interesting (P: 8), and enjoy it [the lessons] even more (P: 12). The use of the different teaching strategies creates energy and enthusiasm in the classroom (P: 11), and learners are eager to learn (P: 8). There appears to be an increased willingness to learn, as learners have become open to talking during learning and are enthusiastic (P: 8), are excited to work with the strategies, and love to do discussions linked to the six hats [teaching strategy] (P: 8). What appears to be advantageous is that the variety of teaching strategies makes it possible for learners to achieve learning outcomes in different ways, therefore appealing to a wider range of learner interests, thus avoiding weariness among learners, as mentioned by one teacher participant: It seems as if learning becomes easier (P: 12). There's a lot of different approaches so nobody is bored with one particular way something is done (P: 13). The learners are all achieving because somehow getting to the outcome through different methods is beneficial. There are many ways to skin a cat. The learners have self-confidence because there isn't just one particular way and they are finding it more interesting (P: 13). Also, independent thinking, which comprises higher-order thinking, creative thinking, and metacognitive skills for self-assessment, reflection about work, and monitoring of work, was likely stimulated and encouraged with the application of the teaching strategies. Advancing Independent Thinking The Cognitive Education intervention aims to encourage autonomous thinking and learning. In this regard, the teachers reported the following: I am amazed at how their [the learners'] thinking grows if you lead and guide them (P: 5). They [the learners] love the new way of working and are positive, and eager to learn on their own (P: 8). The learners start thinking at a higher level (P: 10), and [display] deep and profound understanding (P: 8). The teaching strategies enable the learners to acquire thinking in a creative manner (P: 5). An encouraging finding concerns the use of Bloom's Taxonomy as a teaching tool, not only as a tool to guide assessment. Incorporating Bloom's Taxonomy as a teaching tool seems to advance the observed, independent deeper levels of thinking. Two teacher participants alluded to the potential of using Bloom's Taxonomy as a teaching tool: Also, with the tree of Bloom's taxonomy, what I love about the learners is that they sort of know if we are now on the knowledge level or are we now going to the understanding level and they like to see where they [are] regarding the different [thinking] levels and they also start to pose questions, you know, based on the different [thinking] levels (P: 13). Another teacher reported using Bloom's Taxonomy during teaching to make sure that learners go to different thinking levels (P: 10).
Apart from Bloom's Taxonomy, the six thinking hats and Thinking Maps strategies probably also encourage independent thinking, as communicated by the following teachers: The Thinking Maps help learners to always think about their thinking, to better understand their thinking (P: 16). The six thinking hats and Thinking Maps stimulate creative thinking (P: 8). How learners responded to teachers' questions bears witness to deeper thinking before answering the questions, as the learners were starting to give more extended answers (P: 8). Moreover, learners were purposively confronted with questions to make [them] think and let them answer questions instead of [the teacher] answering it [the questions] (P: 8). Concerning the skill to engage in self-regulated, metacognitive, and reflective action during learning, learners reportedly have started to check and monitor their own work (P: 13). Learners displayed autonomy by enacting self-assessment and self-reflection and they've started setting little goals for themselves for where they are now and where they want to be. The aforementioned evidence attests to an improved ability to self-direct learning behavior. The benefits of Cognitive Education for social learning were favorably assessed by the teachers and therefore included as a subtheme concerning the possible effects of the intervention. Advancing Active Cooperation and Interaction The development of social skills for working together with others and sharing seemed to benefit from the cognitive approach to teaching applied by the teachers. The teaching strategies open possibilities and opportunities for learners to learn from each other (P: 13). In particular, cooperative learning enriches the learners because they learn from each other and not only from me [the teacher] as such and then their friends can also help (P: 8). The learners learn not to trust only in their own knowledge. . . [but] can use each other, it makes a difference in what they answer. Specifically, the Thinking Maps strategy promotes good interaction with learners (P: 10). In general, the learners appeared to be excited and there is more interaction (P: 13) during teaching, and work[ing] with and also share[ing] and pair[ing] (P: 13) and help[ing] each other (P: 8) have improved. Cooperative learning apparently encourages learners to be more focused on thinking and writing down their own ideas, and they are not afraid to do it and they learn together, and because they are learning, they naturally reflect in the answers (P: 17). It can be concluded that the development of attitudes and dispositions such as respect toward others, self-confidence, self-respect, increased motivation levels, an inclination to work independently, and dispositions to enhance the quality of work, such as being eager, more focused, managing impulsivity, accuracy, persistence, and being open-minded to the opinions of others, in all probability benefitted from the cognitive approach to teaching. For this reason, the development of attitudes and dispositions could be considered as a subtheme that encapsulates another effect of the Cognitive Education intervention. The Development of Attitudes and Dispositions The Habits of Mind strategy was found to particularly benefit the development of attitudes, dispositions, and values toward the learners' school work that could elevate the quality of the school work, such as accuracy, avoiding impulsivity, and persistence.
It [Habits of Mind] also teaches the learner to manage impulsivity, and focus on accuracy (P: 8). They [the learners] are not impulsive, wait for the task and I can see a change in their way of thinking (P: 8). Everyone begins to finish their work because they are keen to put in the best they can do (P: 8). An important dimension reflected in the data concerns the development of positive dispositions toward others and oneself: Habits of Mind benefits the value system: respect and self-confidence. Habits of Mind develops values, respect, and consideration for the opinions of others (P: 11), respect[ing] . . . one another (P: 14), respecting each other's responses, [and] learning from each other (P: 7), as well as to have self-respect (P: 5). The importance of using the Habits of Mind strategy daily was voiced by one participant: Habits of Mind can definitely be used daily -things like accuracy and persistence. It should not be loose standing from the other strategies. Teaching in essence is about values (P: 14). A final sub-theme providing evidence of the effect of the Cognitive Education intervention reflects some improvement noticed in learners' performance. Improvement in Learner Performance Improvement in learner performance was observed by some of the participants. I noticed some improvement in learner achievement, as well as in their concentration (P: 5). Performance improved. They share with one another, learn from one another, and are able to solve problems (P: 9). Marks improved. I can see a change in learners' thinking and the way they approach their tasks (P: 11). For sure. Their performance is better, they communicate better and take part in discussions (P: 12). Learners are more involved and I have noticed some improvement in marks (P: 14). Two participants, participants 10 and 16, reacted cautiously regarding the improvement they noted in academic performance, as they, according to the authors, rightfully argue that there might be other variables that could have impacted improved performance too, apart from the new teaching strategies. Trends Observed in the Written Reflections The data mainly reflected positive trends concerning the three questions posed to the participants. Except for one response, none of the responses obtained were supported by half or more than half of the participants to be considered as a strong trend in the data. Therefore, without disregarding any of the responses for future adaptation to the intervention, the researchers decided to refer to particular thoughts in the data as a trend if they were supported by six or more participants. The authors concisely report the following major trends in the written reflections for the three questions. Question 1: The Participants' Feelings About the Intervention Content All the participants considered the content covered in the seven study units to be interesting, useful, and valuable in the context of preparing learners to cope with the challenges of the 21st century. It was disturbing that seven of the teachers noted that they did not realize the importance of stimulating the development of thinking skills and dispositions during teaching. They mentioned that the intervention enabled them to acquire knowledge and skills to enable their learners to do well.
Question 2: The Participants' Perceptions About New Learning That Took Place For eight of the participants, the intervention provided their first exposure to knowledge about the working of the brain, the importance of stimulating the brain, and how the brain impacts learning, thinking, and learner development. Seven participants experienced the intervention as an eye-opener to how the world has changed, and how teaching has not yet adapted to meet the challenges brought along by a changing world in the 21st century. Becoming aware of the challenges learners are faced with in the 21st century made eleven of the teachers realize the importance of creating a thinking classroom in which teaching for, of, and about thinking is placed in the center. The intervention contributed to their understanding of their role as teachers to develop cognitive processes (skills and dispositions) among learners to nurture learner autonomy. Eight of the participants especially found the use of Bloom's Taxonomy as a teaching tool to be novel, and nine participants for the first time became aware of employing assessment for, of, and about thinking. Question 3: The Participants' Suggestions for Improving the Intervention Only one strong suggestion was made by six of the participants, namely that they wished that there had been more time to receive more practical examples of how to apply the various teaching strategies to subject content, as was modeled during the intervention, in order to change their teaching practice. Important Preliminary Findings Although the intervention was initially developed to equip teachers with teaching strategies to empower learners with the thinking skills and dispositions to self-regulate learning, evidence emerged from the research results that supports the learners' progressive growth toward becoming more adept at self-directed learning as well. The participants' understanding of Cognitive Education yielded responses that testified to the latent potential of Cognitive Education being effective for developing the thinking skills required for independent self-directed academic and personal wellbeing. In particular, critical thinking skills for making choices in daily life, such as analysis, evaluation, creative thinking, and problem-solving skills, as well as social skills and skills to communicate, were foregrounded by the teachers. Thinking skills, such as creative, analytical, reflective, and evaluative thinking, are, according to Guglielmino (2013), Barrett (2014), and Obied and Gad (2017), important for academic success and ensuring personal wellbeing. Furthermore, critical thinking skills are essential to guide the planning, monitoring, and evaluation of academic self-directed learning (Bailey, 2016; Uribe-Enciso et al., 2017; Coberley-Holt and Elufiede, 2019), as well as to self-direct the emotional/affective processes (Du Toit-Brits, 2018; Sandhu and Zarabi, 2018) that are required for ensuring academic and personal wellbeing. If Cognitive Education encourages and enables learners to critically engage in the meaningful planning, monitoring, and evaluation of their school work, learners will most likely over time display greater ownership of, and engagement, immersion, and involvement in, their school work.
Additionally, greater capability to self-regulate one's work positions one to self-direct the identification and elimination of the intellectual, emotional, motivational, and environmental obstacles that may challenge the success of one's learning efforts (Huppert and So, 2009; Du Toit-Brits, 2018; Sandhu and Zarabi, 2018), and in so doing, ensure greater goal orientation and consequently enhanced academic and personal wellbeing (Huppert and So, 2009; Lewis et al., 2011; Conley, 2014; Rimpelä et al., 2020). Subsequently, the attained academic wellbeing could contribute to increased motivation, persistence, self-efficacy, and self-confidence (Conley, 2014), as signs of greater personal wellbeing. Unfortunately, the understanding of the teachers did not reflect that Cognitive Education includes the explicit development of dispositions that are central to ensuring academic and personal wellbeing. Nevertheless, the development of dispositions among the learners was identified by the teachers during the application of the newly acquired teaching strategies. Teachers found the intervention useful in enabling them to adopt a more flexible and differentiated approach to teaching in their classrooms, which could lay the foundation for creating positive classroom environments for enabling self-directed academic and personal wellbeing (Pajares, 2001; Booysen et al., 2017; Fomina et al., 2020). Flexible and differentiated teaching approaches involve learners during learning and allow them to achieve learning outcomes in different ways, and from the authors' point of view, impede frustration and boredom and contribute to academic wellbeing. Overall, it appeared that learners' academic and personal wellbeing benefitted to some extent from the cognitive approach to teaching applied by the teachers. The five core elements of the theory of personal wellbeing according to Seligman (2011, p. 16) doubtlessly benefitted from exposure to the cognitive approach to teaching. Firstly, as part of positive emotions (Seligman, 2011), the data described the learners as being happy, but more than just bearing smiles on their faces. Learners became more empowered to take charge of their learning, which could have contributed to their displaying a more passionate, enthusiastic, eager, and engaged approach toward their learning, thus reflecting some characteristics of academic wellbeing (Lewis et al., 2011; Wang and Degol, 2014) and likely experiencing some degree of satisfaction and fulfillment whilst engaged in learning. The positive emotions probably led to greater motivation, persistence, and willingness toward sustained study and task engagement (Dweck et al., 2011; Villavicencio and Bernardo, 2013; Rüppel et al., 2015), and the building of personal resources such as self-efficacy and optimism (Ouweneel et al., 2011) mentioned by the teachers. In the long run, positive emotions could promote greater study engagement to ensure better academic achievement and overall wellbeing at school (Gräbel, 2017). The detected positive emotions, such as enjoyment, eagerness, and enthusiasm concerning learning, that were experienced by the learners who were exposed to Cognitive Education are all important for academic achievement (Richardson et al., 2012; Wang et al., 2015) and emotional wellbeing (Seligman, 2011). One could therefore conclude that Cognitive Education could perhaps provide the foundation for encouraging the positive emotions that will enable academic and personal wellbeing.
Secondly, learners displayed greater positive engagement (Seligman, 2011) during learning. The discerned development of critical thinking among the learners in all probability inspired the self-assessment and monitoring of work as features of positive engagement referred to by Seligman (2011). The self-assessment and self-monitoring behavior might also signify some degree of competence toward self-directed leadership and autonomy during learning displayed by the learners as a result of the exposure to the new teaching strategies. The deeper levels of thinking exhibited by the learners during learning without a doubt prepared them for growing engagement during learning. It seems fair to argue that Cognitive Education enables learners to acquire the skills and dispositions that energize learning and prevent academic burnout (Salmela-Aro and Upadyaya, 2014; Rimpelä et al., 2020) that could negatively affect academic wellbeing. In the authors' view, Cognitive Education could therefore be accepted as an approach to teaching that would enable increased self-directed participation (behavioral engagement) in learning, a positive inclination toward learning (emotional participation), and a willingness to apply stronger mental efforts to learning (cognitive participation) (Wang and Degol, 2014). Evidence of self-directed learning is found in learners who are empowered to autonomously control the intellectual/cognitive, motivational, emotional, and contextual factors that might hamper their academic and personal wellbeing (Karademas, 2006; Ryan and Deci, 2011; Moreira et al., 2015; Sandhu and Zarabi, 2018) and cause setbacks and effort decline (Dweck et al., 2011). The development of autonomy and increased independence witnessed among the learners who were exposed to Cognitive Education could contribute to the learners developing positive self-attitudes, which, according to the data, manifested as a display of greater emotional wellbeing in the form of self-confidence that was witnessed by the teachers. The noticed academic engagement among the learners could be viewed as a sign of academic wellbeing that was probably encouraged through the acquisition of dispositions to be more focused, involved, and open-minded during learning, as well as being more disciplined to listen during teaching, thus paying better attention. The raised level of cognitive engagement could be considered as a protective factor to ensure academic wellbeing and success (Wang and Fredricks, 2014), and life satisfaction (Lewis et al., 2011). Weariness and being pessimistic about school (Salmela-Aro and Upadyaya, 2014; Rimpelä et al., 2020), helplessness (Heikkilä et al., 2010), and being avoidance-oriented (Saeri et al., 2017; Tuominen et al., 2020) as signs of academic discontent might therefore be eliminated when learners are exposed to continued Cognitive Education. The authors believe that improved engagement in learning enabled better understanding and meaning-making of subject content, which brought happiness and enjoyment to the learners. Therefore, enjoyment generated through the cognitive approach to education could be seen as a factor that can strengthen academic and emotional wellbeing. Also, enjoyment and engagement can energize learners to continuously pursue academic success that would enable them to build meaningful futures after school, consequently elevating psychological wellbeing.
Thirdly, one might suggest that, through Cognitive Education, fostering skills and dispositions to establish social connectedness and positive relationships (Seligman, 2011), an important element of wellbeing, is possible. Teachers detected among the learners the emergence of skills to work together with others, learners learning from others, showing respect to, and sharing with others, all of which suggest a reflection of teamwork and kindness (Seligman, 2011). The development of communication and social skills that was mentioned by the teachers supports the encouragement of positive social attitudes and connectedness toward others, which could be acknowledged as crucial to the social dimension of personal wellbeing (Olsson et al., 2013). The authors consider social connectedness as important for the provision of emotional support in challenging times. Social support can play a prominent role in providing encouragement and assistance to learners in the face of academic challenges that could negatively impact emotional wellbeing. Interacting and sharing with peers testify to increased social connectedness (Olsson et al., 2013), which is probably indicative of more positive attitudes toward others and personal behavior that in all probability will not manifest in isolation, sadness, and self-doubt (Saeri et al., 2017). For Olsson et al. (2013), social connectedness is viewed as a more important route than academic ability to ensure adult personal wellbeing. Fourthly, meaning as an element of wellbeing (Seligman, 2011) seemed to benefit a great deal from the new approach to teaching applied by the teachers. Experiencing meaning can be associated with the teachers' observations of numerous positive traits, feelings, and behaviors such as engagement, enjoyment, curiosity, excitement, involvement, interest, eagerness, and alertness among the learners whilst involved in classroom learning, that support the positive aspects of personal emotional wellbeing (Schimmack and Diener, 2003; Seligman, 2011; Fomina et al., 2020). The authors believe that motivation to learn, finding a sense of purpose and meaning in learning, and mastering what was learned were probably inspired by the cognitive approach to teaching. Cognitive Education in all likelihood enables the development of positive personality traits, feelings, and behaviors connected to character strength with which wellbeing is associated (Seligman, 2011). Fifthly, it would appear that the teachers' impressions of increased self-confidence, less impulsive working ways, persistence, self-control, self-efficacy, more accurate working ways, and self-respect among learners attest to some degree of accomplishment, the final core element of wellbeing, according to Seligman (2011). The mentioned traits also focus one's attention on attributes of personal emotional wellbeing that were probably achieved. If a strong sense of self-efficacy prevails, Bandura (2006) asserts that people can reign with resilient power over, and master, obstacles in the way of their self-development and life circumstances. Feelings of self-efficacy prompt one to persist (Tompkins, 2013), and can magnify accomplishment, as well as personal wellbeing (Bandura, 1994). Suldo et al. (2006) posit that teaching environments need to be modified to support academic and personal wellbeing, intending to transform learners from being dependent, to self-regulated (Ryan and Deci, 2011; Fomina et al., 2020), to self-directed (Kazachikhina, 2019).
The application of Cognitive Education demonstrated that the classroom environment indeed mediates a positive link for enabling self-directed academic and personal wellbeing (Ryan and Deci, 2001; Rüppel et al., 2015; Gräbel, 2017; Rimpelä et al., 2020), by shaping positive cognitions and positive emotions for academic and personal wellbeing that contribute to the flourishing of character strengths associated with autonomous learning, namely, self-efficacy, self-confidence, and self-esteem (Seligman et al., 2009; Macaskill and Denovan, 2013). Over and above, the implementation of a cognitive intervention at the school level supports the reasoning of Paus et al. (2008) and Choi (2018) that childhood and adolescence are the most decisive stages for cognitive development, learning how to regulate emotions, inspiring motivation and establishing social interactions; thus laying the foundation for enabling academic and personal wellbeing at an early age. In addition to probably enabling self-directed academic and personal wellbeing among learners, the intervention encouragingly also boosted emotional wellbeing among the teachers, as their competence, self-confidence, self-efficacy, and self-motivation were elevated due to their being empowered with a repertoire of new teaching strategies. These strategies empowered them to create quality inclusive teaching and learning environments that did not focus on a one-size-fits-all approach but were open to developing and transforming attitudes/dispositions as well as the thinking capacity among learners toward self-directed learning. Brockett (2009), Heikkilä et al. (2010), Macaskill and Denovan (2013), and Villavicencio and Bernardo (2013) suggest that helping learners develop personal wellbeing may bolster academic achievement and self-direction, suggesting that academic wellbeing and self-direction depend on personal wellbeing. However, the authors argue that the strong reciprocal relationships between academic and personal wellbeing (as outlined in the article) rather suggest establishing an environment that encourages the simultaneous enabling of both, with self-directed learning as the vehicle to support the enabling. The experience of meaningfulness during school engagement could spark positive emotions and satisfaction that benefit emotional wellbeing. In turn, positive emotions could lead to greater immersion and involvement in schoolwork, being prepared to put in more effort in schoolwork, and experiencing school work as meaningful. Feelings of weariness as part of school burnout, on the other hand, could ignite negative emotions that undermine emotional wellbeing. In turn, negative emotions could manifest as a pessimistic attitude toward school and contribute to feelings of academic inadequacy that could potentially impact one's psychological wellbeing concerning being successful, and vice versa. This research provided in-service teacher development that, as opposed to the current top-down approach to professional development provided by the Department of Education in South Africa (Govender, 2015), focused on teachers being self-responsible and self-directed initiators of the quality of their teaching practices. The intervention could therefore be regarded as holding a twofold benefit: firstly, to enable self-directed academic and personal wellbeing among learners via a cognitive approach to teaching and learning, and secondly, to enable teachers themselves to self-direct and enhance their teaching practices.
Potential Shortcomings and Limitations The fact that learners' experiences with the cognitive approach to teaching were not gauged could be regarded as a limitation, as learner data would have provided a richer picture that could have strengthened the findings obtained from the teacher participants. Additionally, observation research would have permitted the authors to gather reliable data concerning the classroom practices of teachers to support the preliminary findings that suggest the latent potential of Cognitive Education to enable self-directed academic and personal wellbeing. The effects of the intervention were only tracked over a short period, which makes it difficult to infer the long-term benefits of the intervention. Also, research that includes other nationalities, contexts, and learners of various ages needs to be conducted to confirm the present findings and to conclusively make deductions about the potential of Cognitive Education to contribute to the enabling of self-directed academic and personal wellbeing. Wider empirical research with diverse groups of teachers and learners needs to establish if a cognitive approach to teaching might hold situational or context-specific limitations for enabling self-directed academic and personal wellbeing. The role that external factors such as ability, gender, cultural and social contexts, home environment, lifestyle, and family influence could play in enabling self-directed academic and personal wellbeing (Suldo et al., 2006) was not explored in the study. Also, a quantitative analysis of the strength of the relationships between Cognitive Education, self-directed learning, and academic and personal wellbeing would provide greater confirmation of the qualitative findings obtained. Despite the shortcomings, the present research contributes to the theory and practice of self-directed learning and academic and personal wellbeing. Nevertheless, the authors endorse additional research in the field. Advances and Future Directions According to the research findings, many of the finer dimensions of personal wellbeing seemingly did not yet benefit from the intervention. In particular, it was not clear from the data how aspects related to psychological wellbeing, such as purpose in life, life satisfaction, personal growth, and self-acceptance, might have benefitted from the intervention. Compared to psychological wellbeing, benefits related to academic wellbeing and emotional and social wellbeing seemed to have been realized more fully. Still, from the data, it is also not clear how dimensions of emotional wellbeing, such as resilience, self-motivation, and self-doubt, might have benefitted from the Cognitive Education intervention. The aforementioned could be attributed to the exclusive focus on teaching strategies that explicitly aimed at advancing the thinking skills and dispositions to self-regulate behavior in the cognitive domain of learning. Although increased self-directed cognitive action beneficial to academic and personal wellbeing was evident, adapting the intervention to also include teaching strategies that specifically focus on shaping self-regulated behavior concerning the motivational, emotional/affective, and contextual/environmental domains of learning could deliver more powerful gains for self-directing psychological and emotional wellbeing.
To address the mentioned gaps in the future, expand the accomplished impact, and tap into the benefits of the Cognitive Education intervention for enabling self-directed wellbeing, the authors envisage undertaking a comprehensive assessment and prioritization of the wellbeing needs of the young (Fattahi et al., 2020) in the South African context. Prioritizing the wellbeing needs for the South African context would necessitate a more purposeful consideration of a wider repertoire of teaching strategies to include in the intervention that would encourage the development of the skills and dispositions for enabling self-directed wellbeing across a wider spectrum of wellbeing needs. It would be of interest to establish whether the priority that is given to psychological and social wellbeing needs (Fattahi et al., 2020), health and physical wellbeing needs (Pouresmaeil et al., 2019; Azadi et al., 2021), and spiritual wellbeing needs (Pouresmaeil et al., 2019) also manifests as prime wellbeing needs in the South African context. On further reflection, the authors concur that substantial attention should be devoted to adapting the material of the Cognitive Education intervention to integrate the core elements of academic and personal wellbeing with the principles of Cognitive Education. By adapting and strengthening the theoretical framework of the intervention, the finer dimensions of academic and personal wellbeing could be aligned to applicable teaching strategies that adequately enable academic and personal wellbeing in its entirety. An aspect that requires further inquiry is whether and how experiencing personal wellbeing in an academic context, as discovered in the research, might relate to experiencing personal wellbeing in daily life circumstances. Also, although the research findings pointed to some advances concerning learner performance that refer to deeper thinking, understanding, and achievement, the evidence from the current research is not yet strong, and the long-term effects of a Cognitive Education intervention on academic performance and achievement need to be established. Azadi et al. (2021) affirm the importance of an acceptable theoretical framework to achieve success with educational intervention programs aimed at enhancing wellbeing behaviors. Consciously embedding the design of the Cognitive Education intervention in three theoretical frameworks seemed meaningful, as beneficial research results in support of enhanced academic and personal wellbeing were offered. Expanding on the theoretical work of Teal et al. (2015) that links research in two fields, namely, self-directed learning and Positive Psychology (wellbeing), the authors aimed to present a theoretical and practical suggestion of how Cognitive Education as a theoretical foundation could bridge self-directed learning and Positive Psychology (wellbeing), thus uniting three fields that mutually reinforce and support each other, and prompt new questions to be posed and answered concerning the relationship between Cognitive Education, Positive Psychology (wellbeing), and self-directed learning. Through the interdisciplinary connection of Positive Psychology (wellbeing), Cognitive Education, and self-directed learning, new topics of investigation and practical application are illuminated.
CONCLUSION To the authors' best knowledge, the research reported is a first and novel attempt to explore the role that Cognitive Education could play in providing the conditions for enabling self-directed academic and personal wellbeing among school learners. This research demonstrated that Cognitive Education could be regarded as a form of strengths-based education. Through the application of selected teaching strategies, thinking skills and dispositions that encouraged increased self-directed action during learning were developed. The advances learners made, in particular concerning the development and application of critical thinking and metacognitive thinking, attest to benefits for supporting self-directed action during learning. The growth and development of dispositions such as being increasingly involved in learning, displaying a more open mindset toward the opinions of others, engaging with peers, exhibiting more empathy and respect toward peers, and persisting in more accurate task completion are some of the dispositions that are associated with progressive self-directed action. These thinking skills and dispositions served as protective factors for encouraging positive thoughts and emotions during learning, promoting positive engagement during learning, advancing social connectedness during learning, contributing to experiencing learning as meaningful, accomplishing greater success, and fostering increased self-efficacy that consequently advanced academic and personal wellbeing. The research findings disclosed the importance of Cognitive Education in affecting and inspiring learners and in laying the groundwork for more resilient and self-directed action in decision making that could benefit their academic and personal wellbeing. Arguments in favor of teacher support for academic and personal wellbeing are not new; however, this research clarifies how a cognitive approach to teaching capacitates teachers to modify teaching environments that would enable learners to acquire the thinking skills and dispositions necessary to become independent, self-regulated, and eventually self-directed managers of their academic and personal wellbeing. DATA AVAILABILITY STATEMENT The raw data supporting the conclusions of this article will be made available on request. Please contact MG, mary.grosser@nwu.ac.za. ETHICS STATEMENT The studies involving human participants were reviewed and approved by the Basic and Social Sciences Research Ethics Committee (Ethics number: NWU-HS-2017-0036. Project approval dates: 20-2-2017 to 20-2-2020). The patients/participants provided their written informed consent to participate in this study. AUTHOR CONTRIBUTIONS MG contributed the introduction and rationale for the research, conceptualized the intervention, and acted as editor for the intervention material. GV and MK reviewed the literature related to Cognitive Education. MG, GV, and MK contributed to data analysis and interpretation and finalized the manuscript. MG and MK developed the intervention. GV coordinated and managed the implementation of the intervention, assisted by MK. All authors were involved in presenting sessions during the intervention.
FUNDING This work was supported by a Scholarship of Teaching and Learning Grant from the North-West University, Vanderbijlpark Campus, South Africa, where the authors are employed and where the research was conducted.
Survey and molecular detection of Sri Lankan cassava mosaic virus in Thailand Cassava plantations in an area of 458 hectares spanning five provinces along the Thailand–Cambodia border were surveyed from October 2018 to July 2019 to determine the prevalence of cassava mosaic disease (CMD) caused by Sri Lankan cassava mosaic virus (SLCMV) in the region. CMD prevalence was 40% in the whole area and 80% in Prachinburi, 43% in Sakaeo, 37% in Buriram, 25% in Surin, and 19% in Sisaket provinces. Disease incidence of CMD was highest in Sakaeo (43.08%), followed by Prachinburi (26.78%), Buriram (7%), Surin (2.58%), and Sisaket (1.25%) provinces. Disease severity of CMD symptoms ranged from mild chlorosis to moderate mosaic (scores 2–3). The greatest disease severity was recorded in Prachinburi and Sakaeo provinces. Asymptomatic plants were identified in Surin (12%), Prachinburi (5%), Sakaeo (0.2%), and Buriram (0.1%) by PCR analysis. Cassava cultivars CMR-89 and Huai Bong 80 were susceptible to CMD. In 95% of cases, the infection was transmitted by whiteflies (Bemisia tabaci), which were abundant in Sakaeo, Buriram, and Prachinburi but were sparse in Surin; their densities were highest in May and June 2019. Nucleotide sequencing of the mitochondrial cytochrome oxidase 1 (mtCO1) gene of whiteflies in Thailand revealed that it was similar to the mtCO1 gene of Asia II 1 whitefly. Furthermore, the amplified AV1 gene, which encodes the capsid protein, showed 90% nucleotide identity with SLCMV. Phylogenetic analysis of complete nucleotide sequences of DNA-A and DNA-B components of the SLCMV genome determined by rolling circle amplification (RCA) indicated that they were similar to the nucleotide sequences of SLCMV isolates from Thailand, Vietnam, and Cambodia. These results provide important insights into the distribution, impact, and spread of CMD and SLCMV in Thailand. Thailand is one of the largest exporters of cassava products in the world and has a production capacity of approximately 31 million tons per year. In 2019, the export value of cassava from Thailand was 2.66 billion USD [2]. Cassava mosaic disease (CMD) caused by cassava mosaic geminivirus (CMV) is one of the most important diseases in Africa, as CMV is among the top 10 viruses affecting economically important crops [3]. The virus has a twinned icosahedral particle morphology and contains two genomic DNA components (DNA-A and DNA-B) [4]. The genome size of DNA-A and DNA-B components ranges from 2.7 to 3.0 kb [5]. CMV, which was first reported in Tanzania [6], belongs to the genus Begomovirus (family Geminiviridae) [7]. Plants affected by CMD have misshapen leaflets with foliar yellow or green mosaic patterns, curls, distortions, and mottling, which reduce the leaflet size and give a general appearance of stunting [8]. The virus is transmitted by whitefly (Bemisia tabaci) and via infected stem cuttings [9]; in Africa, these were shown to reduce the cassava yield by 35-60% and 55-77%, respectively [10]. Although nine CMV species have been reported across Africa and on islands in the Indian Ocean, only two occur in Asia, Indian cassava mosaic virus (ICMV) and Sri Lankan cassava mosaic virus (SLCMV) [11], with only the latter reported in Southeast Asia [12]. CMD emerged in Southeast Asia in 2015 [13]. In 2017, a survey of CMD was conducted in the Cambodian province of Stung Treng, which experienced an SLCMV outbreak despite its distant location from Ratanakiri [13].
Additionally, the Vietnam Academy of Agricultural Sciences Plant Protection Research Institute reported CMD in Tay Ninh province, where it damaged the established crop spanning more than 1200 ha of land in 2017 [14]. Based on a survey conducted in July and August 2018, the Department of Agriculture (DOA) of Thailand identified 22 plants with CMD symptoms in a 2.27-ha cassava plantation in Sisaket and Surin provinces in northeastern Thailand. According to the DOA, the infected plants were subsequently removed. However, CMD currently (2020-2021) affects more than 45,000 ha of the main cassava production area in Thailand [www.forecast-ppsf.doae.go.th] and has already been reported in South China and Laos [15,16]. The current study presents the outcome of a CMD survey conducted across cassava plantations in five major provinces along the Thailand-Cambodia border from October 2018 to July 2019. We used appropriate and standardized procedures including PCR and DNA sequencing to detect CMD in the tested samples. The results of this survey provide an estimate of the spread and severity of CMD, data on the prevalence of the whitefly vector, and a classification of whitefly biotypes. Survey routes and sample collection The survey was conducted from October 2018 to July 2019. Five major cassava-producing provinces of Thailand (Sisaket, Surin, Buriram, Sakaeo, and Prachinburi) located on the border with Cambodia were surveyed. An area of 458 ha planted with cassava (201 cassava fields) was used to sample cassava plantations in the five provinces (Fig 1). A total of thirty 3- to 6-month-old cassava plants were randomly sampled from a 1-ha area of the plantation along two paths intersecting in an "X"; leaves were collected from the plants for PCR detection. The precise location of sampled plants was determined using the global positioning system (Compass Deluxe Navigation, a free application) [16]. The cultivar and age of sampled cassava plants, mode of CMD transmission, symptom severity, and number of whiteflies were noted following a previous study [17] with minor modifications. The size of whitefly populations was determined by counting the numbers of adult whiteflies on the five topmost leaves of each sampled plant [18]. CMD prevalence, incidence, and symptom severity Cassava mosaic disease was quantified in terms of disease prevalence and disease incidence using the following formulas. Disease prevalence rate (%) was calculated using the following equation [19]: Disease prevalence (%) = (Number of fields with visible symptoms / Total number of fields observed) × 100. Disease incidence (%) was calculated using the following equation: Disease incidence (%) = ((N − n) / N) × 100, where N is the total number of observations, and n is the total number of plants with no disease symptoms. The severity of CMD symptoms was scored on a scale ranging from 1 to 5 (1 = no visible symptoms; 2 = mild chlorosis of the entire leaflet or mild distortion at the base of the leaflet, but overall green and healthy leaves; 3 = moderate mosaicism throughout the leaf, and narrowing and distortion of the lower one-third of the leaflet; 4 = severe mosaic and distortion of two-thirds of the leaflets, with general reduction in leaf size; and 5 = severe mosaicism, with distortion of the entire leaf) [20].
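For readers who want to tabulate these metrics directly from field counts, the two formulas above translate into a few lines of code. The snippet below is a minimal sketch only; the function names, variable names, and example numbers are illustrative and are not taken from the survey data.

```python
def disease_prevalence(fields_with_symptoms: int, total_fields: int) -> float:
    """Percentage of surveyed fields in which CMD symptoms were visible."""
    return fields_with_symptoms / total_fields * 100


def disease_incidence(total_plants: int, symptomless_plants: int) -> float:
    """Percentage of sampled plants showing symptoms: ((N - n) / N) * 100,
    where N is the total number of observations and n is the number of
    plants with no disease symptoms."""
    return (total_plants - symptomless_plants) / total_plants * 100


# Illustrative values only, not survey results.
print(disease_prevalence(fields_with_symptoms=12, total_fields=30))   # 40.0
print(disease_incidence(total_plants=900, symptomless_plants=512))    # ~43.1
```

The latent infection rate defined in the next subsection follows the same pattern (asymptomatic plants divided by all plants collected, times 100) and could be added as a third helper in the same way.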
A disease severity index (DSI) was also calculated from the individual severity scores. Two categories of infection were recognized and recorded: "cutting-borne" infection, recognized by the presence of symptoms on the lowest (earliest-formed) leaves, and "whitefly-borne" infection, recognized by the presence of symptoms on the upper leaves only (S2 Fig). The latent infection rate (%) was calculated as follows: Latent infection (%) = (Number of asymptomatic plants / Total number of plants collected) × 100. Adult whiteflies were collected from cassava fields located in Buriram, Sakaeo, and Surin provinces using an aspirator and transferred to 1.5-ml tubes containing 90% ethanol that were stored at −20˚C. Data analysis A general linear model that treated location as a fixed effect was used. Least squares means for disease severity and number of whiteflies were estimated for each location and for each cassava cultivar and were compared using Bonferroni t-tests. The data were analyzed using SAS software [21]. DNA extraction and SLCMV detection DNA was extracted from dried cassava leaves (20 mg) using the modified cetyl trimethylammonium bromide (CTAB) method [22]. Briefly, the leaves were crushed in CTAB buffer using metal beads and incubated at 65˚C for 30 min. The homogenized mixture was then added to 700 μl of chloroform:isoamyl alcohol (24:1), and DNA was precipitated using isopropanol for 3 h. The DNA pellet was washed twice with 70% ethanol and then dried at room temperature for approximately 30 min. The DNA was resuspended in water containing 100 μg/ml RNase (Thermo Fisher Scientific, Waltham, MA, USA) and stored at −20˚C. The quality and quantity of the isolated DNA were assessed by agarose gel electrophoresis and spectrophotometry [23]. To isolate DNA from whiteflies, five adults were randomly selected from among those collected at each location. Genomic DNA was isolated as previously described [24], with minor modifications. Briefly, each whitefly was crushed in lysis buffer (200 mM NaCl and 200 mM Tris-HCl [pH 8.0]) containing β-mercaptoethanol and proteinase K (10 mg/ml), and the mixture was incubated at 65˚C for 90 min. DNA was recovered by centrifugation. PCR amplification was performed in a 25-μl reaction volume containing 1× PCR buffer (PCR Biosystems, London, UK), 0.2 μM each of forward and reverse primers, and approximately 50 ng of the DNA template. The thermal cycling conditions for the AV1 gene were as follows: initial denaturation at 94˚C for 5 min; 35 cycles of denaturation at 94˚C for 40 s, annealing at 55˚C for 40 s, and elongation at 72˚C for 40 s; and final elongation at 72˚C for 5 min. For the mtCO1 gene, the thermal cycling conditions were as follows: initial denaturation at 94˚C for 5 min; 35 cycles of denaturation at 94˚C for 40 s, annealing at 52˚C (mtCO1) for 40 s, and elongation at 72˚C for 40 s; and final elongation at 72˚C for 5 min. The amplified PCR products were separated alongside a 1-kb DNA ladder (Thermo Fisher Scientific) on a 1% agarose gel stained with RedSafe Nucleic Acid Staining Solution (iNtRON Biotechnology, Sangdaewon, South Korea) in 1× Tris-acetate-EDTA buffer. The gels were visualized using a Gel Doc imaging system (Syngene, Frederick, MD, USA). Confirmed negative and positive controls were included in all assays. Complete genome characterization of the SLCMV isolate collected in Thailand The Buriram province isolate was selected as a representative of CMV species diversity.
The circular DNA of this isolate was obtained by rolling circle amplification (RCA) using phi29 DNA Polymerase (New England Biolabs, Ipswich, MA, USA), according to the manufacturer's instructions. The RCA product was digested with restriction endonucleases, and a ~2.7-kb fragment (the expected size of DNA-A and DNA-B fragments of SLCMV) was amplified from the digestion products. The DNA-A and DNA-B fragments were purified and cloned into the pGEM-T Easy vector (Promega, Madison, WI, USA) and then transformed into Escherichia coli strain DH5α cells by the heat shock method. The cloned inserts were sequenced in their entirety by primer walking (S1 Fig). Sequencing and phylogenetic analysis Nucleotide sequences of the amplified fragments were searched against the National Center for Biotechnology Information database using BLAST (https://blast.ncbi.nlm.nih.gov/Blast.cgi). Multiple sequence alignment of the nucleotide sequences was performed using Molecular Evolutionary Genetics Analysis version X (MEGA X; http://www.megasoftware.net/) [26]. Phylogenetic trees were constructed in MEGA X using the neighbor-joining method with 1000 bootstrap replications. Infected cassava plants showed at least one of the typical CMD foliar symptoms, such as a green or yellow mosaic pattern, leaflet curling, and leaflet narrowing with distortion. Disease transmitted through infected stems caused symptoms in the whole plant, whereas transmission by whiteflies caused symptoms in only the top part of the plant (S2 Fig) [17]. Approximately 95% of the CMD incidence was attributable to whiteflies, with stem cuttings being responsible for 5% of infections. Stem cutting- and whitefly-borne infections were observed in the same plot. CMD symptoms typically appear 3-5 weeks after infection [27]. There was also a strong relationship between the mode of infection and whitefly populations. CMD was mainly spread by whiteflies, especially in Sakaeo, Prachinburi, and Buriram provinces, and there was no significant difference in disease incidence between Sisaket and Surin provinces (Table 1). SLCMV symptom severity Disease severity varied significantly between provinces (p<0.05). Disease severity was highest in Prachinburi and Sakaeo provinces and low in Sisaket province (Table 1). Plant age was correlated with the severity of disease symptoms. For example, 1- to 3-month-old infected plants had an average severity of 3.75 (moderate to severe mosaicism), whereas 5- to 7-month-old plants had an average severity score of 2.58 (mild chlorosis). Symptoms caused by cutting-borne disease were more severe than those caused by whitefly-borne infection. We also surveyed cassava cultivars grown in the study area. Six cassava cultivars were identified on the surveyed route (Huai Bong 80, Rayong 9, Rayong 11, Rayong 72, Kasetsart 50, and CMR-89). Many farmers planted several cultivars in a single plot. The CMR-89 cultivar was the most common in the surveyed area, accounting for approximately 53% of the total area, followed by Rayong 72 (36%) and Kasetsart 50 (5%). Disease severity significantly differed (p<0.05) between cultivars: it was moderate in Huai Bong 80 and CMR-89 but low in Kasetsart 50 and Rayong 72 (Table 2). The main mode of transmission in all cassava cultivars was via whiteflies. Assessment of whitefly population size Whitefly nymphs and adults were collected from the abaxial surface of the five topmost leaves of cassava plants. The nymphs had a flattened oval shape and resembled scaly insects.
The average number of whiteflies significantly differed (p<0.05) between provinces. The population density was high in Sakaeo, Buriram, and Prachinburi provinces but low in Sisaket and Surin provinces (Table 1). The survey was conducted from October 2018 to July 2019, which spans the cold season (October-February), summer season (March), and rainy season (July). The number of whiteflies was high from May to July, with an average of 11.77, 10.6, and 6.59 per plant in May, June, and July, respectively; the average numbers were low from December to March. Amplification, sequencing, and phylogenetic analysis of the whitefly mtCO1 gene The nucleotide sequences of the mtCO1 gene amplified from the DNA of whiteflies collected from Surin, Sakaeo, and Buriram provinces have been deposited in the DNA Data Bank of Japan (DDBJ) under accession numbers LC579572, LC579573, and LC579574, respectively. The mtCO1 gene amplified from whiteflies in these provinces showed 99% sequence similarity to that of B. tabaci Asia II 1. Phylogenetic analysis of the mtCO1 sequences of B. tabaci from Thailand showed that they grouped closely with reference sequences determined for a large number of Asia II 1 specimens collected from other regions of the world (Fig 2). PCR-based detection of CMV PCR products were amplified from 1434 of 6120 samples collected from the five provinces using AV1-specific primers (S3 Fig). Of the 1434 PCR-positive samples, 61.7% were collected from Sakaeo, 20% from Prachinburi, 13% from Surin, 4.5% from Buriram, and 0.8% from Sisaket provinces. The PCR results also revealed that 205 samples harbored a latent infection; of these cases, samples from Surin and Buriram provinces showed the highest and lowest percent infection rates, respectively, whereas no latent infection was detected in plant samples from Sisaket province. Additionally, among the different cultivars, latent infection was detected in CMR-89 and Rayong 72 but not in Kasetsart 50, Rayong 9, Rayong 11, and Huai Bong 80. Whole-genome sequence of SLCMV The complete genomic DNA sequence of the Buriram SLCMV isolate was obtained by RCA, and nucleotide sequences of DNA-A and DNA-B were submitted to the DDBJ under accession numbers LC586845 and LC588395, respectively. A BLAST search revealed that the DNA-A and DNA-B nucleotide sequences of the Buriram isolate were highly similar to those of previously characterized SLCMV isolates, with the highest sequence identity (99%) to isolates from Prachinburi (MN026159) [16]. We also conducted a phylogenetic analysis of the whole genome sequence of the Buriram SLCMV isolate. The phylogenetic tree indicated that the SLCMV isolates collected in our study belonged to the same species and were closely related to isolates from Vietnam (Figs 3 and 4).
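As a side note on reproducibility: the neighbor-joining analyses above were run in MEGA X, but the same kind of tree can also be generated programmatically. The sketch below is only an illustration using Biopython on a pre-aligned FASTA file of mtCO1 (or SLCMV) sequences; the file names and the simple identity distance model are assumptions, and the 1000-replicate bootstrap used in the study is omitted for brevity.

```python
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Pre-aligned sequences (e.g., exported from MEGA X or another aligner);
# the file name is hypothetical.
alignment = AlignIO.read("mtco1_aligned.fasta", "fasta")

# Pairwise distances under a simple identity model, then a neighbor-joining tree.
calculator = DistanceCalculator("identity")
distance_matrix = calculator.get_distance(alignment)
tree = DistanceTreeConstructor().nj(distance_matrix)

Phylo.draw_ascii(tree)                        # quick text rendering of the tree
Phylo.write(tree, "mtco1_nj.nwk", "newick")   # save for a dedicated tree viewer
```

A script like this is useful mainly as a cross-check of tree topology; substitution-model choice and bootstrap support would still be handled in a dedicated phylogenetics package.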
Thailand is the main distributor of cassava to Cambodia, Laos, and other Southeast Asian countries [28]. CMD could rapidly spread through infected plant material transported across this region. We determined that the CMD outbreak in Thailand was initially caused by infected stem cuttings (the primary infection source), and that the second wave of the epidemic was caused by CMV transmitted via whiteflies. In Africa, CMD epidemics have been primarily driven by whiteflies [8,11,29,30]; however, in Asia, whiteflies appear to play a secondary role in the spread of CMD. Nonetheless, the results of epidemiologic studies in Ivory Coast, Kenya, and Uganda support our speculation that the spread of CMD into and within the cassava cultivation area was directly related to the number of adult whiteflies present and to the incidence of CMD in the locality or administrative district where the trials were carried out and where CMD dissemination was widespread. Any subsequent spread of CMD is attributable to viruliferous whiteflies moving between or within planting areas after acquiring the virus from cassava plants grown from infected cuttings or infected by whiteflies during growth [29,31]. The region bordering Thailand and Cambodia is rich in forests and high mountains, which act as a natural barrier to the movement of whiteflies. It is possible that the incidence of CMD in Thailand near the Cambodian border was caused by the exchange of infected cassava planting material among local populations. Disease incidence is related to fluctuations in the whitefly population due to environmental factors such as rainfall, wind, and temperature [10]. We found a large number of whiteflies in Prachinburi and Sakaeo provinces, consistent with the disease incidence rates in these provinces (Table 1). Furthermore, whitefly population size affects the spread of CMD, as whiteflies can travel distances of up to 100 km a year [8], with an estimated flight speed of approximately 0.2 m/s [10]. During its life cycle of approximately 30-40 days, a female whitefly lays up to 300 eggs on the abaxial surface of leaves [32]. Temperature, humidity, and rainfall influence the population size of adult whiteflies. Conditions conducive to an increase in whitefly numbers include temperatures <35°C and a relative humidity of approximately 65-73% [15]. Whitefly density was highest in May, followed by June and July. May marks the beginning of the rainy season in Thailand, with temperatures <30°C and approximately 64% relative humidity [33]. We therefore propose that farmers be persuaded to modify their traditional planting practices, which include planting soon after the onset of the rainy season, to avoid the high disease incidence caused by abundant whitefly populations. We found that most infections were caused by whiteflies, which influenced the spread of CMD in the surveyed area. CMV transmitted by whiteflies has caused CMD not only in Thailand but also in Africa, where it has occurred since the 16th century [34]. Understanding the ecological and biological characteristics of whiteflies can aid the prediction of future CMD epidemics from weather data, thereby facilitating disease management [17]. Disease severity is affected by virus strain, plant age, plant genotype, and environmental conditions [35]. In CMD-resistant varieties, the appearance of symptoms in leaves is influenced to a greater extent by cooler temperatures than by hot weather [36].
Moreover, symptoms are exacerbated in plants regenerated from infected planting material. In this study, CMR-89 was susceptible to CMD (>70% disease incidence) and showed the highest disease severity among the tested cultivars. Although CMR-89 is not a DOA-certified cultivar, it is grown in approximately 22% of cassava plantations in Thailand (Office of Agricultural Economics, 2018). In one study that screened cassava cultivars for CMD resistance by grafting, CMR-89 and Rayong 11 were found to be susceptible to CMD, whereas Kasetsart 50, Rayong 72, and Huai Bong 60 showed moderate resistance [37]. Discontinuing the cultivation of CMR-89 and promoting the cultivation of CMD-tolerant or moderately resistant cultivars is critical for controlling this disease. The pattern of CMD spread differed depending on the mode of transmission. Most cassava plants infected by whiteflies were located at the edge of the plot, with the infection then spreading inward. Whitefly density was especially high in newly planted cassava stands located close to mature cassava plants. Similar cases have been reported in several countries in East and West Africa, where new cassava plantings were colonized by whitefly populations immigrating from older cassava stands in the area. The immigrant whiteflies reproduced and reached their peak population size within a few months, and before the population declined, adults dispersed to younger cassava plants [38][39][40]. Thus, whitefly counts can be useful for predicting and controlling the spread of CMD, and farmers should frequently monitor their cassava plants and whitefly populations. CMD has been reported in several Southeast Asian countries. SLCMV has been detected in cassava fields in Thailand, and similar viruses have been reported in Cambodia, Vietnam, and China [13,16,41]. CMD was reported in Thailand in 2018 after its occurrence in Cambodia and Vietnam. The viral strain identified in this study has the same origin as that first reported in Ratanakiri, Cambodia, and in other studies conducted in Thailand [16,42]. The Rep protein encoded by DNA-A of the Buriram SLCMV isolate had seven additional amino acid residues at its C-terminal end. This 7-amino-acid motif is essential for the accumulation of the Rep protein and the virulence of SLCMV [12]. The genomes of SLCMV isolates from Southern India, Sri Lanka, and Southeast Asia were not recombinant but harbored a point mutation [16]. Additionally, SLCMV isolates from Southeast Asia, China, and India clustered together in a group separate from the original SLCMV isolate from Colombo, Sri Lanka (AJ314737) [4] (Fig 3). Further investigation is needed to determine the host range of SLCMV, clarify the mechanisms of trans-replication of its DNA components, and identify the genetic determinants of symptoms. According to the phylogenetic analysis, the partial coding sequence of mtCO1 of B. tabaci from Thailand was classified as the Asia II 1 cryptic species, which must be taken into account in whitefly population management.
Based on our results, we propose the following basic approaches for controlling the outbreak and spread of CMD: 1) educate farmers and agricultural extension officers about CMD, including how to distinguish CMD symptoms from mineral deficiency or herbicide toxicity; 2) develop CMD-resistant cassava varieties and cultivate them on a sufficiently large scale; 3) practice phytosanitary techniques such as the use of CMD-free planting material and removal (roguing) of diseased plants; and 4) avoid planting cassava varieties susceptible to CMD, such as CMR-89 and Rayong 11, especially in high-risk areas. Conclusion We surveyed the spread of CMD in five major cassava-producing provinces of Thailand along the border with Cambodia. This is the first survey to report patterns of CMD spread, disease incidence and severity, and whitefly density in Thailand. This information will aid the development of disease management strategies to reduce the spread of CMD in affected areas. Although conducting surveys is costly and time-consuming, the information obtained is critical for disease epidemiology.
2021-05-28T13:26:20.883Z
2021-05-25T00:00:00.000
{ "year": 2021, "sha1": "47d0f20fb09120eceafedae13852cfdaead911a1", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0252846&type=printable", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "c04986f7b1aa79fe48812d499d002b6b7ec3c533", "s2fieldsofstudy": [ "Environmental Science", "Agricultural and Food Sciences", "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
119582797
pes2o/s2orc
v3-fos-license
Bounds for higher topological complexity of real projective space implied by BP We use Brown-Peterson cohomology to obtain lower bounds for the higher topological complexity, TC_k(RP^n), of real projective spaces, which are often much stronger than those implied by ordinary mod-2 cohomology. Introduction and main results In [8], Farber introduced the notion of topological complexity, TC(X), of a topological space X. This can be interpreted as one less than the minimal number of rules, called motion planning rules, required to tell how to move between any two points of X. This became central in the field of topological robotics when X is the space of configurations of a robot or system of robots. This was generalized to higher topological complexity, TC_k(X), by Rudyak in [10]. This can be thought of as one less than the number of rules required to tell how to move consecutively between any k specified points of X. Our general result, Theorem 1.1, gives bounds in many cases; we then focus in Theorem 1.4 on a particular family of cases, which we show is often much stronger than the results implied by mod-2 cohomology. The general result is obtained from known information about the BP-cohomology algebra of products of real projective spaces. It gives conditions under which nonzero classes of a certain form can be found. Here and throughout, ν(-) denotes the exponent of 2 in an integer. Theorem 1.1. Let k ≥ 3 and r ≥ 0. Suppose there are positive integers a_1, . . . , a_{k-1} whose sum is km - (2^k - 1)2^r such that the hypothesis (1.2) holds for all j_1, . . . , j_{k-1} with j_i ≤ m and Σ_{i=1}^{k-1} j_i ≥ (k-1)m - (2^k - 1)2^r. Suppose also that the hypothesis (1.3) holds. Theorem 1.1 applies in many cases, but we shall focus on one family. Here and throughout, α(-) denotes the number of 1's in the binary expansion of an integer. We prove Theorems 1.1 and 1.4 in Section 2. In Section 3, we describe more specifically some families of particular values of (m, k, r) to which this result applies, and the extent to which these results are much stronger than those implied by mod-2 cohomology. In Section 4, we prove that the cohomology-implied bounds for TC_k(P^n) are constant for long intervals of values of n. In these intervals, the BP-implied bounds become much stronger than those implied by cohomology. Proofs of main theorems In this section, we prove Theorems 1.1 and 1.4. The first step, Theorem 2.1, follows suggestions of Jesus González, and is similar to work in [2]. We are very grateful to González for these suggestions. There are canonical elements X_1, . . . , X_k in BP^2((P^n)^k), where (P^n)^k is the Cartesian product of k copies of P^n. Proof. Let (P^n)^[0,1] denote the space of paths in P^n, and P_{n,k} = (S^n)^k/((z_1, . . . , z_k) ~ (-z_1, . . . , -z_k)) a projective product space ([5]). The quotient map π: P_{n,k} → (P^n)^k is a (Z_2)^{k-1}-cover, classified by a map µ: (P^n)^k → B((Z_2)^{k-1}) = (P^∞)^{k-1}. The map p: (P^n)^[0,1] → (P^n)^k defined by σ ↦ (σ(0), σ(1/(k-1)), . . . , σ((k-2)/(k-1)), σ(1)) lifts to a map p̃: (P^n)^[0,1] → P_{n,k} ([2, (3.2)]). A definition of TC_k(P^n) is as the sectional category secat(p). The lifting p̃ implies that secat(p) ≥ secat(π). Let G = (Z_2)^{k-1}, and B_t G = (*^{t+1} G)/G, where *^{t+1} G denotes the iterated join of t+1 copies of G. Note that B_t G is the t-th stage in Milnor's construction of BG, with a map i_t : B_t G → BG. By [11, Thm 9, p. 86], as described in [2, (4.1)], µ lifts to a map µ̃: (P^n)^k → B_{secat(π)} G. The map µ satisfies µ*(X_i) = u_i(X_i - X_k) for 1 ≤ i ≤ k-1, with u_i a unit.
Since µ* = µ̃* ∘ i*_{secat(π)} and B_t G is t-dimensional, µ*(X_1^{a_1} ··· X_{k-1}^{a_{k-1}}) = 0 if 2a_1 + ··· + 2a_{k-1} > secat(π). The theorem now follows, since Π(X_i - X_k)^{a_i} ≠ 0 implies µ*(Π X_i^{a_i}) ≠ 0, which implies Σ 2a_i ≤ secat(π) ≤ secat(p) = TC_k(P^n). We use this to prove Theorem 1.1. In the relevant product formula (2.2), the sum is taken over all permutations (ℓ_1, . . . , ℓ_k) of {2^r, . . . , 2^{r+k-1}}. (An analogous result was derived in BP-homology in [7], following similar, but not quite so complete, results in [12] and [4], which also discussed the dualization to obtain BP-cohomology results.) The result follows from Theorem 2.1 once we show that (X_1 - X_k)^{a_1} ··· (X_{k-1} - X_k)^{a_{k-1}} ≠ 0 ∈ BP^{2km - (2^k - 1)2^{r+1}}((P^{2m})^k). This expands as a sum over the values of j_1, . . . , j_{k-1} described in Theorem 1.1. By (2.2) and (1.2), this equals the sum (2.3), with ℓ = (ℓ_1, . . . , ℓ_{k-1}) as in (1.3). Note here that ℓ_k = 2^{r+k} - 2^r - ℓ_1 - ··· - ℓ_{k-1}. The terms in (2.3) are 0 unless the exponent of each X_i equals m, since otherwise there would be a factor X^p with p > m. We are left with a sum over (ℓ_1, . . . , ℓ_{k-1}) as above, and this is nonzero by the hypothesis (1.3) and the fact, as was noted in [12], that by the (proven) Conner-Floyd conjecture, v_k^h X_1^m ··· X_k^m ≠ 0 for any nonnegative integer h. In the following proof of Theorem 1.4, we will often use without comment Lucas's Theorem regarding binomial coefficients mod 2, and that ν(C(m, n)) = α(n) + α(m - n) - α(m), where C(m, n) denotes the binomial coefficient, and α(x - 1) = α(x) - 1 + ν(x). (2.4) Proof of Theorem 1.4. We explain the proof when k ≥ 4 and A ≡ 6 (mod 8), and then describe the minor changes required when A ≡ 3 or k = 3. We apply Theorem 1.1 with a_i = m - (2^k - 1)2^{r-i} for 1 ≤ i ≤ k-3, a_{k-2} = m, and a_{k-1} = 2m - (2^k - 1)2^{r-(k-3)}. Lemma 2.5 with t = k-2 shows that ν(C((8B+6)2^{k-2} - 2^{k+1}, h)) ≥ α(B) for the required values of h. The proof for arbitrary j (in the required range) follows from an easily proved fact. Now we prove (1.3). We divide the top and bottom of the binomial coefficients by 2^{r-(k-3)}; this does not change the exponent. The tops are transformed accordingly, and the bottoms are selected from 2^{k-3}A - 2^{k-3}, . . . , 2^{k-3}A - 2^{2k-4}. All the bottoms except the last one are greater than the first top. Thus to get a nonzero product in (1.3), the last bottom must accompany the first top, and after dividing top and bottom by 2^{k-4}, it becomes C(2A - (2^k - 1), 2A - 2^k) ≡ 1 mod 2. Similar considerations work inductively for all but the final two factors, showing that the i-th bottom from the end must appear beneath the i-th top and gives an odd factor. What remains is a product in which (j, j') ranges over the ordered pairs of distinct elements of the remaining set. The +1 on top does not affect the exponent of the binomial coefficients, and so we may remove it and then divide tops and bottoms by 2^{k-3}, obtaining an expression in which (j, j') ranges over ordered pairs from A-1, A-2, and A-4. Thus the sum in (1.3) has ν(-) = 2^r, coming from the single summand corresponding to (j, j') = (A-4, A-2). When A ≡ 3 mod 8, minor changes must be made in the above argument. Part (a) of Theorem 1.4 follows similarly. We have a_1 = m and a_2 = 2m - 7·2^r. Then, by the same methods as used above, we show that, with m as in the theorem, and with P denoting a positive number and I a number which is irrelevant: (i) if m - 7·2^r ≤ j ≤ m, then ν(C(2m - 7·2^r, j)) ≥ 2^r; (ii) the values (ν(C(m, m - 2^r)), ν(C(m, m - 2^{r+1})), ν(C(m, m - 2^{r+2}))) are (0, P, 0) (resp. (P, 0, I)) in case (i) (resp. (ii)) of the theorem. The following lemma was used above. Proof. Using (2.4), we can show an identity from which the lemma is immediate.
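The identities in (2.4) are easy to check by machine. The following sketch is our own sanity check, not part of the source's proof: it verifies both identities by brute force over a sample range.

from math import comb

def nu(x: int) -> int:
    # Exponent of 2 in x (2-adic valuation); requires x > 0.
    return (x & -x).bit_length() - 1

def alpha(x: int) -> int:
    # Number of 1's in the binary expansion of x.
    return bin(x).count("1")

# Check nu(C(m, n)) = alpha(n) + alpha(m - n) - alpha(m), and
# alpha(x - 1) = alpha(x) - 1 + nu(x), as stated in (2.4).
for m in range(1, 200):
    for n in range(0, m + 1):
        assert nu(comb(m, n)) == alpha(n) + alpha(m - n) - alpha(m)
for x in range(1, 10_000):
    assert alpha(x - 1) == alpha(x) - 1 + nu(x)
print("identities (2.4) verified on the sampled range")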
Numerical results In this section, we compare the lower bounds for TC_k(P^{2m}) implied by BP with those implied by mod-2 cohomology. In [6], the best lower bounds obtainable using mod-2 cohomology were determined. They are restated here in (3.2). In Table 1, we compare, for 32 ≤ m ≤ 63, the lower bound for TC_3(P^{2m}) implied by mod-2 cohomology (H*-bound) with that implied by BP (BP-bound).

Table 1. Lower bounds for TC_3(P^{2m}), 32 ≤ m ≤ 63.
m    H*-bound   BP-bound
32   192        152
33   198        152
34   204        190
35   206        190
36   216        190
37   222        208
38   222        214
39   222        214
40   240        214
41   246        232
42   252        238
43   254        238
44   254        238
45   254        238
46   254        248
47   254        248
48   254        248
49   254        280
50   254        286
51   254        286
52   254        286
53   254        304
54   254        310
55   254        310
56   254        310
57   254        310
58   254        320
59   254        320
60   254        332*
61   254        332
62   254        332
63   254        332

In Table 2, we present another comparison of the results implied by Theorem 1.4 and those implied by ordinary mod-2 cohomology. We consider lower bounds for TC_4(P^{2m}) for 2^11 ≤ m < 2^12. In Table 2, the first column refers to a range of values of m, the second column to the number of distinct new results implied by Theorem 1.4 in that range, and the third column to the range of the ratio of bounds implied by Theorem 1.4 to those implied by ordinary cohomology. There are many other stronger bounds implied by BP via Theorem 1.1, but our focus here is on the one family which we have analyzed for all k and r. We now extend the analysis of this latter range to TC_k(P^{2m}) for arbitrary k and an arbitrary 2-power near the end of the range. In Theorem 4.1, we will show that the bound for TC_k(P^{2m}) implied by cohomology has the constant value (k-1)(2^e - 1) for ⌊((k-1)/k)·2^e⌋ ≤ 2m ≤ 2^e - 1. In this range, the bound implied by Theorem 1.4 will increase from a value approximately equal to the cohomology-implied bound to a value which, as we shall explain, is asymptotically as much greater than the cohomology-implied bound as it could possibly be. The following result, Proposition 3.1, gives a bound at the end of each 2-power interval, since each e can be written uniquely as 2^r + r + 3 + d for 0 ≤ d ≤ 2^r. For example, the case r = 1, d = 0, k = 3 in this proposition is the 332* next to m = 60 in Table 1, and the case r = 2, d = 3, k = 4 gives m = 3980, the start of the last row of Table 2. For r ≥ 1 and 0 ≤ d ≤ 2^r, with m chosen accordingly, TC_k(P^{2m}) ≥ 2km - (2^k - 1)2^{r+1}. Proof. It is straightforward to check that the conditions of Theorem 1.4 are satisfied for these values of m and r. For m as in Proposition 3.1, the lower bound for TC_k(P^{2m}) implied by cohomology is (k-1)(2^{2^r + r + 4 + d} - 1). One can check that the ratio of the bound in Proposition 3.1 to the cohomology bound is greater than k/(k-1) - 1/2^{2^r + 1}. Since, as was noted in [2], (k-1)n ≤ TC_k(P^n) ≤ kn, the largest the ratio of any two estimates of TC_k(P^n) could possibly be is k/(k-1). Thus the BP-bound improves on the cohomology bound asymptotically by as much as it possibly could, as e (hence r) becomes large. Jesus González ([2]) has particular interest in estimates for TC_k(P^{3·2^e}). We shall prove the interesting fact that our Theorems 1.1 and 1.4 improve significantly on the cohomological lower bound for TC_3(P^{3·2^e}), but not for TC_k(P^{3·2^e}) when k > 3. In Table 3, we compare the bounds for TC_3(P^{3·2^e}) implied by Theorem 3.3 and by (3.2) for various values of e. Every e has a unique r and d. The m-column is the value of m < 3·2^{e-1} which appears in the proof of 3.3. The "BP-bound" column is the bound for TC_3(P^{3·2^e}) given by Theorem 3.3, and the "H*-bound" column that given by (3.2). The final column is the ratio of the BP-bound to the H*-bound, which approaches 1.125 as e gets large.
For e = 11, Theorem 1.1 gives a still better bound: it is about 1.4% better than that implied by Theorem 3.3 and 11.6% better than that implied by cohomology. For one who wishes to check this result when e = 11, use m = 3066, r = 3, and a_1 = 3287 in Theorem 1.1. The values of ν(C(a_1, m - 2^{r+ε})) (resp. ν(C(a_2, m - 2^{r+ε}))) for ε = 0, 1, 2 are (5, 6, 7) (resp. (6, 6, 3)). The TC_k(P^n) result implied by mod-2 cohomology, in a range In this section, we prove that the lower bound for TC_k(P^n) implied by cohomology is constant in the last 1/k portion of the interval between successive 2-powers. This generalizes the behavior seen in Table 1 (k = 3) or Table 2 (k = 4). In the previous section, we showed that the bound implied by BP rises in this range to a value nearly k/(k-1) times that of the cohomology bound, which is as much as it possibly could be. Recall from [2] or [6] that zcl_k(P^n) is the lower bound for TC_k(P^n) implied by mod-2 cohomology. It is an analogue of Theorem 2.1, except that classes are in grading 1 rather than grading 2. Here we prove the following new result about zcl_k(P^n). Theorem 4.1. For k ≥ 3 and e ≥ 2, zcl_k(P^n) = (k-1)(2^e - 1) for ⌊((k-1)/k)·2^e⌋ ≤ n ≤ 2^e - 1. Note that, since (k-1)n ≤ zcl_k(P^n) ≤ kn (by [2] or [6]), this interval of constant zcl_k(P^n) is as long as it could possibly be. Proof. We rely on [6, Thm 1.2], which can be interpreted to say that, with n_t denoting n mod 2^t, zcl_k(P^n) = kn - max(2^{ν(n+1)} - 1, k·n_t - (k-1)(2^t - 1)), (4.2) with the max taken over all t for which the initial bits of n mod 2^t begin a string of at least two consecutive 1's. That zcl_k(P^{2^e - 1}) = (k-1)(2^e - 1) is immediate from (4.2). Since zcl_k(P^n) is an increasing function of n, it suffices to prove that if n = ⌊((k-1)/k)·2^e⌋, then zcl_k(P^n) = (k-1)(2^e - 1). (4.3) The case k = 3 is slightly special, since the binary expansion of n = ⌊2^{e+1}/3⌋ does not have any consecutive 1's. For this n, (4.2) implies that zcl_3(P^n) = 3n + 1 - 2^{ν(n+1)} = 2^{e+1} - 2, as desired. From now on, we assume k > 3 in this proof. One part that we must prove is the inequality (4.4), for n as in (4.3). Write 2^e = Ak - δ with 0 ≤ δ ≤ k - 1. Then n = 2^e - A, and the desired inequality reduces to k - δ ≥ 2^{ν(A-1)}, since ν(A-1) = ν(2^e - A + 1). If A - 1 = 2^t u with u odd, then k - δ = 2^e - 2^t uk ≥ 2^t since k - δ > 0, proving the inequality. The rest of the proof requires the following lemma. Lemma 4.5. Let k be odd, and e the multiplicative order of 2 mod k; thus e is the smallest positive integer such that k divides 2^e - 1. Let m = (k-1)(2^e - 1)/k, and let B be the binary expansion of m. If t = αe + β with 0 ≤ β < e, then the binary expansion of ⌊(k-1)2^t/k⌋ consists of the concatenation of α copies of B, followed by the first β bits of B. Also, the binary expansion of ⌊(2^v k - 1)2^{v+t}/(2^v k)⌋ with k odd equals that of ⌊(k-1)2^t/k⌋ preceded by v 1's. If k ≥ 4, B begins with at least two 1's. Proof. Let f_t = (k-1)2^t/k. Then, letting {f} = f - ⌊f⌋ denote the fractional part of f, one computes the relation between consecutive f_t. This shows that as t increases, the binary expansions of the ⌊f_t⌋ are just initial sections of subsequent ones. They start with at least two 1's when k ≥ 4, since ⌊2^2(k-1)/k⌋ = 3. If e is as in the lemma, then adding this e to the exponent just appends B in front of the binary expansion, and a similar computation shows the appending of 1's in front. In Table 4, we list some values of B, the binary expansion of m, for the m associated to k as in Lemma 4.5. The property (4.7) says roughly that the beginning of B has more 1's than anywhere else in B.
For any k > 3 and n = ⌊((k-1)/k)·2^e⌋ as in (4.3), equations (4.2) and (4.4) imply that zcl_k(P^n) ≤ kn - (kn - (k-1)(2^e - 1)) = (k-1)(2^e - 1), with equality if, for all t for which the initial bits of n mod 2^t begin a string of at least two consecutive 1's, k·n_t - (k-1)(2^t - 1) ≤ kn - (k-1)(2^e - 1). This is equivalent to 1 - 1/k ≤ (n - n_t)/(2^e - 2^t). (4.6) By the lemma, if k is odd (resp. even), the RHS of (4.6) is the same as (resp. greater than) it would be if (n, e) is replaced by (m, e), with notation as in the lemma, provided t ≤ e. Note that equality holds in (4.6) if (n, e, t) is replaced by (m, e, 0). Hence, again using the lemma for cases in which t > e, (4.6) will follow from its validity if (n, e) is replaced by (m, e), and, since 1 - 1/k = m/(2^e - 1), this reduces to showing m_t/(2^t - 1) ≤ m/(2^e - 1). (4.7) Let q = (2^e - 1)/k = 2^e - 1 - m and q_t = 2^t - 1 - m_t its reduction mod 2^t. Now the desired inequality reduces to q_t/(2^t - 1) ≥ q/(2^e - 1) = 1/k; i.e., kq_t ≥ 2^t - 1. We can prove the validity of this last inequality as follows. Write q = q_t + 2^t α for an integer α. Then 2^e - 1 = kq = kq_t + 2^t αk. Reducing mod 2^t gives the desired result. Remark 4.8. It appears that the stronger inequality kq_t ≥ 3·2^{t-1} holds when q = (2^e - 1)/k, but we do not need it, and it seems much harder to prove.
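Formula (4.2) also lends itself to a machine check of Theorem 4.1. The sketch below is our own sanity check, not part of the source: it implements (4.2) under our reading of the side condition (the binary expansion of n mod 2^t begins with at least two 1's, i.e. n_t ≥ 3·2^(t-2)) and confirms the constant value on the stated interval for a sample (k, e).

def nu(x: int) -> int:
    # Exponent of 2 in x (x > 0).
    return (x & -x).bit_length() - 1

def zcl(k: int, n: int) -> int:
    # Lower bound for TC_k(P^n) implied by mod-2 cohomology, per (4.2).
    best = 2 ** nu(n + 1) - 1
    for t in range(2, n.bit_length() + 1):
        n_t = n % 2 ** t
        if n_t >= 3 * 2 ** (t - 2):   # leading bits of n_t are "11" (our reading)
            best = max(best, k * n_t - (k - 1) * (2 ** t - 1))
    return k * n - best

k, e = 4, 10
lo = (k - 1) * 2 ** e // k            # floor((k-1)/k * 2^e)
values = {zcl(k, n) for n in range(lo, 2 ** e)}
print(values)                          # expected: {(k-1) * (2**e - 1)} = {3069}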
2018-01-18T13:34:45.000Z
2018-01-18T00:00:00.000
{ "year": 2019, "sha1": "8f44185af5e2d8c2c28f9b5849fe33d692912ba1", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1801.06006", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "8f44185af5e2d8c2c28f9b5849fe33d692912ba1", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
236420602
pes2o/s2orc
v3-fos-license
Juridical Analysis of the Authenticity of Notary Deed after Apostille is Implemented in Indonesia One of the obstacles in international civil and commercial law practice traffic is that documents originating from abroad to be used domestically or otherwise domestic documents will be used abroad to be used as authentic evidence; it is required that chain legalisation of these documents be used be held. (Brosch & Mariottini, 2020). Of course, this procedure is not effective and efficient because the costs and time are wasted to fulfil the formalities so that the document can be used as evidence to ensure the security of every transaction that occurs (Bastos, 2019). This simplicity of formality certainly requires a way to simplify the formality, which is often difficult by not reducing the document's authenticity and the quality of proof of the document concerned (Bahri & Yahanan, 2019). Business contracts have become more efficient and are quicker to create and distribute among each party. Physical presence and signatures can be represented through digital and proof authenticity can be guaranteed through storage media that can be accessed easily and quickly (Coresy & Saleh, 2020). Almost all countries have expanded their territorial boundaries with jurisdictions through cyber laws in their respective countries. Globalisation and electronic disruption are reasons for removing and breaking legal barriers that are too strict and rigid (Perry, Hofmann & Scrivens, 2017). Law is made more flexible and adequate in meeting the needs and developments of the times (Mertokusumo, 2009). Laws that are often left behind by business transaction activities are designed to adapt and adapt to every business activity. Especially in Indonesia, various changes have been made regarding legal regulations and technical services to support Abstract Apostille: Removing the Chain of Authenticity of Public Documents The substance of the convention is to establish the method by which documents issued in one country are applying for legal purposes in all other signatories. The technique in this convention is in the future known as Apostille (from Latin post illa and French: marginal note) (Pertegás, 2017). This requirement is enforced because the country of destination/recipient of the document does not know the identity, official capacity of the person signing the form, or the authority that sealed the document (Rukmana, Savitri & Padha, 2021). Therefore, certification is enforced in every country and requires every public record certified by an official who understands the document. This is behind the procedure known as "legalisation" (Hague Conference on Private International Law, 2013). Legalisation describes the process by which a signature/seal/stamp is applied to a public document as a series of authentication and certification by a public official where the final authentication by an official in the destination country (legalisation chain) can have a legal effect. Figure 1. Legalisation Chain With the enactment of Apostille, it can remove the legalisation process and replace it with a single formality through the issuance of an authentication certificate called "Apostille" by an authority designated by the country of origin (called "Competent Authority") (Sihombing, 2020). It is a simple process defined by the convention, which can be illustrated as follows: Figure 2. 
Based on the provisions of the convention and the guidelines above, the Apostille has only a limited effect, confined to the origin of the underlying public document rather than to the quality of that document (Sugianto, 2013). These restricted effects cover the verification of the authenticity of the document's signature, the capacity of the signatory, and the identification of the seal or stamp on the document. The substance of the public record is unrelated to the Apostille (Sugianto, 2013). Although the Apostille certifies a document's public origin, it adds no legal significance to the document's content and no legal implications to the signing of the public record. Here it is recommended that the competent authorities provide notice of the restricted effects of the Apostille, as discussed at the HCCH Special Commission (Suwantara & Sukma, 2021). The Apostille does not certify that a public document complies with all requirements of private law. Whether a document is a public document, and the inspection of possible defects, is a matter of private law that may rest with the Competent Authority, for example, whether a notary could perform certain acts or legalise the document concerned (Tan, 2020). Of course, the competent authority cannot be obliged to inspect such matters, since an Apostille has no legal impact other than certifying the public origin of the document, whether or not any underlying defect is cured (Tobing, 1982). The Apostille Convention does not affect the right of the country of destination to decide the evidentiary validity of foreign public records. Nevertheless, under the convention no country of destination may reject the Apostille as such, while the country of destination may still determine the admissibility and evidentiary weight of the document. It may also set a time frame for accepting foreign public documents (for example, documents may need to be produced within a certain period after execution), although such a limit cannot apply to the Apostille itself (Tol, 2020). Moreover, the law of evidence of the country of destination also defines how far foreign public records can establish proof. The convention does not restrict the effect of an Apostille as long as it is recognisable and is attached to the publicly issued text; every issued Apostille has effect. Therefore, an Apostille should not be refused on account of its age (Rusu, 2019). However, this does not impair the establishment of limitations on the acceptance of public documents by the competent authorities in the country of destination based on their private law (Setiadewi & Wijaya, 2020). The Apostille cannot be implemented without the presence of a competent authority. The Competent Authorities play a central role and form the backbone of the Apostille's operation. They perform three essential functions: verifying the authenticity of public documents, issuing Apostilles, and recording every Apostille issued in a register. The proper functioning of the Apostille depends on the diligence, effectiveness and accuracy with which these functions are performed. The Apostille register can be kept in paper (index card) or electronic format. Nowadays, in the digital and electronic era, most Competent Authorities keep their Apostille registers in electronic format rather than as paper registers. Electronic registers are preferred because they offer the following benefits to Competent Authorities.
Some of the benefits of electronic storage include: 1) ease of recording the details of each Apostille issued; 2) easy verification of the origin of an Apostille; 3) automatic generation of statistics on the Apostille services delivered by the competent authority (for example, the number of Apostilles issued during a specific period); and 4) elimination of physical storage constraints. In January 2012, the name was changed to the electronic Apostille Program (e-APP). At its November 2012 meeting, the Special Commission acknowledged the tremendous progress in e-APP implementation made since its 2009 meeting, and noted that the e-APP has enhanced the effective and secure operation of the convention. Since then, many Competent Authorities have implemented one or both of its components, affirming the place of the Apostille Convention in the electronic age. Implementation of the Apostille in Notary Deeds in Indonesia Regarding the area of domicile of a notary, Article 18 paragraph (2) of Law No. 30 of 2004 concerning the Position of Notary regulates that a Notary has an office area covering the entire province of his or her domicile, and Article 19 paragraph (1) in conjunction with paragraph (3) of the UUJN regulates that a Notary is obliged to have only one office, namely in his or her place of domicile, and is not authorised to carry out the position regularly outside that place of domicile. Based on this explanation, the authenticity of a notary deed as an authentic deed rests on 2 (two) requirements: the requirements regarding the form of the deed, and the authenticity requirements regarding the territorial jurisdiction of the notary's office. With these 2 (two) conditions fulfilled, the notary deed as a public document has a public/general character and is binding on third parties. Authentication of the notary deed has two functions: a formal function (formalitas causa) and an evidentiary function (probationis causa). Formalitas causa means that the deed serves to complete a legal act, so the deed itself is not the legal act; in this context, the deed is a formal requirement for the existence of the legal act. Probationis causa means that the deed functions as evidence because, from the beginning, it was deliberately made for later proof. The written character of an agreement in the form of a deed does not make the agreement valid, but only allows it to be used as evidence in the future (Tobing, 1982). To test the authenticity of an allegedly falsified notary deed from the formal aspect, the formalities of the deed must be proven. That is, it must be possible to prove the truth of the day, date, month, year and hour of the appearance; the identity of the parties who appeared; what was seen, witnessed and heard by the notary; the truth of the statements of the parties given before the notary; and the correctness of the signatures of the parties, witnesses and notary, or whether any step of the deed-making procedure was not carried out. To deny or contest the formal aspects of an allegedly falsified notary deed, any party who feels aggrieved must file a lawsuit with the public court.
The plaintiff must be able to prove that formal aspects were violated or are incorrect in the deed concerned, or that the person concerned never appeared before the notary on the day, date, month, year and hour mentioned at the beginning of the deed, or that the signature in the deed is not his or her own; if so, the person or party concerned has the right to sue the notary or any other party who benefits from the untruth. After the Apostille entered into force in Indonesia through Presidential Regulation No. 2 of 2021 concerning Ratification of the Convention Abolishing the Requirement of Legalisation for Foreign Public Documents, under which notary deeds fall within the scope of public documents to which the Apostille can apply, the principle of the authenticity of notary deeds that has hitherto been enforced can be adapted to the principles of the Apostille. The Apostille makes it easy to establish the authenticity of foreign public documents by simplifying the chain of authenticity. The method of streamlining the chain of authenticity embodied in the Apostille should be applicable not only to the use of foreign public records but also to each country's internal needs. One hundred twenty countries have signed the Apostille Convention, both members and non-members of the HCCH. The Apostille has been adopted by countries that adhere to common law, civil law, or mixed legal systems. Many countries with a civil law system have adopted the Apostille provisions, including the Netherlands, Germany, and France, which are the benchmarks for notary law in Indonesia. Thus, implementing the Apostille as a legal unification that harmonises international civil law can be accepted by countries that adhere to different legal systems. To apply the Apostille properly to a notary deed, so that the authenticity of the deed is still fulfilled, it is necessary to distinguish between conventional and electronic application by taking into account the essential elements of the Apostille, which include the Apostille process, the Apostille effect, and the competent authority. The Apostille process is carried out in several stages, namely Request - Verification - Issuance - Registration. Conventionally, the Apostille is requested by the bearer of the document, a party to the document, or the individual who executed it (for example, an official of an authority or a notary). The Apostille does not differentiate or impose eligibility conditions between natural persons and legal entities seeking one, and it does not require an explanation of why the Apostille is requested. An Apostille may also be issued upon application by an agent or representative of the person who wishes to use it, provided it can be shown that the applicant is authorised to submit the request. In certain countries, third-party commercial firms provide facilities for citizens to obtain Apostilles and similar certifications (for example, notarial authentication). The convention neither approves nor prohibits such activities where they are allowed by, and carried out in compliance with, the applicable legislation, given that the Apostille itself is issued only by the competent authorities.
The Apostille guarantees the authenticity of documents through checks carried out by the competent authorities on several points: the authenticity of the signature on the document, the capacity in which the signatory acted, and the identity of the seal or stamp on the document. The Apostille Convention, however, does not certify the content or accuracy of the underlying public record. Under Article 1(2)(d) of the convention, the competent authority is not obliged to check the quality of the documents to which a certificate relates. Conventionally, for the implementation of the Apostille on notary deeds to preserve their status as authentic deeds, one of the Competent Authorities issuing the Apostille certification should be the notary. After the waiting period of 6 (six) months from the submission of the accession instrument to the HCCH, and provided no member party objects, the government needs to formulate a regulation on the competent authority which designates the notary as one of the parties authorised to issue Apostille certificates and to record them in the register. Designating notaries as an authority authorised to issue Apostille certificates is the most appropriate solution for maintaining the authenticity of notary deeds as authentic deeds, because it is coherent with the elements of authentic deeds as regulated in Article 1868 of the Civil Code, Article 256 RBg and the UUJN: made in the form prescribed by law, by an official, and within the jurisdiction in which the deed is made. Thus, a notary deed to which the Apostille is applied retains a perfect degree of evidentiary force. The regulation of the authority authorised to issue the Apostille must also impose sanctions on the appointed official for wrongly certifying public documents (for example, a notary who issues a certificate for a notarial document that does not comply with legal requirements). Moreover, if the competent authority is not a notary, it can ask the author of the document to determine whether the document was falsified or altered. In the era of globalisation, every country, including Indonesia, is expected at least to be able to accommodate the demands and pressures of globalisation through the reconstruction of adequate legal arrangements that provide a clear roadmap towards a prosperous economy. The success of a nation's economy can be seen in its success in formulating and enforcing adequate laws, for both national and international interests. On this basis, the law is demanded to be efficient; its enforcement practical, progressive and relevant; and its substance current rather than aggressive, rigid or complex, and thus no longer difficult to understand in its existence, function and purpose. Given the need for such a legal model, the law should become a tool for increasing economic efficiency (Sugianto, 2013). For Indonesia as a state based on the rule of law, it is essential that the law at a certain point belongs to the community, so as to maximise the broadest possible social utility. According to Posner, the ability of the law to provide justice by maximising overall social utility should become an economic standard. This conception is known as the economic conception of justice. The terms "the economic conception of justice" and "economic justice", although slightly different in definition, share the same goal, namely to improve the standard of human life.
From a financial point of view, the national interest for Indonesia in this regard relates to national economic growth and its stability through regulation and legal rules. Meanwhile, international interests are linked to the ability of the state to accommodate and compete in the era of economic globalisation and free trade, as is happening today in the world economy. The meaning of globalisation here is the rapid increase of business worldwide, making exchange and interaction more open, integrated and borderless. First, globalisation, in general, must be accepted. Like it or not, ready or not, the development of globalisation cannot be avoided; globalisation has changed the world economic system, creating both opportunities and challenges. Thus, it is clear that all the elements of each country are (or have an incentive to be) active. Second, globalisation has had a significant impact on the world economy, with various consequences. It affects almost everything: the production of goods and services, jobs, labour and production processes, and investment. The various impacts and consequences of globalisation are broadly similar, namely pressure towards efficiency, productivity and competitiveness. Third, globalisation has also led to increased competition globally. This is often called the age of competitiveness, which emphasises the importance of law's role in each country's economy, because the country's legal system and national laws are categorised as one of the essential national products. Apart from demonstrating its competitiveness, national law must ensure a conducive investment climate in a healthy economic environment. At this point, Law and Economics sees the law, in its context as legal regulation, as an essential tool for achieving the nation's financial success. These demands have made economics a part of social science that is useful for law, especially in understanding human nature as a legal subject and humans as potential state capital (human capital). With respect to a country, economics provides a convincing road map for achieving the nation's economic success. Entering the era of globalisation, the law often faces challenges to adapt. This kind of desire came first from economists, but legal thinkers and practitioners have begun to yearn for the supremacy of efficiency, and so have started to follow the economist's way of thinking, for example in explaining the efficiency and progress of the law. Many legal experts (including users of the law) long for law which is, ideally, an efficient, effective and progressive instrument of development. Law and pure legal science are essentially unfamiliar with these concepts in the broad sense in which economics uses them. Therefore, once one starts to consider efficiency, effectiveness, and so on, it is time to see what economics can explain. For Indonesia in particular, as a country located in ASEAN, the realisation of the ASEAN Economic Community in 2015, as mandated in the provisions of the Cebu Declaration, requires all ASEAN member countries to liberalise trade in goods and services, investment, skilled labour and the flow of capital within the Southeast Asia region (Penasthika, 2017). A direct offspring of globalisation is the development of internet-based electronic technology. Globalisation and electronics have influenced many countries to transform government management on the basis of electronic services, so-called e-government.
In Indonesia, electronic-based services have been regulated in Presidential Regulation of the Republic of Indonesia Number 97 of 2014 concerning the Implementation of One-Stop Services, in conjunction with Regulation of the Minister of Home Affairs of the Republic of Indonesia Number 138 of 2017 concerning the Implementation of Regional One-Stop Services, and have been implemented in almost all institutions, both ministries and agencies. The purpose of electronic services (PSE) is to increase economic growth through investment, improve the quality of licensing and non-licensing services to the public, and improve the quality of one-stop integrated service delivery. There are different paradigms in understanding the meaning of authenticity and how to determine it. From a technical perspective, authenticity is seen more as a process, attentive to material aspects, because it concerns how identities, documents or devices are authenticated. From a legal perspective, by contrast, authenticity is seen more in the object: the existence of written evidence which is legally presumed to have perfect evidentiary value because its formality has been guaranteed, having been made by a competent authority (an official under oath), so that its material substance is also guaranteed. It is interesting to note that, technically, once a document has gone through the authentication process and is accepted as authentic, it is automatically used and proceeds to the next cycle without stopping. In procedural law, by contrast, even an authentic deed with perfect evidentiary power may still encounter conditions that prevent it from being used properly (Edmon Makarim, 2015). The Apostille can be issued in paper as well as electronic form (e-Apostille), on the assumption that each electronic copy is considered a public document when issued by the competent authority. Registration completes the Apostille process, shown in the following diagram: Figure 4. The Apostille Convention In the electronic era, the e-register is the best solution for providing the best service to everyone. Electronic registers (e-registers) can be accessed online via the internet, allowing anyone to quickly verify the origin of the Apostille they have received (regardless of whether the Apostille was issued in paper or electronic form). E-registers can also help prevent rejection over minor procedural formalities, since an Apostille can be quickly and easily verified without intervention from the competent authorities. In implementing the Apostille electronically, the authenticity of the notary deed can be assured in 2 (two) ways: first, the notary deed is made conventionally and the Apostille certificate is attached to it, after which it is scanned with a statement that the copy is a copy of a deed made conventionally, and the minutes are stored in the repertorium register of each notary. A similar approach is applied to the issuance of Electronic Mortgage Certificates by the Land Office. The extent to which electronic information and documents can be equated with written letters as authentic documents is formulated in Article 5 paragraph (2) of the ITE Law: Electronic Information and/or Electronic Documents and/or their printouts as referred to in paragraph (1) are an extension of the valid means of evidence under the procedural law applicable in Indonesia.
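The e-register described a few sentences above can be pictured concretely as a small lookup service: each issued Apostille is recorded with its number, date, signer and a fingerprint of the document, so that any recipient can verify its origin online. The sketch below is purely illustrative; the fields and class names are hypothetical, not an official schema.

import hashlib
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ERegister:
    entries: dict = field(default_factory=dict)

    def issue(self, number: str, signer: str, capacity: str, doc_bytes: bytes) -> None:
        # Record number, date, signer, capacity, and a fingerprint of the document.
        self.entries[number] = {
            "date": date.today().isoformat(),
            "signer": signer,
            "capacity": capacity,
            "doc_sha256": hashlib.sha256(doc_bytes).hexdigest(),
        }

    def verify(self, number: str, doc_bytes: bytes) -> bool:
        # Anyone can check that an Apostille number exists and matches the document.
        entry = self.entries.get(number)
        return (entry is not None
                and entry["doc_sha256"] == hashlib.sha256(doc_bytes).hexdigest())

    def stats(self) -> int:
        # Automatic statistics, e.g. the number of Apostilles issued.
        return len(self.entries)

reg = ERegister()
reg.issue("APO-2021-000123", "Jane Notary", "Notary Public", b"%PDF-1.7 ...")
print(reg.verify("APO-2021-000123", b"%PDF-1.7 ..."))   # True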
In addition, Article 1867 of the Civil Code and Article 164 of the Civil Procedure Code (Herziene Indonesisch Reglement/HIR) state that notary deeds are recognised by law as authentic deeds, whereas the status of electronic information remains legally questionable. Therefore, the obstacles and problems surrounding the authenticity of electronic documents must be re-conceptualised with regard to the authentic deed as ius constituendum in Indonesian civil law. Starting from the hypothesis that electronic information and documents originate from an electronic system that works properly and accountably, a piece of electronic information or an electronic document can be trusted because the electronic system is itself trustworthy. From an electronic system of guaranteed reliability, the authenticity of electronic records is materially reliable, and formally the authenticity of electronic documents is guaranteed because the system is reliable, safe and operated responsibly. Conversely, if the electronic information and documents come from an electronic system that works poorly, they are not yet worthy of trust, because the electronic system is not sound in operation. This hypothesis leads to the understanding that there is a range, or spectrum, in determining the weight of the evidentiary power of electronic information, from the weakest to the strongest. Authentic deeds are accepted because they are made with due care by public officials who have the required skills and qualifications. An authentic deed can be made electronically if it is drawn up and kept under the conditions stipulated by decree of a state institution; when made by or before a notary, it is exempt from the handwriting requirement imposed by law. Considering the doctrine of Prof. Smith on computer security, and also the rules of secured communication, which include confidentiality, integrity, authenticity, authorisation, non-repudiation and availability (CIAANA), it can be seen that there is a range of values, or a spectrum, in determining the weight of the evidentiary power of a piece of Electronic Information (EI) and/or an Electronic Document, which will depend greatly on the extent of the security of both the information system and the electronic communication system itself. At the lowest level, the existence of an EI is objectively not guaranteed to be valid in evidencing the legal event recorded by it, and it is unable to explain or ascertain which legal subject is responsible for it. However, because an EI cannot be denied existence merely on account of its electronic form, with these characteristics there will be more room for judges to conduct a "functional equivalence" examination of whether it is to be equated with written, original and signed evidence. At the middle level, the existence of an EI may fulfil some of the elements of secured communication, and there is a clear indication of the parties concerned; objectively, the EI is guaranteed to be valid, or can explain which legal subject is responsible for it. However, the accountability or reliability of the electronic system used is not guaranteed to run correctly (it is not accredited), so that it can readily be repudiated by the person concerned.
At the strongest (high) level, the existence of an EI is objectively guaranteed to be valid and can explain which legal subject is responsible, and the electronic system is guaranteed to run well (accredited), so that, as long as the parties cannot prove otherwise, what is declared by the system can be considered technically and legally valid. In such a context, the substance of an EI has been well preserved and should materially be equated with an authentic deed. For electronic information and documents, the use of an electronic signature (Tanda Tangan Elektronik, TTE) as a form of electronic authentication is a necessity in order to obtain high-level evidentiary power. In practice, a TTE also requires an Electronic Certificate to support its existence, and this is tied to the rule of non-repudiation: if signing is carried out only by the two communicating parties, without involving a trusted third party (Trusted Third Party, hereinafter T3P), there remains an opportunity for one of the parties to repudiate it at a later date. Therefore, to ensure non-repudiation, the role of a third party is needed. Since 2012, the Indonesian government has issued regulations on the implementation of electronic systems and transactions, revised in 2019 through Government Regulation of the Republic of Indonesia Number 71 of 2019 concerning the Implementation of Electronic Systems and Transactions. Article 42 of PP No. 71/2019 regulates that Electronic Transactions must use an Electronic Certificate issued by an Indonesian Electronic Certification Operator (PSrE). Electronic Transactions may use a Reliability Certificate issued by a registered Reliability Certification Agency, with due regard to aspects of security, reliability and efficiency. PSrE are divided into 2 (two) kinds, namely the root PSrE and the subordinate PSrE. The root PSrE is an electronic certificate operator/Certification Authority (CA) run by the Indonesian government under the Directorate of Information Security, Ministry of Communication and Information Technology of Indonesia, which issues Electronic Certificates for subordinate Electronic Certification Operators; a subordinate PSrE is an electronic certificate operator/CA that has been recognised by the root PSrE to run digital certificate services, operated by Indonesian or foreign nationals, organisations or business entities administering Electronic Certificates that are domiciled in Indonesia or have foreign capital ownership. The root PSrE thereby gives official authority to the subordinate PSrE to carry out its functions and duties as an Electronic Certificate Operator. As of November 2019, there were two subordinate PSrE from government agencies, namely the Agency for the Assessment and Application of Technology (BPPT) and the National Cyber and Crypto Agency (BSSN), as well as four subordinate PSrE from the private sector, namely Privy.id, Peruri, Digisign, and Vida. The six services provided by PSrE are electronic signatures (TTE), electronic seals (e-seals), preservation of TTE and electronic seals, time stamps, registered electronic delivery, and website authentication. It is believed that the T3P paradigm can be carried out by the Electronic Certificate Service Provider (PSrE), which provides a means of tracing authenticity for the communicating parties, and that the T3P deemed worthy of carrying out this mandate is the Notary.
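The non-repudiation role described above rests on ordinary digital-signature verification: the signer (here, a notary acting under a PSrE-issued certificate) signs the document bytes, and any relying party verifies the signature against the public key. A minimal sketch using the Python cryptography library follows; a real deployment would use X.509 certificates chained to the root PSrE rather than the raw key pair generated here.

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

authority_key = Ed25519PrivateKey.generate()   # held by the signing authority
public_key = authority_key.public_key()        # published for verification

document = b"notarial deed, PDF bytes ..."     # hypothetical document content
signature = authority_key.sign(document)       # issuance step

try:                                           # verification by any recipient
    public_key.verify(signature, document)
    print("signature valid: document origin authenticated")
except InvalidSignature:
    print("signature invalid: document altered or not from the authority")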
The role and involvement of the notary as an organiser or sub-operator of electronic certification appears to have become a necessity for electronic transactions, and best-practice standards here serve the public interest. As the only state-recognised notary organisation, the Indonesian Notary Association (INI) needs to establish a particular body that deals with electronic information and documents, such as the Chamber of Notaries or the Federation of Notaries established by law in Germany. The T3P role is strongest if it includes the function of a notary within it, at least as an examiner who verifies and legalises a person's identity in the registration process for obtaining a certificate, i.e. as a Registration Authority (RA). This process ensures that the party applying for the certificate is the right person and that the applicant receives the certificate in person. Meanwhile, the most influential role for the notarial function in supporting Electronic Transactions would be for notaries to act as RA and to have the authority to make deeds electronically. IV. Conclusion Based on the analysis and discussion above, this article concludes that the Apostille can be implemented for notary deeds while maintaining their authenticity as authentic deeds with perfect evidentiary quality. With the adoption of the Apostille method in Indonesia, the chain of formalities for the issuance of notary deeds needs to be redesigned and adjusted to the provisions of the Apostille Convention, with a new approach and method for transforming the procedure for issuing notary deeds so that it better suits the needs of the era of globalisation and electronics. To apply the Apostille to a notary deed so that the authenticity of the deed is still fulfilled, it is necessary to distinguish between conventional and electronic application by taking into account the essential elements of the Apostille, which include the Apostille process, the Apostille effect, and the competent authority. The Apostille process proceeds in several stages, namely Request - Verification - Issuance - Registration.
Connected and Disconnected Sea Partons from CT18 Parametrization of PDFs

The separation of the connected and disconnected sea partons, which were uncovered in the Euclidean path-integral formulation of the hadronic tensor, is accommodated with an alternative parametrization of the non-perturbative parton distribution functions in the CT18 global analysis. This is achieved with the help of the distinct small $x$ behaviours of these two sea partons and the constraint from the lattice calculation of the ratio of the strange momentum fraction to that of the $\bar u$ or $\bar d$ in the disconnected insertion. The whole dataset of CT18 is used in this CT18CS fit. The impact of the recent SeaQuest data on the $\bar{d}(x)-\bar{u}(x)$ distribution of CT18CS is also discussed. The separate momentum fractions for the valence, the connected sea and disconnected sea of $u$ and $d$, the strange and the gluon partons are presented at $\mu = 1.3$ GeV for the first time. They can be compared term-by-term with systematic error controlled lattice calculations.

Introduction

In high energy experiments, such as those at hadron colliders, theoretical analyses depend on the parton structure of the hadronic beams in terms of their parton distribution functions (PDFs) in order to understand the $W^\pm$, $Z$ and Higgs productions in precision measurements of the Standard Model parameters and the search for new physics. The universal PDFs can be extracted from deep inelastic scattering (DIS) and Drell-Yan processes with the help of the factorization theorem and global analyses which involve the DGLAP evolution equations. Since the factorization formula involves an integral of the product of the parton distribution functions (PDFs) and the perturbative short-distance kernel, extracting PDFs is intrinsically an inverse problem. The common approach is to model the PDFs in terms of the valence and sea partons with respective small and large $x$ behaviours and perform a global fit of the available experimental data at different $Q^2$ values. As a result, the quality of the fit and its accuracy depend on the precision and availability of the experimental data in the relevant kinematic range. In particular, the flavour structure of the partons can be improved with experiments which directly address the flavour dependence. For example, the first experimental evidence that the sea partons have non-trivial flavour dependence is the experimental demonstration of the violation of the Gottfried sum rule. The original Gottfried sum rule, $I_G \equiv \int_0^1 dx\,[F_2^p(x) - F_2^n(x)]/x = 1/3$, was obtained under the assumption that the $\bar u$ and $\bar d$ sea partons are the same [1]. However, the NMC measurement [2,3] of $\int_0^1 dx\,[F_2^p(x) - F_2^n(x)]/x$ turns out to be $0.235 \pm 0.026$, a $4\sigma$ difference from the Gottfried sum rule, which implies that the $\bar u = \bar d$ assumption was invalid. The recent SeaQuest experiment clearly shows that the $\bar d/\bar u$ ratio in the range $0.1 < x < 0.4$ is substantially larger than unity ($\sim 1.5$) [4]. Other flavour-dependent issues under active experimental and theoretical pursuit include the intrinsic strange and charm partons [5-9], and the $s(x) - \bar s(x)$ [8-13] and $c(x) - \bar c(x)$ [14] differences. The violation of the Gottfried sum rule prompted the Euclidean path-integral formulation of the hadronic tensor of the nucleon for DIS, which uncovered that there are two kinds of sea partons: one is the connected sea and the other the disconnected sea [15,16].
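For reference, the standard textbook relation between the Gottfried integral and the light-sea asymmetry (quoted here for orientation; it is implied by, but not written out in, the passage above) is

$$I_G = \int_0^1 \frac{dx}{x}\left[F_2^p(x) - F_2^n(x)\right] = \frac{1}{3} + \frac{2}{3}\int_0^1 dx\,\left[\bar u(x) - \bar d(x)\right],$$

so the NMC value $I_G = 0.235 \pm 0.026 < 1/3$ directly implies $\int_0^1 dx\,[\bar d(x) - \bar u(x)] > 0$, i.e. an excess of $\bar d$ over $\bar u$ in the proton.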
These two sea components are so named to reflect the topology of the quark lines in the 4-point current-current correlator in the nucleon. The connected sea (CS) results from a connected insertion of the currents on the same 'valence' quark line, and the disconnected sea (DS) is from a disconnected insertion involving a vacuum polarization from the quark loop involving the external currents. These are 'hand-bag' diagrams. On the other hand, the 'cat's ears' diagrams, where the two currents in the current-current correlator couple to different quark lines, are higher twist and are suppressed in the DIS region, but they are as important as the leading twists in low-energy lepton-nucleon scattering [17,18]. The suppression of the higher-twist contributions at large $Q^2$ has been demonstrated in a recent lattice calculation, which shows that the 'cat's ears' diagrams drop off quickly compared to the 'hand-bag' diagrams when the three-momentum transfer becomes large [18]. It is proved [15] that, in the isospin symmetric limit, the Gottfried sum rule violation originates only from the CS, which is subject to Pauli blocking due to the unequal numbers of valence $u$ and $d$ quarks in both the proton and the neutron. Attempts have been made [19-22] to separate out the CS and DS partons by combining the strange parton distribution from a HERMES experiment [23], $\bar u + \bar d$ from the CT10 analysis [24], and the ratio $\langle x\rangle_s/\langle x\rangle_u$ (disconnected insertion) from lattice calculations [22,25].

In this work, we shall accommodate the parton degrees of freedom delineated in the path-integral formulation of the hadronic tensor in the form of the CT18 global analysis [26] of unpolarized PDFs. Adopting lattice results as constraints in global fits has previously been applied to the quark transversity distribution [27]. The present work goes one step further to explicitly separate the CS and DS degrees of freedom for the first time under the CT18 parametrization [28]. This manuscript is organized as follows. Sec. 2 gives a brief review of the path-integral formulation of the hadronic tensor, which defines the parton degrees of freedom. Sec. 3 describes the parametrization of the PDF for each of the parton degrees of freedom and the details of the global analysis. The result of a global analysis with the inclusion of both CS and DS partons fitted to the original CT18 data sets, termed the CT18CS fit, is presented in Sec. 4. The second moments of the separate valence, disconnected sea, and gluon partons are presented for the first time at the input scale, where they can finally be compared directly with lattice calculations for each term and each flavor. We note that the E906 SeaQuest [4] data only became available after the completion of the CT18 analysis. Hence, we shall examine in Sec. 5 the impact of the E906 SeaQuest [4] data on a global fit similar to CT18CS, in which the E866 NuSea [29] data was already included. Sec. 6 contains our summary.

Parton Degrees of Freedom from Euclidean Path-integral Formulation of the Hadronic Tensor

The Euclidean hadronic tensor was formulated in the path-integral formalism to identify the origin of the Gottfried sum rule violation [15,16]. It is defined as the current-current correlator in the nucleon, Fourier-transformed in the spatial directions, $W_{\mu\nu}(\vec q, \vec p, \tau)$, which is a function of $\tau$, the Euclidean time separation between the currents. Formally, the inverse Laplace transform converts $W_{\mu\nu}(\vec q, \vec p, \tau)$ to the Minkowski hadronic tensor, via an inversion integral along a contour with $c > 0$.
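The inversion from Euclidean $\tau$ back to the energy transfer $\nu$ is a classic ill-posed problem. The toy sketch below, a minimal illustration rather than any of the Backus-Gilbert, Maximum Entropy, or Bayesian Reconstruction methods cited next, discretizes the Laplace kernel $e^{-\nu\tau}$ and shows why naive inversion must be stabilized (here with simple Tikhonov regularization). The mock spectral function is invented for the demonstration.

```python
import numpy as np

# Discretize W(tau) = \int dnu e^{-nu*tau} W(nu) as a matrix equation W_tau = K @ W_nu.
nu = np.linspace(0.0, 10.0, 200)        # energy-transfer grid (GeV), hypothetical
tau = np.linspace(0.1, 2.0, 24)         # Euclidean time separations, hypothetical
dnu = nu[1] - nu[0]
K = np.exp(-np.outer(tau, nu)) * dnu    # Laplace kernel, shape (n_tau, n_nu)

# Mock spectral function: a narrow "elastic" peak plus a broad "inelastic" bump.
W_nu_true = np.exp(-0.5 * ((nu - 1.0) / 0.1) ** 2) \
          + 0.5 * np.exp(-0.5 * ((nu - 5.0) / 1.5) ** 2)
W_tau = K @ W_nu_true                   # the data a lattice calculation would provide

# Naive least squares is hopeless: K has tiny singular values (ill-posed problem).
print("condition number of K:", np.linalg.cond(K))

# Tikhonov-regularized solution: minimize |K w - W_tau|^2 + lam * |w|^2.
lam = 1e-6
W_nu_rec = np.linalg.solve(K.T @ K + lam * np.eye(len(nu)), K.T @ W_tau)
print("relative reconstruction error:",
      np.linalg.norm(W_nu_rec - W_nu_true) / np.linalg.norm(W_nu_true))
```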
However, this is not practical in a lattice calculation, as there are no data at imaginary $\tau$. Instead, one can turn this into an inverse problem and find a solution of the Laplace transform [30]

$$W_{\mu\nu}(\vec q, \vec p, \tau) = \int d\nu\, e^{-\nu\tau}\, W_{\mu\nu}(\vec q, \vec p, \nu).$$

This has been studied [17,18,30,31] with inverse algorithms such as the Backus-Gilbert, Maximum Entropy and Bayesian Reconstruction methods. The spectral density in lepton-nucleon scattering has several kinematic regions as the energy transfer $\nu$ increases: the elastic scattering, the inelastic reactions ($\pi N$, $\pi\pi N$, $\eta N$, etc.) and resonances ($\Delta$, Roper, $S_{11}$, etc.), shallow inelastic scattering (SIS), and deep inelastic scattering (DIS) regions. To determine how large a $\nu$ is needed for DIS, we look at $W$, the total invariant mass of the hadronic state, which for a nucleon target at rest is $W^2 = M_N^2 + 2M_N\nu - Q^2$. The global analyses of the high energy lepton-nucleon and Drell-Yan experiments to extract the parton distribution functions usually make a cut with $W^2 > 12\ \mathrm{GeV}^2$ to avoid the elastic and inelastic regions. Thus, to be qualified in the DIS region, the energy transfer $\nu$ needs to be greater than 8 GeV for $Q^2 = 4\ \mathrm{GeV}^2$ (indeed, $\nu \geq (W^2 - M_N^2 + Q^2)/(2M_N) \approx 8.0$ GeV), a typical choice made in the CTEQ-TEA PDF global analysis.

It is shown [15,16,32,33] that, when the time ordering $t_f > t_2 > t_1 > t_0$ is fixed, the 4-point function for extracting the matrix element $W_{\mu\nu}(\vec q, \vec p, \tau)$ in Eq. (1) can be grouped in terms of 6 topologically distinct and gauge invariant path-integral insertions, according to the different Wick contractions among the Grassmann numbers in the two currents and the source/sink interpolation fields. They can be further grouped into two classes. The first class includes those where the currents are coupled to the same quark propagator, as illustrated in Fig. 1. The second class involves those where the two currents are coupled to different quark propagators, as illustrated in Fig. 2. In low-energy lepton-nucleon scattering, all 6 diagrams contribute and they are not separable [33]. However, in the DIS region, the first class are 'hand-bag' diagrams which include the leading twist contributions, while the second class are 'cat's ears' diagrams, which are higher twist and are suppressed by $O(1/Q^2)$.

The first class in Fig. 1 includes three path-integral diagrams that can be denoted as connected insertions (CI) (Fig. 1(a) and Fig. 1(b)), where the quark lines are all connected, and disconnected insertions (DI) (Fig. 1(c)), where there are vacuum polarizations associated with the currents in disconnected quark loops. We should note that Fig. 1(b) includes the exchange contribution that prevents the $u$ or $d$ quark in the loop in Fig. 1(c) from occupying the same Dirac eigenstate in the nucleon propagator, enforcing the Pauli principle. In fact, Fig. 1(c) and Fig. 1(b) are analogous to the direct and exchange diagrams in time-ordered Bethe-Goldstone diagrams in many-body theory [34]. As far as the leading-twist DIS structure functions $F_1$, $F_2$ and $F_3$ are concerned, the three diagrams in Fig. 1 are additive, with contributions classified as the valence and sea quarks $q_{v+cs}$ in Fig. 1(a), the connected sea (CS) antiquarks $\bar q_{cs}$ in Fig. 1(b), and disconnected sea (DS) quarks $q_{ds}$ and antiquarks $\bar q_{ds}$ in Fig. 1(c) [15,16,32,33]. Since the $u$ and $d$ partons in the quark loop in Fig. 1(c) appear in a different flavour trace than the one involving the nucleon propagator, they cancel in the Gottfried sum in the isospin symmetric limit.
Thus, the Gottfried sum rule violation comes entirely from the connected sea (CS) difference $\bar u_{cs} - \bar d_{cs}$ in the $F_2$ structure functions in this case [15]. It is proven [16,33] from the short distance expansion that the parton degrees of freedom defined in the diagrams in Fig. 1 are separable, unlike the case of low-energy lepton-nucleon scattering, where the higher twists are important [17,18]. Furthermore, these parton degrees of freedom are identical to those defined in the recent Feynman-$x$ approaches [33], i.e. quasi-PDF [35], pseudo-PDF [36], and lattice cross section [37]. PDFs can be extracted from the factorization formula [38], where the experimental cross section or structure functions are expressed as a convolution integral of the coefficient functions and the PDFs.

In practice, the global fitting programs adopt the parton degrees of freedom $u$, $d$, $\bar u$, $\bar d$, $s$, $\bar s$ and $g$. We see that in the path-integral formalism of QCD, each of the $u$ and $d$ has two sources, one from the connected insertion (CI) (Fig. 1(a)) and one from the disconnected insertion (DI) (Fig. 1(c)); so do $\bar u$ and $\bar d$ from Fig. 1(b) and Fig. 1(c). On the other hand, $s$, $c$ and $\bar s$, $\bar c$ only come from the DI (Fig. 1(c)). In other words,

$$\bar u = \bar u_{cs} + \bar u_{ds}, \quad \bar d = \bar d_{cs} + \bar d_{ds}, \quad s = s_{ds}, \quad \bar s = \bar s_{ds}.$$

This classification of the parton degrees of freedom is richer than that in terms of $q$ and $\bar q$ in the global analysis, due to the fact that there are two sources for the quarks, $q_{v+cs}$ and $q_{ds}$, and two sources for the antiquarks, $\bar q_{cs}$ and $\bar q_{ds}$. The distinguishing feature of CS and DS lies in their characteristic small-$x$ behaviors, which we shall exploit in this work to perform the global analysis. In Regge theory, the small-$x$ behaviour of $q_{v+cs}$ and $\bar q_{cs}$, being in the flavour non-singlet connected insertion, is dominated by reggeon exchange. Thus, we expect

$$q_{v+cs}(x),\ \bar q_{cs}(x) \xrightarrow{x \to 0} x^{-\alpha} \quad \text{for } q = u, d,$$

where $\alpha \sim 0.5$ is the slope of the Regge trajectory. The DS, on the other hand, is flavour singlet and can have Pomeron exchanges. Hence,

$$q_{ds}(x),\ \bar q_{ds}(x) \xrightarrow{x \to 0} x^{-1} \quad \text{for } q = u, d, s, c.$$

In an attempt to separate the CS and DS quarks [19] by combining the strange quark distribution from a HERMES experiment and $\bar u + \bar d$ from CT10, it was found that $x(\bar u_{cs} - \bar d_{cs})$ spans the same $x$ range as that of $x(\bar u - \bar d)$, which suggests that they have similar small-$x$ behaviours, whereas $x(\bar u_{ds} + \bar d_{ds})$ is much more singular for $x < 0.05$ [19]. This is consistent with expectation.

Until the Feynman-$x$ and/or hadronic-tensor approaches on the lattice have all the systematic errors, such as excited-state contamination and large nucleon momentum, under control, so that all regions of $x$ can be compared with those from the global analyses, the most reliable comparison between global fittings and lattice calculations is via the parton moments. The latter are getting mature, with all the systematic errors (e.g. continuum and infinite volume extrapolations, excited-state contamination, physical pion mass, non-perturbative renormalization, and scale setting) having been taken into account [39,40]. However, as pointed out in [33,41], it is not possible to compare the moments from global analyses and those from lattice calculations in detail, except for the limited isovector ($u-d$) and strangeness moments. This is because the lattice calculations of moments from three-point functions are organized in terms of connected insertions (CI) and disconnected insertions (DI). The CI includes both $q_{v+cs}$ and $\bar q_{cs}$, while the DI includes $q_{ds}$ and $\bar q_{ds}$.
On the other hand, in the present global analyses, the CS and DS degrees of freedom are not separated. To make a comparison at the moment level, it is incumbent upon global analyses to disentangle the connected sea from the disconnected sea, so that the full lattice results of moments in the CI and DI can be compared to them for each flavor.

Global fitting

In this section, the general setting of the CT18CS global fit is presented. CT18CS, as an extended parametrisation of PDFs accommodating the Euclidean path-integral formalism of QCD, requires a different scheme of parton classification with more parton degrees of freedom. The specific parton degrees of freedom to be parametrized, and a number of ansatzes imposed in this global analysis, are explained in Sec. 3.1. In Sec. 3.2, we introduce the settings of the small-$x$ and large-$x$ behaviour of the CS and DS parton distributions.

Parton degrees of freedom

In the QCD global analysis of parton distributions in the proton, the PDFs of the various partons are parametrized in some functional forms at the initial scale $Q_0$ (about 1 GeV), from where the PDFs are evolved to any arbitrary higher energy scale $Q$ via the DGLAP evolution equations. Typically, it is assumed that the charm and bottom quark PDFs are generated perturbatively from QCD evolution, though in some special studies the possibility of a non-perturbative charm PDF at the $Q_0$ scale was also considered, such as in Refs. [6-9,42]. Therefore, in general, the total number of parton degrees of freedom at the $Q_0$ scale is 7: $g$, $u$, $\bar u$, $d$, $\bar d$, $s$, $\bar s$. In CT18 [26], it is also assumed that the strange PDFs satisfy $s = \bar s$ at the $Q_0$ scale, though $s \neq \bar s$ will be generated at large $Q$ scales via NNLO QCD evolution. Given this ansatz, the number of parton degrees of freedom is 6 in CT18. We note that in the MSHT20 PDFs [13] and NNPDF4.0 [9], a non-vanishing strangeness asymmetry, $s(x) \neq \bar s(x)$, is imposed in the non-perturbative parametrisation at their respective $Q_0$ scales.

As mentioned in the last section, when the separation of CS partons and DS partons is considered, we have more partonic degrees of freedom. The classification of partons becomes

$$g,\ u_{v+cs},\ u_{ds},\ \bar u_{cs},\ \bar u_{ds},\ d_{v+cs},\ d_{ds},\ \bar d_{cs},\ \bar d_{ds},\ s_{ds},\ \bar s_{ds}, \quad (7)$$

11 of them in total. To implement all the degrees of freedom in Eq. (7) and obtain their $Q^2$ dependence would require the generalized DGLAP evolution equations developed in Ref. [41]. In the present study, we shall parametrize the extended set of partons in Eq. (7) at the input scale $Q_0$, with some specific assumptions to be listed below, and then combine the CS and DS into the conventional partons in Eqs. (5) and (6), so that we can use the same NNLO evolution equations as for CT18. In this way, we can compare with the results of CT18 to discern the different roles played by the CS and DS and their respective impacts on physics at this stage. When the generalized evolution code is ready, we can fully explore the CS and DS effects at all scales. For the present work, we have adopted the following assumptions to reduce the number of parton degrees of freedom from 11 to 6, similar to that in CT18.

• Similar to CT18, we assume symmetric disconnected sea parton distributions: $u_{ds} = \bar u_{ds}$, $d_{ds} = \bar d_{ds}$, and $s_{ds} = \bar s_{ds}$. (8)

• Isospin symmetry is imposed for the $u$ and $d$ quarks in the disconnected sea: $u_{ds} = d_{ds}$ and $\bar u_{ds} = \bar d_{ds}$. (9)

• The DS components of the $u$ and $d$ quark PDFs are proportional to the $s$ quark PDF, i.e. $u_{ds}(x) = d_{ds}(x) = R\, s(x)$. (10)
Since the DS in the lattice calculation involves a correlation between the quark loop and the nucleon propagator via gluons, it is not as sensitive to the nucleon wavefunction as are the valence and CS partons in the connected insertion. The only difference between $u_{ds}$, $d_{ds}$ and $s$ is their quark masses. Thus, it is reasonable to postulate that their distributions are the same modulo a proportionality constant $R$. In this work, we determine the value of $R$ from the ratio of the second moments of the strange and the sum of $u$ and $d$ in the disconnected insertion, predicted by a lattice QCD calculation which has taken all the systematic errors into account [22]. It yields $1/R = \langle x \rangle_{s+\bar s}/\langle x \rangle_{\bar u+\bar d}(\mathrm{DI}) = 0.822(69)(78)$ at $Q_0 = 1.3$ GeV, where $\langle x \rangle_{\bar u+\bar d}(\mathrm{DI})$ is the momentum fraction carried by the light quarks (either $u$ or $d$) in the disconnected insertions. This result was obtained by properly evolving the matching coefficients from 2 GeV to 1.3 GeV [43], using the known result of $1/R$ at 2 GeV, which was found to be 0.795(79)(77) [22].

In the CTEQ-TEA PDF global analysis, the normalizations for the individual sea quark PDFs are computed using the valence quark and momentum sum rules, with the first moment $\langle x \rangle_g$ and the ratio $\langle x \rangle_{s+\bar s}/\langle x \rangle_{\bar u+\bar d}$ fitted as free parameters. Since the parametrizations do not determine the ratio of the strange-to-nonstrange PDFs, we restrict this ratio in the present work by the above-mentioned prediction from lattice QCD. Specifically, we require that the ratio $[s_{ds}(x) + \bar s_{ds}(x)]/[\bar u_{ds}(x) + \bar d_{ds}(x)]$, at $Q_0 = 1.3$ GeV, is constrained at the 68% confidence level to be in the interval [0.718, 0.926], with a central value of 0.822, by imposing the appropriate Lagrange multiplier constraint in the CT18CS fit. Finally, we note that the assumption in Eq. (10) can be checked with the analogous $\langle x^3 \rangle$ ratio in lattice calculations in the future.

• We further define $u_{cs} \equiv \bar u_{cs}$ and $d_{cs} \equiv \bar d_{cs}$, (11) so that $u_v \equiv u_{v+cs} - u_{cs}$ and $d_v \equiv d_{v+cs} - d_{cs}$, (12) which agrees with the usual definition of the valence quark, $q_v \equiv q - \bar q = [u_{v+cs} + u_{ds}] - [\bar u_{cs} + \bar u_{ds}]$, when $u_{ds} = \bar u_{ds}$. It was pointed out in [41] that when $u_{ds}$ and $\bar u_{ds}$ are not equal, the $q_v \equiv q - \bar q$ definition leads to conceptual puzzles, such as the valence $u$ being able to evolve into valence $d$ in NNLO evolution, and the strangeness acquiring a valence distribution when $s \neq \bar s$. These puzzles are resolved with the definition in Eq. (12) [41].

With all the above conditions taken into account, the remaining parton degrees of freedom are $g$, $u_v$, $u_{cs}$, $d_v$, $d_{cs}$, $s_{ds}$. As discussed earlier, below Eq. (7), we shall combine the CS and DS into the usual $\bar u/\bar d$ d.o.f., i.e. $\bar u = \bar u_{cs} + \bar u_{ds} = \bar u_{cs} + R\,s$ and $\bar d = \bar d_{cs} + \bar d_{ds} = \bar d_{cs} + R\,s$ at the input scale, and evolve them with the same NNLO equations as CT18 in the global fitting.

Small-$x$ and large-$x$ behaviour

At the starting $Q_0$ scale, the non-perturbative PDFs are parametrised as $f(x, Q_0) = x^{a_1 - 1}(1-x)^{a_2}\,\mathrm{Poly}(x)$, where the parameters $a_1$ and $a_2$ dominate the behaviour of the PDFs as $x$ approaches 0 or 1, respectively, and $\mathrm{Poly}(x)$, constructed with a set of Bernstein polynomials in the CTEQ-TEA PDF family, is responsible for the shape of the PDFs over a wide range of $x$. In practice, we implemented the following ansatzes to parametrize the various parton distributions at the $Q_0$ scale.

• $\bar d/\bar u \xrightarrow{x \to 0} 1$. Based on the isospin symmetry of the strong interaction, we require $\bar u$ and $\bar d$ to have the same small-$x$ behaviour, where the disconnected sea dominates. Specifically, this ansatz is implemented by setting $a_1^{\bar u} = a_1^{\bar d}$ to preserve isospin symmetry in the small-$x$ region. This ansatz was also applied in the CT18 fit; see Appendix C of Ref. [26].

• $u_{ds}, \bar u_{ds}, d_{ds}, \bar d_{ds}, s_{ds}, \bar s_{ds} \xrightarrow{x \to 0} x^{-1}$.
Since the DS partons are flavour singlet and can have Pomeron exchanges, their small-$x$ behaviour goes like $x^{-1}$. Based on Eq. (10), this ansatz is implemented by setting $a_1^s = 0$, which is the value of the shape parameter $a_1$ of the strangeness PDF. We note that $a_1^s = 0$ is consistent with the CT18 error PDF sets, with the value of $a_1^s$ of the CT18 central set shown in the first row of Table 1.

• Like the valence partons, the CS partons are in the connected insertion, which is flavour non-singlet. Thus, we set the small-$x$ behaviour of the CS partons to be the same as that of the valence-quark partons: $a_1^{u_{cs}} = a_1^{d_{cs}} = a_1^{u_v} = a_1^{d_v}$.

• In the CT18 fit, the ratio $d/u$ was required to approach a finite number as $x \to 1$. This assumption is also kept in the CT18CS fit, which is done by setting $a_2^{u_v} = a_2^{d_v} = a_{2,\mathrm{CT18}}^{u_v} = a_{2,\mathrm{CT18}}^{d_v} = 3.036$. Since the ratio $d/u$ at $x \to 1$ is dominated by the valence partons and the parameter $a_2$ controls the PDF behaviour as $x \to 1$, we fix the $a_2$ values of the valence partons to those of the CT18 fit, for simplicity.

• $\bar d/\bar u \xrightarrow{x \to 1} \bar d/\bar u$ of CT18. As shown in Refs. [4,26], the CT18 PDFs describe reasonably well both the E866 NuSea [29] and E906 SeaQuest [4] data, though the SeaQuest data were not included in the CT18 fit, as they only became available after its completion. Since both data sets provide important constraints on the ratio $\bar d/\bar u$ as $x \to 1$, and the CS component of the anti-quarks dominates the sea parton behaviour in the large-$x$ region, we set $a_2^{\bar u_{cs}} = a_2^{\bar d_{cs}} = a_{2,\mathrm{CT18}}^{\bar u} = a_{2,\mathrm{CT18}}^{\bar d} = 7.737$ in the CT18CS fit, for simplicity.

For a quick comparison, we list in Table 1 the fitted values of the $a_1$ and $a_2$ parameters of the various partons in the CT18 and CT18CS NNLO fits. Marked entries in the table indicate values that are not fitted, but are inputs to the CT18CS fit. Note that we did not list the values of the other shape parameters used in these fits. In total, there are 28 such shape parameters to be fitted in both the CT18 and CT18CS fits, with the same number (6) of parton degrees of freedom. We note that the published CT18 PDF error sets include an additional pair of eigenvector sets to account for the larger error of the gluon PDF in the small-$x$ region.

Results

In this section, we present the results of the CT18CS global fit: the quality of the fit, the configuration of the PDFs, and various PDF Mellin moments. The comparison between the CT18CS and the standard CT18 NNLO fits shows that CT18CS, with an extended parametrisation, is consistent with the CT18 global analysis. Note that this global analysis uses the same data sets as the CT18 analysis: in total 39 data sets, with 3681 data points included [26].

Quality of the fit

In Table 2, we compare the quality of the CT18CS fit to that of the CT18 fit. It turns out that both have the same total $\chi^2 = 4292$ for a total of 3681 data points. The experimental data sets which make non-negligible contributions to the change in $\chi^2$ between these two fits are also listed in Table 2.

Comparison of PDFs

In this section, we compare the fitted CT18CS PDFs obtained in this analysis to the published CT18 PDFs [26]. In CT18CS, the $u$ and $d$ quark distributions are represented by the combination of the valence, connected sea, and disconnected sea quark distributions. The $\bar u$ and $\bar d$ distributions are likewise made up of connected sea and disconnected sea distributions. In Figs. 3 and 4, the decompositions of $u$, $d$ and of $\bar u$, $\bar d$ in terms of the CS and DS parton distributions at $Q_0 = 1.3$ GeV are shown, respectively.
The PDF error bands, obtained at the 90% confidence level (C.L.), are also shown for comparison. As shown, CT18CS is in good agreement with CT18 NNLO for these parton distributions. Furthermore, in CT18CS the novel CS parton distribution is found to be responsible for the $u$ and $d$ sea quark distributions in the intermediate-$x$ region. On the contrary, the DS parton distribution plays a more important role in the small-$x$ region. We should note that the errors of $u$, $d$, $\bar u$ and $\bar d$ at small $x$ (i.e., $x < 10^{-3}$) from CT18CS are substantially smaller than those in CT18. This is mainly due to the ansatz that we imposed on their small-$x$ behavior to be $x^{-1}$ in Sec. 3.2.

A useful way to compare the $\bar d$ and $\bar u$ PDFs resulting from the CT18CS and CT18 fits is to examine their difference, as shown in Fig. 5. Since we have assumed in this analysis that the DS components of $\bar d$ and $\bar u$ are the same, we have $(\bar d - \bar u) = (\bar d_{cs} - \bar u_{cs})$ in the CT18CS PDFs. The figure shows that the CT18CS central value is close to that of CT18 NNLO for $x > 0.03$. In the small-$x$ region, the difference $(\bar d_{cs} - \bar u_{cs})$ of CT18CS vanishes. This result is consistent with the prediction presented in Fig. 4 of Ref. [19], in which the E866 NuSea [29] and HERMES [23] data were compared to the CT10 PDFs [24] in the framework of a leading order analysis.

We also show in Fig. 6 similar comparisons for the $g$-PDF, the $s$-PDF, $d-u$, and the PDF ratio $(s+\bar s)/(\bar u+\bar d)$ at $Q = 1.3$ GeV. In Fig. 6(a), the gluon distributions in the CT18CS and CT18 fits are in very good agreement across the whole $x$ range; the quark CS and DS separation has no effect on the gluon distribution. As shown in Fig. 6(b), the uncertainty of the strangeness distribution in CT18CS is reduced by a large margin, as compared to CT18, for $x < 0.03$. This is the same pattern as for $u$, $d$ and $\bar u$, $\bar d$, since we have adopted the same ansatz, i.e. $a_1^s = 0$, in the CT18CS fit, cf. Sec. 3.2. The central value of the $s$-PDF of CT18CS at small $x$ is correlated with those of $\bar u$ and $\bar d$ via the ratio $R$ introduced from the lattice result. The comparison of the $d-u$ distribution between CT18CS and CT18 is shown in Fig. 6(c). The $d-u$ distribution corresponds to the $d_{v+cs} - u_{v+cs}$ distribution in CT18CS, because $u_{ds}$ and $d_{ds}$ are assumed to be the same under isospin symmetry, cf. Eq. (9). For $x > 0.005$, both the central values and the sizes of the uncertainty bands of the two PDFs are in good agreement. In the low-$x$ region, the ansatz that the CS and valence partons have the same behaviour for $x \to 0$ in CT18CS leads to a significant reduction in the size of the uncertainty. In Fig. 6(d), the ratio $(s+\bar s)/(\bar u+\bar d)$ in CT18CS is compared to that in CT18. In the small-$x$ region, where the DS parton dominates the sea quark distribution, this ratio for CT18CS is constrained by the lattice input, in addition to the ansatz $a_1^s = 0$, which is reflected in its central value and small uncertainty. As a consequence, the error in CT18CS is greatly reduced as compared to that of CT18 for $x < 0.03$. In the larger-$x$ region, where the CS parton becomes important, this ratio is constrained by imposing the same large-$x$ behaviors for $\bar u$ and $\bar d$ as in CT18. It is noted [22] that the PDF ratio $(s+\bar s)/(\bar u+\bar d)$ starts to dip for $x > 0.01$. This is due to the fact that $\bar u$ or $\bar d$ has two components, CS and DS, in contrast to $\bar s$, which only has DS. As shown in Fig. 4(b), when $x > 0.01$ the CS components start to show up and contribute to the denominator of the ratio, making it smaller.
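Before turning to the moments, a toy numerical sketch of the machinery of Sec. 3 may be useful. The Python snippet below evaluates a CTEQ-style shape $x^{a_1-1}(1-x)^{a_2}\mathrm{Poly}(x)$, combines CS and DS pieces into a conventional $\bar u$ via $\bar u = \bar u_{cs} + R\,s$, and computes a second moment $\langle x\rangle$ by quadrature. All parameter values and the polynomial are invented for the demonstration; this is not the CT18CS fitting code.

```python
import numpy as np
from scipy.integrate import quad

def pdf(x, a1, a2, coeffs):
    """CTEQ-style shape: x^(a1-1) * (1-x)^a2 * Poly(x); Poly here is a plain polynomial."""
    return x ** (a1 - 1.0) * (1.0 - x) ** a2 * np.polyval(coeffs, x)

# Hypothetical shape parameters: a1 = 0.5 mimics the reggeon-like x^-0.5 (valence/CS),
# a1 = 0 mimics the pomeron-like x^-1 ansatz used for the DS (a1^s = 0).
ubar_cs = lambda x: pdf(x, 0.5, 7.737, [0.3])
s_ds    = lambda x: pdf(x, 0.0, 8.0, [0.2])

R = 1.0 / 0.822                             # from the lattice ratio 1/R = 0.822 at 1.3 GeV
ubar = lambda x: ubar_cs(x) + R * s_ds(x)   # combine CS and DS at the input scale

# Second moment <x> = \int_0^1 x f(x) dx (finite even for the x^-1 small-x behaviour).
second_moment = quad(lambda x: x * ubar(x), 0.0, 1.0)[0]
print(f"toy <x>_ubar = {second_moment:.4f}")
```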
PDF Mellin Moments

The momentum carried by a parton of a given flavour can be calculated in terms of the second moment $\langle x \rangle$ of its PDF. In Table 3, we compare the predictions of the CT18CS and CT18 PDFs for the second moments of the various partons at the input scale. The $\bar u$ and $\bar d$ are split into CS and DS in CT18CS, and $\langle x \rangle_{u_{v+cs}}$ corresponds to the connected insertion calculation of the $u$ quark on the lattice, which is the sum of the valence and CS, cf. Eq. (12). Other similar comparisons can be found in Table VII of Ref. [26]. Without the CS and DS separation, one is not able to compare separate flavor-dependent PDF moments to those from the lattice calculation [41], since the disconnected insertion lattice calculation corresponds to the DS, while the CS is lumped with the valence in the connected insertion. The only exceptions are the strange moments, which only have DS, and those of $u-d$, which only involve the connected insertion; these are quite limited. One cannot compare the moments for $u$, $d$, $\bar u$ and $\bar d$.

Now that the CS and DS are separated (albeit at the input scale) in CT18CS, the lower half of Table 3 shows that, at $Q_0 = 1.3$ GeV, $\bar u_{cs}$ and $\bar d_{cs}$ carry about 1.20% and 1.97% of the total momentum of the proton, respectively; that is, $\bar d_{cs}$ carries more momentum than $\bar u_{cs}$. For comparison, $\bar u_{ds}$ and $\bar d_{ds}$ each carry about 1.67% of the total momentum of the proton. In total, the CS and DS components of the up- and down-quarks carry about 6.34% and 6.68% of the total momentum of the proton, respectively. In addition, the strange PDF only has a DS component, which accounts for 2.74% of the proton's total momentum, with both $s$ and $\bar s$ contributions included. This is driven by the input value of $R$, taken from the lattice prediction of $1/R = \langle x \rangle_{s+\bar s}/\langle x \rangle_{\bar u+\bar d}(\mathrm{DI}) = 0.822(69)(78)$ at $Q = 1.3$ GeV, where $\langle x \rangle_{\bar u+\bar d}(\mathrm{DI})$ is the momentum fraction carried by the DS component of the $\bar u$ and $\bar d$ partons. By separating the CS and DS components of the partons in the global analysis, the predictions in Table 3 can be directly compared to lattice calculations of the separate flavors in both the connected and disconnected insertions, term by term.

In Table 4, we collect the second moments of $u^+ - d^+ = (u+\bar u) - (d+\bar d)$ and $s^+ = s + \bar s$ predicted by the CT18CS and CT18 calculations, at 1.3 GeV and 2.0 GeV, respectively. Lattice results for $\langle x \rangle_{u^+-d^+}$ and $\langle x \rangle_{s^+}$ at $Q = 2.0$ GeV are also given, and they are found to be consistent with the CT18 predictions. However, we note that the deviations among the lattice calculations from different groups are large, and not all systematic errors have been taken into account.

The Impact of SeaQuest data

Fixed-target Drell-Yan measurements provide an important probe of the $x$ dependence of the nucleon PDFs. This fact motivated the Fermilab E866 NuSea experiment [29], which determined the deuteron-to-proton cross section ratio $\sigma^{pd}/2\sigma^{pp}$ out to relatively large $x_2$, the momentum fraction of the target. Intriguingly, E866 found evidence that the cross section ratio dropped below unity, $\sigma^{pd}/2\sigma^{pp} < 1$, as $x_2$ approached and exceeded $x \approx 0.25$. The E866 results stimulated interest in performing a similar measurement out to larger $x_2$ with higher precision, the main objective of the subsequent E906 SeaQuest experiment at Fermilab [4].

Table 3: The second moments $\langle x \rangle$ of CT18CS and CT18 NNLO at 1.3 GeV. The superscript "*" indicates that, due to the fourth ansatz imposed in Eq. (11), the second moments of the CS components of quarks and anti-quarks are identical, namely $\langle x \rangle_{\bar u_{cs}} = \langle x \rangle_{u_{cs}}$ and $\langle x \rangle_{\bar d_{cs}} = \langle x \rangle_{d_{cs}}$.
The superscript "†" indicates that, due to the second ansatz imposed in Eq. (9), the second moments of the DS components of $u$, $\bar u$, $d$, and $\bar d$ are identical.

Table 4: The second moments of $(u^+ - d^+)$ and $s^+$ predicted by CT18 [26] and CT18CS at 2.0 GeV and 1.3 GeV, respectively. We also show lattice results at 2.0 GeV. For $\langle x \rangle_{u^+-d^+}$, we follow Ref. [40] in supplying ranges obtained from various calculations, grouped according to the number of active flavours, $N_f$, in the lattice action used.

Compared to the NuSea data, the recent SeaQuest data include an extra bin which records data around $x \sim 0.4$ with high precision. In Fig. 7, we compare the predictions of CT18CS to the NuSea and SeaQuest data. For $x_2 > 0.2$, the NuSea and SeaQuest data exhibit different shapes of $\sigma(pd)/2\sigma(pp)$. The ratio $\sigma(pd)/2\sigma(pp)$ for the NuSea data clearly decreases as $x_2$ becomes larger than 0.2, while for the SeaQuest data this ratio seems to remain the same up to $x_2 = 0.4$. The difference in the shape of the $\sigma(pd)/2\sigma(pp)$ distribution implies that the NuSea and SeaQuest data have different preferences for the PDF ratio $\bar d/\bar u$, or the PDF difference $\bar d - \bar u$, in the large-$x$ region. In view of the fact that, in the CT18CS analysis, $\bar q = \bar q_{cs} + \bar q_{ds}$ for $q = u$ or $d$, and $\bar u_{ds} = \bar d_{ds}$, the deviation of $\bar d/\bar u$ from unity is due to the different $\bar u_{cs}$ and $\bar d_{cs}$ contributions in the proton. Hence, it is interesting to know how the inclusion of the SeaQuest data in a global fit such as CT18CS could modify the PDF difference $(\bar d - \bar u)$, cf. Fig. 5.

Table 5: The $\chi^2$ of selected data sets included in the CT18CS and CT18CSp206 fits. Only those with non-negligible $\Delta\chi^2 = |\chi^2_{\mathrm{CT18CS}} - \chi^2_{\mathrm{CT18CSp206}}|$ are listed. $N_{pt,E}$ is the number of data points of the individual data set, and $\chi^2_{\mathrm{CT18CS}}$ and $\chi^2_{\mathrm{CT18CSp206}}$ are the $\chi^2$ values predicted using the CT18CS and CT18CSp206 fits. Note that the E906 SeaQuest data [4] (ID=206) are not included in the CT18CS fit, but are in the CT18CSp206 fit.

Below, we discuss the result of a new fit, referred to as "CT18CSp206", which follows the same approach as CT18CS but includes the E906 SeaQuest data in addition to the original CT18 data set. In Table 5, we compare the quality of the CT18CSp206 fit to that of CT18CS. The only data sets with non-negligible $\Delta\chi^2 = |\chi^2_{\mathrm{CT18CS}} - \chi^2_{\mathrm{CT18CSp206}}|$ are the E866 NuSea data and the E906 SeaQuest data. From the CT18CS to the CT18CSp206 fit, the $\chi^2$ for the E866 NuSea data is increased by about 5 units, while the fit to the E906 SeaQuest data is improved (with a reduction of 12 units in its $\chi^2$). This tension in the change of $\chi^2$ reflects the different preferences for the PDF ratio $\bar d/\bar u$, or the PDF difference $\bar d - \bar u$, in the large-$x$ region.

In Figs. 7 and 8, we compare the predictions of CT18CS and CT18CSp206 to the NuSea and SeaQuest data. In Fig. 7(a), the prediction of CT18CS is closer to the E866 NuSea data points for $x_2 > 0.2$ than those of CT18 and CT18CSp206. In Fig. 7(b), the CT18CS prediction presents a different shape from the E906 SeaQuest data points, particularly for $x_2 > 0.2$, while the CT18 and CT18CSp206 PDFs show better consistency with these data points. Fig. 8 shows the comparison of the uncertainty sizes between the total experimental uncertainty and the PDF-induced uncertainty in the predictions for both the E866 NuSea and E906 SeaQuest data.
All three of the above-mentioned PDF sets exhibit conservative uncertainties, in that the PDF-induced uncertainties in the predictions are larger than the experimental uncertainty for both data sets, except for the data point with the highest $x_2$ value in the E866 NuSea measurement. For $x_2 > 0.2$, the CT18CS predictions for both data sets possess slightly larger error bands than the predictions of CT18 and CT18CSp206. For most of the range of $x_2$, the error band of CT18CSp206 is comparable to the CT18 error band, while in the prediction of the E906 SeaQuest data with $x_2 > 0.3$, CT18CSp206 has a larger uncertainty.

Finally, we remark that the impact of the SeaQuest data on the CT18CS PDFs can also be studied with the ePump-updating method, detailed in Refs. [50,51]. The idea is to add the SeaQuest data, with a given weight, to the original CT18 data set and perform a new global fit using the ePump-updating method. This updates the original CT18CS PDFs and produces a new set of PDFs. Given this new set of PDFs, one can calculate the change in the total $\chi^2$ of each data set included in the global fit, as compared to that given by the original CT18CS PDFs. Instead of examining $\chi^2_E(N_{pt,E})/N_{pt,E}$ for the individual experiment $E$, which has a different probability distribution and depends on the total number of data points $N_{pt,E}$, we provide equivalent information in the form of the effective Gaussian variable $S_E = \sqrt{2\chi^2_E} - \sqrt{2N_{pt,E} - 1}$ [24]. A well-fitted data set should have $S_E$ between $-1$ and $1$. An $S_E$ smaller than $-1$ means the data set is fitted too well (maybe due to large experimental errors), and an $S_E$ larger than $1$ indicates poor fitting. To examine the potential tensions between the E906 SeaQuest data and the data sets included in the CT18CS fit, we plot in Fig. 9(a) the change of the effective Gaussian variable $S_E$ for some data sets included in CT18CS as the weight of the SeaQuest data is increased from 0 to 10. Only the data sets with non-negligible changes in $S_E$ are shown. Note that a weight of zero corresponds to the CT18CS fit, in which the SeaQuest data were not included, and a weight of one leads to the above-mentioned CT18CSp206 fit. As the weight of the SeaQuest data increases, the $S_E$ of the SeaQuest data becomes smaller, as expected, while that of the E866 NuSea data becomes much larger, indicating tension with the SeaQuest data. Both the NMC $F_2^d/F_2^p$ (ID=104) and the CMS 8 TeV $W$ charge asymmetry $A_{ch}$ (ID=249) data show a very slight increase in their $S_E$ values as the weight of the SeaQuest data increases from zero.

In Fig. 9(b), we compare the PDF ratio $\bar d/\bar u$, as a function of $x$ at $Q = 1.3$ GeV, among CT18 NNLO, CT18CS, and CT18CSp206, where the E906 SeaQuest data (labelled as ID=206 in Table 5) are included via the ePump-updating method [50,51]. It shows that CT18CSp206 has a larger PDF ratio $\bar d/\bar u$ at $x > 0.2$, as compared to CT18CS. On the other hand, the uncertainty of the PDF ratio $\bar d/\bar u$ of CT18CSp206 in the large-$x$ region is enlarged from that of CT18CS to accommodate the tension between the two data sets. For completeness, we also show in Fig. 10 the comparison of $\bar d/\bar u$, $(\bar d - \bar u)$, $s$, and $(s+\bar s)/(\bar u+\bar d)$, respectively, as predicted by these three different global fits at $Q = 100$ GeV. In Figs. 10(a) and 10(b), the comparison at 100 GeV of the PDF ratio $\bar d/\bar u$, and of the PDF difference $(\bar d - \bar u)$, is similar to that at 1.3 GeV, cf. Fig. 9(b). The impact of the SeaQuest data on $s(x)$ and on the PDF ratio $(s+\bar s)/(\bar u+\bar d)$ at 100 GeV is negligible, as shown in Figs. 10(c) and 10(d).
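As a quick numerical illustration of the effective Gaussian variable used above, the snippet below evaluates $S_E = \sqrt{2\chi_E^2} - \sqrt{2N_{pt,E} - 1}$ for a few invented $(\chi_E^2, N_{pt,E})$ pairs; the numbers are hypothetical and only show how $S_E$ flags well-fitted ($|S_E| \leq 1$) versus poorly fitted data sets.

```python
import math

def effective_gaussian(chi2, npt):
    """Effective Gaussian variable S_E = sqrt(2*chi2) - sqrt(2*npt - 1)."""
    return math.sqrt(2.0 * chi2) - math.sqrt(2.0 * npt - 1.0)

# Hypothetical examples: (chi2, number of data points)
for chi2, npt in [(100.0, 100), (140.0, 100), (60.0, 100)]:
    s_e = effective_gaussian(chi2, npt)
    verdict = ("well fitted" if abs(s_e) <= 1
               else "poor fit" if s_e > 1 else "fitted too well")
    print(f"chi2={chi2:6.1f}, N={npt}: S_E={s_e:+.2f} ({verdict})")
```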
Summary

In this work, we present an NNLO QCD global analysis, named CT18CS, in which the connected sea partons and the disconnected sea partons, as revealed in the path-integral formulation of the hadronic tensor in QCD, are separately parametrized at the input scale of $Q_0 = 1.3$ GeV. The CS and DS are mainly distinguished by their respective small-$x$ behaviors. Furthermore, we assumed that the DS of $u$ and $d$ are proportional to the $s$, with the proportionality constant constrained by a recent complete lattice calculation of the second moment ratio [22], $\langle x \rangle_{s+\bar s}/\langle x \rangle_{\bar u+\bar d}(\mathrm{DI}) = 0.822(69)(78)$ at $Q = 1.3$ GeV, where $\langle x \rangle_{\bar u+\bar d}(\mathrm{DI})$ is the momentum fraction carried by the DS component of the $\bar u$ and $\bar d$ partons. This lattice QCD constraint was included in the CT18CS fit via the Lagrange multiplier method. Together with the ansatz $a_1^s = 0$, this lattice input has helped reduce the error of the ratio $\langle x \rangle_{s+\bar s}/\langle x \rangle_{\bar u+\bar d}$ greatly for $x < 0.03$, as compared to that of the CT18 fit.

Short of applying the evolution equations in which the CS and DS partons are evolved separately, we impose a number of ansatzes regarding the small-$x$ behaviors and isospin symmetry at the input scale, as described in Sec. 3.1, and evolve the combined CS and DS partons during the evolution. In this way, the PDFs are still evolved from $Q_0$ with the usual parton classification, namely $g$, $u$, $\bar u$, $d$, $\bar d$, $s$, and the results can be compared with CT18 at $Q_0$. It is found that the fit quality of CT18CS is comparable to that of CT18 NNLO. The CT18CS PDFs, obtained with an extended parametrisation, are consistent with CT18 NNLO over a wide range of $x$, but the errors of the quark partons in CT18CS are greatly reduced at small $x$ as compared to those of CT18, mainly due to the imposed small-$x$ behaviors and the lattice QCD input. As expected, the DS components primarily contribute to $\bar u$ and $\bar d$ in the small-$x$ region, and the CS components provide a sizable contribution in the intermediate-$x$ region. We give the second moments of the CS and DS of the different flavors at the scale $Q_0$. They can be compared with systematic-error-controlled lattice calculations term by term for the first time. At $Q = 1.3$ GeV, we find that the up and down quarks in the CS sector take about 6.34% of the total momentum, while the momentum in the DS sector is about 6.68% of the total; they are comparable in size at this low scale.

The implications of the CT18CS PDFs are studied by comparing the predictions for the NuSea data and the SeaQuest data between the CT18CS and CT18 NNLO PDFs. A new global fit (referred to as CT18CSp206) on the basis of CT18CS is obtained with the SeaQuest data included. Through a scan of the effective Gaussian variable $S_E$ over various weights of the E906 SeaQuest data, using the ePump-updating method [50,51], it is found that the SeaQuest data and the NuSea data are in tension. In the future, global analyses should incorporate the extended evolution equations [41], in which the connected sea and the disconnected sea are evolved separately, so that they remain separated at all $Q^2$ for a better and more detailed delineation of the PDF degrees of freedom, to be compared to lattice results term by term.

...time under the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation Grant No. ACI-1053575. We also thank the National Energy Research Scientific Computing Center (NERSC) for providing HPC resources that have contributed to the research results reported within this paper.
We acknowledge the facilities of the USQCD Collaboration, used in part for this research, which are funded by the Office of Science of the U.S. Department of Energy. The work of J. Liang is partially supported by the National Science Foundation of China (NSFC) under Grant
Equivalence of Simplicial Ricci Flow and Hamilton's Ricci Flow for 3D Neckpinch Geometries

Hamilton's Ricci flow (RF) equations were recently expressed in terms of the edge lengths of a $d$-dimensional piecewise linear (PL) simplicial geometry, for $d$ greater than or equal to 2. The structure of the simplicial Ricci flow (SRF) equations is dimensionally agnostic. These SRF equations were tested numerically and analytically in 3D for simple models and reproduced qualitatively the solution of the continuum RF equations, including a Type-1 neckpinch singularity. Here we examine a continuum limit of the SRF equations for 3D neckpinch geometries with an arbitrary radial profile. We show that the SRF equations converge to the corresponding continuum RF equations as reported by Angenent and Knopf.

Hamilton showed that this yields a forced diffusion equation for the curvature; e.g., the scalar curvature evolves as $\dot R = \triangle R + 2|Rc|^2$ (1.2). The bulk of the applications of this curvature flow have utilized the numerical evolution of piecewise-flat simplicial 2-surfaces [5,6]. It is a widely accepted verity in computational science that a geometry with complex topology is most naturally represented in a coordinate-free way by an unstructured mesh. This is apparent in the engineering applications utilizing finite-volume [7] and finite-element [8] algorithms, and is equally true in physics within the field of general relativity through Regge calculus [9,10], and in electrodynamics through discrete exterior calculus [11]. One expects a wealth of exciting new applications for discrete formulations of Ricci flow in 3 and higher dimensions. Our presumption is based on the uniformization theorem in 2 dimensions, and on the fact that the topological taxonomy is richer in higher dimensions [12]. We envision that these higher-dimensional applications will involve geometries with complex topology and geometry. Preliminary work in this direction is already underway [13-16]. In accord, we recently introduced a discrete RF approach for three and higher dimensions that we refer to as simplicial Ricci flow (SRF) [17]. SRF is founded on Regge calculus and upon the mathematical foundations of Alexandrov [9,18-20]. Here the SRF equations are similar to their continuum counterpart. They are naturally defined on a $d$-dimensional simplicial geometry as a proportionality between the time rate of change of the circumcentric dual edges, $\lambda_i$, and the simplicial Ricci tensor associated to these dual edges, $Rc_{\lambda_i}$.

It is the aim of this paper to explicitly show that these SRF equations in 3 dimensions, for a geometry with axial symmetry, converge to the continuum Hamilton RF equations for an arbitrarily refined mesh. We examine a continuum limit of the SRF equations for 3D neckpinch geometries. We show that the SRF equations converge to the corresponding continuum RF equations as reported by Angenent and Knopf [21,22]. In particular, we examine a piecewise flat axisymmetric geometry with 3-sphere topology (an $S^2 \times [0,1]$ cylinder capped at both ends). Our lattice 3-geometry is tiled primarily with triangle-based frustum polyhedra, with two simplicial polyhedral "end caps." This axisymmetric lattice 3-geometry is characterized by edges of two kinds: the axial edges, $a_i$, and the cross-sectional sphere edges, $s_i$. In recent work we examined a discrete model where each of the cross-sectional spheres was approximated by an icosahedron of edge $s_i$ [23].
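For orientation, the continuum flow and the simplicial proportionality just described can be written side by side (the SRF form is our schematic paraphrase of the definition in [17], with the sign chosen to match the continuum flow):

$$\partial_t g_{ij} = -2R_{ij} \quad \text{(Hamilton RF)}, \qquad \frac{1}{\lambda_i}\frac{\partial \lambda_i}{\partial t} = -Rc_{\lambda_i} \quad \text{(SRF, dual edge } \lambda_i\text{)},$$

with the scalar curvature consequently obeying the forced diffusion equation of Eq. (1.2).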
The model considered here is built of infinitesimal isosceles-based frustums. The spatial resolution of each of the spherical cross-sectional polyhedra with radius $\rho$ is driven by a single infinitesimal scale $\epsilon = \rho\,\xi \to 0$, and we assume an arbitrarily large number of cross-sectional spheres. The spherical edges $s_i$ and axial edges $a_i$ are infinitesimal. The axisymmetry of this model allows us to consider only two SRF equations, one associated to each axial edge, $a_i$, and the other to a spherical edge, $s_i$. In the limit there will be an ever increasing number of frustum blocks in our model. We show here that we recover the exact RF equations in the continuum limit.

The foundation of this work is the analysis by Angenent and Knopf of the neckpinching singularity of RF on a class of axisymmetric double-lobe-shaped geometries, Fig. 1. The metrics they considered were warped product metrics on $I \times S^2$,

$$ds^2 = da^2 + \rho(a)^2\, d\Omega^2,$$

where $I \subset \mathbb{R}$ is an open interval, $d\Omega^2 = d\theta^2 + \sin^2(\theta)\, d\phi^2$ is the usual metric of the unit 2-sphere, $a$ is the proper axial distance away from the equator, and $\rho(a)$ is the cylindrical radial profile of the axisymmetric geometry, i.e. $\rho(a)$ is the radius of the cross-sectional 2-sphere at an axial distance $a$ away from the equator. For this metric the nonvanishing mixed components of the Ricci tensor are

$$R^a{}_a = -2\frac{\rho''}{\rho}, \qquad R^\theta{}_\theta = R^\phi{}_\phi = \frac{1 - (\rho')^2}{\rho^2} - \frac{\rho''}{\rho},$$

where the primes refer to partial derivatives with respect to the axial distance, $a$, and the dots (below) refer to derivatives with respect to time, $t$. In particular, we show in this manuscript that the SRF equations, under a suitable mesh refinement, converge to the continuum Hamilton RF equations of Knopf and Angenent,

$$\frac{\dot a}{a} = 2\frac{\rho''}{\rho}, \qquad \frac{\dot s}{s} = \frac{\rho\rho'' + (\rho')^2 - 1}{\rho^2},$$

for the axial and spherical edge lengths, respectively. Armed with this result, all continuum theorems and corollaries apply equally to the SRF equations; they are equivalent in the continuum limit we use. Therefore, we can confidently say that the SRF equations are consistent with the following theorems of Angenent and Knopf for non-degenerate neckpinches:

1. If the scalar curvature is everywhere positive, $R \geq 0$, then the radius of the waist, $\rho_{\min} = \rho(0)$, is bounded, where $T$ is the finite time at which a neckpinch occurs.

2. As a consequence, the neckpinch singularity occurs at or before $T = \rho_{\min}^2$.

3. The height of the two lobes is bounded from below, and under suitable conditions the neck will pinch off before the lobes collapse.

4. The neck approaches a cylindrical-type singularity.

The result of this paper shows that any continuum RF theorem or curvature bound for this class of geometry will apply equally well to the SRF evolution with this geometry: the SRF equations are equivalent to the RF equations in a continuum limit. Our result applies to both degenerate and non-degenerate neckpinch singularity formation. The proof of the convergence of SRF to continuum RF is done here explicitly and algebraically. While this does not prove the equivalence between Hamilton's RF and SRF equations for any geometry and in any dimension, we nevertheless conjecture this to be true. The work here supports the definition (Def. 1) of the SRF equations introduced recently in [17].

II. A LATTICE APPROXIMATION OF THE ANGENENT-KNOPF NECKPINCH GEOMETRY

For the purpose of examining the Type-1 or Type-2 neckpinch behavior of the SRF equations, we have introduced a PL lattice geometry sharing the qualitative features of the Angenent and Knopf initial data [21], as illustrated in Fig. 1. The continuum cross sections of this geometry in planes perpendicular to the symmetry axis are 2-sphere surfaces. We impose no mirror symmetry on the radial profile, $\rho = \rho(a)$.
The surface and metric can be parameterized by two coordinates, $a$ and $\rho(a)$. Here a given point on the surface is identified by its proper "axial" distance $a$ from the equator (or any other reference point we choose), and by the radius, $\rho = \rho(a)$, of the cross-sectional sphere on which the point lies (see Fig. 6 in the Appendix). In this paper we take the limit of an ever more finely discretized sphere and an ever increasing number of spherical cross sections; in this limit we show that the continuum Hamilton RF equations are recovered from the SRF equations.

The continuum warped-product metric of this surface, as introduced by Angenent and Knopf [21], is $ds^2 = da^2 + \rho(a)^2\, d\Omega^2$, where $d\Omega^2$ is the usual spherical line element. The initial data is determined by a radial profile function at $t = 0$ for the double-lobed geometry, and amounts to specifying a function relating the cylindrical radius, $\rho$, to a scaled proper axial distance along the double-lobed geometry away from an equator or neck, $a \in [a_{\min}, a_{\max}]$. By way of an example, if the double-lobed geometry had no neck and were just a sphere of radius $R_0$, this initial radial profile function would simply be the cylindrical coordinate radius, $\rho(a, t=0) = R_0 \cos(a)$. However, Angenent and Knopf's mirror-symmetric double-lobed geometry introduced a parabolic waist, for the purpose of aiding their mathematical analysis of the neck singularity, with a constant $A$ controlling the degree of neckpinching in the initial-value data and a constant $B$ chosen so as to ensure continuity of the radial profile function at $a = \pm\pi/4$. In this paper, we explore arbitrary radial profiles. We provide a simplicial approximation of an axisymmetric warped-product geometry at time $t$ characterized by an arbitrary $C^2$ radial profile $\rho(a, t) = \rho(a)\ \forall\, a \in [a_{\min}, a_{\max}]$. (2.6)

We first identify an arbitrarily large number ($N_s \to \infty$) of nearly equally-spaced spherical cross sections. Next we examine one of these spheres, namely the $i$-th cross-sectional sphere. There are many ways to approximate such a sphere by a triangulated polyhedron. Here we take the edge length of the isosceles triangles that we project onto the sphere of radius $\rho$ to be arbitrarily small compared to $\rho$. This yields an infinitesimal parameter in our model, $\xi = \epsilon/\rho$, that we will drive to zero. In order to construct the SRF equations at the vertex $O$, it is necessary for us to extend the lattice radially one more level out from the first six equilateral triangles, so that we can examine 18 additional equilateral triangles, each of infinitesimal edge length, that we project onto the surface of the sphere as shown in the right-hand side of Fig. 2. The projected triangles will no longer be equilateral. In particular, there will be two sets of 6 isosceles triangles, $\{O, X, Y\}$ and $\{V, X, Y\}$, as well as twelve triangles with three different edge lengths, $\{X, U, V\}$. These 24 triangles are composed of combinations of six distinct edges.

We assume that all of the triangulated spherical polyhedral cross-sections in our model have the same lattice topology. Furthermore, we assume that they are all congruent to each other under a suitable global scale factor. Consider the $i$-th and $(i+1)$-st polyhedral spheres, of radius $\rho_i$ and $\rho_{i+1}$, respectively. Each triangulated polyhedron has $N_0 \gg 1$ vertices, and therefore $N_1 = 3N_0 - 6$ edges and $N_2 = 2N_0 - 4$ triangles, as shown below. We connect these two polyhedra together by joining the $N_0$ pairs of corresponding vertices by $N_0$ identical axial edges, each of length $a_i$.
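The edge and triangle counts just quoted follow from Euler's formula for a triangulated 2-sphere; a one-line derivation:

$$V - E + F = 2, \qquad 3F = 2E \;\Rightarrow\; E = 3V - 6, \quad F = 2V - 4,$$

so with $V = N_0$ vertices one has $N_1 = 3N_0 - 6$ edges and $N_2 = 2N_0 - 4$ triangles, exactly as used above.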
Each pair of corresponding triangles, when connected by three $a_i$ edges, forms a triangular-based frustum block (see Appendix, Fig. 6). We require that the geometry interior to each of our frustum blocks is flat Euclidean 3-space. Consequently, the 3-dimensional geometry between the two bounding spherical polyhedra is tiled with $N_2$ frustum blocks, one for each of the triangles. In particular, the axial edge $O_iO_{i+1} = a_i$ is the meeting place of six identical isosceles-triangle frustum blocks, as illustrated in Fig. 3. This is not ordinarily the case for every axial edge; e.g., the axial edge $X_iX_{i+1} = \hat a_i$ is the meeting place of three distinct pairs of frustum blocks: two pairs are isosceles frustum blocks, while the last pair are general triangular-based frustum blocks. The axial symmetry of this model permits us to tile the geometry with the non-simplicial frustum blocks. Given the symmetry of our model, the geometry of each frustum block is completely determined by its 9 edge lengths; in other words, the symmetry endows each frustum block with rigidity.

This construction gives us an axisymmetric 3-cylinder geometry composed of triangular-based frustums. We "cap off" this 3-cylinder by treating each of the two bounding polyhedra, of radius $\rho_1$ and $\rho_{N_a}$, as flat Euclidean tiles of our PL geometry. Our lattice geometry becomes homeomorphic to a 3-sphere when we include the two polyhedral end caps. It is composed of $N_2(N_a - 1)$ frustum blocks and two triangulated polyhedral "end caps." This can be seen in Fig. 1, albeit with one dimension suppressed; there the end caps are flat hexagons as opposed to triangulated polygons. A continuum limit of our lattice is achieved by taking (1) $N_a \to \infty$, or equivalently $a_i \to 0$, and (2) $N_s \to \infty$, or equivalently $\xi \to 0$.

III. THEOREM: THE SRF EQUATIONS FOR THE FRUSTUM GEOMETRY ARE THE HAMILTON RF EQUATIONS

The recent definition of the SRF equations and the dual-edge SRF equations in [17] made use of elements from both the simplicial lattice geometry (in this case the frustum lattice geometry with end caps, $F$) and its circumcentric dual lattice $F^*$. The SRF and dual-edge SRF equations for the $S^3$ frustum geometry $F$ are functions of the $(N_a - 1)$ axial edges, $a_i\ \forall i \in \{1, 2, \ldots, N_a - 1\}$, as well as the $N_a$ radii $\rho_i\ \forall i \in \{1, 2, \ldots, N_a\}$. The circumcentric dual lattice is composed of the dual axial edges, $\alpha_i\ \forall i \in \{1, 2, \ldots, N_a\}$, and the dual spherical edges, $\sigma_i\ \forall i \in \{1, 2, \ldots, N_a + 1\}$. Each dual axial edge reaches from the circumcenter of one frustum block to the circumcenter of the adjacent frustum block sharing a common triangular face. The dual edge $\alpha$ pierces the triangle at the triangle's circumcenter, and the edge is perpendicular to the triangle. On the other hand, the dual spherical edge $\sigma$ reaches from the circumcenter of one frustum block to the circumcenter of an adjacent frustum block sharing a common trapezoidal face. This dual edge $\sigma$ is perpendicular to the trapezoid and pierces the trapezoid at its circumcenter. Therefore, there will be two dual axial edges $\alpha$ and three dual spherical edges $\sigma$ emanating from the circumcenter of each of the frustum blocks, while the circumcenter of each of the two polyhedral "end caps" will be the common meeting place of the $N_2$ dual axial edges. We proved recently that the dual-edge SRF equations are equivalent to the simplicial-edge SRF equations for this warped-product geometry in Appendix B of [23]; the volume-weighting factors appearing there are defined below in Sec. IV B.
Therefore, for the purpose of this paper, it will suffice to prove the following theorem.

IV. THE DUAL $\alpha_i$-EDGE SRF EQUATION AT THE POINT $O_i$ AND ITS CONTINUUM LIMIT

In this section we examine the dual-edge SRF equation associated to the axial edge $\alpha_i \in F^*$, as displayed in the lower left-hand side of Eq. 3.1. The edge $\alpha_i$ is dual to the triangle $O_iX_iY_i \in F$; both are illustrated in the upper right-hand side of Fig. 4. We also use Eqs. 2.8-2.12 to examine the continuum limit of this equation. We use the definition of the dual-edge SRF equations as introduced recently in [17], and consequently we obtain the equation for $\dot\alpha_i$ (Eq. 4.1). We use the results of the Appendix and the definitions in [17] to examine each term of this equation and to obtain the continuum limit expression via a Taylor series expansion. We do this in three steps, by examining (1) the circumcentric volume weighting factors on the right-hand side of the dual SRF equation, (2) the Gaussian curvature expressions also on the right-hand side, and finally (3) the time derivative on the left-hand side of the equation.

A. The Dual Edge $\alpha_i$ to Triangle $O_iX_iY_i$

The dual edge $\alpha_i$ reaches from the circumcenter $C_{3i-1}$ of one frustum block, $F_{i-1}$, to the circumcenter $C_{3i}$ of the adjacent frustum block, $F_i$. This dual edge is perpendicular to the triangle $O_iX_iY_i$ common to $F_{i-1}$ and $F_i$. From Appendix A 3, Eqs. A35-A36, we find its series expansion. We will display the next-higher-order terms in our expansions; they may be useful for numerical applications or for studying singularity formation analytically.

B. Circumcentric Volume Weighting Factors

The hybrid volume $V_{\alpha_i}$ is the volume of the hybrid polyhedron formed by the product of the dual edge $\alpha_i \in F^*$ and the triangle $O_iX_iY_i \in F$, and is the sum of the reduced hybrid tetrahedra. The moment arms, $m_{\alpha_i s_i}$ and $m_{\alpha_i \hat s_i}$, are displayed in the upper right-hand part of Fig. 4 and are calculated in Eqs. A20-A22. The fractional volumes and their series expansions in terms of $\xi$ are given in Eqs. 4.6-4.7.

C. Gaussian Curvature

In [17] we defined the Gaussian curvature of the edge $s_i$ to be the deficit angle, $\epsilon_{s_i}$, distributed uniformly over the circumcentric dual area $s_i^*$, i.e. $K_{s_i} = \epsilon_{s_i}/s_i^*$. The edge $s_i$ is common to four isosceles frustum blocks, as shown in Fig. 5. There are two pairs of dihedral angles of the frustum blocks at the edge $s_i$, and we can use Eq. A6 and Eq. A9 to define the deficit angle. The dual area $s_i^*$ is a trapezoid, shown in the lower left diagram of Fig. 4, and is the sum of four isosceles triangles (Eq. 4.14). The Gaussian curvature of edge $s_i$ in our lattice is expressed in Eq. 4.16. Similarly, we can calculate the Gaussian curvature associated with the hinge edge $\hat s_i$, where the dual area and dihedral angles now extend into the next band of triangles away from the pole $O_i$, as shown in Fig. 2. Using the expressions in the Appendix and the series expansion for the edges (Eq. 2.8), the deficit angle for $\hat s_i$ is given in Eq. 4.20, and the dual area $\hat s_i^*$ is the sum of four kite areas whose series in $\xi$ is given in Eq. 4.22. The Gaussian curvature then follows (Eq. 4.23).

D. The Zeroth-Order Expansion Term of the SRF Equation for Dual Edge $\alpha_i$

The dual-edge SRF equation associated to $\alpha_i$ was given in Eq. 4.1. We can calculate the series expansion of this equation in the limit $\xi \ll 1$. We keep the lowest-order term by substituting the expressions derived in the last section through Eqs. 4.6, 4.7, 4.15 and 4.23, and find the zeroth-order term (Eq. 4.25). We then make the substitutions of Eq. 4.27. In the limit of $\zeta_0 \to 0$ and $\zeta_1 \to 0$, we find that Eq. 4.25 becomes
(4.28) In this limit we can also substitute yielding after some rearrangement,ȧ We immediately recognize the right-hand side of this equation is the second derivative of ρ with respect to a, and therefore we recover the corresponding continuum RF equation for the axial edge in the limit, namelẏ In this section we repeat the process we presented in the last section; however, we examine the dual-edge SRF equation associated to the edge σ i ∈ F * . This edge in the dual cross-sectional polyhedron is dual to trapezoid O i X i O i+1 X i+1 ∈ F. This is illustrated in the lower right of Fig. 4, and anchors the dual SRF equation in the upper left-hand side of Eq. 3.1. We also use Eqs. 2.8-2.12 to examine the continuum limit of this equation. We use the definition of the dual-edge SRF equations as introduced recently in [17], and consequently we finḋ Here we differentiated the axial edge a i = O i O i+1 from the axial edgeâ i = X i X i+1 . The fractional volumes, deficit angles and dual areas are not equal. As we did in the last Sec. IV, we use the results of the Appendix and the definitions in [17] to examine each term of Eq. 5.1 and to examine the Taylor series expansion to obtain the continuum limit expression. We do this in three steps by examining (1) the two circumcentric volume weighting factors on the righthand side of the dual SRF equation, (2) the two Gaussian curvature expressions also on the right-hand side, and finally (3) the time derivative on the left-hand side of the equation. A. The Dual Edge σi to Trapezoid OiXiOi+1Xi+1 The dual edge σ i reaches from the circumcenter C 3i−1 of one frustum block, F i , to the circumcenter, C 3i , of the adjacent frustum block F i . This dual edge is perpendicular to trapezoid O i X i O i+1 X i+1 common to these two frustum blocks. From A 3, Eq. A32, we find B. Circumcentric Volume Weighting Factors The hybrid volume V σi is the volume of the hybrid polyhedron formed by the product of dual edge σ i ∈ F * and trapezoid O i X i O i+1 X i+1 ∈ F and is the sum of there reduced hybrid tetrahedra, Vσ i s i+1 where the moment arms shown in the lower right-hand part of Fig. 4 and by Eqs. A23-A25. Additionally, symmetry of the trapezoid guarantees that m σiâi = m σiai so that V σiai = V σiâi . The fractional volumes and their series expansion in terms of ξ are, In [17] we defined the Gaussian curvature of edge a i = O i O i+1 to be the deficit angle, ai , distributed uniformly over the circumcentric dual area a * i , The edge, a i is common to six identical isosceles frustum blocks as shown in Fig. 3. We use dihedral angle of the frustum block at edge a i given in Eq. A6 to determine define the deficit angle, The boundary dual area a * i is a hexagon and is shown in the upper left diagram of Fig. 4 and is the sum of six isosceles triangles, The series expansion for the Gaussian curvature of axial edge a i in powers ξ is, We also need to calculate the slightly more involved expression for the Gaussian curvature, 21) and the dual areaâ * i is the sum of three pairs of kite areas whose series in ξ iŝ Therefore, the Gaussian curvature at edgeâ i is Finally, the last of the four Gaussian curvatures we need for Eq. 5.1 is associated with edge s i+1 . To calculate we need only increment each index in Eq. 4.15 by one. D. The Zeroth-Order Expansion Term of SRF Equation for Dual-Edge σi The dual edge σ i SRF equation was give in Eq. 5.1. 
We can calculate series expansion of this equation in ξ and keep the lowest-order term by substituting the expressions we derived in the last section through Eqs. 5.7, 5.8, 5.9 for the three circumcentric volume weighting factors, as well as the three Gaussian curvatures given in Eqs. 4.15, 5.24, 5.16 and 5.23. We find a zeroth-order term, We make the following substitutions, in the last term. In the limit of ζ 0 → 0 and ζ 1 → 0, we find Eq. 5.25 becomes, (5.30) In this limit we can also substitute in the previous equation to yield, after some rearrangement, We immediately recognize the numerator of the first two terms on the right-hand side of Eq. 5.34 as the second derivative of ρ with respect to a and the square of the first derivative of ρ with respect to a. Therefore, we recover the corresponding continuum RF equation for the axial edge in the continuum limit, namelẏ We demonstrated that the continuum limit of the SRF equations yielded the Hamilton RF equations for an interesting class of warped product metrics. Therefore, all the mathematical foundations, definitions and theorems of the continuum equations can be automatically transferred to the discrete in SRF equations. This further reinforces the definition of the SRF equations recently forwarded as Definition 1 in [17]. While we proved this for geometries described by the warped-product metrics in Angenent, Isenberg and Knopf [21,22], we conjecture that the SRF equations suitably converge to the continuum Hamilton RF equations for any n-dimensional geometry for n ≥ 2. We explore Ricci flow in 3 and higher dimensions because we can very well believe that the SRF equations will have an equally rich spectrum of application as does 2-dimensional combinatorial RF applications. We therefore are motivated to explore the discrete RF in higher dimensions so that it can be used in the analysis of topology and geometry, both numerically and analytically to bound Ricci curvature in discrete geometries and to analyze and handle higher-dimensional RF singularities [26,27]. The topological taxonomy afforded by RF is richer when transitioning from 2 to 3-dimensions. In particular, the uniformization theorem says that any 2-geometry will evolve under RF to a constant curvature sphere, plane or hyperboloid, while in 3-dimensions the curvature and surface will diffuse into a connected sum of prime manifolds [12]. We are motivated by Alexandrov [18], "The theory of polyhedra and related geometrical methods are attractive not only in their own right. They pave the way for the general theory of surfaces. Surely, it is not always that we may infer a theorem for curved surfaces from a theorem about polyhedra by passage to the limit. However, the theorems about polyhedra always drive us to searching similar theorems about curved surfaces." A. D. Alexandrov, 1950 We are therefore eager to explore the geometry and curvature of higher-dimensional polytopes through the SRF equations as guide for continuum analogues. The SRF equations are constructed, in part, from the nine distinct dihedral angles of the frustum block. We found it convenient to construct a diagonal for each of the three trapezoidal faces of the frustum, We can use these diagonals to subdivide the frustum block into three tetrahedra, We can then use the usual formula for the dihedral angle of a tetrahedron to determine the nine dihedral angles of the frustum block. The cosine of the dihedral angle for the tetrahedron shown in Fig. 
7 is a function of its six edge lengths, cos θ_ab. Here, abx is the area of the triangle face {ABX}, and aby is the area of the triangle face {ABY}. Using this cosine formula together with the decomposition of the frustum into three tetrahedra and the expressions for the diagonals of the three trapezoidal faces, we find the following three dihedral angles associated with the three edges of the base
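Since every term in the SRF equations is assembled from these dihedral angles, via deficit angles at the hinges and the circumcentric dual areas, a short numerical sketch may help make the construction concrete. The code below is our own illustration (the vertex labels, NumPy, the sample edge lengths, and the placeholder dual area are assumptions, not values from the paper): it embeds a tetrahedron ABXY from its six edge lengths, returns the dihedral angle along edge AB, and then forms a deficit angle and a lattice Gaussian curvature for a hinge shared by six identical blocks, as at an axial edge O_iO_{i+1}. For a full frustum block the angle at a given edge is, in general, the sum of the sub-tetrahedra angles meeting at that edge.

```python
import numpy as np

def embed_tetrahedron(dAB, dAX, dBX, dAY, dBY, dXY):
    """Place tetrahedron ABXY in R^3 from its six edge lengths."""
    A = np.zeros(3)
    B = np.array([dAB, 0.0, 0.0])
    # X in the xy-plane (law of cosines for its x-coordinate).
    xX = (dAB**2 + dAX**2 - dBX**2) / (2.0 * dAB)
    yX = np.sqrt(dAX**2 - xX**2)
    X = np.array([xX, yX, 0.0])
    # Y above the xy-plane.
    xY = (dAB**2 + dAY**2 - dBY**2) / (2.0 * dAB)
    yY = (dAY**2 - dXY**2 + dAX**2 - 2.0 * xY * xX) / (2.0 * yX)
    Y = np.array([xY, yY, np.sqrt(dAY**2 - xY**2 - yY**2)])
    return A, B, X, Y

def dihedral_angle_AB(A, B, X, Y):
    """Interior dihedral angle along edge AB between faces ABX and ABY."""
    e = B - A
    n1 = np.cross(e, X - A)  # normal to face ABX
    n2 = np.cross(e, Y - A)  # normal to face ABY
    c = n1 @ n2 / (np.linalg.norm(n1) * np.linalg.norm(n2))
    return np.arccos(np.clip(c, -1.0, 1.0))

# Hinge curvature for an axial edge shared by six identical blocks:
# deficit angle epsilon = 2*pi - 6*theta, curvature K = epsilon / dual area.
theta = dihedral_angle_AB(*embed_tetrahedron(1.0, 1.2, 1.2, 1.2, 1.2, 1.0))
deficit = 2.0 * np.pi - 6.0 * theta
dual_area = 0.35  # placeholder for the hexagonal dual area a_i*
print(theta, deficit, deficit / dual_area)
```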
2014-04-15T20:00:46.000Z
2014-04-15T00:00:00.000
{ "year": 2014, "sha1": "328a9da12a437ce76c9c4ad38190b66811b3fcc6", "oa_license": null, "oa_url": "http://www.intlpress.com/site/pub/files/_fulltext/journals/gic/2014/0001/0003/GIC-2014-0001-0003-a002.pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "328a9da12a437ce76c9c4ad38190b66811b3fcc6", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics", "Physics" ] }
225677339
pes2o/s2orc
v3-fos-license
The prognostic significance of HLA-A2 expression on somatic cells in patients with left-sided colon and rectal cancers Introduction. Current knowledge about colorectal cancer (CRC) identifies tumor immunogenicity as one of the more important issues. In cancers, a prerequisite for immune system activation is the presentation of tumor associated antigen (TAA) epitopes to immunocompetent cells. HLA-A2 is one of the antigens in the context of which TAAs are present, but data on the possible impact of HLA-A2 antigen expression on the survival of patients with colorectal cancer are scarce and sometimes contradictory. The aim of this study was to analyse the relationship between HLA-A2 expression in patients with left-sided colorectal cancer in various stages of disease and their long-term survival, and to answer the question of whether a lack of HLA-A2 expression is actually a negative prognostic factor. Material and methods.  A prospective analysis of 58 patients with left-sided colorectal cancer was carried out. Expression of HLA-A2 was determined by patient blood lymphocyte staining, and analysed using flow cytometry. Results. In the study group, patients with HLA-A2 expression lived statistically longer than HLA-A2 negative patients (p = 0.027). There was no significant difference in overall survival between the HLA-A2+ and HLA-A2- groups with stage II and III left-sided CRC. However, the Cox proportional hazard model showed that a lack of HLA-A2 expression was a negative prognostic factor in the group of radically operated patients without distant metastases. Conclusions. HLA-A2 status may affect the clinical course of patients with left-sided colon and rectal cancer, although left-sided tumors are less immunogenic than right-sided ones. HLA-A2-positive patients with left-sided colorectal cancer lived statistically longer than those who were HLA-A2-negative (p = 0.027). Lack of HLA-A2 expression was a negative prognostic factor in the group of radically operated patients. Introduction The appropriate immune response to tumor cells is dependent on their ability to be recognized by immunocompetent chromosomally instable (CIN) phenotype located in the left side of the colon and rectum) [22]. This heterogeneity should be used to stratify patients in order to provide them with the most optimal, current, and novel immune-based therapeutic strategies available in clinical practice [21]. In this study, we concentrated on tumors located in the left colon and rectum -a more homogenous sub-group of CRC. Locally advanced malignancies were the main area of interest. The aim of this study was to answer the question of whether HLA-A2 expression is an important prognostic factor in left-sided CRC, which may present with decreased immunogenicity compared to right-sided CRC. Material and methods The study group consisted of 58 colorectal cancer patients treated in a single institution between 2007 and 2012. Only patients with tumors located in the rectum or left colon were included. The term left colon was defined as the large intestine, from the left 1/3 of the transverse colon distally. All patients had histologically confirmed disease, were over 18 years-old, and had had an electively performed surgical procedure. Patients with simultaneous right-sided colon cancer or patients with a history of other neoplastic diseases were excluded. Preoperative radiotherapy was used in 3 of the rectal cancer patients. All patients had no history of autoimmune diseases or recent infections. 
The group was composed of 25 women and 33 men, with a mean age of 66 (SD 11), in varying stages of disease (tab. I). The surgical procedures were carried out according to oncological guidelines. Due to the changes of the TNM staging systems during the study period, all the specimens were re--staged according to the 7 th edition of the TNM. The clinical and pathological data were recorded. Patients received postoperative chemotherapy if indicated. All patients were followed up for at least 5 years, or until death, and dates of death were verified by the census registry office. All patients provided their informed, written consent. The study was approved by the Jagiellonian University Ethical Committee KBET no. 86/B/2007 and KBET no. 122.6120.128.2015. The study was registered at ClinicalTrials.gov, registration number NCT03640572. Blood samples were collected prior to any interventional procedure in sterile EDTA vacutainers. Cell preparation was started 1-2 hours after a blood draw. Expression of HLA-A2 was determined by patient blood lymphocyte staining, using PE-conjugated mouse anti-human HLA-A2 mAb or PE-conjugated isotype-matched mouse immunoglobulins (both BD Pharmingen) as a negative control, followed by lysis of erythrocytes (FACS Lysing Solution, BD Biosciences) and flow cytometry analysis (FACS Canto). Statistics The statistical analysis was conducted using the software Statistica 13 (StatSoft Inc.). The Kaplan-Meier method was used for the calculation of survival probabilities and the Wilcoxon-epitopes require a specific HLA, or other antigens, for presentation. HLA class I antigens are integral membrane glycoproteins that are inherited and expressed at varying levels on the surface of virtually all somatic cells [1]. It is known that HLA-A2 (MHC class I) is common among Caucasians (approximately 45% of the Caucasian population) [2] and that some antigens characteristic for colorectal cancer (CRC) are presented in its context. These include CSNK1A1, GAS7, HAUS3, SRPX, WDR46, ERBB2, AKAP13, and MUC1 antigens. A large proportion of cancer vaccine research has been limited to the HLA-A2-positive population, and HLA-A2-negative patients have been used as a control group in targeted immunotherapy studies [3]. However, some findings show that HLA status itself can influence the clinical course of the disease, as the natural immune response may differ between patients with or without this antigen expression. Data on the possible impact of HLA-A2 antigen expression on the survival of patients with colorectal cancer are limited and sometimes contradictory. Most data covers a mixed population of right and left-sided colon and rectal cancer patients. Since we know that MHC class I expression is often absent in micro-satellite instability (MSI) tumors [11,12], which are clinically characterized as having a favorable prognosis and are more frequently observed in right-sided colon tumors [13,14], the results obtained from a mixed population of all colon and rectal cancer patients might not be representative [15]. According to UICC cancer statistics published in 2018, colorectal cancer is the 3 rd cancer in terms of incidence and second in cancer-related mortality in the world. Despite progress in prevention and therapy, there is still space for new therapies and translational research in colorectal cancer. In recent years, several trials on immunotherapy using checkpoint inhibitors in colorectal cancer have produced promising results [16]. 
However, more than 50-80% of cancer patients fail to respond to checkpoint inhibitor therapy [17]. Therefore, the investigation of predictive and prognostic factors in various subgroups of colorectal cancer patients is justified. Colorectal cancer is not a homogenous disease in terms of primary tumor location, and there is evidence that right-sided and left-sided cancers may have different biologies and prognoses [18][19][20][21]. CRC immunogenicity, understood as the ability to induce an immune response, also differs between the right and left sides of the colon. There are two different types of colorectal cancer -those which are highly immunogenic (with multiple DNA mutations, chromosomal stability (CS), microsatellite instability (MSI) phenotype and the presence of multiple tumor-infiltrating lymphocytes (TILs), prevalent in the right colon) and those with low immunogenicity (with a limited number of DNA mutations, a microsatellite stable (MSS) and Results Fifty-eight colorectal cancer patients were evaluated; 33 were found to be HLA-A2-positive (56% of whole study group). The distribution of clinicopathological characteristics of patients in HLA-A2+ and HLA-A2-groups are listed in table II. The groups of patients with the presence or absence of HLA-A2 had similar structures in terms of age, gender, grade of differentiation (G), and TNM, and differed significantly only in terms of their T3 and T4 characteristics and location (left colon/rectum). The analysis of clinicopathological features showed that in the group of patients with T3 tumors, patients with HLA-A2 expression predominated, while in the group of patients with T4 tumors most patients were HLA-A2-negative. Among the patients with rectal tumors, the HLA-A2--negative phenotype dominated, while among patients with left-sided colon tumors, significantly more patients had HLA-A2 expression. All the patients were followed-up for at least five years. The 5-year survival rate for HLA-A2-positive patients in all stages was 72.7%, while for HLA-A2-negative patients it was 40% ( fig. 1). The Kaplan Meier plot ( fig. 1) showed that in the entire study group HLA-A2-positive patients lived longer then HLA-A2-negative ones. This difference was statistically significant (p = 0.027) according to the Wilcoxon-Gehan test. Therefore, in our patient cohort, we found that the expression of HLA-A2 was associated with prolonged survival. In a group consisting of stage II and III CRC analyzed together, the 5-year survival rates were 75% and 36% for HLA-A2-positive and HLA-A2-negative patients respectively. The difference between the groups however was not statistically significant ( fig. 2). An analysis of the prognostic factors in locally advanced cancer was performed. Variables with confirmed prognostic value, such as tumor stages I-III, radicality of resection, as well as HLA-A2 status were included in the Cox proportional hazard model. HLA-A2 status was an independent prognostic factor in this group of patients (p = 0.012; HR = 2.65; with 95% CI for HR: 1. 23-5.72). This finding showed that the lack of expression of HLA-A2 was an independent negative prognostic factor. Discussion The results from this study show that the presence of HLA-A2 in patients with left-sided colon and rectal cancers is an important prognostic for their outcomes. HLA-A2 is among the 8 most frequent HLA alleles (HLA A*01, A*02, A*03, A*24, B*07, B*08, B*44, C*07) in the Caucasian population [23]. In our study, the expression of HLA-A2 was observed in 56% patients with CRC. 
This percentage is slightly higher than the prevalence of HLA-A2 antigens in the Caucasian population in Central and Western Europe, which is around 45% (42.6-51.3%), and also higher than in the results of the study by Kiewe et al. who found a frequency of exactly 50% of HLA-A2 expression. [2] However, these findings are not statistically significant. The groups of patients with the presence or absence of HLA--A2 expression had similar structures in terms of age, gender, grade of differentiation (G) and TNM, and differed significantly only in terms of T3 and T4 characteristics and location (left colon, rectum). These patients were not preselected, and the researchers were not aware of their HLA status during patient recruitment. The location of tumors does not influence survival, because, as previously mentioned, rectal and left-sided colon cancer patients have very similar prognosis. Moreover, only 3 rectal cancer patients received pre-operative radiotherapy (two HLA-A2-negative and one HLA-A2-positive). The depth of the invasion itself, the T feature, is not a potent prognostic factor until the tumor cannot be radically resected. In order to overcome the imbalance in the above mentioned features, a multivariate proportional hazard model was constructed. In our analysis, there was a significantly higher survival rate for HLA-A2-positive patients, with 72.7% 5-year survival for the HLA-A2-positive patients and 40% for HLA-A2-negative patients. The difference between these groups was found to be statistically significant (p = 0.027). Our results conflict with the study by Kiewe et al. who found no statistically significant difference in 5-year survival and overall survival between HLA-A2-positive and HLA-A2--negative groups of patients with colorectal cancer [2]. One possible explanation for these differences might be differences in the patient cohort. In the group of patients with locally advanced cancer after radical surgery, the lack of expression of HLA-A2 was an independent prognostic factor which negatively affected survival (p = 0.012; HR = 2.65; with 95% CI for HR: 1. 23-5.72). This phenomenon is not observed in all cancers. In breast cancer, HLA positive status is a favorable prognostic factor, however, in ovarian cancer it is a negative prognostic factor [7]. The 2015 Consensus of Molecular Subtypes (CMS) is considered the most robust classification system currently available for CRC -with clear biological interpretability -and a basis for future clinical stratification and subtype-based targeted interventions. This study identified 4 consensus molecular subtypes: CMS1 (MSI-immune), CMS2 (canonical), CMS3 (metabolic), CMS4 (mesenchymal) [24]. In relation to the anatomical location of CRC, CMS1 dominates in the right colon, CMS2 in the left colon, and CMS3 and CMS4 tumors do not have a specific anatomic location. The group of patients in our investigation was more homogenous than in other studies, as only left-sided colon and rectal cancer patients were included. Earlier studies analyzed a mixed population of colorectal [2,25,26], colon [27], or only rectal cancer patients [15,28]. The authors of previously published papers did not take into account the fact that colorectal cancer may differ in biology, and thus the prognosis and response to treatment may vary, depending on the location of the tumor [18][19][20][21]. HLA-A2 expression is investigated in patients in several ways. 
One of the approaches uses the identification of HLA-A2 on the surface of cancer cells, other approaches identify its expression on somatic cells. In publications, HLA status is mostly characterized in terms of the tumor environment, and its involvement in the evasion of the immune response, by analysis of its expression by the immunohistochemistry in tumor-infiltrating immune cells or in tumor cells. On the other hand, HLA status is detected in host peripheral blood cells, reflecting its role in the recognition of tumor antigens, by identifying the HLA protein using techniques such as immunoassays, flow cytometry etc., or the gene alleles, mainly by polymerase chain reaction (PCR) [10]. The interpretation and comparison of studies examining HLA class I antigen expression are generally very difficult, because the methods used for the analysis of HLA class I antigen expression vary substantially [29]. According to a recently published study, exposure to an inflammatory environment might be responsible for upregulating HLA class I gene expression in tumor cells, but the presence of HLA class I molecules at the cell surface is precluded by defects in other components of the antigen processing machinery. Besides, RNA expression analysis can detect HLA class I not only in tumors but also in immune and other stromal cells expressing this HLA. Therefore, HLA class I phenotypes should be supported by genetic data confirming mechanistic defects, while RNA expression level appears insufficient to determine HLA class I tumor status [30]. Although there are some studies assessing HLA-A on CRC tumor surface in the available literature [15,[25][26][27][28]30], HLA-A2 expression on somatic cells in patients with CRC has not been extensively studied in the past [2]. In this study we decided to determine HLA-A2 expression on somatic cells from the peripheral blood, and therefore, it is very difficult to compare our findings with those of other studies. It would be interesting to assess HLA-A2 expression on somatic and cancer cells simultaneously, but unfortunately this was not possible due to organizational and financial constraints. Tumor cells do not present distinctly different HLA class I antigens than the host cell, however, it should be noted that one mechanism of cancer escape from the control of the immune system is the loss, or reduction, of the expression of HLA class I antigens on cancer cell surfaces. Therefore, tumor cells may differ from somatic cells in this respect. The loss of HLA class I is rather rare (16-20%) among MMR-p (MSS) tumors which dominate in the left half of the colon, which was confirmed on a larger cohort by Ijsselsteijn et al. [30]. The conclusion from their study, that HLA class I is an important determinant of metastatic homing in CRCs, is in-line with our observation of better prognosis for patients with left-sided colorectal cancer and HLA-A2 expression. Recently, Löffler et al. [23] provided comprehensive data on the HLA presented antigenic repertoire of CRC cells. They identified a set of 758 HLA class I and 310 HLA class II presented peptides (ligandome), exclusively expressed on colorectal carcinoma tissue, and proposed 12 naturally presented HLA class I ligands out of 38 preselected peptides (five peptides each were selected for the seven most frequent HLA allotypes in the Caucasian population and three additional peptides were selected for HLA-C*07), as putative candidates for future anti-CRC vaccination. 
Although only one of them was presented by HLA-A2, one cannot exclude the possibility that other, non-selected peptides, including those related to HLA-A2, may be effective in the induction of an adaptive anti-tumor immune response, thus increasing the survival of HLA-A2-positive patients. More translational studies should still be performed in order to understand the interactions between CRC and the immune system. Conclusions In summary, the results of our study show that: 1. Patients with left-sided colorectal cancer and HLA-A2 expression lived statistically longer than HLA-A2-negative patients. 2. Lack of HLA-A2 expression was a negative prognostic factor in the group of patients with radical resections without metastases. 3. HLA-A2 status may affect the clinical course of patients with left-sided colon and rectal cancer, even though these tumors are considered less immunogenic than right-sided cancers. The question remains whether we should consider the status of HLA-A2 expression when qualifying patients for adjuvant treatment and choosing between more and less aggressive therapies to improve their treatment results. Funding. This research was funded by the Ministry of Science and Higher Education of Poland, Grants 2P05C 001 29 and K/PBW/000421.
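The survival comparison and the multivariate model described above (Kaplan-Meier estimates, a two-sample test, and a Cox proportional-hazards fit with HLA-A2 status among the covariates) were carried out in Statistica; an equivalent open-source workflow can be sketched with the Python lifelines package. The data frame below is a made-up placeholder with hypothetical column names, not the study data, and lifelines' log-rank test stands in for the Wilcoxon-Gehan test used in the paper.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical per-patient table: follow-up in months, death indicator,
# HLA-A2 status (1 = positive), stage and radical resection as covariates.
df = pd.DataFrame({
    "months":  [60, 42, 57, 18, 60, 33, 49, 25],
    "death":   [0, 1, 1, 1, 0, 0, 0, 1],
    "hla_a2":  [1, 0, 1, 0, 1, 0, 1, 0],
    "stage":   [2, 3, 2, 3, 1, 3, 2, 3],
    "radical": [1, 1, 1, 0, 1, 1, 1, 0],
})

# Kaplan-Meier curves by HLA-A2 status.
for status, grp in df.groupby("hla_a2"):
    label = "HLA-A2 positive" if status else "HLA-A2 negative"
    KaplanMeierFitter().fit(grp["months"], grp["death"], label=label).plot_survival_function()

# Two-group comparison (log-rank here; the paper used the Wilcoxon-Gehan test).
pos, neg = df[df.hla_a2 == 1], df[df.hla_a2 == 0]
res = logrank_test(pos["months"], neg["months"], pos["death"], neg["death"])
print(res.p_value)

# Multivariate Cox proportional-hazards model (hazard ratios for the covariates).
cph = CoxPHFitter().fit(df, duration_col="months", event_col="death")
cph.print_summary()
```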
2020-07-09T09:02:33.912Z
2020-06-16T00:00:00.000
{ "year": 2020, "sha1": "0cc6e9489157bed8f3217daac4c33a5da2b3d882", "oa_license": null, "oa_url": "https://journals.viamedica.pl/nowotwory_journal_of_oncology/article/download/NJO.2020.0020/51134", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "0fdb65f3e99ac19e46ce79bcb72d48d4ec6b6158", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
232146652
pes2o/s2orc
v3-fos-license
HexDom: Polycube-Based Hexahedral-Dominant Mesh Generation In this paper, we extend our earlier polycube-based all-hexahedral mesh generation method to hexahedral-dominant mesh generation, and present the HexDom software package. Given the boundary representation of a solid model, HexDom creates a hex-dominant mesh by using a semi-automated polycube-based mesh generation method. The resulting hexahedral dominant mesh includes hexahedra, tetrahedra, and triangular prisms. By adding non-hexahedral elements, we are able to generate better quality hexahedral elements than in all-hexahedral meshes. We explain the underlying algorithms in four modules including segmentation, polycube construction, hex-dominant mesh generation and quality improvement, and use a rockerarm model to explain how to run the software. We also apply our software to a number of other complex models to test their robustness. The software package and all tested models are availabe in github (https://github.com/CMU-CBML/HexDom). hedral meshes can be created automatically, it has been widely used in industry. However, to achieve the same precision in FEA, a tetrahedral mesh requires more elements than an all-hex mesh does. As a result, many techniques have been developed to generate all-hex meshes [48,47,61,32,34] or converting imaging data to all-hex meshes [54,53,55]. Also, hex meshes can serve as multiple-material domains [59,62] or input control meshes for IGA [16,46,45,17,29,64,44,43]. Some applications of hex mesh generation in new engineering applications can also be found in [52,50,18,60,58]. Several literatures develop methods for unstructured hex mesh generation, such as grid-based or octree-based [35,36], medial surface [31,30], plastering [2,39], whisker weaving [7], and vector field-based methods [26]. These methods have been used to create hex meshes for certain geometries, but are not robust and reliable for arbitrary geometries. On the other hand, although an all-hex mesh provides a more accurate solution, a high-quality all-hex mesh is more difficult to create automatically. Compared with all-hex mesh generation, a hex-dominant mesh generation, which combines advantages of both tetrahedral and hex elements, is more automatic and robust for complex solid models. In the literature, several strategies have been proposed to generate hex-dominant meshes. An indirect method was suggested by [49], the domain is first meshed into tetrahedral elements and then merged into a hex-dominant mesh with the packing technology. Several other hex-dominant meshing techniques were also presented in [28,25,24]. Such indirect methods create hex-dominant meshes with too many singularities, and the tetrahedral mesh directly influences the quality of the hex-dominant mesh. Similar to unstructured all-hex mesh generation, the direct method is also more preferable for hex-dominant meshes [41,27]. The polycube-based method [40,9] is an attractive direct approach to obtain hexdominant meshes by using degenerated cubes. The polycube-based method was mainly used for all-hex meshing. A smooth harmonic field [42] was used to generate polycubes for arbitrary genus geometry. Boolean operations [21] were introduced to deal with arbitrary genus geometry. In [22], a polycube structure was generated based on the skeletal branches of a geometric model. Using these methods, the structure of the polycube and the mapping distortion greatly influence the quality of the hex mesh. 
The calculation of the polycube structure with a low mapping distortion remains an open problem for complex geometry. It is important to improve the quality of the mesh for analysis by using methods such as pillowing, smoothing, and optimization [34,57,63,32]. Pillowing is an insert-sheet technique that eliminates situations where two adjacent hex elements share more than one face. Smoothing and optimizing are used to further improve the quality of the mesh by moving the vertices. In our software, we implement all of the above methods to improve the quality of hex elements. In this paper, we extend our earlier semi-automatic polycube-based all-hex generation to hex-dominant meshing. The software package includes: 1) polycube based geometric decomposition of a surface triangle mesh; 2) generation of the polycube consisting of non-degenerated and degenerated cubes; 3) creation of a parametric domain for different types of degenerated unit cubes including prisms and tetrahedra; and 4) creation of a hex-dominant mesh. We first go through the entire pipeline and explain the algorithm behind each module of the pipeline. Then, we use a specific example to follow all the steps and run the software. In particular, when user intervention is required, the details of manual work are explained. The paper is organized as follows. In Section 2 we provide an overview of the pipeline. In Section 3 we present the HexDom software package, with a semi-automatic polycube-based hexdominant mesh generateion of a CAD file. Finally, in Section 4 we show various complex models with our software package. Fig. 1: The HexDom software package. For each process, the black texts describe the object and the red texts show the operation needed to go to the next process. Manual work is involved in further segmentation and introducing interior vertices. Regions A, B and C (green circles) in (d, f) contain a hex, prism and tetrahedral shaped structure, respectively. Pipeline design Our pipeline uses polycube-based method to create a hex-dominant mesh from an input CAD model. As shown in Fig. 1, we first generate a triangle mesh from the CAD model by using the free software LS-PrePost. Then we use centroidal Voronoi tessellation (CVT) segmentation [13,14,12] to create a polycube structure [40]. The polycube structure consists of multiple non-degenerated cubes and degenerated cubes. The non-degenerated cubes will yield hex elements via parametric mapping [6] and octree subdivision [63]. The degenerated cubes will yield degenerated elements such as prisms and tetrahedra in the final mesh . Here, we implement the subdivision algorithm separately for prism-shape regions and tetrahedral-shape regions. The quality of the hex dominant mesh is evaluated to ensure that the resulting mesh can be used in FEA. In case that a poor quality hex element is generated in hex-dominant meshes, the program has various quality improvement functions, including pillowing [57], smoothing, and optimization [32]. Each quality improvement function can be performed independently and one can use these functions to improve the mesh quality. Currently, our software only has a command-line interface (CLI). Users need to provide the required options on the command line to run the software. In Section 3, we will explain in detail the algorithms implemented in the software as well as how to run the software. 
HexDom: Polycube-based hex-dominant mesh generation Surface segmentation, polycube construction, parametric mapping, and subdivision are used together in the HexDom software package to generate a hex-dominant mesh from the boundary representation of the input CAD model. Given a triangle mesh generated from the CAD model, we first use surface segmentation to divide the mesh into several surface patches that meet the restrictions of the polycube structure, which will be discussed in Section 3.1. The corner vertices, edges, and faces of each surface patch are then extracted from the surface segmentation result to construct a polycube structure. Each component of the polycube structure is topologically equivalent to a cube or a degenerated cube. Finally, we generate the hex-dominant mesh through parametric mapping and subdivision. Quality improvement techniques can be used to further improve the mesh quality. In this section, we will introduce the main algorithm for each module of the HexDom software package, namely surface segmentation, polycube construction, parametric mapping and subdivision, and quality improvement. We will use a rockerarm model (see Fig. 1) to explain how to run CLI for each module. We will also discuss the user intervention involved in the semi-automatic polycube-based hex-dominant mesh generation. Surface segmentation The surface segmentation in the pipeline framework is implemented based on CVT segmentation [13,14,12]. CVT segmentation is used to classify vertices by minimizing an energy function. Each group is called a Voronoi region and has a corresponding center called a generator. The Voronoi region and its corresponding generator are updated iteratively in the minimization process. In [13], each element of the surface triangle mesh is assigned to one of the six Voronoi regions based on the normal vector of the surface. The initial generators of the Voronoi regions are the three principal normal vectors and their opposite normals vectors (± , ± , ± ). Two energy functions and their corresponding distance functions are used together in [13]. The classical energy function and its corresponding distance function provide initial Voronoi regions and generators. Then, the harmonic boundary-enhanced (HBE) energy function and its corresponding distance function are applied to eliminate non-monotone boundaries. The detailed definitions of the energy functions and their corresponding distance functions are described in [13]. The surface segmentation process was also summarized in the Surface Segmentation Algorithm in [51]. Once we get the initial segmentation result, we need to further segment each Voronoi region into several patches to satisfy the topological constraints for polycube construction (see Fig. 1(d)). We use two types of patches. The first type of segmented surface patch corresponds to one boundary surface of the non-degenerated cubes and quadrilateral surface of the prism-shape degenerated cubes. The second type of segmented surface patch corresponds to one triangular boundary surface of the degenerated cubes. The choice of types of patches depends on the following three criteria: 1) geometric features such as sharp corners with small angles and prism/tetrahedral-like features; 2) critical regions based on finite element simulation, such as regions with the maximum stress/strain and regions with a high load; and 3) requirements from user applications which enhance the capability of user interaction. 
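As a rough illustration of the normal-based CVT assignment described above (not the HexDom implementation, which additionally minimizes the harmonic boundary-enhanced energy to clean up region boundaries), the sketch below clusters triangle normals against the six axis-aligned generators (±x, ±y, ±z) and then updates each generator as the area-weighted mean normal of its region, Lloyd-style. NumPy and all variable names are our own assumptions.

```python
import numpy as np

def cvt_normal_segmentation(normals, areas, iters=20):
    """Cluster unit triangle normals into six regions seeded at +/-X, +/-Y, +/-Z."""
    generators = np.vstack([np.eye(3), -np.eye(3)])        # (6, 3)
    labels = np.zeros(len(normals), dtype=int)
    for _ in range(iters):
        # Assignment step: smallest angular distance == largest dot product.
        labels = np.argmax(normals @ generators.T, axis=1)
        # Update step: area-weighted mean normal of each region, renormalized.
        for k in range(6):
            mask = labels == k
            if mask.any():
                g = (areas[mask, None] * normals[mask]).sum(axis=0)
                generators[k] = g / np.linalg.norm(g)
    return labels, generators

# Toy input: random unit normals and areas standing in for a triangle soup.
rng = np.random.default_rng(0)
normals = rng.normal(size=(1000, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
areas = rng.uniform(0.5, 1.5, size=1000)
labels, gens = cvt_normal_segmentation(normals, areas)
print(np.bincount(labels, minlength=6))  # triangles per Voronoi region
```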
For the first type of segmented surface patch, the following three conditions should be satisfied during the further segmentation: 1) two patches with opposite orientations (e.g., +X and -X) cannot share a boundary; 2) each corner vertex must be shared by more than two patches; and 3) each patch must have four boundaries. For the second type of segmented surface patch, we modified the third conditions to that each patch must have three boundaries. Note that we define the corner vertex as a vertex locating at the corner of the cubic region or degenerated cubic region in the model. The further segmentation is done manually by using the patch ID reassigning function in LS-PrePost. The detailed operation was shown in [51]. Polycube construction In this section, we discuss the detailed algorithm of polycube construction based on the segmented triangle mesh. Several automatic polycube construction algorithms have been proposed in the literature [11,20,13], but it is challenging to apply these methods to complex CAD models. The polycube structure does not contain degenerated cubes either. Differently, the polycube in this paper consists of cubes and degenerated cubes and is topologically equivalent to the original geometry. To achieve versatility for real industrial applications, we develop a semi-automatic polycube construction software based on the segmented surface. However, for some complex geometries, the process may be slower due to potentially heavy user intervention. The most important information we need for a polycube is its corners and the connectivity relationship among them. For the surface of polycube, we can automatically get the corner points and build their connectivity based on the segmentation result by using the algorithm similar to the Polycube Boundary Surface Construction Algorithm in [51]. The difference is that we need to adjust the implementation based on different patch types: finding its three corners for a triangular patch and finding its four corners for a quadrilateral patch. It is difficult to obtain inner vertices and their connectivity because we only have a surface input with no information about the interior volume. In fact, this is where users need to intervene. We use LS-PrePost to manually build the interior connectivity. You can find the detailed operation in Appendix A3 in [51]. As the auxiliary information for this user intervention, the Polycube Boundary Surface Construction Algorithm will output corners and connectivity of the segmented surface patches into .k file. Finally, the generated polycube structure is the combination of non-degenerated cubes and degenerated cubes splitting the volumetric domain of the geometry. Parametric mapping and subdivision After the polycube is constructed, we need to build a bijective mapping between the input triangle mesh and the boundary surface of the polycube structure. In our software, we implement the same idea as in [22]: using a non-degenerated unit cube or a degenerated unit cube as the parametric domain for the polycube structure. As a result, we can construct a generalized polycube structure that can better align with the given geometry and generate a high quality hex-dominant mesh. There are three types of elements in the hex-dominant mesh: hex, prism, and tetrahedral. The hex elements form non-degenerated cubic regions. Prism and tetrahedral elements form degenerated cubic regions. 
We will use octree subdivision to generate hex elements for non-degenerated cubic regions, while using subdivision to generate prism and tetrahedral elements for degenerated cubic regions. Through the pseudocode in the Parametric Mapping Algorithm in [51], we describe how to combine the segmented surface mesh, the polycube structure, and the unit cube to create an all-hex mesh. We use this algorithm to create the hex elements in nondegenerated cubic regions. Each non-degenerated cube in the polycube structure represents one volumetric region of the geometry and has a non-degenerated unit cube as its parametric domain. Region A in Fig. 1(d, f) shows an example of a nondegenerated cube and its corresponding volume domain of the geometry marked in the green circle. For degenerated cubes, there are two types of interface, a triangular face and a quadrilateral face. Region B in Fig. 1(d, f) shows a prism case, it contains two triangular faces and three quadrilateral faces. For the tetrahedral case shown in Region C in Fig. 1(d, f), it contains four triangular faces. Through the pseudocode in the Prism Parametric Mapping Algorithm, we describe how the segmented surface mesh, the polycube structure and the prismshape degenerated unit cube are combined to generate prism elements. Let {¯} =1 be the segmented surface patches coming from the segmentation result (see Fig. 2(a)). Each segmented surface patch corresponds to one boundary surface of the polycubē (1 ≤ ≤ ) (see Fig. 2(b)), where is the number of the boundary surfaces. There are also interior surfaces, denoted by¯(1 ≤ ≤ ), where is the number of the interior surfaces. The union of {¯} =1 and {¯} =1 is the set of surfaces of the polycube structure. For the parametric domain, let {¯} 5 =1 denote the five surface patches of the prism-shape degenerated unit cube (see Fig. 2(b)). Each prism-shape degenerated cube in the polycube structure represents one volumetric region of the geometry and has a prism-shape degenerated unit cube as its parametric domain. Fig. 2(b) shows an example of prism-shape degenerated cube and its corresponding volume domain of the geometry marked in the black boxes. Therefore, for each prism-shape degenerated cube in the polycube structure, we can find its boundary surface¯and map the segmented surface patch¯to its corresponding parametric surface¯of the prism-shape degenerated unit cube. To map¯to¯, we first map its corresponding boundary edges of¯to the boundary edges of¯. Then we get the parameterization of¯by using the cotangent Laplace operator to compute the harmonic function [63,5]. Compared to non-degenerated cubic regions algorithm, we introduce three parametric variables in mapping since one face is not axis-aligned. Note that for an interior surface¯of the polycube structure, we skip the parametric mapping step. The prism elements can then be obtained from the above surface parameterization combined with subdivision. We generate the prism elements for each prism-shape region in the following process. To obtain vertex coordinates on the segmented patch¯, we first subdivide the prism-shape degenerated unit cube (see Fig. 3(a)) recursively in order to get their parametric coordinates. The vertex coordinates of triangular faces of the prism-shape degenerated cube are obtained by linear subdivison, while the quadrilateral faces are also obtained by linear subdivision. 
The physical coordinates can be obtained by using parametric mapping, which has a one-to-one correspondence between the parametric domain¯and the physical domain¯. To obtain vertices on the interior surface of the prism region, we skip the parametric mapping step and directly use linear interpolation to calculate the physical coordinates. Finally, vertices inside the cubic region are calculated by linear interpolation. The entire prism elements are built by going through all the prismshape regions. for each surface in the prism-shape degenerated cube do 5: if it is a boundary surface¯then 6: if the surface is not axis-aligned then 7: Get the surface parameterization :¯→¯⊂ R 3 8: else 9: Get the surface parameterization :¯→¯⊂ R 2 10: end if 11: end if 12: end for 13: end for Parametric mapping and subdivision step: 14: for each prism-shape degenerated cube in the polycube structure do 15: Subdivide the prism-shape degenerated unit cube recursively to get parametric coordinates 16: for each surface in the prism-shape degenerated cube do 17: if it is a boundary surface¯then 18: Obtain physical coordinates using −1 ( ) 19: else if it is an interior surface¯then 20: Obtain physical coordinates using linear interpolation 21: end if 22: end for 23: Obtain interior vertices in the prism-shape degenerated cubic region using linear interpolation 24: end for We perform the similar procedure for the tetrahedra-shape degenerated cubes in the polycube structure. Through the pseudocode in the Tetrahedral Parametric Mapping Algorithm, we describe how the segmented surface mesh, the polycube structure and the tetrahedral-shape degenerated unit cube are combined to generate tetrahedral elements. Fig. 2(b) shows an example of tetrahedral-shape degenerated cube and its corresponding volume domain of the geometry marked in the dashed black boxes. The difference is that we use { } 4 =1 to denote those four surface patches of the tetrahedra-shape degenerated unit cube for the parametric domain. We also introduce three parametric variables in mapping when one of the surfaces is not axis aligned. Then, the tetrahedral elements can be obtained from this surface parameterization combined with linear subdivision. We generate tetrahedral elements for each tetrahedral-shape region in the following process. To obtain vertex coordinates on the segmented patch , we first subdivide the tetrahedral-shape degenerated unit cube (see Fig. 3(bottow row)) recursively in order to get their parametric coordinates by applying linear subdivison. The physical coordinates can be obtained by using the parametric mapping between the parametric domain and the physical domain . 1 and 1 are combined for linear interpolation to obtain vertices on the interior surface of the tetrahedra-shape degenerated cubic region. Finally, vertices inside the tetrahedra-shape degenerated cube region are calculated by linear interpolation. The entire tetrahedral elements are built by going through all the tetrahedral regions. 
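Both the prism and the tetrahedral procedures above fall back on plain linear interpolation for interior surfaces and interior vertices. The following sketch (our own illustration; the corner ordering, function names, and the degenerate-corner convention are assumptions) shows that step for a cubic region by blending its eight corner positions trilinearly; a prism-shaped degenerated cube reuses the same routine by repeating the collapsed corners.

```python
import numpy as np

def trilinear_point(corners, u, v, w):
    """Physical position of parametric point (u, v, w) in [0,1]^3.

    `corners` is an (8, 3) array ordered as
    (0,0,0), (1,0,0), (1,1,0), (0,1,0), (0,0,1), (1,0,1), (1,1,1), (0,1,1).
    """
    c = np.asarray(corners, dtype=float)
    weights = np.array([
        (1-u)*(1-v)*(1-w), u*(1-v)*(1-w), u*v*(1-w), (1-u)*v*(1-w),
        (1-u)*(1-v)*w,     u*(1-v)*w,     u*v*w,     (1-u)*v*w,
    ])
    return weights @ c

def interior_vertices(corners, level):
    """Interior vertices of a level-`level` subdivision (none when level is 0)."""
    n = 2 ** level
    pts = [trilinear_point(corners, i / n, j / n, k / n)
           for i in range(1, n) for j in range(1, n) for k in range(1, n)]
    return np.array(pts)

# A prism-shaped degenerated cube: the top face collapses to an edge
# by repeating two corner positions.
prism = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                  [0, 0, 1], [1, 0, 1], [1, 0, 1], [0, 0, 1]], dtype=float)
print(interior_vertices(prism, 2).shape)   # (27, 3) interior points at level 2
```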
Tetrahedral Parametric Mapping Algorithm Input: Segmented triangle mesh { } =1 , polycube structure Output: Tetrahedral elements in tetrahedral-shape degenerated cubic regions 1: Find boundary surfaces { } =1 and interior surfaces { } =1 in the polycube structure Surface parameterization step: 2: for each tetrahedral-shape degenerated cube in the polycube structure do 3: Create a tetrahedral-shape degenerated cube region { } 4 =1 as the parametric domain 4: for each surface in the tetrahedral-shape degenerated cube do 5: if it is a boundary surface then 6: if the surface is not axis-aligned then 7: Get the surface parameterization : → ⊂ R 3 8: else 9: Get the surface parameterization : → ⊂ R 2 10: end if 11: end if 12: end for 13: end for Parametric mapping and subdivision step: 14: for each tetrahedral-shape degenerated cube in the polycube structure do 15: Subdivide the tetrahedral-shape degenerated unit cube recursively to get parametric coordinates 16: for each surface in the tetrahedral-shape degenerated cube do 17: if it is a boundary surface then 18: Obtain physical coordinates using −1 ( ) 19: else if it is an interior surface then 20: Obtain physical coordinates using linear interpolation 21: end if 22: end for 23: Obtain interior vertices in the tetrahedral-shape degenerated cubic region using linear interpolation 24: end for Based on the Prism Parametric Mapping Algorithm, we implemented and organized the code into a CLI program (PrismGen.exe) that can generate prism elements by combining parametric mapping with subdivision. Here, we run the following command to generate the prism elements for the rockerarm model: 1 PrismGen .exe -i rockerarm_indexpatch_read .k -p 2 rockerarm_polycube_structure .k -o rockerarm_prism .vtk -s 2 There are four options used in the command: • -i: Surface triangle mesh of the input geometry with segmentation information (rockerarm_indexpatch_read.k); • -o: Prism mesh (rockerarm_prism.vtk); • -p: Polycube structure (rockerarm_polycube_structure.k); and • -s: Subdivision level. We use -i to input the segmentation file generated in Section 3.1 and use -p to input the polycube structure created in Section 3.2. Option -s is used to set the level of recursive subdivision. There is no subdivision if we set -s to be 0. In the rockerarm model, we set -s to be 2 to create a level-2 prism elements in the final mesh. The output prism elements are stored in the vtk format (see Fig. 4(a)) and they can be visualized in Paraview [1]. Based on Tetrahedral Parametric Mapping Algorithm, we implemented and organized the code into a CLI program (TetGen.exe) that can generate tetrahedral elements by combining parametric mapping with linear subdivision. Here, we run the following command to generate tetrahedral elements for the rockerarm model: We use -i to input the segmentation file generated in Section 3.1 and use -p to input the polycube structure created in Section 3.2. Option -s is used to set the level of recursive subdivision. There is no subdivision if we set -s to be 0. In the rockerarm model, we set -s to be 2 to create a level-2 tetrahedral mesh. The output tetrahedral elements are stored in the vtk format (see Fig. 4(b)) and they can be visualized in Paraview. Quality improvement We integrate three quality improvement techniques in the software package, namely pillowing, smoothing and optimization. Users can improve mesh quality through the command line options. We first use pillowing to insert one layer of elements around the boundary [63] of the hex elements. 
By using the pillowing technique, we ensure that each hex element has at most one face on the boundary, which can help improve the mesh quality around the boundary. After pillowing, smoothing and optimization [63] are used to further improve the quality of hex elements. For smoothing, different relocation methods are applied to three types of vertices: vertices on sharp edges of the boundary, vertices on the boundary surface, and interior vertices. For each sharp-edge vertex, we first detect its two neighboring vertices on the curve, and then calculate their middle point. For each vertex on the boundary surface, we calculate the area center of its neighboring boundary quadrilaterals (quads). For each interior vertex, we calculate the weighted volume center of its neighboring hex elements as the new position. We relocate the vertex iteratively. Each time the vertex moves only a small step towards the new position and this movement is done only if the new location results in an improved local Jacobian. If there are still poor quality hex elements after smoothing, we run the optimization whose objective function is the Jacobian. Each vertex is then moved toward an optimal position that maximizes the worst Jacobian. We presented the Quality Improvement Algorithm in [51] for quality improvement. HexDom Software and Applications The algorithms discussed in Sections 3 were implemented in C++. The Eigen library [10] is used for matrix and vector operations. We used a compiler-independent building system (CMake) and a version-control system (Git) to support software development. We have compiled the source code into the following software package: • HexDom software package: -Segmentation module (Segmentation.exe); -Polycube construction module (Polycube.exe); -Hex-dominant mesh generation module (HexGen.exe, PrismGen.exe, Tet-Gen); and -Quality improvement module (Quality.exe). The software is open-source and can be found in the following Github link (https://github.com/CMU-CBML/HexDom). We have applied the software package to several models and generated hexdominant meshes with good quality. For each model, we show the segmentation result, the corresponding polycube structure, and the hex-dominant mesh. These models include: rockerarm (Fig. 1); two types of mount, hepta and a base with four holes (Fig. 5); fertility, ant, bust, igea, and bunny (Fig. 6). Table 1 shows the statistics of all tested models. We use the scaled Jacobian to evaluate the quality of hex elements. The aspect ratio is used as the mesh quality metric for prism and tet elements which is the ratio between the longest and shortest edges of an element. The aspect ratio is computed with the LS-PrePost, which is a pre and post-processor for LS-DYNA [3]. From Table 1, we can observe that the generated hex-dominant meshes have good quality (minimal Jacobian of hex elements > 0.1). Figs. 5-6(a) show the segmentation results of the testing models. Then, we generate polycubes (Figs. 5-6(b)) based on the surface segmentation. Finally, we generate hex-dominant meshes (Figs. 5-6(c)). Conclusion and future work In this paper, we present a new HexDom software package to generate hex-dominant meshes. The main goal of HexDom is to extend the polycube-based method to hex-dominant mesh generation. The compiled software package makes our pipeline accessible to industrial and academic communities for real-world engineering applications. 
It consists of six executable files, namely segmentation module (Segmentation.exe), polycube construction module (Polycube.exe), hex-dominant mesh generation module (HexGen.exe, PrismGen.exe, TetGen.exe) and quality improvement module (Quality.exe). These executable files can be easily run in the Command Prompt platform. The rockerarm model was used to explain how to run these programs in detail. We also tested our software package using several other models. Our software has limitations which we will address in our future work. First, the hex-dominant mesh generation module is semi-automatic and needs user intervention to create polycube structure. Second, the degenerated cubic regions and non-degenerated cubic regions need to be handled separately. We will improve the underneath algorithm and make polycube construction more automatic. In addition, we will also develop spline basis functions for tetrahedral and prism elements to support isogeometric analysis for hybrid meshes.
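Table 1 evaluates the hexahedral elements with the scaled Jacobian; that check can be written down directly from its definition: at each hex corner form the three edge vectors leaving the corner, normalize them, take their determinant, and report the worst corner value for the element. The sketch below assumes the VTK hexahedron corner ordering, which is our choice rather than something prescribed by HexDom.

```python
import numpy as np

# Edge neighbours of each corner for a VTK-ordered hexahedron
# (bottom face 0-1-2-3, top face 4-5-6-7).
HEX_CORNER_EDGES = [
    (1, 3, 4), (2, 0, 5), (3, 1, 6), (0, 2, 7),
    (7, 5, 0), (4, 6, 1), (5, 7, 2), (6, 4, 3),
]

def scaled_jacobian(hex_pts):
    """Minimum scaled Jacobian over the eight corners of one hex element."""
    p = np.asarray(hex_pts, dtype=float)
    worst = 1.0
    for c, (a, b, d) in enumerate(HEX_CORNER_EDGES):
        e = np.stack([p[a] - p[c], p[b] - p[c], p[d] - p[c]])
        worst = min(worst, np.linalg.det(e) / np.prod(np.linalg.norm(e, axis=1)))
    return worst

unit_cube = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                      [0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1]], dtype=float)
print(scaled_jacobian(unit_cube))   # 1.0 for a perfect cube
```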
2021-03-09T02:16:14.305Z
2021-03-06T00:00:00.000
{ "year": 2021, "sha1": "7046e35fd8d0feae95bb1f9968af2b96d174ba76", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "96965147a41bb78ac0802eb3335bd70596806131", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
3869754
pes2o/s2orc
v3-fos-license
The Influence of α-Tocopherol on Serum Biochemical Markers During Experimentally Induced Pleuritis in Rats Exposed to Dioxin Toxicity of dioxins is wide ranging. Amongst the organs, the liver is the most susceptible to damage by dioxins. Damage caused to liver cells results in promoting inflammatory processes. The aim of this work was to evaluate whether high doses of tocopherol will change the inflammatory response, monitored by biochemical indicators, by improving liver function in rats exposed to tetrachlorodibenzo-p-dioxin (TCDD). The study was conducted on a population of female Buffalo rats. The animals were divided into the following groups: Control Group A—representing physiological norms for the studied diagnostic indicators; Control Group B—subjects were administered a 1% ceragenin solution to induce pleuritis; Study Group 1—where rats were administered α-tocopherol acetate for 3 weeks, after which pleuritis was induced; Study Group 2—rats were administered a single dose of 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD), while 3 weeks later, pleuritis was induced; and Study Group 3—rats were administered a single dose of TCDD and next, were administered α-tocopherol acetate for 3 weeks, followed by pleuritis induction. The results clearly show that administering tocopherol in the course of inflammation causes changes to the distribution and ratio of in the serum protein fractions, including acute phase proteins. The latter proteins are indicative to the improvement in liver function and linked to protein synthesis and stimulation of the antibody-mediated immunity. Moreover, in the course of inflammation caused by exposure of rats to TCDD, tocopherol significantly affected the acute phase protein concentration. INTRODUCTION The liver plays an important role in the synthesis of proteins involved in inflammatory reactions. The protein metabolism in hepatocytes is significantly affected by dioxins that are slowly and gradually accumulating in the body, mainly in adipose tissue and the liver. This accumulation leads to disruption in the synthesis of plasma proteins, i.e. albumin, α 1 , α 2 , β 1 , β 2 and γ globulin fractions and fibrinogen. Between 25 and 75% of the daily dioxin dose is deposited in the liver [1]. Administering 2,3,7,8tetrachlorodibenzo-p-dioxin (TCDD) results in changes in the endoplasmic reticulum of the hepatocytes and increases coproporphyrin concentration [2]. The latter in turn affects the synthesis of acute phase proteins, whose concentration increases rapidly during inflammation [3]. When rats are injected with dioxins, protein synthesis taking place in liver cells is disrupted on the genomic level. Pro-inflammatory interleukins, such as Il-1, Il-6 and TNF (tumour necrosis factor), act as regulators in the synthesis of plasma proteins [4][5][6] while increase in alanine aminotransferase activity in rats treated with TCDD is a sign of liver damage [7]. Our studies show that when rats were administered 5-μg/kg body weight of TCDD for a period of 5 weeks, they displayed significant macroscopic changes in the liver and microscopic histopathological changes in the hepatocytes. The changes manifested themselves in steatosis, and laboratory tests showed increased cholesterol concentration in plasma as well as disorders in oestrogen metabolism [8,9]. Dioxins significantly affect the structures of the connective tissue, disrupting its linearity, promoting destructive processes and disturbing spatial distribution during regeneration processes [10]. 
As a result, the disrupted collagen architecture in a parenchymatous organ, such as the liver, causes destructive changes in its vascularisation. Moreover, oxidative stress was shown to significantly affect the expression of the hepatic gene responsible for the synthesis of collagen [11]. Due to dioxins' strong pro-inflammatory characteristics, such as generation of free radicals and induction of COX-2 (cyclooxygenase-2) [12], we have attempted to eliminate the pro-inflammatory multidirectional effect of dioxins by using tocopherol's acetate derivative. The results of our previous studies showed that administering large doses of tocopherol improves many biochemical blood indicators [13,14]. Free radicals are generated as a result of dioxin metabolism with CYP1A1 enzymes. This process leads to formation of the reactive oxygen species (ROS) in many organs [12,[15][16][17][18][19][20][21][22]. Tocopherol deficit in cases of inflammation causes significant oxidative stress, which leads to fibrogenesis [11,23,24]. Vitamin E is a compound with strong antioxidative properties, which protects tissues from negative effects of the ROS [15,25,26]. Moreover, studies by Kloser et al. [27] show the antagonistic effect of tocopherol on the aryl hydrocarbon receptor (AhR). It means that the compound is helping in elimination of free radicals generated as a result of TCDD metabolism by blocking the Ah receptor and subsequently disrupting of cytochrome P450, family 1, subfamily A, polypeptide (CYP1A1) synthesis, thus preventing the generation of free radicals. This sequence of the events also provides an explanation for the layout of our study where we demonstrate beneficial changes in the biochemical parameters in rats injected with TCDD and subsequently receiving large doses of tocopherol for 3 weeks [13,25,28]. Lauritzen et al. [29] showed that during bacterial pneumonia, concentrations of the tocopherol and ascorbic acid in plasma were decreased, which led in turn to increased permeability of lysosomal membranes. The results of a study by Norazlin et al. [30] showed that administering tocopherol slowed the secretion of proinflammatory cytokines, Il-1 and Il-6, into plasma in animals exposed to nicotine. Our own studies showed similar effects in animals under the effects of TCDD, which received large doses of tocopherol for 3 weeks and in which we observed significant decreases in proinflammatory interleukins, especially TNF [13]. Studies by Devaraj et al. [31] confirmed that prolonged use of tocopherol limits the release of peroxide anions and hydrogen peroxide, decreasing Il-1 concentrations. Studies by Tsukamoto et al. [32] showed that administering tocopherol inhibits the expression of mRNA, Il-6 and TNF. This work is a continuation of previous studies. Here, we decided to assess the level of acute phase plasma proteins, such as haptoglobin and transferrin, while demonstrating the preventative role of tocopherol on liver damage. It is important to note that these proteins participate in securing iron release from broken-down erythrocytes during inflammation modulated by dioxins. We have undertaken to analyse the effect of a 3-week tocopherol regimen, assuming a dose of 30-mg/kg body weight, on plasma protein and selected biochemical parameters in rats which were under the effects of TCDD and in which we induced inflammation. 
MATERIAL AND METHODS Female Buffalo rats, weighing between 130 and 150 g and aged between 9 and 11 weeks, were inbred and provided by the Department of Pathologic Anatomy of Wroclaw Medical University. The air-conditioned rooms where the animals were held were kept at a slight positive pressure, the air was exchanged 15 times per hour, the temperature was kept at 22°C, humidity at 55%, and the daylight cycle was 12/12 h. The rats were kept in polystyrene cages, each holding six specimens, with free access to water and food (Murigran). For the purposes of the study, the animals were divided into the following groups: Control Group A—representing physiological norms for the studied diagnostic indicators; Control Group B—rats with pleuritis, induced by administering 1% ceragenin solution, volume 0.15 ml, injected into the pleural cavity at the 5/6 right intercostal space; Study Group 1—rats were administered α-tocopherol acetate for 3 weeks, dose 30 mg/kg body weight/day, and subsequently pleuritis was induced following the same procedure as in Control Group B; Study Group 2—rats were administered a single dose of 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) diluted in dimethyl sulfoxide (DMSO), dose 5 μg/kg body weight, 3 weeks before pleuritis was induced with ceragenin; Study Group 3—rats were administered a single dose of TCDD, dose 5 μg/kg body weight, 3 weeks before pleuritis was induced with ceragenin; after being administered TCDD, the animals were administered α-tocopherol acetate for 3 weeks, dose 30 mg/kg body weight/day. The study material was blood drawn from the abdominal aorta of the rats, which were anaesthetized with barbiturates, i.e. 30 mg/kg body weight of sodium pentobarbital. After desanguination the animals were decapitated and the carcases were incinerated. The following chemical and pharmacological agents and diagnostic tests were used in the study: model TCDD solution, diluted in DMSO, concentration 1 μg/ml, acquired from the Laboratory for Trace Analysis, Institute of Chemistry and Inorganic Technology, Kraków Polytechnic; ceragenin (Sigma); α-tocopherol acetate (Hasco-Lek); and sodium pentobarbital (Biochemie GmbH). Study Methods The following biochemical tests were performed on the rats' serum and plasma: Electrophoresis—the sera were separated in buffered agarose gel in a 100-V electrical field for a period of 35 min, using a Sebia-Hydrasys system (Horiba ABX); gel scanning was performed with a Sebia-Hydrasys densitometer at 600 nm, and protein fraction concentrations are given in g/dl. The concentration (mg/dl) of the acute phase protein markers, i.e. complements C3 and C4, haptoglobin and transferrin, as well as IgG and IgM, was determined by immunonephelometry with a Dade Behring nephelometer, using diagnostic sets provided by the manufacturer. Statistical Analysis The results were subjected to statistical analysis. We calculated the total number (n), arithmetical means (mean), standard deviations (SD) and ranges for minimal (min) and maximal (max) values. Having verified whether the calculated parameters have a normal distribution (by comparing the variables' histogram with a Gauss curve diagram), the means of particular indicators for the A and B Control Groups, and for Control Group B and the Study Groups in which tocopherol and TCDD were administered, were compared using Student's t test with statistical significance set at the following: *p ≤ 0.05, **0.05 > p ≥ 0.01, and ***p ≤ 0.001. Calculations were performed using Statistica 7.01. 
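As a rough illustration of the group comparison just described, the sketch below compares the mean of a single indicator between two groups with Student's t test and maps the p value onto a star notation. The group values are hypothetical and do not reproduce the study data, and the star thresholds follow the common ≤0.05/≤0.01/≤0.001 convention rather than the exact cut-offs quoted above.

```python
# Sketch of the two-group comparison applied to each biochemical indicator.
# The values below are hypothetical examples, not the study data.
import numpy as np
from scipy import stats

control_b = np.array([3.1, 2.9, 3.3, 3.0, 2.8, 3.2])  # e.g. albumin, g/dl
study_1 = np.array([3.6, 3.4, 3.8, 3.5, 3.7, 3.3])

def summarize(x):
    """n, mean, SD, min and max, as reported in the paper."""
    return {"n": x.size, "mean": round(x.mean(), 2), "SD": round(x.std(ddof=1), 2),
            "min": x.min(), "max": x.max()}

def stars(p):
    """Simplified significance symbols (see the text for the exact convention)."""
    return "***" if p <= 0.001 else "**" if p <= 0.01 else "*" if p <= 0.05 else "ns"

t, p = stats.ttest_ind(control_b, study_1)  # Student's t test (equal variances)
print(summarize(control_b), summarize(study_1))
print(f"t = {t:.2f}, p = {p:.4f} {stars(p)}")
```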
Albumins In the Control Group B, blood samples withdrawn at 24 and 72 h since induction of the inflammation showed a significant decrease in absolute and relative albumin concentration. In the Study Group 1 with induced inflammation and administered tocopherol, we observed a significant increase in absolute and relative albumin concentration for the same time period, in comparison with the results from the Control Group B. In the Study Group 2 with induced inflammation and treated with dioxin, we observed a significant decrease in albumin concentration in comparison with that in the Control Group B. In a similar group with induced inflammation and treated with both dioxin and vitamin E, we observed an increase in the absolute and relative albumin concentration in comparison with that in the group that did not receive tocopherol ( Fig. 1). Globulin α 1 . In the Control Group B, with induced inflammation, the measurements were performed in three time periods and showed a significant increase in the absolute and relative concentration of this protein fraction in comparison with that in the group that had no induced inflammation. Administering tocopherol in the Study Group 1 with induced inflammation resulted in a decrease of both absolute and relative concentration of globulin α 1 , in comparison with that in the Control Group with induced inflammation. In the Study Group 2 with induced inflammation and treated with dioxin, we observed a significant increase in the concentration of this globulin fraction in the 120th hour sample in comparison with that in the Control Group B. The administration of tocopherol in a similar group that received dioxin resulted in a significant decrease in the concentration of this protein fraction in comparison with that in the group where rats were treated with dioxin that had not received tocopherol (Fig. 1). Globulin α 2 . In the Control Group B, with induced inflammation, the measurements performed in three time periods showed a significant increase in the absolute and relative concentration of this protein fraction. In the Study Group that apart from induced inflammation received tocopherol, after 120 h, we observed a significant decrease in the concentration of this globulin fraction in comparison with that in the Control Group B. In the Study Group with induced inflammation and treated with dioxin, we have also observed a significant decrease in the concentration of this globulin fraction after 120 h (in comparison with that in the Control Group B). However, in the group treated with both dioxin and tocopherol, we observed a decrease in the concentration of the globulin α 2 after 72 and 120 h since induction of the inflammation (Fig. 1). Globulin β. In the Control Group B, with induced inflammation, we did not document any significant differences for this protein fraction in comparison with the results obtained for the Control Group A, without induced inflammation. The administration of tocopherol in the Study Group 1 with induced inflammation did not result in statistically significant changes, as compared to that in the previous group. After 72 and 120 h in the Study Group 2 with induced inflammation and treated with dioxin, we observed an increase in the absolute and relative concentration of the globulin β, while in the group treated with both dioxin and tocopherol, there was a significant decrease in the absolute concentration of this globulin fraction ( Fig. 1). Globulin γ. 
In the Control Group B, we observed a slight decrease of the γ globulin fraction in comparison with that in the Control Group A, without induced inflammation. In the Study Group 1 rats with induced inflammation and injected with tocopherol, after 120 h, we observed a significant increase in the concentration of this protein in comparison to that in corresponding sample for the Control Group B. However, γ globulin concentration after 24 h was significantly increased in the Study Group 2 in comparison with that in the animals from the Control Group B. The administration of tocopherol in the Study Group 3 with induced inflammation and treated with dioxin resulted in a significant increase of the γ globulin fraction after 24 and 72 h, in comparison with that in the group treated with dioxin, but had not received tocopherol (Fig. 1). Total Serum Protein. In the Control Group B, with induced inflammation, the assays performed for samples taken at three time periods showed no differences in comparison with results obtained for the Control Group A. In the Study Group 1 after 24 h, we observed a slight increase in total serum proteins, with a tendency to remain at a constant level. In the Study Group 2, we did not observe any significant differences in the concentration of the total serum proteins after 24 and 72 h in comparison with that in the Control Group B. The total serum proteins increased in the Study Group 3 with induced inflammation and where rats were injected with both dioxin and tocopherol in comparison with those in the Study Group 2 (Fig. 1). Serum Albumin to Globulin Ratio. Serum albumin/ globulin ratio (RSA/RSG) was determined for the Control Group B (with induced inflammation) and compared to that of the Control Group A (without induced inflammation). The administration of tocopherol to rats with induced inflammation caused a slight increase of RSA/RSG in comparison with that in the Control Group B. In the Study Group with induced inflammation and treated with dioxin, RSA/ RSG was slightly lower than that in the Control Group B. The injection of tocopherol to rats with induced inflammation and treated with dioxin after 24 and 72 h caused a significant increase of this indicator in comparison with rats that have not received tocopherol (Fig. 2). Analysis of Biochemical Indicators (Table 2) Immunoglobulin M. In the Control Group B, with induced inflammation, after 24 h, we observed a decrease of the immunoglobulin M (IgM) concentration, while after 72 and 120 h, we observed a significant increase in concentration of this immunoglobulin. The administration of tocopherol to rats with induced inflammation caused an increase of the IgM, while a significant decrease of this parameter was observed in rats treated with dioxin. The administration of tocopherol in the group with induced inflammation and treated with dioxin caused an increase in the concentration of the IgM after 24 and 72 h from induction of the inflammation, in comparison to that in the rats that have not received tocopherol (Fig. 3). Immunoglobulin G. After 24 and 72 h, the rats in the Control Group B and Control Group A showed no significant changes in the concentration of this immunoglobulin. On the other hand, administration of tocopherol to the group of rats with induced inflammation caused a significant decrease in immunoglobulin G (IgG) concentration. When rats with induced inflammation were treated with dioxin, a significant decrease of the IgG concentration in serum was observed. 
Injection of both tocopherol and dioxin to rats with induced inflammation caused an increase in the concentration of this immunoglobulin in comparison to that in the rats that were not injected with tocopherol (Fig. 3). Complement Component 3. In the Control Group B, with induced inflammation, we observed 30-fold increase in the concentration of this protein, with the maximum value documented after 24 h from induction of the inflammation. In the group with induced inflammation and treated with dioxin, we observed an increase in concentration of this protein, which remained at that level throughout the experiment points. The administration of tocopherol in the group with induced inflammation and treated with dioxin did not cause any significant change in complement component 3 (C3) concentration (Fig. 4). Complement Component 4. In the rats with induced inflammation (Control Group B), the concentration of component 4 (C4) decreased by 50% after 24 and 72 h, in comparison to that in the rats that were not subjected to induced inflammation (Control Group A). When rats with induced inflammation received tocopherol, after 120 h, there was fourfold increase in the concentration of this indicator (Fig. 4). In the group with induced inflammation and treated with dioxin, we also observed a significant increase in the C4 concentration. The administration of tocopherol to rats with induced inflammation and treated with dioxin did not significantly change the concentration of the C4 in all three time points (Fig. 4). Transferrin. In the Control Group B, with induced inflammation, after 24 and 72 h, we observed a slight increase in transferrin (TRF) concentration, while after 120 h of inflammation, there was a significant decrease in the concentration of this indicator. The administration of tocopherol to rats with induced inflammation brought transferrin concentration at 120-h samples to the same level as observed after 24 and 72 h. In the group with induced inflammation and treated with dioxin, we observed a significant increase in transferrin concentration in all three time points. The administration of the tocopherol to rats with induced inflammation and treated with dioxin caused a noticeable decrease in this indicator to a level similar to that observed for rats that had induced inflammation and were injected with tocopherol (Fig. 5). Haptoglobin. In the Control Group B, with induced inflammation, concentration of haptoglobin increased 16fold. Administration of tocopherol to rats with induced inflammation did not cause significant changes to this indicator. In the group with induced inflammation and treated with dioxin, we observed an increase in the concentration of haptoglobin (Hapt), in comparison with that in the Control Group B. The administration of tocopherol to dioxin-treated rats caused an increase in Hapt, especially after 72 h (Fig. 5). DISCUSSION The transferrin and albumin tested in this study are the elements of the acute phase protein group which react negatively [33][34][35][36]. The results from the group of rats subjected to induced inflammation and to tocopherol showed a lower decrease in albumin concentration after 24 and 72 h than one reported in our previous study [28]. The group of rats with induced inflammation and treated with both TCDD and tocopherol showed a smaller decrease of the albumin concentration in comparison with the analogous group that did not receive tocopherol. 
The electrophoretic analysis of proteins in serum of the animals with induced inflammation and treated with tocopherol showed a decrease of α 1 and α 2 globulin fractions after 72 and 120 h, while these proteins were increasing in the control rats with pleuritis. On the other hand, the concentrations of γ globulin, total protein and albumin to globulin ratio (RSA/RSG) in tocopherol-treated rats displayed a tendency to increase in comparison with those in the animals that did not receive tocopherol. In overall, results suggest that administration of tocopherol to both groups of rats, with induced inflammation and treated with TCDD, and the animals that were not subjected to dioxin, Fig. 3. Influence of α-tocopherol on immunoglobulin M (IgM) and immunoglobulin G (IgG) concentration in serum of rats with pleuritis, with and without exposure to dioxin. led to a decrease in concentrations of α 1 , α 2 and β globulin fractions, as well as an increase of γ globulin, total protein and the RSA/RSG. The dynamics of the changes in different protein fraction concentrations are also time-dependent, as what can be seen by the example of the α 1 globulin fraction reaching maximum concentration after 72 h since the induction of inflammation, i.e. at the same time as the other proteins such as albumin, γ globulin and the albumin to globulin ratio. Moreover, in this time period, we observed the maximum increase in leucocyte count in peripheral blood, together with a decrease in neutrophils and fall in both erythrocyte count and haematocrit, the latter being linked to albumin to globulin ratio [25,37]. Vos et al. [38] showed that dioxins cause an increase in α and β globulins, and this observation is supported by results presented in this study. The aforementioned study used a similar experimental model, but not as extensive as the one presented in this study. Here, we analysed the changes in composition of the serum proteins in rats with induced inflammation and subjected to the TCDD treatment. In animals that received tocopherol, acute phase proteins' concentration changed between the 72nd and 120th hour of inflammation. The results did not show similar time period-related interdependencies regarding biochemical parameters, as in the case of the previously described control group with induced inflammation, where, depending on the indicator, the peak of changes occurred between the 72nd and 96th hour of inflammation [28]. In a light of these observations, we hypothesize that rats in the groups that received tocopherol, the monitored indicators do not reach their potential maximal values, but rather remained at a constant level. This suggestion is supported by the results obtained for other biochemical parameters monitored when using a similar experimental model [13]. The observed increase in IgM concentration at three experimental time periods (24,72 and 120 h) in the control group with induced inflammation also occurred in the study group that received tocopherol, as what can be construed as the effect of this vitamin on the synthesis of the abovementioned immunoglobulin. We have also shown a decrease in IgG concentration in the study group with induced inflammation and administered tocopherol. The immunosuppressive effect of dioxins [39,40] has also been confirmed in this study through the analysis of IgG and IgM in the study group of rats with induced inflammation and treated with TCDD. 
In this case, IgG concentration decreased sixfold and IgM concentration twofold in comparison with those in the control group with induced inflammation. Suppression in the production of several classes of immunoglobulins, i.e. IgG, IgE and IgM in B lymphocyte cultures, was shown by Karras et al. [41]. When one study group received tocopherol for 3 weeks, from the moment when dioxin was administered and until the inflammation was induced, we observed an increase in IgG and IgM in comparison with those in the control group that had not been treated with tocopherol. This shows that synthesis of these two immunoglobulins can improve after administering tocopherol. As inflammation promotes the oxidative stress [29], it causes a release of large amounts of free radicals [42][43][44][45]. The oxidative stress is strengthened after administration of dioxins. In spite of the fact that transferrin is recognized as one of the acute phase proteins, which are known to decrease during inflammation, in this study, this protein's Fig. 5. Concentration of transferrin (TRF) and haptoglobin (Hapt) in serum of rats with induced inflammation and exposed, or not to dioxin. concentration remained unchanged in the early stages of the inflammation. Only after 72 h into the experiment, there was a significant decrease in transferrin concentration. In the studies conducted on monkeys that received small doses of TCDD, Riecke et al. [46] demonstrated an increase in transferrin concentration in the liver. The analysis of our results indicates that administration of the tocopherol to rats with induced inflammation results in a reverse type of response, manifested by a considerable increase in TRF concentration. We have also observed that rats with inflammation exposed to dioxins had significantly increased transferrin concentration. The administration of tocopherol in a group of rats treated earlier with dioxin decreased the TRF concentration in comparison to that in the animals that have not received the vitamin E. Transferrin is known for its role in capturing iron from heme, released from broken-down erythrocytes, and transporting it to the endothelial reticulum. This process is probably associated with the significant decrease in the erythrocyte and haemoglobin count in later stages of inflammation [47]. Pro-inflammatory factors, such as IL-1, IL-6 and TNF, stimulate hepatocytes to synthesize acute phase proteins, haptoglobin and transferrin, which participate in preventing iron complexes and its free form from becoming available to bacteria [48,49]. Notably, in rats treated with dioxin, during inflammation, transferrin concentration increased while the erythrocyte and haemoglobin count decreased. The analysis of serum haptoglobin concentration in the group of rats with induced inflammation and treated with tocopherol showed no significant differences in comparison with that in a similar group which have not received the vitamin E. In the group of animals exposed to dioxin, a significant increase in haptoglobin concentration, together with an increase in transferrin concentration, was found in the 120h sample. Administering tocopherol in the group with induced inflammation and treated with dioxin was associated with a significant increase in haptoglobin concentration, the phenomenon that can be attributed to the improvement in the synthesis of this protein. 
However, production of the haptoglobin might be stimulated by decrease in serum pro-inflammatory interleukins IL-1β, IL-6 and TNF, that is caused by administration of tocopherol, as described by Ahmad et al. [23], Jialal et al. [50], Norazlin et al. [30] and Wang et al. [51]. Analysis of complement C3 and C4 concentrations shows that significant differences between C3 and C4 levels appear only in the control group with induced inflammation. In the initial stage of the inflammation, complement C4 concentration has significantly decreased, whereas complement C3 concentration has increased, even though C3 is characterized as a low-response protein according to Koj's classification [36]. Tocopherol administered to the group with induced inflammation exerts its effect by causing an increase in both complement C3 and C4 concentrations. The complement C3 and C4 concentrations have increased in serum of rats with induced inflammation and treated with TCDD. When rats with induced inflammation and treated with dioxin received tocopherol, there was no significant change in the concentration of these proteins. The elevated complement C3 and C4 concentrations in rats with inflammation and treated with dioxin may cause a significant decrease of the erythrocyte count, due to haemolytic properties of these proteins, or also due to morphological disorders in erythrocyte membranes occurring as a result of increased oxidative stress caused by the effects of TCDD [47]. CONCLUSIONS In cases of inflammation, tocopherol contributes to significant changes in serum and plasma protein concentrations. By using electrophoretic separation of the serum proteins, we have ascertained that tocopherol stimulates an increase in albumin and γ globulin concentration as well as the albumin to globulin ratio, whereas it prevents the increase of α 1 and α 2 globulin concentrations. Moreover, the increase of complement C4 concentration intensifies, while the period over which increased transferrin and complement C3 concentrations are being maintained has significantly lengthened without any significant effect on haptoglobin concentration. In cases of inflammation, tocopherol intensifies the increase of immunoglobulin M concentration and lowers immunoglobulin G concentration. The presented results clearly show that in case of inflammation, administration of tocopherol leads to changes in the serum protein fractions, including acute phase proteins, as what argues in favour of improved liver function associated with protein synthesis and stimulation of the antibody-mediated immunity. Tocopherol significantly affects acute phase protein concentration in inflammation after administration of TCDD by limiting the dioxin-induced increase in transferrin concentration in advanced stages of this process. However, it causes a significant increase in concentrations of haptoglobin, and it does not accelerate the appearance of complement C3 and C4 proteins. This proves that the effect of TCDD on some processes involved in inflammatory response is so long-lasting that even extended use of tocopherol cannot affect the course of these processes. COMPLIANCE WITH ETHICAL STANDARDS This study was approved by the Local Ethics Committee for Animal Experiments in Wroclaw (decision no. 23/01) and by the Research Ethics Committee of Wroclaw Medical University (decision no. KEBN-129/99: PB-327, PB-768).
2017-08-02T22:41:27.018Z
2017-03-15T00:00:00.000
{ "year": 2017, "sha1": "923b134575d5b8cd79b941fc2323aec4cf9eac2c", "oa_license": "CCBY", "oa_url": "https://europepmc.org/articles/pmc5429350?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "923b134575d5b8cd79b941fc2323aec4cf9eac2c", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
1761935
pes2o/s2orc
v3-fos-license
Changes in Sensitivity of Rat Front Cortex Neurons to Acetylcholine (ACh) after α-Amino-3-Hydroxy-4-Isoxazole Propionic Acid (AMPA) Lesions of Nucleus Basalis Magnocellularis and After Embryonic Basal Forebrain Transplants The effects of unilateral S-AMPA lesions of nucleus basalis magnocellularis (nbm) and of subsequent ipsilateral embryonic basal forebrain transplants on the sensitivity of pyramidal cells in the frontal cortex to iontophorized acetylcholine (ACh) and carbachol were studied in anaesthetized rats. Each drug was applied with an ejection current of 30 nA for 20 s and the average response of 3 applications (separated by 1 min recovery periods) was obtained. Neurons were considered to be sensitive when their firing rate increased or decreased (Wilcoxon, P<0.05), either during or within 20 s of drug application. Neuronal firing rates were significantly reduced in the frontal cortex 8-10 weeks postlesion, when acetylcholinesterase (AChE)-positive fibre staining was almost completely absent, but the percentage of ACh-sensitive neurons increased (68/82 neurons from 7 rats compared with 72/144 neurons from 12 control rats, P<0.0001); the duration of ACh's action also significantly increased. Comparison with controls showed that this enhanced sensitivity to ACh after lesion could be explained solely by an increase in the proportion of neurons excited by ACh (Table 1). The modulatory effects of ACh were also studied on responses of cortical neurons evoked by afferent electrical stimulation (single square wave pulses, 5 ms duration, 1-3 mA, were delivered at intervals of not less than 10 s, and 10 stimulations were used for each procedure). ACh modulation of neuronal responses evoked by sensory stimulation was not significantly changed after the lesion. Sensitivity to carbachol and glutamate was unchanged after lesion. In normal rats, acute administration of an AChE inhibitor, di-isopropyl fluorophosphate (DFP), significantly increased the frontal cortex neurons' responsiveness to ACh from 50% in the control group to 87.4% in the DFP-treated group (P<0.0001). DFP also decreased the latency and increased the duration of ACh action without changing the frontal cortex firing rate. The sensitivity of the frontal cortex to carbachol and glutamate was not changed after DFP. Chronic administration of scopolamine (10 mg/kg s.c. in 0.9% NaCl daily for 16 days) significantly increased the sensitivity of frontal cortex neurons to both ACh and carbachol. It also significantly increased the neuronal firing rate, prolonged the duration of ACh and carbachol action and decreased the latency of action of both drugs. In contrast, chronic administration of oxotremorine (0.5 mg/kg s.c. in 0.9% NaCl twice daily for 11 days and 10 mg/kg in sesame oil for 9 days) significantly decreased the frontal cortex neurons' sensitivity to both ACh and carbachol. It also significantly decreased the neuronal firing rate, decreased the duration and increased the latency of ACh and carbachol action. Cholinergic-rich transplants to the frontal cortex normalised neuronal sensitivity to ACh and its duration of action but did not restore the firing rate. Non-cholinergic transplants or cholinergic-rich transplants to the somatosensory cortex were ineffective. 
Histological examination showed a sprouting of ChAT and AChE from the transplant into the neocortex of transplanted animals. The numbers in parentheses indicate the total number of neurones studied in each group. The results suggest that the increased sensitivity to ACh seen in lesioned rats was probably due to loss of AChE because similar effects were seen after DFP. 
The distance of the implant from the frontal cortex appeared crucial to normalising the sensitivity of frontal cortical neurons to ACh.
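The sensitivity criterion used in this study — a change in firing rate during or shortly after drug application, tested with the Wilcoxon test at P < 0.05 — can be sketched as follows. The per-second firing rates and the pairing of 1-s bins are hypothetical choices made here for illustration only.

```python
# Hypothetical per-second firing rates (spikes/s) for one neuron:
# a 20-s baseline versus the 20-s iontophoretic ACh application,
# averaged over the three applications described above.
import numpy as np
from scipy.stats import wilcoxon

baseline = np.array([4, 5, 3, 4, 6, 5, 4, 3, 5, 4, 4, 5, 3, 4, 5, 4, 6, 5, 4, 3], float)
during = np.array([7, 8, 6, 9, 7, 8, 10, 7, 6, 8, 9, 7, 8, 6, 7, 9, 8, 7, 6, 8], float)

# Paired comparison of matched 1-s bins (Wilcoxon signed-rank test).
stat, p = wilcoxon(baseline, during)
sensitive = p < 0.05
direction = "excited" if during.mean() > baseline.mean() else "inhibited"
print(f"W = {stat:.1f}, p = {p:.4f} -> sensitive: {sensitive} ({direction})")
```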
2018-05-08T18:14:46.737Z
1992-10-01T00:00:00.000
{ "year": 1992, "sha1": "d90f3a8a9e350f3b343cc42a0ac43cf542ed4346", "oa_license": null, "oa_url": "https://doi.org/10.1155/np.1992.297", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d90f3a8a9e350f3b343cc42a0ac43cf542ed4346", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Psychology" ] }
62797633
pes2o/s2orc
v3-fos-license
Effect of Wind Turbine Classes on the Electricity Production of Wind Farms in Cyprus Island This paper examines the effect of different wind turbine classes on the electricity production of wind farms in two areas of Cyprus Island, which present low and medium wind potentials: Xylofagou and Limassol. Wind turbine classes determine the suitability of installing a wind turbine in a particular site. Wind turbine data from five different manufacturers have been used. For each manufacturer, two wind turbines with identical rated power (in the range of 1.5 MW–3 MW) and different wind turbine classes (IEC II and IEC III) are compared. The results show the superiority of wind turbines that are designed for lower wind speeds (IEC III class) in both locations, in terms of energy production. This improvement is higher for the location with the lower wind potential and starts from 7%, while it can reach more than 50%. Introduction Renewable energy sources (RESs) are clean, inexhaustible, and environmentally friendly alternative energy sources with negligible fuel cost. The worldwide demand for renewable energy is increasing rapidly because of the climate problem, and also because oil resources are limited. Wind energy appears as a clean and good solution to cope with a great part of this energy demand. Wind turbines present several advantages over conventional generation technologies for electricity generation. Reduction of the greenhouse gas emissions that contribute to global climate change and to the degradation of local air quality is one of their major advantages. Additionally, they reduce the risk of fossil-fuel price fluctuations and decrease the electricity sector's dependency on them. However, developing a utility-scale wind project is a complicated and time-consuming process involving developers, landowners, utilities, the public, and various local authorities. Although each wind energy project is unique and has different characteristics, basic features and related steps are common. In practice, the steps are iterative and overlap with one another depending on the specific project circumstances. The key steps of development and planning for a wind farm are site selection, detailed wind assessment, feasibility, construction, and operation [1]. This paper examines the effect of different wind turbine classes on the electricity production of wind farms in two areas of Cyprus Island that present low and medium wind potentials: Xylofagou and Limassol. Wind classes determine which turbine is suitable for the normal wind conditions of a particular site. Turbines with higher wind classes have larger blades and produce more energy in low and medium winds [2], but they are more sensitive to high wind gusts. In order to examine the effect of wind turbine class on electricity production, wind turbine data from five different manufacturers have been used. For each manufacturer, two wind turbines with identical rated power and different wind turbine classes (IEC II and IEC III) are compared for both sites. The rated power of the chosen wind turbines is between 1.5 MW and 3 MW. The results of the present work show that for the studied sites, the increase in annual energy production of the IEC III wind class turbines, compared to IEC II class turbines, is significant in all cases. Wind Energy Basics. 
The wind speed at a given location is continuously varying. There are changes in the annual mean wind speed from year to year (annual), changes with season (seasonal), with passing weather systems (synoptic), on a daily basis (diurnal) and from second to second (turbulence) [3]. The essential characteristics of the long-term variations of wind speed can also be usefully described by a frequency or probability distribution. A convenient mathematical distribution function that has been found to fit well with data is the Weibull probability density function, which is expressed in terms of two parameters, the shape parameter k and the scale parameter c. The probability density of the wind speed V during any time interval is given by f(V) = (k/c)(V/c)^(k−1) exp[−(V/c)^k] (1). In any decision about location and type of wind turbine to be installed, knowledge of wind speeds at heights of 20 to 150 m above ground is very desirable. Many times, these data are not available, and some estimate must be made from wind speeds measured at about 10 m. This requires an equation which predicts the wind speed v1 at one height in terms of the measured speed v2 at another, lower height [4]. One popular equation that is used for this purpose is the power law: v1/v2 = (z1/z2)^α (2), where z1 is the height at which a wind speed estimate is desired (usually equal to the wind turbine hub height) and z2 is taken as the height of measurement. The dimensionless parameter α is called the power law exponent and is determined empirically, as it varies with height, time of day, season of the year, nature of the terrain, wind speeds, and temperature. The power law exponent can vary from 0.10 (smooth terrains) to 0.40 (very rough terrains). When no specific site information is provided, a value of α equal to 1/7 can be used. Wind Turbine Classes. Wind turbine class is one of the factors which need to be considered during the complex process of planning a wind power plant. Wind classes determine which turbine is suitable for the normal wind conditions of a particular site. They are mainly defined by the average annual wind speed (measured at the turbine's hub height), the speed of extreme gusts that could occur over 50 years, and how much turbulence there is at the wind site. There are three wind classes for wind turbines, which are defined by an International Electrotechnical Commission (IEC) standard and correspond to high, medium, and low wind, as Table 1 shows. The extreme 50-year gust values are based on the 3-second average wind speed. Each one of the above-mentioned classes (i.e., I, II, or III) can be categorized in two subclasses: A and B. In subclass A, the standard deviation of wind speed measured at 15 m/s wind speed (which is defined as the I15 turbulence) is 18%. In subclass B, the I15 turbulence is equal to 16%. In order to select the proper wind turbine class for a specific site, all specifications of a wind turbine class (average speed, gust, and turbulence) must be fulfilled. Taking this into account, in a wind turbine designed for lower wind speeds, the design loads are going to be smaller; therefore its blades are larger and the hub height is taller. As a result, the bigger rotors of class III capture more wind energy and yield higher capacity factors compared to class I or II rotors. Description of the Examined Locations. The contribution of wind turbines to the electricity production of Cyprus is significant. In 2011, Cyprus had a total installed wind capacity of 134 MW, an increase of 52 MW (38.8%) on the previous year, resulting in a 5.4% wind share in total electricity production [5]. 
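The two relations above, the Weibull density (1) and the power-law extrapolation (2), can be coded directly. The sketch below uses the measurement height, hub height and mean wind speed quoted for Limassol in the following subsection (4.4 m/s at 7 m, 90-m hub, α = 1/7); the Weibull shape and scale values are hypothetical placeholders rather than parameters fitted to these sites.

```python
import numpy as np

def weibull_pdf(v, k, c):
    """Weibull probability density f(V), Eq. (1): shape k, scale c (m/s)."""
    v = np.asarray(v, dtype=float)
    return (k / c) * (v / c) ** (k - 1) * np.exp(-((v / c) ** k))

def extrapolate_wind_speed(v2, z1, z2, alpha=1.0 / 7.0):
    """Power law, Eq. (2): wind speed at height z1 from speed v2 measured at z2."""
    return v2 * (z1 / z2) ** alpha

# Scale a 7-m anemometer measurement to a 90-m hub height.
v_hub = extrapolate_wind_speed(v2=4.4, z1=90.0, z2=7.0)
print(f"estimated wind speed at hub height: {v_hub:.1f} m/s")

# Density over a range of speeds for hypothetical parameters k = 2, c = 6 m/s.
speeds = np.arange(0.5, 25.5, 0.5)
density = weibull_pdf(speeds, k=2.0, c=6.0)
print(f"density integrates to ~{(density * 0.5).sum():.3f} over the plotted range")
```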
Although Cyprus is suitable for electricity generation from wind, in most places the annual wind speed is below 5 m/s. Under such conditions, the installation of a wind farm is prohibitive. Moreover, there are few areas with very high wind potential (over 6.5 m/s). However, there is a considerable portion of Cyprus in which the wind speed is between 5 and 6 m/s. This wind speed interval represents the marginal value at which a wind farm can be economically viable [6]. In these sites, the installation of wind turbines that can produce a significant amount of power in low and medium wind speeds (i.e. that have a high wind class) is critical. Figure 1 depicts a wind map of Cyprus [7]. It has to be noticed that the wind speeds in this map refer to a height of 10 m above the ground level. Considering higher heights (typical values for wind turbine hub heights are in the order of 70 to 120 m), the wind speed increases as (2) shows, so a larger number of areas in Cyprus may be suitable for a wind farm installation. In this study, wind data for the sites of Xylofagou and Limassol have been used. These data are provided in the form of cumulative distribution functions and have been taken from [8]. Moreover, this reference provides information about the mean annual wind speed for these sites: 3.8 m/s for Xylofagou and 4.4 m/s for Limassol. These values refer to an anemometer height equal to 7 m. Considering typical values of 90 m for hub height and 1/7 for the power law exponent, these values increase to 5.5 m/s and 6.3 m/s, respectively, at the wind turbine height. Estimation of Produced Energy and Capacity Factor. The power output of a wind turbine varies with wind speed, and every wind turbine has a characteristic power performance curve. With such a curve, it is possible to predict the energy production of a wind turbine without considering the technical details of its various components. The power curve gives the electrical power output as a function of the hub height wind speed. An important parameter of a power curve is the rated power, which is generally equal to the maximum power output of the electrical generator. When the wind speed cumulative distribution function and the power curve data are known, the total electrical energy produced by a wind turbine for a specific period can be calculated. The procedure is shown in Figure 2. As a first step, the cumulative distribution function is transformed to a probability density function for the wind speed. These data refer to the anemometer height, so they have to be transformed to hub height, using (2). In the wind speed probability density function, the integral between two specific wind speeds denotes the portion of the total time period in which the wind speed lies between these two specific values. In practice, this procedure can be implemented by using the histogram of wind speed. In this paper, the width of the wind speed bins has been considered as 0.5 m/s. By knowing the durations at each wind speed bin and the power curve data (i.e., produced electrical power for specific wind speeds), the electrical energy production of a wind turbine can be calculated. In the last diagram of Figure 2, the total wind turbine energy production is equal to the integral of the diagram. 
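A minimal sketch of this energy-estimation procedure, under the assumptions stated in the comments: hub-height wind speeds are binned in 0.5 m/s steps, the hours spent in each bin are multiplied by the power-curve output at the bin centre, and the products are summed; the capacity factor then follows from the rated power, as defined in the next subsection. The power curve and the one-year wind-speed sample are hypothetical stand-ins for the manufacturer data and site distributions used in the paper.

```python
import numpy as np

# Hypothetical power curve (m/s -> kW) for a 3 MW turbine; real curves come
# from the manufacturers' data sheets.
curve_speeds = np.arange(0.0, 26.0, 1.0)
curve_power = np.clip((curve_speeds - 3.0) ** 3 * 8.0, 0.0, 3000.0)

def annual_energy_kwh(hub_speeds, bin_width=0.5):
    """Bin the hub-height wind speeds (0.5 m/s bins) and weight the power
    curve by the hours spent in each bin; one full year of data is assumed."""
    bins = np.arange(0.0, 30.0 + bin_width, bin_width)
    counts, edges = np.histogram(hub_speeds, bins=bins)
    hours_per_sample = 8760.0 / hub_speeds.size
    centres = 0.5 * (edges[:-1] + edges[1:])
    power_kw = np.interp(centres, curve_speeds, curve_power)
    power_kw[centres > curve_speeds.max()] = 0.0  # beyond cut-out speed
    return float(np.sum(counts * hours_per_sample * power_kw))

rng = np.random.default_rng(0)
hub_speeds = rng.weibull(2.0, size=8760) * 7.0   # hypothetical one-year record
energy = annual_energy_kwh(hub_speeds)
capacity_factor = energy / (3000.0 * 8760.0)     # produced / (rated power * 8760 h)
print(f"annual energy ~ {energy / 1e6:.2f} GWh, capacity factor ~ {capacity_factor:.2f}")
```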
After the calculation of the annual wind energy E_WT,year, the annual capacity factor can be calculated. The capacity factor of a wind turbine at a given site is defined as the ratio of the energy actually produced by the turbine to the energy that could have been produced if the machine ran at its rated power P_R over a given time period. A higher capacity factor value shows that the exploitation of the wind potential for the specific site is better. Considering a time period equal to one year (8760 hours), the annual capacity factor is Annual Capacity Factor = E_WT,year / (P_R ⋅ 8760 h) (3). Wind Turbine Data The effect of the wind class on the wind turbine energy production is examined for five different wind turbine manufacturers. For each manufacturer, two wind turbines with identical rated power and different wind turbine classes (IEC II and IEC III) are compared for both sites. The technical characteristics of the ten considered wind turbines are shown in Table 2. The study of Table 2 shows that six wind turbines have a rated power of 3 MW, two of them have a rated power of 2 MW, and the remaining two have a rated power of 1.5 MW. It can also be seen that the wind turbines of class III have larger diameters compared to their corresponding wind turbines of class II. Moreover, with the exception of Vestas, all wind turbines of the same manufacturer have identical hub heights. Figures 3, 4, 5, 6, and 7 depict the power curve comparison for each manufacturer. In each diagram, the better performance of the class III wind turbines in low and medium wind speeds is obvious, especially in the case of the Vestas models (Figure 5), which also present the greatest difference in rotor diameter (90 m for class II, 112 m for class III). Results and Discussion The results for the two considered locations (i.e., Xylofagou and Limassol) are presented in Tables 3 and 4. In each table, the annual produced electric energy and the annual capacity factor are calculated, as well as the difference in energy production between wind turbines of the same manufacturer. It has to be noticed that the calculation of annual energy production cannot be accurate due to the lack of detailed information for the specific sites. This information would include full data series of wind speed and wind direction, a description of the terrain and the surrounding obstacles at the wind farm site, atmospheric pressure and air temperature data, and an estimation of wake effects. However, the relative differences between the models of each wind turbine manufacturer refer to the same conditions and considerations, and therefore they can show more accurately the benefits of installing a higher class wind turbine in these locations. From the comparison of differences in Tables 3 and 4, it can be seen that at the Xylofagou location (which has the lower wind potential) the improvement achieved by the class III turbines is higher than that in Limassol. With the exception of the Vestas models, the differences lie in the same order of magnitude (11-18% for Xylofagou, 7-10% for Limassol). The enormous differences in the Vestas models can be used as an upper bound for these areas (55.7% for Xylofagou and 33.3% for Limassol), and they are mainly explained by the remarkable difference between rotor diameters (90 and 112 m). The contribution of the different hub height (105 and 119 m) is not significant: by considering an identical hub height of 105 m for both models, the difference would be 53% for Xylofagou and 31% for Limassol. 
Conclusion This paper examined the effect of wind turbine classes II and III on the electricity production of two locations in Cyprus Island that present low and medium wind potentials. It is shown that the class III wind turbines produce significantly larger amounts of energy, mainly at the location with the lowest wind potential, where under specific circumstances the improvement can surpass 50%. Due to the fact that in the majority of areas with high wind potential (which usually represent a very small portion of the overall area) a wind farm already exists, the need for installing class III wind turbines now seems more compelling than ever, as such an installation can prove to be an economically viable investment that can be implemented at a significantly larger number of alternative locations. Figure 1: Wind map of Cyprus. Figure 2: Calculation procedure of the annual electrical energy produced by a wind turbine. Table 1: Specifications for wind classes. Table 2: Technical characteristics of considered wind turbines. Table 3: Results for the Xylofagou location. Table 4: Results for the Limassol location.
2018-12-29T07:32:11.416Z
2013-05-23T00:00:00.000
{ "year": 2013, "sha1": "2351e83dcc110652a729d0877c406b76a42bf9fe", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/archive/2013/750958.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "2351e83dcc110652a729d0877c406b76a42bf9fe", "s2fieldsofstudy": [ "Environmental Science", "Engineering" ], "extfieldsofstudy": [ "Engineering" ] }
269298112
pes2o/s2orc
v3-fos-license
A pre-whitening with block-bootstrap cross-correlation procedure for temporal alignment of data sampled by eddy covariance systems The eddy covariance (EC) method is a standard micrometeorological technique for monitoring the exchange rate of the main greenhouse gases across the interface between the atmosphere and ecosystems. One of the first EC data processing steps is the temporal alignment of the raw, high-frequency measurements collected by the sonic anemometer and gas analyser. While different methods have been proposed and are currently applied, the application of the EC method to trace gas measurements has highlighted the difficulty of correct time lag detection when the fluxes are small in magnitude. Failure to correctly synchronise the time series entails a systematic error on covariance estimates and can introduce large uncertainties and biases in the calculated fluxes. This work aims at overcoming these issues by introducing a new time lag detection procedure based on the assessment of the cross-correlation function (CCF) between variables subject to (i) a pre-whitening based on autoregressive filters and (ii) a resampling technique based on block-bootstrapping. Combining pre-whitening and block-bootstrapping facilitates the assessment of the CCF, enhancing the accuracy of time lag detection between variables with correlation of low order of magnitude (i.e. lower than 10⁻¹) and allowing for a proper estimate of the associated uncertainty. We expect the proposed procedure to significantly improve the temporal alignment of the EC time series measured by two physically separate sensors, and to be particularly beneficial in centralised data processing pipelines of research infrastructures (e.g. the Integrated Carbon Observation System, ICOS-RI) where the use of robust and fully data-driven methods, like the one we propose, constitutes an essential prerequisite. Introduction Combating climate change requires an accurate quantification of greenhouse gases (GHG) emitted to and removed from the atmosphere by terrestrial ecosystems. To this end, an important research frontier in ecology is directed toward measuring the rates of exchange (or flux densities) of GHGs over natural and anthropogenic ecosystems (Houghton 2005; Bonan 2008; Pan et al. 2011). Surface layer fluxes of energy, water (H2O), carbon dioxide (CO2), methane (CH4) and nitrous oxide (N2O) are currently estimated by the eddy covariance (EC) technique (Foken et al. 2012). The EC technique employs a sonic anemometer (SA) for wind velocity components and a gas analyser (GA) for scalar atmospheric concentrations and requires high-frequency sampling rates (e.g. 10 observations per s). Eddy fluxes are derived from the covariance (normally within an averaging time of 30 min) between instantaneous fluctuations about the mean of the vertical wind speed (W) and the scalar of interest (S), which can be temperature, the atmospheric concentration of water vapour, carbon dioxide or any other trace gas. 
The calculation of EC fluxes requires the instantaneous quantities of vertical wind velocity and scalar to be simultaneously measured. Such a condition is seldom fulfilled in field measurements because, in general, the SA and GA sampling points are not perfectly co-located, in order to avoid possible wind flow distortions. This causes the same air parcel to first pass through one sensor and then through the other, creating a delay (time lag) in its wind and concentration measurements. In addition, in closed-path systems (i.e. those GAs with an internal sampling cell and an inlet tube), the sampled air parcel, sucked by a pump, has to travel from the intake to the measurement cell in the analyser (potentially for tens of metres) before being measured and merged with the concurrent wind data. This necessarily causes an additional and undesirable temporal delay with respect to the time series sampled by the SA. Physical distance between sensors is not the only source of temporal mis-alignment. Also data flow delays, digital clock drifts, and artefacts in the data acquisition strategy could be responsible for introducing a significant temporal mis-alignment between EC time series measured by different instruments (Fratini et al. 2018). Correcting such mis-alignments between raw data is a key step in the calculation of fluxes. Failure to correctly synchronise the time series causes a systematic error on covariance estimates (Taipale et al. 2010; Langford et al. 2015). The use of a constant time lag derived from the physical characteristics of the sampling system is often inappropriate since the temporal mis-alignment between time series may vary during the sampling period. For open-path systems (i.e. those GAs without an inlet tube), while keeping the physical distance between the sensors fixed, the temporal mis-alignment may vary according to wind speed and wind direction. For closed-path systems instead, while keeping the characteristics of the sampling line geometry (e.g. tube length, tube inner diameter, intake air flow rate, distance between sonic anemometer and tube inlet) unchanged, the temporal mis-alignment may vary with pump ageing, filter contamination, and accumulation of dirt in the sampling line, which all impact the stability of the flow rate and therefore the travel time of air parcels through the sampling tube (Massman 2000; Shimizu 2007). Also the way some non-inert gases interact with the tube walls (e.g. adsorption-desorption) is responsible for generating a temporal mis-alignment between time series. For example, the transit time of water vapour along the intake tube of a closed-path system can vary substantially with relative humidity, due to adsorption/desorption processes at the tube walls (Ibrom et al. 2007; Massman and Ibrom 2008; Mammarella et al. 2009; Fratini et al. 2012). To overcome the limitations of using a procedure based on a constant time lag, the prevalent solution in EC data processing is to assess the cross-covariance function between W and S. The cross-covariance function provides a measure of the linear dependence between two time series, one of which is delayed with respect to the other. In ideal situations and according to the EC theory, the highest dependence occurs when W and S are perfectly aligned. Therefore, the actual time lag can be detected in correspondence of the lag step that maximises (in absolute terms) the cross-covariance between the two time series (Moncrieff et al. 1997; Rebmann et al. 2012). We refer to such an approach as covariance maximisation (CM hereinafter). 
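As a simple illustration of the "constant time lag derived from the physical characteristics of the sampling system" mentioned above, the nominal transit time through a closed-path intake line is just the tube volume divided by the volumetric flow rate. The tube dimensions and flow rate used below are hypothetical and only indicative of a typical closed-path setup.

```python
import math

def nominal_tube_lag(tube_length_m, inner_diameter_m, flow_lpm):
    """Nominal residence time (s) of an air parcel in the intake tube:
    tube volume divided by the volumetric flow rate."""
    volume_m3 = math.pi * (inner_diameter_m / 2.0) ** 2 * tube_length_m
    flow_m3_per_s = flow_lpm / 1000.0 / 60.0   # litres per minute -> m^3/s
    return volume_m3 / flow_m3_per_s

# Hypothetical closed-path setup: 10 m of 4 mm ID tubing at 15 L/min.
lag_s = nominal_tube_lag(10.0, 0.004, 15.0)
print(f"nominal time lag ~ {lag_s:.2f} s")
```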
However, the effectiveness of the CM procedure depends on the shape of the cross-covariance function, which in turn depends on the stochastic properties of the variables involved and on the amount of random uncertainty affecting flux estimates (Billesbach 2011; Nemitz et al. 2018; Vitale et al. 2019). Generally, the procedure is effective under second order stationary conditions and when the flux magnitude is moderate/high. In these circumstances, the cross-covariance function exhibits a distinct and pronounced peak (either positive or negative) and the actual time lag can be easily detected (see Fig. 1a). In other circumstances, in particular when fluxes are of small magnitude, the cross-covariance function can be characterised by multiple local extrema of similar magnitude (Fig. 1b-e). In some cases the time lag detection for fluxes of low magnitude can be facilitated by modern GAs capable of simultaneously measuring several GHG species. For such instruments, and in the case of inert gases, the detection of the time lag for low magnitude fluxes can be obtained by dynamically ascribing delays detected from co-measured variables having a stronger signal, generally CO2 (Nemitz et al. 2018). However, in the absence of reference high-magnitude fluxes or when trace gases are measured by different GAs with potentially different time lags (e.g. due to their relative position or the use of different data acquisition systems), the detection of the actual time lag becomes challenging, in particular in automated EC data processing pipelines.

A tentative solution to this problem has been described in Taipale et al. (2010), who suggested applying a preliminary smoothing filter on the cross-covariance function before detecting the time lag in correspondence of the absolute maximum. While smoothing can help in reducing the influence of sporadic and isolated peaks significantly, the determination of the extremum in the covariance curve often fails for low magnitude fluxes, resulting in unreasonable time lags and, consequently, potentially biased flux estimates (Langford et al. 2015; Nemitz et al. 2018; Schallhart et al. 2018; Kohonen et al. 2020; Striednig et al. 2020).

In addition, a further limitation of the CM-based procedures (with or without smoothing) is that the alignment of time series when the true unknown flux is null or very close to zero can never be achieved. Fluxes are in fact calculated as covariances between W and S, and a method which maximises the covariance will always search and select a time lag in correspondence of flux values different from 0 (either positive or negative), a phenomenon known as the mirroring effect (Langford et al. 2015; Kohonen et al. 2020). In this regard, it should be noted that flux estimates equal to zero fall within the physical range of possible values and they are not to be understood as a rare event, in particular during periods of low background fluxes. Moreover, from an eco-physiological point of view, zero fluxes could occur not only in the absence of exchange between the atmosphere and the ecosystem, but also when there is a perfect balance between amounts assimilated and released by the ecosystem, for example during the switch between emission and assimilation of CO2 in the morning and in the evening. Discarding zero fluxes can cause not only a systematic overestimation of the absolute flux, but also affects the density distribution of the observed data for values close to zero.
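The smoothing variant attributed above to Taipale et al. (2010) can be sketched as follows; the centred running-mean width and the function name are illustrative choices, not the original implementation.

```r
# Smooth the cross-covariance function with a centred moving average before
# locating its absolute maximum, so that isolated spurious peaks are damped.
smoothed_cm_timelag <- function(w, s, hz = 20, max_lag_s = 10, width = hz / 2 + 1) {
  cc <- ccf(w, s, lag.max = max_lag_s * hz, type = "covariance", plot = FALSE)
  sm <- stats::filter(as.numeric(cc$acf), rep(1 / width, width), sides = 2)
  i  <- which.max(abs(sm))                 # NA values at the edges are ignored
  c(lag_steps = cc$lag[i], lag_seconds = cc$lag[i] / hz)
}
```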
The aim of this work is to propose a new approach that overcomes the limitations of the CM-based procedures. To this end, we developed a fully data-driven procedure where the time lag is detected by assessing the cross-correlation function (CCF) between raw EC data subject to (i) a preliminary filtering procedure based on pre-whitening and (ii) a resampling technique based on block-bootstrapping. As recommended in leading textbooks on time series analysis (see for example Hamilton 1994; Cryer and Chan 2008), a proper assessment of the CCF between time series requires the variables to be stationary and pre-whitened. Stationarity is defined by a constant mean and equal variance at all times and can be achieved by detrending or differencing. The stationarity condition is essential when assessing the CCF because dominant long-term trends may hide the correlation between short-term fluctuations. Pre-whitening consists instead of transforming (at least one of) the time series involved in the CCF into a white noise (WN) process, with the twofold purpose of reducing the influence the serial correlation has on the CCF estimates and making it possible to assess their statistical significance with standard criteria. However, even applying such arrangements, when the peak of the CCF has magnitude similar to those of the conventional confidence intervals the risk of detecting an erroneous time lag increases drastically. By combining pre-whitening and bootstrapping, such a risk is avoided and the assessment of the CCF for time lag detection becomes more realistic, informative and suitable for variables having correlation of low order of magnitude, as in the case of low magnitude EC fluxes.

The paper is structured as follows. In the following section a detailed description of the procedure is presented, with special emphasis on the decision rules for the choice of the optimal time lag suitable for the alignment of raw EC data. Having described the EC data in Sect. 3, an application of the proposed approach and a comparison with commonly used time lag detection procedures are reported in Sect. 4. Final remarks are provided in Sect. 5.

Time lag detection via assessment of the cross-correlation function after pre-whitening

In this section we describe the time lag detection procedure based on the assessment of the CCF between pre-whitened variables, focusing on the theoretical aspects motivating such preliminary data filtering. The following definitions are derived from Cryer and Chan (2008).

Let Y = {Y_t} and X = {X_t} be two stationary time series of length n indexed by time t. The correlation between X and Y at lag k = ±1, ±2, …, ±n can be estimated by the sample CCF, defined by

$$ r_k(X,Y) = \frac{\sum_t \left(X_{t+k}-\bar{X}\right)\left(Y_t-\bar{Y}\right)}{\sqrt{\sum_t \left(X_t-\bar{X}\right)^2 \, \sum_t \left(Y_t-\bar{Y}\right)^2}} \qquad (1) $$

where X̄ and Ȳ are the sample means of X and Y, respectively, and the summations are done over all data where the summands are available.

For white noise (WN) processes (i.e. sequences of uncorrelated random variables, each with zero mean and variance σ²), r_k(X,Y) is approximately normally distributed with zero mean and variance 1/n, where n is the total number of paired data. This leads to the conventional 5% significance limits of the CCF estimates equal to ±1.96/√n. That is, any peak outside the interval ±1.96/√n (or plus/minus two standard errors) is deemed significantly different from zero at the 0.05 level. The approximate variance of 1/n applies only when data are independent and identically distributed (iid), a condition that is almost never met for any real, observed time series, because of the presence of autocorrelation (i.e. the current value of the series is dependent on preceding values and can be predicted, at least in part, on the basis of knowledge of those values).
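The practical consequence of autocorrelation for the conventional limits can be illustrated with a small simulation (ours, not the paper's): two mutually independent but strongly autocorrelated AR(1) series are generated, and a large share of their sample CCF estimates nonetheless falls outside ±1.96/√n.

```r
# Spurious-correlation illustration: independent AR(1) series still produce
# many CCF estimates beyond the white-noise significance limits.
set.seed(42)
n <- 36000
x <- arima.sim(model = list(ar = 0.95), n = n)
y <- arima.sim(model = list(ar = 0.95), n = n)

cc    <- ccf(x, y, lag.max = 200, plot = FALSE)
limit <- 1.96 / sqrt(n)                    # valid only for iid (white noise) data
mean(abs(cc$acf) > limit)                  # share of nominally "significant" lags
```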
Under the assumption that both X and Y are stationary and that they are independent of each other, the sample variance of r_k(X,Y) is approximately

$$ \mathrm{Var}\left[r_k(X,Y)\right] \approx \frac{1}{n}\sum_{j=-\infty}^{+\infty} r_j(X)\, r_j(Y) \qquad (2) $$

where r_j(X) and r_j(Y) are the autocorrelation estimates at lag j of X and Y, respectively.

Suppose for simplicity that X and Y are both first-order autoregressive processes with coefficients φ_X and φ_Y, respectively; then r_k(X,Y) is approximately normally distributed with zero mean and variance approximately equal to

$$ \mathrm{Var}\left[r_k(X,Y)\right] \approx \frac{1}{n}\,\frac{1+\phi_X\phi_Y}{1-\phi_X\phi_Y} \qquad (3) $$

From Eq. (3) it can be seen that when φ_X and φ_Y are close to 1, the ratio of the sampling variance of r_k(X,Y) to the nominal value of 1/n approaches infinity. As a consequence, using the ±1.96/√n rule in deciding the significance of the sample CCF may lead to many more false positives than the nominal 5% error rate, even when the time series are independent of each other.

The statistical significance of the CCF estimates is a typical representation of the so-called spurious correlations problem often encountered when analysing the relationship between time series variables (Yule 1926; Hamilton 1994). To avoid the risk of spurious correlations, a viable solution is to disentangle the linear association between X and Y from their autocorrelation. By examining Eq. (2), it can be seen that the approximate variance of r_k(X,Y) is 1/n if at least one of X and Y is an iid sequence. Such a condition can be achieved by transforming one of the variables into a new process that is close to a WN, a procedure known as pre-whitening (Cryer and Chan 2008). Since the purpose of pre-whitening is to filter the serial correlation, and it is not crucial to find exactly the best and most parsimonious model for X, pre-whitening can be achieved by means of autoregressive models of order p, AR(p):

$$ \tilde{X}_t = \left(1 - \phi_1 B - \phi_2 B^2 - \dots - \phi_p B^p\right) X_t \qquad (4) $$

where X̃_t is a WN process, φ_i are the AR coefficients and B is the backshift operator such that B^m X_t = X_{t-m}. In this work, the order p was automatically selected by minimising the Akaike information criterion (AIC; Akaike 1998). Once the order p had been identified, the AR coefficients were estimated via the Yule-Walker method (Lütkepohl 2005).

After transforming the X-variable, the same filter is used to transform the Y-variable into Ỹ_t, which does not need to be a WN. Since pre-whitening is a linear operation, any linear relationship between the original series will be preserved and can be retrieved by assessing the CCF between the transformed X̃_t and Ỹ_t variables (Cryer and Chan 2008). The time lag to be used for temporal alignment of raw EC time series can be retrieved in correspondence of the peak (in absolute terms) of the CCF between pre-whitened variables:

$$ \mathrm{TL}_{\mathrm{PW}} = \operatorname*{arg\,max}_{k}\, \left| r_k(\tilde{X},\tilde{Y}) \right| \qquad (5) $$

provided it is statistically significant at a pre-specified significance level. We will refer to such an approach for time lag detection of raw EC data using the name of the procedure, i.e. as pre-whitening (PW hereinafter).
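A minimal R sketch of the pre-whitening (PW) detection step under the description above: an AR(p) model is fitted to X with the order chosen by AIC and coefficients estimated by Yule-Walker, the same filter is applied to Y, and the time lag is taken at the absolute peak of the CCF between the filtered series. Function and argument names are illustrative, not those of the RFlux implementation.

```r
pw_timelag <- function(x, y, hz = 20, max_lag_s = 10, order_max = 30) {
  # AR(p) fit on x: order selected by AIC, coefficients via Yule-Walker
  fit <- ar.yw(x, aic = TRUE, order.max = order_max)
  p   <- fit$order

  # Apply the same AR filter to both series: z~_t = z_t - sum_i phi_i * z_(t-i)
  arfilt <- function(z) as.numeric(stats::filter(z - mean(z), c(1, -fit$ar),
                                                 method = "convolution", sides = 1))
  xt <- arfilt(x)[(p + 1):length(x)]       # approximately white noise
  yt <- arfilt(y)[(p + 1):length(y)]       # need not be white noise

  cc <- ccf(xt, yt, lag.max = max_lag_s * hz, plot = FALSE)
  i  <- which.max(abs(cc$acf))
  list(lag_steps   = cc$lag[i],
       lag_seconds = cc$lag[i] / hz,
       rho         = cc$acf[i],
       significant = abs(cc$acf[i]) > 1.96 / sqrt(length(xt)))
}
```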
Time lag detection via assessment of the cross-correlation function after pre-whitening with bootstrap

A time lag detection procedure based solely on the assessment of the CCF between pre-whitened variables (Sect. 2.1) is effective when the order of magnitude of the correlation is equal to 10^-1. This is true for moderate/high magnitude EC fluxes, because the signal dominates over the noise and the estimate of the CCF in correspondence of the true time lag will be far greater than the conventional significance limits. When the correlation is low, as is often the case with trace gases, things become more complicated because the peak of the CCF in correspondence of the true (unknown) time lag will not be so pronounced as to dominate over the other estimates of the CCF at different lags. For example, in the case of a sample size of 36000 paired observations and an order of magnitude of the correlation between variables lower than 10^-1, the peak of the CCF is close to the 5% significance limits (±1.96/√36000 ≈ ±0.01). Therefore, it can often happen that the peak of the CCF is detected in correspondence of an erroneous time lag.

If measurements from repeated sampling under the same conditions were available, it would be easier to distinguish between true and false peaks of the CCF, as the former would remain more stable than the latter, which instead would tend to cancel out after averaging. With this idea in mind, we mimicked a repeated sampling by means of a block bootstrapping (Härdle et al. 2003) with the twofold aim of (i) increasing the accuracy of time lag detection and (ii) obtaining a quantification of the associated uncertainty. The block bootstrap consists of breaking the series into roughly equal-length blocks of consecutive observations and resampling the blocks with replacement. Dividing the data into several blocks can preserve the original time series structure within a block. In particular, we built N_B = 99 bootstrap samples of paired X̃_t and Ỹ_t values of size N equal to the length of the time series, where each sample is formed by randomly choosing N/L blocks (with replacement) with L = 20 s, a temporal window large enough to include the true (unknown) time lag and preserve the correlation structure between variables in short time intervals.

The CCF was then estimated for each of the N_B block bootstrap samples and, to further eliminate the presence of erratic peaks due to noise, a smoothed version (S_{k,j}) through a centred moving average of width hz/2 + 1 time steps, where hz is the scanned acquisition frequency of raw data (i.e. 10 or 20 Hz), is computed. For each S_{k,j}, the jth estimated time lag (TL_j) is then detected in correspondence of the peak (in absolute terms):

$$ \mathrm{TL}_j = \operatorname*{arg\,max}_{k}\, \left| S_{k,j} \right| \qquad (6) $$

By analysing the distribution of the resulting N_B time lags, regardless of their significance level, the most frequently observed value is selected as the reference time lag:

$$ \mathrm{TL}_{\mathrm{PWB}} = \mathrm{Mode}(\mathrm{TL}_j) \qquad (7) $$

The 95% highest density interval (HDI), i.e. the shortest interval for which there is a 95% probability that the true (unknown) time lag would lie within the interval, provides a measure of the associated uncertainty. We will refer to such an approach for time lag detection of raw EC data as pre-whitening with bootstrap (PWB hereinafter).
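The PWB step can be sketched in R as below. The inputs are the pre-whitened series X̃ and Ỹ; the non-overlapping blocking scheme, the approximation of the HDI as the shortest interval containing 95% of the bootstrap time lags, and all names are illustrative assumptions based on the description above rather than the RFlux code.

```r
pwb_timelag <- function(xt, yt, hz = 20, max_lag_s = 10,
                        n_boot = 99, block_s = 20, level = 0.95) {
  n        <- length(xt)
  L        <- block_s * hz                          # block length (20 s of data)
  block_id <- rep(seq_len(ceiling(n / L)), each = L)[1:n]
  n_blocks <- max(block_id)
  width    <- hz / 2 + 1                            # CCF smoothing window

  tl <- replicate(n_boot, {
    # resample whole blocks of consecutive PAIRED observations, with replacement
    picked <- sample(seq_len(n_blocks), n_blocks, replace = TRUE)
    idx    <- unlist(lapply(picked, function(b) which(block_id == b)))
    cc     <- ccf(xt[idx], yt[idx], lag.max = max_lag_s * hz, plot = FALSE)
    sm     <- stats::filter(as.numeric(cc$acf), rep(1 / width, width), sides = 2)
    cc$lag[which.max(abs(sm))]                      # detected lag (in steps), Eq. (6)
  })

  # Reference time lag = modal value of the bootstrap distribution, Eq. (7)
  mode_tl <- as.numeric(names(which.max(table(tl))))

  # 95% HDI approximated by the shortest interval covering 95% of the draws
  s   <- sort(tl)
  m   <- ceiling(level * length(s))
  i0  <- which.min(s[m:length(s)] - s[seq_len(length(s) - m + 1)])
  hdi <- c(s[i0], s[i0 + m - 1])

  list(lag_steps = mode_tl, lag_seconds = mode_tl / hz,
       hdi_seconds = hdi / hz, uncertainty_s = diff(hdi) / hz)
}
```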
Optimal time lag selection for temporal alignment of long-term EC data

Irrespective of the chosen procedure, time lag detection for EC data needs to be performed between the high-frequency (e.g. 10-20 observations per s) time series of 30-60 min (averaging period), usually collected in raw EC data files of the same length. To cope with such large datasets (e.g. 17,520 raw data files per year), the availability of robust and automated procedures is essential for EC data processing pipelines.

In this context, uncertainty estimates are not only important for the accuracy evaluation of individual time lags of each scalar variable/raw data file, but also for defining a fully data-driven strategy for the temporal alignment of long-term EC datasets. While the actual time lag may vary over time for various reasons (see the introductory section for more details), it is expected to be fairly stable during the averaging time intervals. In this perspective, a low uncertainty indicates a low variability of the time lags detected for each of the N_B bootstrapped samples during the averaging time. Consequently, the reference time lag detected by the PWB procedure is more likely to be the actual one. In contrast, the highest level of uncertainty occurs when the correlation between variables is zero (i.e. for zero fluxes), since the detected time lag will be randomly chosen within the temporal window of lags, in each bootstrapped sample.

With these concepts in mind, and with the aim of identifying, for each averaging time, the optimal time lag (PWB OPT hereinafter) to be used for the temporal alignment of long-term EC data, we propose the following strategy articulated in three steps:

• S1. In the first step, time lags detected by PWB and characterised by a low uncertainty are considered reliable and flagged as optimal. In this work, uncertainty is defined as low when the range of the 95% HDI is less than 0.5 s;
• S2. In the second step, time lags with larger uncertainty (i.e. range of the 95% HDI > 0.5 s) are also considered reliable and flagged as optimal if they show no significant deviation from the optimal time lag identified in Step 1 in the closest preceding averaging period. In this work, a deviation is considered significant if greater than 0.5 s;
• S3. Finally, the remaining time lags not satisfying the above criteria are considered unreliable and replaced with the optimal time lag identified in the closest preceding averaging period, according to the S1 or S2 criteria.

In the above strategy, the only parameters to be set are the threshold values that define the uncertainty associated with the estimated time lag as low (S1) and the deviation between detected time lags (S2). As reported earlier, we recommend setting them equal to 0.5 s, a conservative threshold that can be considered as an upper limit of the variability the time lag can take over 30-60 min or between consecutive averaging periods.

Variable selection

Time lag detection for EC data is commonly computed by assessing the CCF between S, measured by a GA, and W, measured by an SA. As said in the introductory section, once the variables have been aligned, flux exchange rates can be derived from the covariance between S and W.
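Returning to the three-step strategy above (S1-S3), one possible reading of the decision rules is sketched in R below for a sequence of half-hourly PWB results; the exact bookkeeping (e.g. which previously accepted value S2 is compared against) is our assumption, while the 0.5 s thresholds follow the text.

```r
# tl  : detected time lags (s), one per averaging period, in temporal order
# unc : associated uncertainty, i.e. range of the 95% HDI (s)
select_optimal_timelags <- function(tl, unc, unc_thr = 0.5, dev_thr = 0.5) {
  n       <- length(tl)
  optimal <- tl
  rule    <- rep(NA_character_, n)
  last_s1 <- NA_real_          # most recent time lag accepted at step S1
  last_ok <- NA_real_          # most recent optimal time lag (S1 or S2)
  for (i in seq_len(n)) {
    if (!is.na(unc[i]) && unc[i] < unc_thr) {
      rule[i] <- "S1"; last_s1 <- tl[i]; last_ok <- tl[i]   # low uncertainty
    } else if (!is.na(last_s1) && abs(tl[i] - last_s1) <= dev_thr) {
      rule[i] <- "S2"; last_ok <- tl[i]                     # consistent with S1
    } else if (!is.na(last_ok)) {
      rule[i] <- "S3"; optimal[i] <- last_ok                # unreliable: replaced
    }
  }
  data.frame(detected = tl, uncertainty = unc, optimal = optimal, rule = rule)
}
```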
Sonic anemometers also provide an indirect measure of the air temperature, the so-called sonic temperature (T). Being sampled by the same instrument, W and T are perfectly aligned and sensible heat fluxes can be derived from the covariance between them, without resorting to any temporal alignment procedure. Since air parcel movement is governed by the laws of thermodynamics, any scalar S is correlated with T, and this correlation may be stronger than that existing between S and W. That could facilitate the time lag detection procedure, as the CCF between S and T would show a more pronounced peak compared to the CCF between S and W.

For the pre-whitening phase, since the aim is to ensure that at least one of the transformed series is free of autocorrelation, it does not matter which variable is selected as X. For this reason, the PWB procedure considers all (four) possible combinations of S, W and T for which at least one of the X- and Y-variables is the atmospheric scalar concentration. Among the four time lags identified, the one to which a higher correlation (in absolute value) corresponds is chosen as the reference, regardless of its statistical significance.

Despiking and detrending

Time lag detection procedures were performed on despiked and detrended raw data. Wind components were preliminarily subjected to anemometer tilt correction via the double rotation method (Rebmann et al. 2012). For despiking, the procedure described in Vitale (2021) was performed. Different detrending procedures were adopted. The CM procedure was performed on variables subjected to a linear trend removal, one of the most used methods in EC data processing (Sabbatini et al. 2018; Nemitz et al. 2018). For the PW and PWB procedures, any trend affecting raw data was removed by differencing, according to the results of the non-parametric variance ratio (VR) test described in Breitung (2002). Differencing is the sequential subtraction of consecutive values of a time series to obtain sequential changes in time. Besides highlighting other useful properties, differencing a variable eliminates any trends present in it, whether deterministic (e.g. linear) or stochastic (Box et al. 2015). For the purpose of time lag detection, both the X- and Y-variables were preliminarily differenced if the VR test provided evidence about the presence of a stochastic trend in one or in both of the variables subjected to the pre-whitening procedure.

Software implementation

The PWB procedure is implemented in the RFlux package (downloadable at https://github.com/icos-etc/RFlux), taking advantage of the capabilities of the boot package (Canty and Ripley 2021) that allow the processing of the N_B block-bootstrapped samples to be run in parallel mode.

EC data

In this work, raw data sampled from the following EC sites were used:

• CH-Cha: Chamau, Switzerland (CH), managed grassland located in a pre-alpine valley (47.2102 N, 8.4104 E, 393 m asl).

In addition to the wind components measured by the SA, scalar variables of CO2, CH4 and N2O atmospheric concentrations were sampled and considered in this work (see Table 1 for a description of the EC flux-station site setups). The EC system at UK-EBu was equipped with an inlet overflow system (Nemitz et al.
2018), by which a high concentration of N 2 O was injected at set time intervals to measure the time delay between injection and detection by the closed-path analyser.This time delay, which is a function of the physical properties of the setup (e.g.flow rate through the sampling line and instrument response time), was the largest component of the total time lag and used in the derivation thereof.Time lags estimated by means of such an experimental approach (EXP hereinafter) were used for comparison with our data-driven approach. Benchmark methods and evaluation criteria To aid in comparison and achieve a better interpretation of the results, the following procedures were considered: • CM-W: maximisation of the cross-covariance function between S and W; • CM-T: maximisation of the cross-covariance function between S and T; • CM-W CTR : maximisation of the cross-covariance function between S and W constrained within a narrow window of plausible time lags; • CM-T CTR : maximisation of the cross-covariance function between S and T constrained within a narrow window of plausible time lags; • PW-W: assessment of the CCF between S and W after pre-whitening (Sect.2.1); • PW-T: assessment of the CCF between S and T after pre-whitening (Sect.2.1); • PWB: assessment of the CCF after pre-whitening with bootstrap estimated for all (four) possible combinations of S, W and T, for which at least one of X-and Y-variable is the atmospheric scalar concentration (see Sects.2.2 and 2.4); • PWB OPT : optimal time lag derived from PWB results according to the strategy described in Sect.2.3. Except for CM-W CTR and CM-T CTR , each procedure was performed within a broad window of time lags (e.g.±10 s) with the aim to evaluate their sensitivity in absence of constraints.The definition of the (narrow) window of plausible time lags for the constrained CM approaches was based on a preliminary data analysis to statistically evaluate the most likely time lags and their ranges of variation.Additional constraints based on EC system characteristics were also considered (for example for closed-path GAs a delay of the scalar respect to W greater than zero).For the constrained procedures, time lags detected at the window boundaries were discarded and replaced with a default value.In this work the modal value computed for the entire sampling period was used as the default reference. The evaluation of each procedure was carried out by looking at the stability of the detected time lags over the sampling periods, separately for each trace gas and for each EC site.To achieve this goal, descriptive statistics (minimum, first and third quartile, maximum, modal value and interquartile range-IQR) of the distribution of time lags detected by each of the above procedures for CO 2 , CH 4 and N 2 O trace gases were compared.For N 2 O sampled at the UK-EBu, time lags derived from the EXP approach were additionally used for comparison. Results and discussion In the following sections, we first report an application of the proposed PWB procedure on a selection of raw EC data files with the aim to highlight its advantages compared to the existing approaches.Then a performance evaluation over long-term periods of the procedures listed in Sect.3.2 is reported and discussed in Sect.4.2.An overall evaluation of the impact of different time lag detection procedures on flux covariance data distribution is provided in Sect.4.3.All statistical analyses were entirely performed in the R programming language (R Core Team 2023, version 4.3.1). 
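Referring back to the evaluation criteria just described, the descriptive statistics used to compare the stability of the detected time lags can be computed with a short helper like the one below; the function name and the way the modal value is obtained (rounding to a fixed number of digits) are illustrative assumptions, not the exact analysis code.

```r
# Summary statistics of a vector of detected time lags (in seconds):
# minimum, first and third quartile, maximum, modal value and IQR.
summarise_timelags <- function(lags, digits = 2) {
  lags <- lags[is.finite(lags)]
  q    <- quantile(lags, c(0, 0.25, 0.75, 1))
  mode <- as.numeric(names(which.max(table(round(lags, digits)))))
  c(min = q[[1]], q1 = q[[2]], q3 = q[[3]], max = q[[4]],
    mode = mode, iqr = q[[3]] - q[[2]])
}
```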
Application results on a selection of raw EC data files

In this section, we report the advantages of the PWB procedure over the widely used approach in EC data processing pipelines based on covariance maximisation using W (CM-W) and over the one based on the assessment of the CCF between pre-whitened variables (without bootstrapping) via conventional confidence intervals (for illustrative purposes here we report the results obtained via the PW-W specification). To this end, illustrative examples of time lags detected by each procedure on a selection of raw EC data are shown in Fig. 2. Data refer to W and N2O atmospheric concentrations sampled at UK-EBu and depicted in Fig. 1. To better appreciate the pros and cons, the above mentioned procedures were performed without setting a proper temporal window of plausible time lags, i.e. by detecting the time lag over a broad search temporal window of ±10 s.

For moderate/high magnitude fluxes, the use of the PW-W and PWB procedures in place of CM-W does not lead to substantially different results. In such cases, the cross-covariance function exhibits a distinct peak and the time lag between variables can be easily derived from it. An example of such a situation is depicted in Fig. 2a. For this data sample, the time lag detected by each procedure resulted in close agreement and equal to 1.6 s, a sensible estimate given the physical properties of this EC system (see Sect. 4.2 for more details). The resulting flux estimate, after temporal alignment, is about

Fig. 2 Illustrative examples of time lag detection via covariance maximisation using vertical wind speed (CM-W, left panels), cross-correlation function after pre-whitening using vertical wind speed (PW-W, middle panels) and after pre-whitening with bootstrap (PWB, right panels). Raw EC data refer to vertical wind speed (W) and nitrous oxide (N2O) atmospheric concentrations depicted in Fig. 1. Numbers on the top of the x-axis indicate the time lag detected by each procedure. Horizontal dashed lines in the PW-W plots identify the 95% confidence interval. The shaded area in the PWB plots represents the uncertainty (range of the 95% HDI) associated with the detected time lag. Unlike the other methods, the PWB procedure provides consistent results in most examples (panels a-d). For the example in panel e, all methods detect a wrong time lag, but the uncertainty estimate of PWB allows the unreliable result to be identified and discarded

When the cross-covariance function does not exhibit a distinct peak, the detection of the actual time lag becomes more problematic, leading to significant biases in flux estimates. For example, by applying the temporal alignment via the CM-W procedure, flux estimates for the two examples in Fig. 1b and c would be −0.1 (Fig. 2b) and −0.2 nmol N2O m−2 s−1 (Fig. 2c). If, instead, we assume that the actual time lag is around 1.7 s, flux estimates would be of opposite sign. Although such differences are small in magnitude, they have important implications in flux data interpretation. By convention, a positive flux value indicates that the ecosystem is a N2O source to the atmosphere, while a negative flux value indicates a sink.

A strategy often advised to prevent bias in flux estimation is to narrow down the temporal window of plausible time lags over which to look for the peak of the cross-covariance function (Rebmann et al. 2012). For this strategy to be effective, however, the peak needs to be well defined. For example, considering the cases shown in Fig.
2, only in the conditions illustrated in panels a-c a narrower temporal window (e.g.0-5 s) would result in the correct identification of the actual time lag, and thus in an improvement for the two cases shown in panels b and c.In contrast, when the CCF does not exhibit a well-defined peak, as in the cases shown in panels d and e in Fig. 2, detecting the actual time lag remains challenging.Also, constrained CM procedures using a nominal time lag (default), although leading to an improvement in results as shown in the next section, may be ineffective when the shape of the crosscovariance function is such that the absolute maximum does not fall on the temporal window boundaries. The PW-W approach reduces the risk of spurious correlations through prewhitening, thus facilitating the time lag detection compared to the CM method.An example is depicted in Fig. 2b, where the actual time lag is detected in correspondence with a statistically significant peak without the need to narrow down the temporal window of plausible time lags.For low magnitude fluxes (Fig. 2c-e), however, the detection of the actual time lag by PW-W remains challenging and the evaluation of peaks by using conventional confidence intervals introduces considerable uncertainty.In fact, for low magnitude fluxes, the peak of the CCF in correspondence with the actual time lag will not be so pronounced as to dominate over the other estimates.In such cases, there is a real risk of detecting erroneous time lags and, consequently, introducing bias into flux estimates. The advantage that the PWB offers is to better discern well-defined and stable time lags from more uncertain ones.This is done by assessing the uncertainty associated with the detected time lag and quantified, after block-bootstrapping, by means of the 95% HDI.Among these illustrative examples and following the strategy outlined in Sect.2.3, it turns out that four detected time lags are considered as optimal because the range of 95% HDI is less than 0.5 s (Fig. 2a, b) or because they do not deviate more than 0.5 s from those identified as optimal in preceding averaging periods (Fig. 2c, d), while only one is considered unreliable (Fig. 2e) because too uncertain and anomalous.In cases like this, the closest (in time) PWB optimal time lag detected is recommended. Evaluation over long-term periods In this section, we report a comparison of different time lag detection procedures using the long-term EC data described in Sect.3.1.Due to space limitations, we report a graphical representation of time lags detected for only three study cases: CO 2 sampled at FI-Kvr (Fig. 3), CH 4 sampled at CH-Cha (Fig. 4) and N 2 O sampled at UK-EBu (Fig. 5).The full set of results is available in the supplementary material (SM).Descriptive statistics (minimum, first and third quartile, maximum, modal value and interquartile range-IQR) of the distribution of time lags detected by different procedures for CO 2 , CH 4 and N 2 O trace gases are summarised in Tables 1, 2 and 3 of the SM, respectively. Although the actual time lag is not expected to be constant over time for the reasons explained in the introductory section, overall, time lags detected by the proposed PWB OPT approach (see panel i of Figs. 
3, 4 and 5) were more stable over time than those identified by CM-and PW-based procedures.Considering the IQR as a measure of the spread of detected time lags distribution, the one estimated for PWB OPT resulted in the lowest IQR for most the cases considered in this work, whereas those estimated for CM-W and PW-W had the largest spread. Negligible differences in terms of IQR were found for CO 2 at CH-Cha and DE-GsB grassland sites characterised by fluxes of moderate/high magnitude (Table 1 of SM).For CH 4 (Table 2 of SM) and N 2 O (Table 3 of SM), the IQR of PWB OPT was comparable or lower than those estimated for CM-based procedures, even when constrained within a narrow window of plausible time lags.This means that setting a narrower window when performing CM-based procedures, although leading to a marked improvement in results, does not ensure that the detected time lag converges to the actual one.In fact, there is a substantial portion of cases in which most of the time lags detected by CM-based procedures were found to diverge at the boundaries of the pre-fixed search window of plausible time lags, regardless of its width (see panels a-d of Figs. 3, 4 and 5).Such a setting is not strictly required for the proposed PWB strategy, which can instead be performed by setting a wide search temporal window of time lags, without loss in effectiveness. The assessment of the cross-covariance function using T in place of W facilitates the time lag detection via CM-based procedures, in particular for CH 4 and N 2 O trace gases where a significant reduction in terms of IQR was found.Despite such an improvement, the use of T does not entirely prevent time lags from being identified at the boundaries of the search window (see panel i of Figs. 3, 4 and 5 and Tables 1, 2 and 3 of SM). Such a risk is drastically reduced when the assessment of the CCF is performed with variables preliminarily subjected to the pre-whitening procedure (see panels e and f of Figs. 3, 4 and 5).For PW-based procedures, in fact, an improvement, in terms of stability of the results, was found when using T in place of W, as confirmed by the reduction of the IQR.However, when variables are characterised by a low order of correlation, as in the case of low magnitude fluxes, the assessment of the statistical significance of the CCF after pre-whitening using conventional confidence intervals was not suitable for detecting reliable time lags.As shown in panels e and f of Figs. 3, 4 and 5 and Figs.1-5 of the SM, most of the time lags, even those far from the modal value, were detected as statistically significant. The application of the strategy outlined in Sect.2.3 to achieve the optimal time lag to be used for temporal alignment, leads to an improvement of the PWB results in terms of stability.In fact, most time lags detected by PWB with large departures from the IQR were characterized by high uncertainty and, then, considered unreliable (panel g of Figs. 
3, 4 and 5). The percentage of time lags flagged as optimal during S1 and S2 of the strategy outlined in Sect. 2.3 is reported in Table 2. Among the three gases examined, time lags detected for CO2 were the least uncertain. In particular, 89%, 90% and 97% of the time lags detected by PWB at FI-Kvr, CH-Cha and DE-GsB, respectively, were considered reliable and flagged as optimal. For most of them (61%, 82% and 92% at FI-Kvr, CH-Cha and DE-GsB, respectively) the 95% HDI range was less than 0.5 s. Referring to CH4 and N2O, characterised by a higher occurrence of low magnitude fluxes, the percentages of reliable time lags detected by PWB were lower than those achieved for CO2, varying around 70%, except for CH4 at FI-Kvr (55%).

Fig. 4 Comparison of time lags detected by several procedures for methane (CH4) sampled at CH-Cha. a-d Covariance maximisation using vertical wind speed (CM-W) and sonic temperature (CM-T) within a broad (±10 s) and a constrained window (CTR, 0-2.5 s) of plausible time lags. Red points in c and d indicate time lags detected at the window boundaries. Horizontal red lines in c and d denote the modal value estimated without considering time lags detected at the window boundaries. e-f Assessment of the CCF after pre-whitening using vertical wind speed (PW-W) and sonic temperature (PW-T). Red triangles in PW plots indicate time lags detected in correspondence with a peak statistically non-significant at the 0.01 level. g Assessment of the CCF after pre-whitening with bootstrap (PWB). Plus signs in the PWB plot indicate time lags with high uncertainty (range of the 95% HDI > 0.5 s). h Optimal time lags (PWB OPT) according to the strategy described in Sect. 2.3. i Violin plot with included boxplot of the distribution of detected time lags by each procedure; red lines indicate the modal value, grey areas for CM-W CTR and CM-T CTR indicate the predefined time intervals where the plausible time lag must not be searched
Focusing on N2O sampled at the UK-EBu site, 72% of the time lags detected by PWB were flagged as optimal and considered reliable, having an associated uncertainty of less than 0.5 s (37%) or deviating from time lags detected in preceding averaging periods by no more than 0.5 s (35%). The remaining 28% were considered unreliable and occurred in cases of close-to-zero fluxes.

The comparison with the experimental approach by Nemitz et al. (2018) shows that most of the time lags achieved by the PWB OPT procedure do not deviate by more than ±0.25 s from direct measurements. Moreover, the actual time lags detected by both approaches seem to follow a common time trend (Fig. 6), the causes of which can have multiple sources not always manageable during field measurements, as said in the introductory section.

Fig. 6 Comparison of time lags detected by the PWB OPT approach (cyan points) and through the experimental approach (EXP, solid black points) by Nemitz et al. (2018) for nitrous oxide (N2O) sampled at UK-EBu. Panel a shows the temporal dynamics of detected time lags; panel b shows the violin plot with included boxplot of differences between the two approaches based on 283 paired data values

Most (80%) of the time lags detected by PWB OPT vary between +1.45 and +2.20 s (1st and 9th deciles, respectively), in strict agreement with the range (from +1.50 to +2.30 s) of the experimental measurements (see Table 3 of the SM). Such a narrow range of variability was not achieved by any of the other CM- and PW-based procedures.

Impact on flux estimates

The impact on EC fluxes can be appreciated by comparing the flux density distributions computed after temporal alignment of W and the S scalar of interest using different time lag detection procedures. Figure 7 depicts a graphical representation of the density distributions of flux estimates computed after temporal alignment of time series using time lags detected by the CM-W and CM-W CTR approaches and by the proposed PWB OPT procedure.

Fig. 7 Comparison of the flux density distributions computed after temporal alignment of raw, high-frequency EC data using time lags detected by CM-W, CM-W CTR and PWB OPT procedures

As shown, the CM-W procedure leads to a significant loss of mass of the density distribution around zero flux values and is representative of the so-called mirroring effect. There are no eco-physiological reasons that could explain such behaviour, since zero flux values fall within the physical range of possible values and they are not to be understood as a rare event, in particular during the monitoring of greenhouse trace gases having flux exchange rates small in magnitude. Running into this error can have a negative impact on subsequent analyses and should be avoided. For example, errors can propagate during the gap-filling stage and lead to an overestimation of the random uncertainty for procedures based on the use of half-hourly flux data (Richardson et al. 2008; Lasslop et al. 2008; Vitale et al. 2019).
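The density comparison discussed above can be reproduced, in spirit, with a few lines of R; the near-zero threshold, the object names and the plotting choices are illustrative and not part of the original analysis.

```r
# Compare flux density distributions obtained after alignment with two
# different time lag detection procedures and quantify the mass near zero.
mirroring_check <- function(flux_cm, flux_pwb, eps = 0.05) {
  near_zero <- c(CM_W    = mean(abs(flux_cm)  < eps, na.rm = TRUE),
                 PWB_OPT = mean(abs(flux_pwb) < eps, na.rm = TRUE))
  d_cm  <- density(flux_cm,  na.rm = TRUE)
  d_pwb <- density(flux_pwb, na.rm = TRUE)
  plot(d_cm, main = "Flux density after temporal alignment", xlab = "flux")
  lines(d_pwb, lty = 2)
  legend("topright", legend = names(near_zero), lty = c(1, 2))
  near_zero                                # share of half-hours with |flux| < eps
}
```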
The mirroring effect is only mitigated when the CM is performed with constraints, whereas it completely disappears when fluxes are computed after temporal alignment via the PWB OPT procedure.The limitations of the CM-W CTR in solving the mirroring effect may depend on several interrelated factors such as (i) the number of occurrences of low magnitude fluxes characterising the ecosystem under investigation; (ii) the erratic behaviour of the cross-covariance function even within the prefixed window of plausible time lags which makes the CM ineffective; (iii) the inadequacy of a constant value of the default time lag in presence of drifts during the sampling period, or more in general the selection of a non-representative period for the estimation of the modal value.Such limitations are not of relevance for the PWB procedure, as it is completely data driven and robust to the presence of spurious peaks of the CCF. Conclusions and final remarks Greenhouse gases monitoring is crucial to combating climate changes.Beyond new instrumentations with increased accuracy and precision, the development and the application of advanced statistical tools in data processing pipelines can facilitate the analysis of such complex phenomena. In this work, a fully data-driven procedure for the detection of time lag for raw, high-frequency EC data was presented.The proposed PWB approach, based on the assessment of the cross-correlation function after pre-whitening with bootstrap, is designed to overcome the limitations of existing procedures when the correlation between variables is of low order of magnitude (i.e. for low magnitude EC fluxes). In particular, (i) pre-whitening avoids the risk that time lag is detected in correspondence with spurious peaks of the cross-covariance function as often occurs with the widely used procedure in EC data processing pipelines based on covariance-maximisation, whereas (ii) block-bootstrap allows an estimate of the associated uncertainty useful to reduce the false positives error rate, as occurs with approaches based on the assessment of the CCF after pre-whitening with standard criteria. The application on real, observed EC data showed that the performance of the proposed PWB method is really promising, in particular for the time lag detection of CH 4 and N 2 O trace gases that are currently under expansion in the global network of eddy covariance stations (see Delwiche et al. 2021), and which are characterised by complex and irregular patterns, like sudden emission peaks alternated to close-tozero fluxes. The results achieved by the methods comparison study presented in this work also suggest insights for further improvements of the widely used CM method: (i) we found a better stability in terms of IQR when time lags are detected using T in place of W, in particular for low-magnitude fluxes; (ii) we also found that a default time lag computed as the modal value of the distribution of time lags and based on a preliminary data analysis leads, on average, to results consistent with the PWB method.However, it must be considered that the use of a default value does not ensure to entirely solve the mirroring effect and that the CM method is sensitive to the choice of the predefined search window and to possible drifts of the real time lag during long-period samplings, e.g.due to drifts in the clocks or changes in the tube air flow rate. 
Errors in time lag detection could introduce significant biases in flux estimates, with implications for the understanding of ecosystems in terms of their full GHG balance. Similar considerations hold true for all GHG fluxes measured in ecosystems characterised by low magnitude exchange rates, as in the case of CO2 measured on water bodies, or during equilibrium phases between photosynthesis and total respiration processes.

We expect that the proposed PWB procedure will become a standard for the centralised data processing pipelines of research infrastructures (e.g. ICOS-RI, Heiskanen et al. 2022), where the use of fully reproducible and objective procedures constitutes an essential prerequisite to move forward in the standardisation and harmonisation efforts ongoing in the context of the global FLUXNET initiatives (Papale 2020). This will be particularly important for non-CO2 gases, characterised by generally lower magnitude fluxes than CO2 and by periods with fluxes very close to zero. Although modern multi-species GAs offer the possibility to estimate the time lag by means of CM-based procedures for fluxes with high SNR (i.e. CO2) and then use it to temporally align scalars representative of low-magnitude inert gas fluxes (e.g. CH4 or N2O), the proposed PWB constitutes, to the best of our knowledge, the most effective solution currently available.

Fig. 1 Illustrative examples of the cross-covariance function (right panels) between the vertical wind velocity component (W, left panels) and nitrous oxide (N2O, middle panels) atmospheric concentrations sampled at 20 Hz scanned frequency (i.e. 20 obs per s) and collected in 60 min raw data file length. Numbers on the top of the x-axis indicate the time lag detected by the covariance maximisation (CM) procedure in correspondence with the peak (in absolute terms) of the cross-covariance function

Fig. 3 Comparison of time lags detected by several procedures for carbon dioxide (CO2) sampled at FI-Kvr. a-d Covariance maximisation using vertical wind speed (CM-W) and sonic temperature (CM-T) within a broad (±10 s) and a constrained window (CTR, 0-2 s) of plausible time lags. Red points in c and d indicate time lags detected at the window boundaries. Horizontal red lines in c and d denote the modal value estimated without considering time lags detected at the window boundaries. e-f Assessment of the CCF after pre-whitening using vertical wind speed (PW-W) and sonic temperature (PW-T). Red triangles in PW plots indicate time lags detected in correspondence with a peak statistically non-significant at the 0.01 level. g Assessment of the CCF after pre-whitening with bootstrap (PWB). Plus signs in the PWB plot indicate time lags with high uncertainty (range of the 95% HDI > 0.5 s). h Optimal time lags (PWB OPT) according to the strategy described in Sect. 2.3. i Violin plot with included boxplot of the distribution of detected time lags by each procedure; red lines indicate the modal value, grey areas for CM-W CTR and CM-T CTR indicate the predefined time intervals where the plausible time lag must not be searched
Fig. 5 Comparison of time lags detected by several procedures for nitrous oxide (N2O) sampled at UK-EBu. a-d Covariance maximisation using vertical wind speed (CM-W) and sonic temperature (CM-T) within a broad (±10 s) and a constrained window (NW, 0-5 s) of plausible time lags. Red points in c and d indicate time lags detected at the window boundaries. Horizontal red lines in c and d denote the modal value estimated without considering time lags detected at the window boundaries. e-f Assessment of the CCF after pre-whitening using vertical wind speed (PW-W) and sonic temperature (PW-T). Red triangles in PW plots indicate time lags detected in correspondence with a peak statistically non-significant at the 0.01 level. g Assessment of the CCF after pre-whitening with bootstrap (PWB). Plus signs in the PWB plot indicate time lags with high uncertainty (range of the 95% HDI > 0.5 s). h Optimal time lags (PWB OPT) according to the strategy described in Sect. 2.3. i Violin plot with included boxplot of the distribution of detected time lags by each procedure; red lines indicate the modal value, grey areas for CM-W CTR and CM-T CTR indicate the predefined time intervals where the plausible time lag must not be searched

Supplementary material figure captions (panel layout as in Figs. 3-5 of the main text): Fig. 1 of the SM, comparison of time lags detected by several procedures for carbon dioxide (CO2) sampled at CH-Cha (constrained window CTR, ±1 s); Fig. 2 of the SM, nitrous oxide (N2O) sampled at CH-Cha (CTR, 0-2.5 s); Fig. 4 of the SM, methane (CH4) sampled at DE-GsB (CTR, 0-2 s)

Table 2 Percentage of time lags flagged as optimal during S1 and S2 of the strategy described in Sect. 2.3

Supplementary material table captions: Table 1 of the SM, minimum, first quartile (Q1), third quartile (Q3), maximum, modal value and interquartile range (IQR) of the distribution of time lags detected by different procedures for CO2; Table 2 of the SM, the same statistics for CH4; Table 3 of the SM, the same statistics for N2O (see Section 3 of the main paper for the description of the acronyms)
Development of a global 30-m impervious surface map using multi-source and multi-temporal remote sensing datasets with the Google Earth Engine platform

The amount of impervious surface is an important indicator in the monitoring of the intensity of human activity and environmental change. The use of remote sensing techniques is the only means of accurately carrying out global mapping of impervious surfaces covering large areas. Optical imagery can capture surface reflectance characteristics, while synthetic aperture radar (SAR) images can be used to provide information on the structure and dielectric properties of surface materials. In addition, night-time light (NTL) imagery can detect the intensity of human activity and thus provide important a priori probabilities of the occurrence of impervious surfaces. In this study, we aimed to generate an accurate global impervious surface map at a resolution of 30-m for 2015 by combining Landsat-8 OLI optical images, Sentinel-1 SAR images and VIIRS NTL images based on the Google Earth Engine (GEE) platform. First, the global impervious and non-impervious training samples were automatically derived by combining the GlobeLand30 land-cover product with VIIRS NTL and MODIS enhanced vegetation index (EVI) imagery. Then, based on global training samples and multi-source and multi-temporal imagery, a random forest classifier was trained and used to generate corresponding impervious surface maps for each 5°×5° cell of a geographical grid. Finally, a global impervious surface map, produced by mosaicking numerous 5°×5° regional maps, was validated by interpretation samples and then compared with five existing impervious products (GlobeLand30, FROM_GLC, NUACI, HBASE and GHSL). The results indicated that the global impervious surface map produced using the proposed multi-source, multi-temporal random forest classification (MSMT_RF) method was the most accurate of the maps, having an overall accuracy of 95.1% and kappa coefficient of 0.898 as against 85.6% and 0.695 for NUACI, 89.6% and 0.780 for FROM_GLC, 90.3% and 0.794 for GHSL, 88.4% and 0.753 for GlobeLand30, and 88.0% and 0.745 for HBASE using all 15 regional validation data. Therefore, it is concluded that a global 30-m impervious surface map can accurately and efficiently be generated by the proposed MSMT_RF method based on the GEE platform. The global impervious surface map generated … (P.I., producer's accuracy of impervious surfaces; U.I., user's accuracy of impervious surfaces; P.N., producer's accuracy of non-impervious surfaces; U.N., user's accuracy of non-impervious surfaces; O.A., overall accuracy)

Introduction

Impervious surfaces are usually covered by anthropogenic materials which prevent water from penetrating into the soil (Weng, 2012) and are primarily composed of asphalt, sand and stone, concrete, bricks, glass, etc. (Chen et al., 2015). Due to the rapid growth in the area covered by impervious surfaces, a series of climate, environmental and social problems are emerging, including the urban heat island, traffic congestion, waterlogging and the deterioration of the urban environment (Fu and Weng, 2016; Gao et al., 2012; Weng, 2001; Zhou et al., 2017; Zhuo et al., 2018). Furthermore, as an important indicator in the monitoring of the intensity of human activity and of ecological and environmental changes, the mapping of impervious surfaces is of great interest in many disciplines (Xie and Weng, 2017). Accurate large-area impervious surface mapping is, therefore, urgent and necessary.
Due to the frequent and large-area coverage that it provides, increasing attention has been paid to the use of remote sensing technology for impervious surface mapping. In recent years, a lot of effort has gone into mapping impervious surfaces at different spatial resolutions (Elvidge et al., 2007; Schneider et al., 2010; Schneider et al., 2009). For example, Schneider et al. (2010) used multi-temporal MODIS data to produce a 500-m global urban land map, achieving an overall accuracy of 93% and a kappa coefficient of 0.65. Elvidge et al. (2007) combined Defense Meteorological Satellite Program (DMSP) Operational Linescan System (OLS) and LandScan population count data to produce a 1-km global impervious surface area map. However, because of the complex characteristics of impervious landscapes and the inherent scale of human activity, coarse-resolution global impervious surface maps are not suitable for many applications and policy makers at local or regional scales; for example, urban-rural pattern planning and road network monitoring usually require fine-spatial-resolution impervious surface products (Gao et al., 2012).

Recently, with the advent of free medium-resolution satellite data (e.g. Landsat and Sentinel-2), combined with rapidly increasing data-storage and computation capabilities, many regional or global fine-resolution impervious surface maps have been produced using Landsat and Sentinel-2 images (Chen et al., 2015; Gao et al., 2012; Goldblatt et al., 2018; Gong et al., 2019; Gong et al., 2013; Homer et al., 2015; Liu et al., 2018; Sun et al., 2017). Specifically, the National Land Cover Dataset (NLCD) produced the first 30-m map of the United States that included impervious surface as three separate land-cover types (developed low, developed medium and developed high intensity) using Landsat imagery, DMSP OLS data and USGS National Elevation Dataset (NED) digital elevation data, achieving user's accuracies of 0.48-0.66 (Homer et al., 2004). Similarly, FROM_GLC produced a global 30-m impervious surface map as an independent land-cover type with a user's accuracy of 0.307 (Gong et al., 2013), and GlobeLand30 combined pixel-based classification, segmentation and manual editing based on high-resolution imagery to develop a 30-m impervious surface map as an independent layer with a user's accuracy of 0.867 (Chen et al., 2015). However, as sparse training samples of impervious surfaces cannot capture all relevant spectral heterogeneity when producing these land-cover products, the impervious surface layers usually suffered from low accuracy, except for GlobeLand30 (which includes manual interpretation). Therefore, a few studies proposed independently producing impervious surface products. For example, Liu et al. (2018) proposed the Normalized Urban Areas Composite Index (NUACI) method to produce a global 30-m impervious surface map and achieved an overall accuracy of 0.81-0.84 and kappa values of 0.43-0.50. However, the NUACI product had a relatively poor performance in terms of producer's accuracy (0.50-0.60) and user's accuracy (0.49-0.61). Brown de Colstoun et al. (2017) combined object-based segmentation, random forest classification and post-processing to develop the Global 30-m Man-made Impervious Surface (GMIS) and Human Built-up and Settlement Extent (HBASE) dataset for 2010, which achieved a kappa coefficient of 0.91 using scene-level cross validation in Europe (Wang et al., 2017b). Pesaresi et al. (2016) used multi-temporal Landsat imagery and a symbolic machine learning method to produce the Global 30-m Human Settlement Layer (GHSL) for 2014, and achieved a total accuracy of 96.28% and a kappa coefficient of 0.3233 based on Land Use/Cover Area frame Survey (LUCAS) reference data. Therefore, an accurate impervious surface map at fine spatial resolution, produced by an efficient mapping method, is still urgently needed.

There are three critical challenges for global impervious surface mapping at medium spatial resolution: finding an adequate image identification method, an image selection scheme and an image processing platform. First, although a wide range of methods have already been presented for impervious surface mapping, it is still hard to generate an operational and accurate global impervious surface map at 30-m resolution. The methods used so far can be divided into three main groups: spectral mixture analysis methods (Ridd, 1995; Wetherley et al., 2017; Wu, 2004; Wu and Murray, 2003; Yang and He, 2017; Zhuo et al., 2018), spectral index-based methods (Deng and Wu, 2012; Liu et al., 2018; Xu, 2010), and image classification methods (Chen et al., 2015; Okujeni et al., 2013; Zhang et al., 2018a; Zhang et al., 2012; Zhang and Weng, 2016). The spectral mixture analysis methods have great advantages in terms of the repeatable and accurate extraction of quantitative sub-pixel information (Weng, 2012). However, these methods can produce underestimates in areas with high-density impervious surfaces and overestimates in areas with low-density impervious surfaces, and may have great difficulty in identifying one suitable endmember to represent all types of impervious surfaces (Weng, 2012). The spectral index-based methods have been widely applied in regional impervious surface mapping because of their simplicity, flexibility and convenience (Sun et al., 2019b; Xu, 2010). However, they have great difficulty in finding the optimal threshold for separating impervious pixels from bare-area and vegetation pixels. The image classification methods can efficiently combine remote sensing datasets from multiple sources (Zhang et al., 2018a; Zhou et al., 2017) and have great capabilities for spectrally complex impervious surface mapping (Okujeni et al., 2013), which has been an area of great interest in recent years (Goldblatt et al., 2018; Zhang et al., 2018b). However, it is very hard to select training samples for large-area impervious surface mapping using these methods (Weng, 2012).

Second, although individual optical data sets have been successfully employed for regional or global impervious surface mapping, accurate estimation of impervious surfaces remains challenging due to the diversity of urban land-cover types, which leads to difficulties in separating different land-cover types with similar spectral signatures (Zhang et al., 2014b). The incorporation of multi-source and multi-temporal remote sensing imagery has been demonstrated to improve impervious surface mapping accuracy (Weng, 2012; Zhu et al., 2012). For example, optical imagery is only able to capture surface reflectance characteristics, while synthetic aperture radar (SAR) data can provide details of the structure and dielectric properties of surface materials (Sun et al., 2019b; Zhang et al., 2014b; Zhu et al., 2012). Zhang et al. (2016) found that the addition of dual-polarimetric SAR features resulted in an accuracy improvement of 3.5% compared with using optical SPOT-5 imagery only, and also that dual-polarimetric SAR data performed better than single-polarimetric SAR data for impervious mapping. Similarly, Shao et al. (2016) explained that the combination of GaoFen-1 optical imagery with Sentinel-1 SAR imagery efficiently reduced the confusion between impervious surfaces and water and bare areas. Furthermore, Zhu et al. (2012) found that the inclusion of multi-seasonal imagery increased the mapping accuracy from 77.96% to 86.86% and that the further addition of texture variables increased the mapping accuracy to 92.69% for urban and peri-urban land-cover classification. The reasons for the accuracy increase were that the texture imagery could capture the local spatial structure and the variability of land-cover categories and that the temporal information could describe the phenological variability. Schug et al. (2018) also used multi-seasonal Landsat imagery to successfully map impervious extent and land-cover fractions. In addition, as an important data source for the measurement of socioeconomic activities, DMSP-OLS night-time light (NTL) imagery has been widely used in many impervious-related applications (Li and Zhou, 2017). For example, Elvidge et al. (2007) produced a global 1-km impervious map using DMSP-OLS NTL imagery, and Goldblatt et al. (2018) combined DMSP-OLS NTL and Landsat-8 imagery to accurately produce 30-m impervious surface maps at a national scale. Therefore, the integration of multi-source and multi-temporal datasets is necessary and crucial for the production of accurate global impervious surface maps.

Lastly, the mapping of impervious surfaces at the global scale usually requires huge amounts of computation and large storage capabilities. Fortunately, the Google Earth Engine (GEE) cloud-based platform consists of a multi-petabyte analysis-ready data catalog co-located with a high-performance, intrinsically parallel computation service (Gorelick et al., 2017), meaning that the requirements for large-area image collection and very large computational resources can easily be met by using the free-access GEE cloud-computation platform. For example, Liu et al. (2018) produced multi-temporal global impervious surface maps and Pekel et al. (2016) developed global high-resolution surface water maps and analyzed long-term changes using the GEE cloud-computation platform. Recently, Massey et al. (2018) produced a continental-scale cropland extent map for North America at 30-m spatial resolution for the nominal year 2010 based on the GEE platform. It can be seen, therefore, that GEE is an efficient and useful computation platform for regional and global applications.

So far, due to the limitations of data collection and computation capability, impervious surface mapping has mainly focused on using a single type of remote sensing data or on case studies made at the regional scale. Although the GEE platform provides multi-petabyte analysis-ready data and efficient data-processing capabilities, an efficient method that can fully integrate these multi-source and multi-temporal datasets and produce accurate impervious surface maps at a spatial resolution of 30 m for the whole world is still lacking.
The aims of this study, therefore, were (1) to produce a global 30-m impervious surface map from multi-source and multi-temporal remote sensing datasets, including Landsat-8 OLI, Sentinel-1 SAR, VIIRS NTL and MODIS imagery, using the GEE platform; and (2) to investigate the accuracy of the global 30-m impervious surface mapping using validation samples and cross-comparison with five existing impervious surface products (GlobeLand30 (Chen et al., 2015), FROM_GLC (Gong et al., 2013), NUACI (Liu et al., 2018), GHSL (Florczyk et al., 2019) and HBASE (Wang et al., 2017a)). The results indicate that the global impervious surface map produced by the proposed method is accurate and is suitable for regional or global impervious surface applications.

Remote sensing datasets

In this study, three kinds of data source, namely Landsat-8 optical imagery, Sentinel-1 SAR data and SRTM/ASTER DEM topographical variables, were collected for the mapping of impervious surfaces across the world on the GEE platform. Furthermore, the combination of VIIRS NTL imagery and MODIS EVI products was used to derive the set of global impervious surface and non-impervious surface training data.

All available Landsat-8 surface reflectance (SR) imagery from 2015 and 2016 (USGS, 2015), archived on the GEE platform, was used in this study for the nominal year 2015 because of the frequent cloud contamination in tropical areas. All the SR images were radiometrically corrected by the Landsat Surface Reflectance Code (LaSRC) atmospheric correction method (Hu et al., 2014; Vermote et al., 2016), and bad pixels, including clouds, cloud shadow and saturated pixels, were identified by the CFMask algorithm (Guide, 2018).

The Sentinel-1 satellite provides C-band SAR imagery at a variety of polarizations and resolutions (Berger et al., 2012; ESA, 2016; Torres et al., 2012). Due to the high dielectric properties of building materials, the unique geometry of man-made features and the special radar echo properties of artificial structures, impervious surfaces usually have stronger backscattered signals than other land-cover types (such as barren land and cropland) in SAR imagery. In this study, all available Sentinel-1 imagery from 2015 and 2016, which had already been calibrated, ortho-corrected and archived on the GEE platform, was also used for the nominal year 2015. In addition, each Sentinel-1 image on the GEE had been pre-processed with the Sentinel-1 Toolbox, including thermal noise removal, radiometric calibration and terrain correction (https://developers.google.com/earth-engine/sentinel1). Also, as HH- and HV-polarized Sentinel-1 SAR imagery does not cover the whole world (Sun et al., 2019a), a combination of dual-band cross-polarized (VV and VH) Interferometric Wide Swath (IW) mode imagery in both ascending and descending orbits was used. The spatial resolution of this imagery is 10 m and the repeat cycle of the polar-orbiting two-satellite constellation is 6 days.
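As a minimal sketch of this data-collection step, assuming the Earth Engine Python API (the paper's classification code was written in JavaScript, but the same calls exist in Python), the two collections could be assembled as follows. The collection IDs and the pixel_qa cloud and cloud-shadow bit positions follow the public GEE catalog for Landsat-8 Collection 1 SR and Sentinel-1 GRD; the example region is an arbitrary 5°×5° grid cell, not one used by the authors.

```python
import ee

ee.Initialize()

region = ee.Geometry.Rectangle([110, 30, 115, 35])  # one 5x5 degree grid cell

def mask_l8_sr(image):
    # CFMask flags in the Collection 1 pixel_qa band:
    # bit 3 = cloud shadow, bit 5 = cloud
    qa = image.select('pixel_qa')
    clear = qa.bitwiseAnd(1 << 3).eq(0).And(qa.bitwiseAnd(1 << 5).eq(0))
    return image.updateMask(clear)

# All Landsat-8 SR scenes from 2015-2016 for the nominal year 2015
l8 = (ee.ImageCollection('LANDSAT/LC08/C01/T1_SR')
      .filterDate('2015-01-01', '2017-01-01')
      .filterBounds(region)
      .map(mask_l8_sr))

# Dual-polarized (VV/VH) IW-mode Sentinel-1 imagery, both orbit directions
s1 = (ee.ImageCollection('COPERNICUS/S1_GRD')
      .filterDate('2015-01-01', '2017-01-01')
      .filterBounds(region)
      .filter(ee.Filter.eq('instrumentMode', 'IW'))
      .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VV'))
      .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VH')))
```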
The Shuttle Radar Topography Mission digital elevation model (SRTM DEM), provided by the NASA JPL at a resolution of 1 arc-second (approximately 30 m) and covering the area between 60° N and 56° S (Farr et al., 2007), was a useful auxiliary dataset for impervious surface mapping over mountainous areas, because impervious surfaces are mainly located in flat areas while Sentinel-1 SAR data usually show high backscatter similar to that of impervious surfaces in mountainous areas (Ban et al., 2015). This dataset has undergone a void-filling process using other open-source data (ASTER GDEM2, GMTED2010 and NED) in the GEE platform. For the high-latitude areas that lack SRTM data, the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) Global Digital Elevation Model Version 2 (GDEM V2) (Tachikawa et al., 2011) was used instead.

The VIIRS NTL imagery, collected by NASA/NOAA's Suomi National Polar-orbiting Partnership satellite (https://maps.ngdc.noaa.gov/viewers/VIIRS_DNB_nighttime_imagery/index.html), has the unique ability to record emitted visible and near-infrared (VNIR) radiation at night with a spatial resolution of 15 arc-seconds (equivalent to 0.5 km at the equator) (Elvidge et al., 2017). Compared to the DMSP-OLS NTL data, the VIIRS NTL data provide higher spatial resolution and finer radiometric resolution, which allows weaker surface radiation to be detected (Bennett and Smith, 2017). It is also the main data source used for studying the expansion of impervious surfaces and related sociodemographic issues (Elvidge et al., 2017). In this study, a combination of VIIRS NTL, MODIS EVI imagery and GlobeLand30 land-cover products was used to derive the set of global training samples.

The MODIS EVI imagery (MYD13Q1) from the MODIS V6 products contains the best available EVI data from among all the acquisitions obtained over a 16-day compositing period and has a spatial resolution of 250 m (Didan et al., 2015). It was used to mitigate the NTL data's saturation problem and to exclude false positive impervious samples (vegetated samples in urban areas) when deriving the global training samples. The EVI imagery for 2015 in the GEE uses the blue band to remove residual atmospheric contamination caused by smoke and sub-pixel thin clouds (https://developers.google.com/earth-engine/datasets/catalog/MODIS_006_MYD13Q1).
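A corresponding sketch for the auxiliary NTL and EVI layers, again with the Earth Engine Python API. The asset IDs and the 0.0001 EVI scale factor are taken from the GEE catalog; using the annual mean of the VIIRS monthly composites is an assumption rather than the authors' documented compositing choice.

```python
import ee

# Annual mean of the VIIRS monthly NTL composites for 2015
ntl_2015 = (ee.ImageCollection('NOAA/VIIRS/DNB/MONTHLY_V1/VCMCFG')
            .filterDate('2015-01-01', '2016-01-01')
            .select('avg_rad')
            .mean())

# Annual mean EVI from the 16-day MYD13Q1 V6 product, scale factor 0.0001
evi_2015 = (ee.ImageCollection('MODIS/006/MYD13Q1')
            .filterDate('2015-01-01', '2016-01-01')
            .select('EVI')
            .mean()
            .multiply(0.0001))
```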
Global impervious surface products

In this study, five global impervious surface products (GlobeLand30, FROM_GLC, NUACI, HBASE and GHSL) were used to validate the global impervious surface map produced using the MSMT_RF method. The GlobeLand30 data were also used to automatically derive the global impervious and non-impervious training samples.

GlobeLand30 is an operational 30-m global land-cover dataset produced using the Pixel-Object-Knowledge-based (POK-based) approach for 2000 and 2010 (Chen et al., 2015). In this study, the global impervious product derived from GlobeLand30 for 2010 (GlobeLand30-2010, http://www.globallandcover.com/GLC30Download/index.aspx) was used; it was produced by combining pixel-based classification, multi-scale segmentation and manual editing based on high-resolution imagery and had been validated as having a user's accuracy of 86.7%.

FROM_GLC, first produced for 2010, was the first 30-m resolution global land-cover dataset and was produced by supervised classification of 8,900 Landsat images (Gong et al., 2013). In this study, the second generation of FROM_GLC, from 2015 (FROM_GLC-2015) (http://data.ess.tsinghua.edu.cn/), was used. This dataset was produced using multi-seasonal Landsat imagery acquired between 2013 and 2015 and incorporates the day of year, geographical coordinates and elevation data.

The NUACI-based maps, developed by applying the spectral index-based method to Landsat and DMSP-OLS NTL imagery, are multi-temporal global 30-m impervious surface datasets (Liu et al., 2018). In this study, the NUACI impervious map for 2015 (NUACI-2015) was used (http://www.geosimulation.cn/GlobalUrbanLand.html). This map has been validated as having an overall accuracy of 0.81-0.84 and a kappa coefficient of 0.43-0.50 at the global level.

The GHSL (Global Human Settlement Layer), a global information baseline describing the spatial evolution of human settlements over the past 40 years, was developed using a symbolic machine learning model trained with collected high-resolution samples and multi-temporal Landsat imagery for the epochs 1975, 1990, 2000 and 2014 (Florczyk et al., 2019). In this study, the GHSL impervious surface map at 30 m for 2015 (GHSL-2015) (https://ghsl.jrc.ec.europa.eu/download.php) was employed for comparison analysis; it achieved an overall accuracy of 96.28% and a kappa coefficient of 0.3233 when validated using Land Use/Cover Area frame Survey (LUCAS) reference data (Pesaresi et al., 2016).

Validation samples

Fifteen validation regions with different impervious landscapes, including (WIP), Bangkok (BGK) and Xi'an (XAN), were selected (Fig. 1). For each validation region, 600-1000 samples were randomly generated using the stratified random sampling strategy (Bai et al., 2015). As there were significant advantages to using Google Earth for validation sample selection (Zhang et al., 2018c), each sample was labeled either as "non-impervious surface" or "impervious surface" based on visual interpretation of the available high-resolution remote sensing imagery in Google Earth. To ensure the reliability of each validation sample, two prior impervious products, the NLCD impervious products (Homer et al., 2015) and the Copernicus land monitoring high-resolution imperviousness layer (Langanke et al., 2016), which had been validated to achieve high overall, user's and producer's accuracies exceeding 82% and 90% respectively, were overlaid on the high-resolution remote sensing imagery together with the validation samples. In addition, the location of each sample was moved to the center of the relevant surface object (building, road, etc.) because of the greater spectral mixing effect and uncertainty at the boundaries of objects. As in the work of Sun et al. (2019b), if the impervious area in the 30-m × 30-m validation window was more than a predefined threshold of 50%, the validation point was labeled as impervious surface; otherwise, it was labeled as non-impervious surface. After careful interpretation, a total of 11,942 samples, including 4,952 impervious samples and 6,990 non-impervious samples, were obtained. In order to minimize the subjective influence of interpretation, the validation samples were collected independently by three different scientists. If there was a dispute between the interpretation results of the three scientists, the validation point was discarded.
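The 50% majority rule and the three-interpreter agreement check described above could be expressed as in the following sketch; the function names and data structures are illustrative and not taken from the authors' code.

```python
def label_window(impervious_fraction: float) -> str:
    """Label a 30 m x 30 m validation window by the 50% majority rule."""
    return 'impervious' if impervious_fraction > 0.5 else 'non-impervious'

def consolidate(labels):
    """Keep a sample only if all three interpreters agree; otherwise discard."""
    return labels[0] if len(set(labels)) == 1 else None

# One of three interpreters disagreeing leads to the point being discarded
print(consolidate(['impervious', 'impervious', 'non-impervious']))  # -> None
```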
Collection of global training samples

As the reliability and representativeness of the training samples directly affect the classification accuracy (Foody and Mathur, 2004), we proposed combining GlobeLand30, VIIRS NTL and MODIS EVI data to derive accurate impervious and non-impervious samples.

The GlobeLand30 land-cover product was used to derive the global training samples because it has several advantages: (1) the impervious surface layer in GlobeLand30 was accurately developed by combining pixel-based classification, multi-scale segmentation and manual editing based on high-resolution imagery, and was validated as achieving a user's accuracy of 86.7%; (2) it simultaneously contains the impervious surface type and other land-cover types similar to impervious surface (such as cropland and bare land), so global training samples covering several non-impervious land-cover types could easily be collected to build the RF model for accurate mapping of impervious surfaces. However, as there was a temporal interval of 5 years between GlobeLand30 and our study, it was assumed that the process of transforming non-impervious surfaces into impervious surfaces was irreversible during the period 2010 to 2015, meaning that the global impervious training samples derived from GlobeLand30-2010 could also be used to represent the situation in 2015.

Specifically, as GlobeLand30 used an object-based labeling method to remove the "salt-and-pepper effect" caused by the pixel-based classification method (Chen et al., 2015), the impervious surfaces consist of independent blocks. Usually, a large number of mixed pixels and misclassifications occur at the boundaries of image blocks or objects, and Yang et al. (2017) also found that GlobeLand30 exhibited higher accuracy in homogeneous areas. The land-cover heterogeneity was calculated as the number of land-cover types occurring in a local window (Jokar Arsanjani et al., 2016). According to the statistics of Chen et al. (2015), there were some commission and omission errors in each scene when the impervious surface blocks were smaller than 8×8 pixels. In this study, the local window size was set to 9×9 after balancing sample reliability and completeness, because a larger window would cause the candidate samples to miss small and fragmented impervious objects (such as rural villages). Therefore, if the land-cover heterogeneity in the 9×9 local window was greater than 1 (meaning that the land-cover types within the window consisted of both impervious and non-impervious types), the center pixel was removed from the candidate training point set (CanTPS_Imp).

Secondly, to minimize the effects of the mapping error in GlobeLand30-2010 and of the temporal interval between GlobeLand30-2010 and the input imagery, the VIIRS NTL data, which reveal the intensity of socioeconomic activities, were imported to refine each training point for 2015. However, as the coarse spatial resolution of the VIIRS NTL imagery can cause a 'blooming effect' in suburban areas, the EVI-adjusted NTL index (EANTLI) proposed by Zhuo et al. (2018) was applied to reduce the blooming effects:

EANTLI = [1 + (NTL_nor - EVI)] / [1 - (NTL_nor - EVI)] × NTL,

where NTL_nor is the normalized NTL value, EVI is the annual mean value of the time-series MODIS EVI products and NTL is the actual value of the VIIRS NTL data.

The EANTLI measures the likelihood of a pixel corresponding to an impervious surface, so it was reasonable to assume that pixels where the EANTLI exceeded a certain threshold were impervious surface pixels. In this study, as the candidate training points in CanTPS_Imp were collected from homogeneous 9×9-pixel areas (270 m × 270 m), the EANTLI image for 2015 (EANTLI-2015) was first resampled to 270 m to match these candidate points.
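A plain NumPy sketch of the two refinement steps described above: the 9×9 heterogeneity screen and the EANTLI computation. The EANTLI expression follows the formula as reconstructed above, and the min-max normalization of the NTL values is an assumption.

```python
import numpy as np

def heterogeneity(landcover: np.ndarray, row: int, col: int, size: int = 9) -> int:
    """Number of land-cover classes in a size x size window centred on (row, col)."""
    half = size // 2
    window = landcover[max(row - half, 0):row + half + 1,
                       max(col - half, 0):col + half + 1]
    return len(np.unique(window))

def eantli(ntl: np.ndarray, evi_mean: np.ndarray) -> np.ndarray:
    """EANTLI of Zhuo et al. (2018), using the formula as reconstructed above."""
    ntl_nor = (ntl - ntl.min()) / (ntl.max() - ntl.min())  # assumed normalization
    diff = ntl_nor - evi_mean
    return (1 + diff) / (1 - diff) * ntl

# A candidate impervious point survives only if its 9x9 window is homogeneous,
# i.e. heterogeneity == 1, per the screening rule described above.
```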
The GlobeLand30-2010 impervious surface map had a user's accuracy of 86.7%, and we assumed that the process of transforming non-impervious surfaces into impervious surfaces was irreversible during the period 2010 to 2015, so the impervious segmentation threshold was selected as the lowest 15th quantile of the cumulative probability of all candidate impervious points in EANTLI-2015; namely, if the cumulative probability of an impervious point in CanTPS_Imp was lower than the threshold, the candidate point was removed from CanTPS_Imp. As for the non-impervious pixels, there is usually a negative correlation between non-impervious surfaces and EANTLI values, and non-impervious samples that had turned into impervious surface would have high EANTLI values in 2015; therefore, if the cumulative probability of a candidate non-impervious point in CanTPS_Imp was greater than the top 20th percentile of the cumulative probability of all candidate non-impervious points (the threshold being based on the overall accuracy of 80.33% for GlobeLand30-2010 plus a small number of potential conversion samples), the candidate non-impervious point was also removed.

Lastly, although the candidate training points were refined using the GlobeLand30 land-cover product and the EANTLI-2015 imagery, the volume of candidate training points was still huge, so it was necessary to further resample CanTPS_Imp. The non-impervious surfaces consist of many land-cover types (water, vegetation, cropland and bare soil), some of which are spectrally similar to impervious surfaces. For example, bare soil and high-reflectance impervious surfaces usually share similar surface reflectance, especially in arid and semi-arid areas with large areas of bare soil, because the composition of impervious surfaces includes rock material that is also found in bare areas (Sun et al., 2019b; Weng, 2012), and cropland shows reflectance similar to that of low-reflectance impervious surfaces (such as rural villages and old cities) because such surfaces are usually composed of a mixture of vegetation and high-reflectance artificial materials or bare soils (Li et al., 2015). Therefore, the non-impervious training samples were split into three independent groups: bare area, cropland and other non-impervious land-cover types. Furthermore, many studies have demonstrated that the distribution and balance of the training samples have a great influence on the mapping accuracy. For example, Zhu et al. (2016) found that unbalanced training samples directly result in rare land-cover types being under-represented relative to more abundant classes. Since impervious surface is usually sparser than the non-impervious land-cover types (bare soil, cropland and so on), training samples with a uniform distribution were selected to ensure the rationality of the training samples and to capture all relevant spectral heterogeneity within impervious surfaces; namely, an approximate ratio of 1:3 was used to represent the proportion of impervious to non-impervious surfaces (bare area, cropland and other non-impervious land-cover types). In addition, as the land-cover distribution varies with geographical region, the stratified random sampling strategy was applied in every 5°×5° geographical grid cell to make the training samples locally adaptive.
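The quantile-based refinement and the 1:3 stratified draw could look as follows in NumPy; the equal split of the non-impervious quota across the three groups, and all function names, are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def refine_by_quantile(imp_eantli: np.ndarray, nonimp_eantli: np.ndarray):
    """Drop the lowest 15% of impervious and the highest 20% of non-impervious
    candidates, per the EANTLI thresholds described above."""
    keep_imp = imp_eantli >= np.quantile(imp_eantli, 0.15)
    keep_nonimp = nonimp_eantli <= np.quantile(nonimp_eantli, 0.80)
    return keep_imp, keep_nonimp

def draw_1_to_3(imp_ids, bare_ids, crop_ids, other_ids, n_imp: int):
    """Stratified draw at a ~1:3 impervious/non-impervious ratio; splitting the
    non-impervious quota equally across the three groups is an assumption."""
    parts = [rng.choice(imp_ids, n_imp, replace=False)]
    for group in (bare_ids, crop_ids, other_ids):
        parts.append(rng.choice(group, n_imp, replace=False))
    return np.concatenate(parts)
```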
Using the stratified random sampling strategy with the uniform distribution, a total of 4,483,000 training samples, including 3,499,000 non-impervious samples and 984,000 impervious samples, were collected over the land areas across the globe.

Although a series of rules was applied to guarantee the high confidence of the global training samples, due to the classification error in GlobeLand30 and the temporal interval between GlobeLand30 and the input imagery, the global training dataset inevitably contained some erroneous samples. The relationship between the percentage of erroneous samples and the mapping accuracy of impervious surfaces is analyzed in Section 6.1 of the Discussion; the results indicated that the error in the training samples had little effect on the mapping accuracy.

Multi-source and multi-temporal impervious classification method

To develop the global 30-m impervious surface map for 2015, the multi-source and multi-temporal random forest classification (MSMT_RF) method was proposed. The method is illustrated in Fig. 2. First, time series of Landsat-8 SR and Sentinel-1 SAR imagery archived on the GEE platform were collected. Secondly, the temporal-spectral-textural features and temporal-SAR features were derived from the Landsat-8 and Sentinel-1 imagery using image compositing methods. Thirdly, based on the global training samples derived from GlobeLand30-2010, VIIRS NTL and MODIS EVI imagery, the random forest classifier was trained for each 5°×5° geographical grid cell using the temporal-spectral-textural-SAR-topographical features. Finally, the global impervious surface map was compared with existing impervious surface products and further validated using the visual interpretation samples.

Multi-source and multi-temporal feature selection

As mentioned above, the datasets used in this study had been acquired from various satellite sensors and have distinctive characteristics. Also, the incorporation of multi-source and multi-temporal remote sensing data has been demonstrated to improve the accuracy of the mapping of impervious surfaces. In this study, three kinds of satellite imagery, namely Landsat-8 SR, Sentinel-1 SAR and SRTM/ASTER DEM imagery, were collected for the global classification of impervious surfaces.

After masking out the bad pixels (cloud, shadow and saturated pixels), the time-series Landsat SR imagery needed to be composited to reduce the dimensionality of the temporal-spectral features and guard against the Hughes phenomenon (Zhang et al., 2019). Similar to the approaches introduced by Hansen et al. (2014) and Zhang and Roy (2017) to capture phenology, the 15th and 85th percentiles of the Landsat SR were used instead of the minimum and maximum values to minimize the effects of residual shadows and cloud caused by errors in the CFMask method (Massey et al., 2018). In addition, as Sun et al. (2017) explained that the growing season is the best time for impervious surface mapping over temperate continental climate zones and Zhang et al. (2014a) found that winter (the dry season) is the best season for estimating impervious surfaces in subtropical monsoon regions, the combination of the 15th and 85th percentiles of the Landsat SR was used to efficiently capture the intra-annual variation of the various land-cover types. It should be noted that only the six optical bands (Blue, Green, Red, NIR, SWIR1 and SWIR2) were selected because the Coastal band is sensitive to atmospheric scattering.
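In Earth Engine, the two-epoch compositing reduces to a single percentile reducer over the masked collection; a sketch, reusing the `l8` collection from the earlier snippet (B2-B7 are the Landsat-8 band designations for Blue through SWIR2):

```python
import ee

# Per-pixel 15th/85th percentiles of the six optical bands over the masked
# time series; B2..B7 = Blue, Green, Red, NIR, SWIR1, SWIR2
bands = ['B2', 'B3', 'B4', 'B5', 'B6', 'B7']
composite = l8.select(bands).reduce(ee.Reducer.percentile([15, 85]))
# Output band names follow the reducer: B2_p15, B2_p85, ..., B7_p85
```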
Liu et al. (2018) found that the Normalized Difference Water Index (NDWI), Normalized Difference Vegetation Index (NDVI) and Normalized Difference Built-up Index (NDBI) are of great help in impervious surface identification; therefore, these three spectral indexes were added to the spectral features, giving a total of 18 features for the two-epoch imagery. Furthermore, as texture information contributes to classification performance (Weng, 2012), local textural measures based on the Gray Level Co-occurrence Matrix (GLCM) were adopted; however, because of the redundancy and similarity between texture features (Rodriguez-Galiano et al., 2012), only the variance, dissimilarity and entropy of the NIR band, computed in a 7×7 local window, were selected for the two-epoch imagery (Chen et al., 2016; Zhang et al., 2014b). The optimal window size for texture measurements is highly dependent on the image spatial resolution and the land-cover characteristics (Zhu et al., 2012); Shaban and Dikshit (2001) computed texture measurements with different window sizes as inputs for urban area classification and suggested that a window size of 7×7 pixels performs best.

As the Sentinel-1 SAR imagery had been pre-processed on the GEE platform, the annual mean and standard deviation of the VV and VH imagery were derived directly from the time series of Sentinel-1 SAR imagery. Zhang et al. (2014b) found that SAR texture features are also relevant to impervious surfaces, and the dissimilarity, variance and entropy features of the VV and VH imagery were identified as effective indicators for the texture description of different urban land-cover types. As Zhang et al. (2014b) explained that the GLCM window should be smaller as terrain features become smaller at coarser resolutions, the window size was chosen as 9×9 pixels at the 10-m spatial resolution, equivalent to 3×3 pixels at 30 m. Moreover, as the spatial resolution of the Landsat SR (30 m) was three times that of the Sentinel-1 imagery (10 m), the SAR data were resampled to 30 m for integration with the Landsat SR data.

Lastly, as Sentinel-1 SAR imagery usually shows high backscatter similar to that of impervious surfaces over mountainous areas, terrain information was a useful auxiliary for removing these false positives in such areas (Ban et al., 2015). Similarly, Clarke et al. (1997) found that terrain variables are of great help in identifying impervious surfaces because impervious surfaces are usually located in flat areas. In this study, the elevation, slope and aspect, calculated from the SRTM/ASTER DEM data, were added to the feature vector. This gave a total of 37 features for each pixel location, including 18 spectral features and 6 texture features from the Landsat imagery, 10 SAR features and 3 topographical variables. The features are listed in Table 1.
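The remaining feature groups could be derived as sketched below with the Earth Engine Python API; `composite` stands for the percentile composite from the previous snippet, and the reading of `glcmTexture`'s `size` argument as a kernel radius (so that size=3 gives a 7×7 window) is an assumption to verify against the GEE documentation.

```python
import ee

# Spectral indices from the 15th-percentile composite (the 85th is analogous):
# NDVI = (NIR - Red)/(NIR + Red), NDWI = (Green - NIR)/(Green + NIR),
# NDBI = (SWIR1 - NIR)/(SWIR1 + NIR)
ndvi15 = composite.normalizedDifference(['B5_p15', 'B4_p15']).rename('NDVI_p15')
ndwi15 = composite.normalizedDifference(['B3_p15', 'B5_p15']).rename('NDWI_p15')
ndbi15 = composite.normalizedDifference(['B6_p15', 'B5_p15']).rename('NDBI_p15')

# GLCM textures of the NIR band; glcmTexture needs an integer image
nir_int = composite.select('B5_p15').multiply(1000).toInt()
texture = nir_int.glcmTexture(size=3).select(
    ['B5_p15_var', 'B5_p15_diss', 'B5_p15_ent'])

# Topographical variables from the void-filled SRTM DEM
dem = ee.Image('USGS/SRTMGL1_003')
terrain = ee.Terrain.products(dem).select(['elevation', 'slope', 'aspect'])

# Stack everything into one multi-band feature image
features = ee.Image.cat([composite, ndvi15, ndwi15, ndbi15, texture, terrain])
```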
Random forest classification model

As in the work of Zhang and Roy (2017), there were two options for the models to use in the global impervious surface classification: global and local models. A global model is a single classifier, trained using the global training samples and then used to classify the entire global data set. A local model is trained using regional samples; the regional classification results are then mosaicked to produce the global map. Zhang and Roy (2017) confirmed that locally adaptive models achieve a higher classification accuracy than a single global model. Therefore, the global land surfaces were split into approximately 1000 5°×5° geographical grid cells after considering the data volume and the amount of computation needed for the regional mapping. In addition, to ensure classification consistency across the cell boundaries, as in the work of Zhang et al. (2018c) and Zhang and Roy (2017), the training samples from the adjacent 3×3 geographical cells were also imported to train the classifier used to classify the central geographical cell.

As for the specific classification technique, according to our previous investigations, the Random Forest (RF) classifier is more capable of handling high-dimensional, multicollinear data. It is also less affected by noise and feature selection, as well as being more accurate and efficient than other widely used classifiers such as the SVM (Support Vector Machine), CART (Classification And Regression Tree) and ANN (Artificial Neural Network) classifiers. Therefore, the RF classifier was selected for the development of the global impervious surface map. The RF classifier has only two parameters: the number of classification trees (Ntree) and the number of selected prediction features (Mtry). Furthermore, many researchers have demonstrated that the classification accuracy is largely insensitive to these two parameters (Belgiu and Drăguţ, 2016; Du et al., 2015; Gislason et al., 2006); therefore, the default values of 500 for Ntree and the square root of the total number of training features for Mtry were selected.
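Training and applying the per-cell classifier then amounts to a few Earth Engine calls. In the sketch below, `training` is assumed to be an ee.FeatureCollection holding the samples of the central cell plus its 3×3 neighbours with a binary 'impervious' label, `feature_names` lists the 37 feature bands, and `features` is the stacked feature image from the previous snippet; `smileRandomForest` defaults to the square root of the feature count for the per-split variable number, matching the Mtry choice above.

```python
import ee

# Locally adaptive RF: one classifier per 5x5 degree cell, Ntree = 500
classifier = (ee.Classifier.smileRandomForest(500)
              .train(features=training,
                     classProperty='impervious',
                     inputProperties=feature_names))
impervious_map = features.classify(classifier)
```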
Accuracy assessment

To comprehensively analyze the performance of the MSMT_RF-based method, two validation methods, 'fraction-based' and 'pixel-based', were adopted. First, the 'fraction-based' validation method mainly illustrates the spatial agreement of impervious surfaces between the MSMT_RF-based impervious surface map and several existing products (GlobeLand30-2010, FROM_GLC-2015, NUACI-2015, HBASE-2010 and GHSL-2015).

The importance of multi-source and multi-temporal features

Because of the spectral heterogeneity of impervious surfaces, it is very difficult to accurately map impervious surfaces using only optical remote sensing imagery (Zhang et al., 2014b). Although a few studies have demonstrated that the integration of multi-source and multi-temporal information can improve the mapping accuracy, these studies mainly focused on regions with a high impervious surface density (Zhang et al., 2014b; Zhu et al., 2012). At present, global impervious surface maps are still produced from optical imagery alone or from a combination of optical and DMSP-OLS or VIIRS NTL imagery (Huang et al., 2016; Liu et al., 2018; Schneider et al., 2010). This is the first study to develop a global 30-m impervious surface map using multi-source and multi-temporal imagery. To quantitatively demonstrate the need for using multi-source, multi-temporal information, we randomly selected six 5°×5° regions (red rectangles in Fig. 1) from six different continents and then calculated the importance of the training features using the RF model. Specifically, the RF model computes the average increase in the mean square error obtained by permuting the out-of-bag data for a variable while keeping all the other variables constant, thus measuring the variable's importance (Pflugmacher et al., 2014).
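The permutation-importance computation can be approximated with scikit-learn as below; note that `permutation_importance` shuffles features on a supplied data set rather than strictly on the out-of-bag samples, so this is an approximation of the measure described above, and the synthetic data merely stand in for the real 37-feature training matrix.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for the (n_samples, 37) feature matrix and binary labels
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 37))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=2000) > 0).astype(int)

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)

# Mean drop in accuracy when one feature is permuted, repeated 10 times
result = permutation_importance(rf, X, y, n_repeats=10, random_state=0)
ranking = np.argsort(result.importances_mean)[::-1]
print(ranking[:5])  # indices of the five most important features
```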
Training features with high importance were the drivers of the model decision, and their values had a significant impact on the output values. The importance of all 37 training features for the six regions is illustrated in Fig. 3. These results indicate that the Sentinel-1 SAR features (VV and VH) made the greatest contribution to the final decision in most regions, because SAR images can provide information about the structure and dielectric properties of the surface materials. Next in importance were the 15th percentile of the Landsat SR in the blue, green, red and SWIR2 bands and the corresponding NDVI and NDWI indices, as well as the texture variance and dissimilarity of the Sentinel-1 SAR; the importance of these features was close to or exceeded 5% in most cases. Then came the 85th percentile of the Landsat SR in the NIR and SWIR1 bands as well as the SAR texture features, with a mean importance of about 3%.

To intuitively understand the characteristics of different land-cover types in optical and SAR imagery, two regions (the vegetation-prevalent region of Asia and the bare-soil-prevalent semi-arid region of Australia) were selected for comparison analysis. Fig. 4 illustrates the reflectance and backscatter statistics (mean and standard deviation) of five typical land-cover types (cropland, vegetation, bare soil, impervious surfaces and water body). Obviously, impervious surfaces had the highest backscatter signals in VV because of the high dielectric properties of building materials, the unique geometry of man-made features and the special radar echo properties of artificial structures, followed by the vegetation land-cover types. Further, since only a small part of the polarized signal (vertical turning horizontal) returns to the sensor, VH was significantly lower than VV, but the ranking order of the different land-cover types in VH was similar to that in VV. Due to the complicated construction and heterogeneity of impervious surfaces, the impervious surfaces also had the highest standard deviation; for example, urban centers usually reflected higher VV and VH signals than village buildings. If only the Sentinel-1 SAR features were used to identify impervious surfaces, there would be serious confusion between mountainous vegetation and low-reflectance impervious surfaces (such as villages and small cities); fortunately, the optical reflectance features performed well in distinguishing them because of significant spectral differences. However, if only the multi-temporal optical imagery were used to detect impervious surfaces, there would be obvious confusion between impervious surfaces and bare soils and croplands; for example, the spectral characteristics of impervious surfaces, bare soils and croplands overlapped in the Asia region (Fig. 4). In summary, only the combination of multi-source training features could guarantee the classification accuracy across different impervious landscapes.

Figure 4: The reflectance/backscatter characteristics of different land-cover types in Landsat optical and Sentinel-1 SAR imagery in the Asia and Australia regions.

Secondly, although the 15th percentile had a higher importance than the 85th percentile in most of the spectral bands, we found that there was a large degree of complementarity between the images from the two different seasons (Fig. 3). For example, the importance of the 15th percentile in the NIR and SWIR1 bands was low while that of the 85th percentile was high, and the total importance of the bi-seasonal spectral features exceeded 70% in some cases. The reasons that the temporal information was important for the accurate mapping of impervious surfaces include: (1) some land-cover types, such as cropland, have spectra similar to those of impervious surfaces in the fallow season, but once growing-season imagery is imported this misclassification can easily be removed; (2) Sun et al. (2017) explained that the growing season is the best time for impervious surface mapping over temperate continental climate zones, and Zhang et al. (2014a) found that winter (the dry season) is the best season for estimating impervious surfaces in subtropical monsoon regions; the multi-temporal information can address this seasonal variability across different geographical zones. Fig. 4 (Australia region) also illustrates that cropland and impervious surfaces were spectrally inseparable in the 15th percentile but clearly different in the 85th percentile. Therefore, temporal variability can be considered an important contributor to accurate impervious surface mapping.

Thirdly, the importance of the Landsat texture features was lower than 5% in these six regions because the Sentinel-1 SAR backscatter and texture features were already able to provide information on the surface material and its spatial structure and variation. Due to the complexity of land surfaces and the different imaging mechanisms of optical and SAR sensors, the optical textures could still substantially complement the SAR features in mountainous and semi-arid areas (the Asia and Australia regions). Some studies have demonstrated that these features contribute a lot to the improvement of impervious mapping accuracy; for example, Zhu et al. (2012) emphasized that the integration of texture variables increased the accuracy from 86.86% to 92.69% because texture imagery can capture the local spatial structure and the variability of land-cover categories.

Lastly, since most regions are located in flat areas, only the cumulative importance of the topographical variables over the region in Asia exceeded 5%. The reason why topographical information reached a high importance over mountainous areas is that impervious surfaces are usually located in flat areas (Ban et al., 2015) while Sentinel-1 SAR imagery shows high backscatter signals over mountainous areas similar to those of impervious surfaces, which increased the importance of the topographical variables. Similarly, Clarke et al. (1997) explained that topographical variables (slope, aspect and DEM) contribute a lot to impervious surface mapping over mountainous areas. These features are, therefore, indispensable for the accurate mapping of impervious surfaces in mountainous regions.

Global impervious surface map

The global distribution of the fraction of impervious area (FIA) at a spatial resolution of 0.05° is illustrated in Fig. 5, whilst the meridional and zonal total FIA for each 0.05° longitude and latitude bin are shown at the top and left of Fig. 5. From an intuitive and statistical perspective, globally, impervious surfaces are mainly concentrated in three continents: Asia (34.43%), North America (28.04%) and Europe (24.98%), followed by South America (5.89%), Africa (5.63%) and Australia (1.06%). In addition, the zonal statistics indicate that 70% of the impervious surfaces are distributed between 30° N and 60° N, because these latitudes contain the key areas of Asia, North America and Europe, which are the locations of the most developed countries and the highest population densities.
The meridional results illustrate that there are four peak intervals: 100° W to 70° W (United States), 10° W to 40° E (European Union), 60° E to 90° E (India) and 100° E to 130° E (China and southeastern Asia). The two peak values in the meridional direction are located at the centers of the United States and China.

Summaries of the impervious surface areas at the national scale were also produced. The statistical results indicate that the total impervious surface areas of the top 20 countries account for 75.96% of the global total. Fig. 6 presents the top 20 countries in terms of impervious surface area and the corresponding fractions of the world total. Overall, there is a positive correlation between these statistical fractions and the land area, population and degree of economic development of these nations. Specifically, it was found that the U.S. has the largest impervious surface area, accounting for more than 20% of the global total, and only the top 3 countries (U.S., China and Russia) each exceed 5% of the global total. The ranking is also basically consistent with the statistics produced by the Organization for Economic Co-operation and Development (OECD) for built-up areas in 2014 (https://stats.oecd.org/Index.aspx?DataSetCode=BUILT_UP).

Spatial variations of global impervious products

To quantitatively analyse the spatial agreement between the MSMT_RF-based impervious surface map and the five existing products, scatter plots of the five products against the MSMT-2015 impervious map were made, as illustrated in Fig. 8. The results indicate that there was greater agreement between the MSMT-2015 map and GHSL-2015 (R² = 0.783, RMSE = 0.038 and slope = 0.921) than for the other products. Specifically, as NUACI-2015 has been demonstrated to miss some small, fragmented villages and roads (Sun et al., 2019b), the slope of the regression line was less than 1.0 and R² was a low 0.655 in this case. The scatter plot between FROM_GLC-2015 and MSMT-2015 indicated a high degree of agreement in 'high-fraction' regions (close to 1:1), but FROM_GLC-2015 was obviously lower than MSMT-2015 over 'low-fraction' regions, so the slope of the regression line for FROM_GLC-2015 was also less than 1. The main differences between the GlobeLand30 and MSMT_RF-based maps were due to the temporal interval of 5 years and the limitation of the minimum 4×4 mapping unit for GlobeLand30-2010 (Chen et al., 2015), so the scatter points were mainly concentrated below the 1:1 line. HBASE-2010 had larger impervious areas than MSMT-2015, especially in the 'high-fraction' regions, but the following section demonstrates that it suffers from an overestimation problem; accordingly, the regression slope was higher than 1 and R² only reached a value of 0.730. In addition, to intuitively understand the stability of the regression models, error bars, calculated as the standard deviation of the reference data with respect to the fitted results, were added to the scatter plots. It can be seen that the error bars increased at first and then stabilized as the impervious fraction increased.
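The per-product agreement statistics reported above (slope, R², RMSE) can be reproduced from two fraction maps as follows; whether the authors' regression included an intercept is not stated, so an ordinary least-squares line with intercept is assumed.

```python
import numpy as np

def agreement(reference: np.ndarray, estimate: np.ndarray):
    """Slope, intercept, R^2 and RMSE between two impervious-fraction maps."""
    slope, intercept = np.polyfit(reference, estimate, 1)
    fitted = slope * reference + intercept
    ss_res = np.sum((estimate - fitted) ** 2)
    ss_tot = np.sum((estimate - estimate.mean()) ** 2)
    r2 = 1 - ss_res / ss_tot
    rmse = np.sqrt(np.mean((estimate - fitted) ** 2))
    return slope, intercept, r2, rmse
```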
Accuracy assessment using validation samples

The accuracy of the five global impervious surface maps over the 15 validation regions with different impervious landscapes is presented in Table 2. Six evaluation metrics were used to assess the accuracy: the producer's accuracy (which reflects the omission error) and user's accuracy (which reflects the commission error) of the impervious surface, the producer's and user's accuracies of the non-impervious surfaces, and the overall accuracy and kappa coefficient. Overall, the MSMT_RF-based map achieved the best performance (Table 2). As for GlobeLand30-2010, there was some omission of fragmented impervious objects around the urban periphery because of the temporal interval of 5 years and the minimum 4×4 mapping unit (Chen et al., 2015). HBASE-2010 had the largest impervious area among the global products, but it misclassified vegetation and bare soils in the urban center as impervious surfaces, so it had the highest commission error of 9.5% in Table 2. As for the second bare-soil-prevalent city, Niamey, these products, except for GHSL-2015, which had a smaller impervious area than the other products and missed the peripheral impervious objects, performed similarly to how they did for Phoenix: NUACI-2015 had a high omission error, especially for fragmented objects; HBASE-2010 lost the impervious details and achieved the highest commission error of 5.3% in Table 2; GlobeLand30-2010 missed some small objects (the limitation of the minimum 4×4 mapping unit) and the peripheral impervious objects because of the temporal interval; and FROM_GLC-2015 performed well in the dense impervious areas but underestimated the peripheral areas.

Next, in the vegetation-prevalent region of New York, the six products generally had similar identification results and accurately captured the spatial distribution of New York City, so they achieved high mapping accuracies exceeding 90% in Table 2. However, from a detailed perspective, there were still differences between these products. Specifically, NUACI-2015 performed well in the city center but missed the sparse impervious objects in the peripheral city; for example, the enlargement region (red rectangle) illustrates the mixture of vegetation and sparse buildings in the peripheral city that NUACI-2015 missed (Table 2). Fig. 9 intuitively illustrates the performance of each product. GlobeLand30-2010 had smaller impervious areas in the central city because of the temporal interval and missed the road networks due to the minimum 4×4 mapping unit; as a result, GlobeLand30-2010 achieved the lowest user's accuracy. NUACI-2015 captured the impervious surfaces in the central city but missed the road networks and sparse village buildings in the peripheral cities. FROM_GLC-2015 and HBASE-2010 performed similarly in these two regions, capturing medium and large cities but missing the road networks and village buildings. As HBASE-2010 incorporated OpenStreetMap data to provide information on the major road network (Wang et al., 2017a), its omission error was relatively low and only village roads and buildings were missed; however, it still suffered from a serious overestimation problem. Especially in the city of Bangkok, non-impervious pixels (bare soils, water and vegetation) were misclassified as impervious surfaces. Therefore, HBASE-2010 reached the highest commission error among these impervious products in Table 2.
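The six validation metrics follow directly from a 2×2 confusion matrix; below is a sketch using the abbreviations defined in the abstract (P.I., U.I., P.N., U.N., O.A.), with illustrative counts rather than the paper's actual matrix.

```python
import numpy as np

def accuracy_metrics(cm: np.ndarray) -> dict:
    """P.I., U.I., P.N., U.N., O.A. and kappa from a 2x2 confusion matrix.

    cm[i, j] = number of samples with reference class i mapped as class j;
    row/column 0 = impervious, row/column 1 = non-impervious.
    """
    total = cm.sum()
    oa = np.trace(cm) / total
    producers = np.diag(cm) / cm.sum(axis=1)  # 1 - omission error
    users = np.diag(cm) / cm.sum(axis=0)      # 1 - commission error
    expected = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / total ** 2
    kappa = (oa - expected) / (1 - expected)
    return {'P.I.': producers[0], 'U.I.': users[0], 'P.N.': producers[1],
            'U.N.': users[1], 'O.A.': oa, 'kappa': kappa}

print(accuracy_metrics(np.array([[4500, 452], [310, 6680]])))
```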
In contrast to other classification-related studies that require manual effort to collect training samples (Gao et al., 2012; Im et al., 2012; Zhang et al., 2016), we avoided the expensive cost of collecting accurate and sufficient training samples at a global scale. To ensure the accuracy and reliability of the training samples, a combination of the GlobeLand30-2010 land-cover product, which had been validated as having a producer's accuracy (which reflects the omission error) of 94.7% for impervious surfaces (see Section 5.4), and VIIRS NTL imagery was adopted to guarantee the reliability of each sample. As it was difficult and challenging to evaluate the accuracy of all the training samples, we randomly selected 1% of the total training samples (Section 3), comprising 34,990 non-impervious and 9,840 impervious points, to measure the reliability of the global training samples. After careful checking, we found that these training samples achieved accuracies of 91.9% and 99.5% for impervious and non-impervious surfaces, respectively.

Meanwhile, even if the training samples still contained a small number of erroneous points, the random forest model has been demonstrated to be resistant to noise and to the presence of erroneous samples (Belgiu and Drăguţ, 2016). In this study, we randomly changed the category of a certain percentage of the 34,990 samples and used the 'noisy' samples to train the random forest classifier. Fig. 10 illustrates how the overall accuracy and the impervious producer's accuracy decreased as the percentage of erroneous samples increased. It was found that the overall accuracy and the impervious producer's accuracy remained stable when the percentage of erroneous samples increased from 1% to 20%, whereas they decreased rapidly when the percentage of erroneous samples was higher than 20%. Similarly, Gong et al. (2019) also found that the decrease in overall accuracy was less than 1% when the error in the training samples was less than 20%. Therefore, the reliability and sensitivity analysis indicated that: (1) the random forest model is resistant to noisy training samples and performs well if the percentage of erroneous samples is lower than 20%; and (2) the training samples derived from the GlobeLand30, VIIRS NTL and MODIS EVI imagery were accurate enough for use in global impervious surface mapping.
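The noise-injection experiment described above can be reproduced in scikit-learn along the following lines; the synthetic data, tree count and sweep range are placeholders, and binary 0/1 labels are assumed.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for the checked training points (binary 0/1 labels)
X = rng.normal(size=(5000, 37))
y = (X[:, 0] - X[:, 1] > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

def accuracy_under_noise(noise_rate: float) -> float:
    """Flip the labels of a random fraction of training points, retrain, score."""
    y_noisy = y_tr.copy()
    flip = rng.choice(len(y_noisy), int(noise_rate * len(y_noisy)), replace=False)
    y_noisy[flip] = 1 - y_noisy[flip]
    rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_noisy)
    return accuracy_score(y_te, rf.predict(X_te))

# Sweep the erroneous-sample percentage as in Fig. 10
curve = {r: accuracy_under_noise(r) for r in (0.01, 0.05, 0.1, 0.2, 0.3, 0.4)}
print(curve)
```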
Limitations of the proposed method

Although the proposed MSMT_RF method has been demonstrated to be able to produce accurate impervious surface products, there are still some limitations to the method. First, as the training samples derived from GlobeLand30-2010 are restricted to a 9×9-pixel local window and are further refined by the integration of MODIS EVI and VIIRS NTL imagery, low-density impervious samples might be omitted, causing further omission of low-density impervious surfaces (rural villages, small roads and so on). Although, in this study, spatially adjacent training samples from the surrounding 3×3 cells were imported to reduce the omission of low-density samples, according to the accuracy assessment, higher omission errors were found in low- and medium-density regions than in high-density regions. Therefore, our future work will pay more attention to the omission of low-density impervious surfaces.

Secondly, as Weng (2012) pointed out, mixed pixels are common in medium-resolution imagery due to the limitations of the spatial resolution and the spectral heterogeneity of the landscape. The effectiveness of 'hard' classifiers is easily affected by these mixed pixels (low-density impervious pixels also constitute mixed pixels). Because of sub-pixel mixing of impervious and pervious covers, impervious surface areas are often overestimated in urban areas and underestimated in rural areas when using medium-resolution images (Lu and Weng, 2006). Therefore, our future work will focus on simultaneously producing the likelihood ('soft' probability) of each pixel being an impervious surface. At present, some scientists have produced continuous impervious fractions at a regional scale: for example, Okujeni et al. (2018) used the support vector regression method to estimate the fraction of impervious surfaces at the pixel scale.

Data availability and user guidelines

The global impervious surface map data set generated in this paper is available on Zenodo: https://doi.org/10.5281/zenodo.3505079.

To help readers reproduce this work, Table 3 gives details of the data sources and platforms for the datasets and processes used in this study. The input remote sensing datasets and products came from three sources: the GEE platform, free-access websites and our group. Specifically, the five kinds of basic dataset described in Section 2.1 are available on the GEE platform. The five impervious surface products described in Section 2.2 were downloaded from the free-access websites of the National Geomatics Center of China, Tsinghua University, Sun Yat-sen University, the National Aeronautics and Space Administration (NASA) and the Joint Research Centre (JRC). The validation samples were produced by our group using visual interpretation. Further, the derivation of the global training samples was implemented using the multi-source datasets on a localhost computation platform, and the random forest classification for each 5°×5° regional grid was developed by our group on the GEE platform using the JavaScript language. The importance of the multi-source and multi-temporal features and the reliability and sensitivity of the global training samples were analyzed in a localhost Python computation environment.

Table 3. The detailed information of the datasets and processes in this study (columns: data source and platform; detailed datasets and processing steps).

Conclusions

Due to the spectral heterogeneity and complicated make-up of impervious surfaces, large-area impervious mapping is challenging and difficult. In this study, a global 30-m impervious surface map was developed using multi-source, multi-temporal remote sensing data on the Google Earth Engine platform. First, the global training samples were automatically derived from the GlobeLand30-2010 land-cover product together with VIIRS NTL and MODIS EVI imagery. Then, a locally adaptive random forest model was trained using the training samples and the multi-source and multi-temporal datasets for each 5°×5° geographical grid cell. Following that, the global impervious map, produced by mosaicking a large number of 5°×5° regional impervious surface maps, was validated by comparing it with several existing products (GlobeLand30-2010, FROM_GLC-2015, NUACI-2015, HBASE-2010 and GHSL-2015).
2020-01-23T09:08:23.733Z
2020-01-21T00:00:00.000
{ "year": 2020, "sha1": "908dc4a3eaed4e186e104cf76b8ad1356b5233b8", "oa_license": "CCBY", "oa_url": "https://essd.copernicus.org/articles/12/1625/2020/essd-12-1625-2020.pdf", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "770e06ce8c7a0016cc2462eac160501d510ac25c", "s2fieldsofstudy": [ "Environmental Science", "Mathematics" ], "extfieldsofstudy": [] }
21103585
pes2o/s2orc
v3-fos-license
Design Patterns for Augmented Reality Learning Games

Augmented Reality (AR) is expected to receive a major uptake with the recent availability of high-quality wearable AR devices such as Microsoft's Hololens. However, the design of interaction with AR applications and games is still a field of experimentation, and upcoming innovations in sensor technology provide new ways. With this paper, we aim to provide a step towards the structured use of design patterns for sensor-based AR games, which can also inform general application development in the field of AR.

Introduction

"The convergence of mobile computing and wearable computing with augmented reality is naturally of great interest to interaction designers" [1], while "the convergence of wearable computing, wireless networking and mobile AR interfaces" is bringing "a new breed of computing called 'augmented ubiquitous computing'" [2]. Augmented Reality (AR) can be defined as "the fusion of any digital information with physical world settings, i.e. being able to augment one's immediate surroundings with electronic data or information, in a variety of formats including visual/graphic media, text, audio, video and haptic overlays" [3]. Features, "most of which are present in most AR systems" [4], are listed as: "Sense properties about the real world; process in real time; output (overlay) information to the user; provide contextual information; recognize and track real-world objects; be mobile or wearable" [4].

Examples of AR systems utilizing senses other than sight include [5], whose application for cultural sciences students' field trips focused on audio augmentation, arguing that "just like a user should - while driving a car - use sight as much as possible to drive, we believe that with location based learning, a learner's eyes must be primarily used to examine the environment". Haptic feedback corresponding to virtual objects may be transferred from Virtual Reality to AR [6].

Location-based AR outputs information based on the user's position [3,7,8]. Points of Interest (POI) are defined and associated with virtual assets: "when a user […] explores a space the POIs are revealed and the content can be accessed" [8]. Vision-based AR functions by using computer vision techniques to identify and track patterns known as fiducials (visual markers [9]) in the environment [3,8]. Both of these approaches have their advantages and disadvantages: fiducials can only be used with systems trained to recognize them, and only when conditions like inadequate lighting do not interfere; location-based systems can suffer from inaccuracy or loss of tracking [10]. A way to combine the advantages of both approaches may lie in hybrid systems as described by [11] or in image understanding [12]. The Microsoft HoloLens utilizes a depth camera and tracks head movements through various sensors. A technique called "spatial mapping" [13] is used to construct a three-dimensional model of the surroundings and display virtual content at the appropriate coordinates.
Design patterns for general interaction [14] and games exist [15] and high-level patterns for Mixed Reality games have been proposed [16].We aim to close the gap between low-level interactions in sensor-based wearable AR systems and high-level game-design patterns for learning games by providing a framework of design patterns for AR games.While such a framework can generally be applied to all kinds of AR games, our main target is to guide the construction of AR learning games.However, pedagogical in relation to the design patterns are not in the focus of this research.Instead, here our focus is more on interactivity and visualization.Definitions, approaches, potentials and limitations of AR are presented, followed by our framework and its first prototypical implementation. Augmented Reality for Learning games AR has been applied to many domains, including "hands-free instruction and training, language translation, obstacle avoidance, advertising, gaming, museum tours, and much more" [4], maintenance and repair [17], or Big Data visualization [18], where AR "might solve many issues from narrow visual angle, navigation, scaling, etc.". Games are an application particularly well-suited for the medium of AR, as "augmented reality is an active, not a passive technology" [7], which emphasizes the "dialogue between the media and the context in which it is used" [3].Although commercial AR games can be said to go back as far as 2003's EyeToy [10], efforts were for a long time focused on research, until the advance of smartphone technology, which made devices with AR capabilities widely available [16]. Pokémon GO [19], an AR game based on both, the well-known Pokémon franchise and Ingress [20], is a rare example of a mobile AR game with a large player base.In the field of learning games [21] reports that Mixed Reality games offer the opportunity to "sense and feel being 'someone' else", while first person experiences make it challenging to develop empathy.Locatory is an AR adaptation of the game Memory, requiring players to find virtual cards spread around the environment and then match them to real landmarks, to foster orientation skills [22]. Game design patterns have been used to map cognitive and affective learning outcomes in AR games for learning [23].Similarly, a recent literature review identifies three design principles for learning-oriented AR -"enable and then challenge," "drive by gamified story," and "see the unseen" [24]. Knowledge about how to best approach the design of AR games is still lacking [16], a sentiment [25] shares: "Little is known on how to systematically apply gamedesign patterns to augmented reality".Similarly to these, [24] attempts to extrapolate design guidelines from the AR game Dino Dig, which despite having educational content was primarily intended to entertain. Design Patterns for AR-based Games Design patterns describe precisely how to use design techniques in order to achieve certain positive effects, at the same time providing insight and creating a shared vocabulary in the form of a pattern language [16,26].More precisely, design patterns "express a relationship between particular design contexts, forces […], and desired ('positive' or good) features" [26]. 
Björk and Holopainen [27] collected game design patterns, concerned with idea generation.General characteristics of patterns can be outlined as: "Operational and precise"; "positive"; "flexible"; "debatable (the Pattern is clear enough to criticize)"; "testable"; "end-user oriented."[26].A well-defined game design pattern language would allow for efficient communication, documentation and analysis "e.g. for purposes of comparative criticism, re-engineering, or maintenance" [28]. The patterns collected by Björk & Holopainen [27] do not utilize a problemsolution approach, with Björk, Lundgren, & Holopainen arguing that "not all aspects of design can or should be seen as solving problems, especially in a creative activity such as game design" [15].The Game Ontology Project describes, analyzes and studies games with pattern-like entries existing in a hierarchy the top level of which includes interface, rules, entity manipulation, and goals [29].More on the pedagogical side [30] define a framework for the construction of learning games. The literature revealed only a few pattern approaches for the domain of AR, mainly as interaction patterns.The examples below are presented informally but fit characteristics from [26].They provide data, which the framework presented in this paper was able to expand on and have thus been included. The Point Of Interest (POI) interaction pattern is often implemented in mobile AR browsers.When arriving at pre-defined points, users receive information about the environment through a choice of channels [5].Browsers may also direct the user towards nearby points of interest [3].The Head-Up Display (HUD) presents information from a fixed point of view, i.e. the information is not assigned some coordinate in 3D space [1].The Tricorder interaction pattern refers to scenarios in which information is scanned from the environment, adding "pieces of information to an existing real-world experience" [1].Holochess experiences consist of presenting entirely virtual objects to the AR environment.X -Ray V ision-based experiences allow "seeing beneath the surface of objects, people, or places" [1]. Design patterns for mobile games have been mapped to cognitive and motivational effects in educational AR games [31].A short, preliminary list of patterns "which take advantage of AR potential" comprising names and short descriptions consists of: Localization, video recording and view sharing, synchronous communication, contextualization, and object recognition [25].There are still challenges for AR to overcome, which inform the framework for interactions in AR in the remainder of this paper, applying a pattern approach, incorporating elements from the various sources discussed above, while adhering to the general characteristics laid out by [26]. A Pattern-based Framework for AR Learning Games Our pattern-based framework, mirroring above-mentioned approaches to game design patterns, is a classification of possible interactions, akin to the game mechanic terminology. Method.The framework and the software development are based on design patterns, which we defined according to the various approaches presented above [14,16,[25][26][27][28].Technologically, we build on the comparison of available AR systems and sensors performed by [32].In this first iteration of the framework, the pattern elements that were used in at least three of the six papers are present.They are: Name: A succinct name for the pattern. 
Forces/Problem: The issues the pattern is intended to combat. Feature/Solution: A description of one way to solve the problem. Effects/Consequences: The positive and negative consequences of applying the pattern, including design choices required for implementing the pattern. Requirements: We introduced requirements, which must or may be met to implement a pattern. This allows game designers interested in implementing patterns to ascertain whether a given pattern fits their criteria. Scope. Challenges to AR can roughly be sorted into those pertaining to technology, user interface and social acceptance [33]. Due to the scope of this paper and the framework's focus on the interactive medium of games, of these only user interfaces - visualization and interaction - will be covered. Additionally, some patterns focus on the development side of AR applications. The content of the patterns listed below is derived from the literature mentioned, a brainstorming session with participants of the WEKIT project [34], and the characteristics of existing AR games acquired through play testing. We grouped the patterns into six groups: directional, environmental, input, non-visual feedback, media-related, and multi-user, displayed in Table 1. Positions and orientations of users must be tracked and synchronized in real time. Combined patterns for AR Learning Games The idea of the framework described above is twofold: (1) We aim to work towards a best-practice collection of patterns, which enable players of AR learning games to intuitively understand interaction mechanisms. This aspect has been presented in the review of existing AR games and the patterns found within, which found its place in the effects/consequences column of each pattern. (2) We want to provide a design toolbox, enabling game designers and developers to construct complex games by utilising and combining the basic patterns described. This section is about the second aspect. In line with [27], we see our pattern collection as part of a more general game design language. In this sense, not only should the patterns described here be combinable with each other, but a game designer should also be able to combine these patterns with patterns from other collections, such as those reviewed above. Consequently, we see the examples of proposed pattern combinations presented below as a demonstration of the possible use of our framework (Table 2). Requirements consist of an environmental feature, an amount, and whether the amount represents an upper or lower limit. This data is compared to that gathered by the HoloLens to assess whether the requirements defined on the application level are met. EnvironmentRequirementGUI represents one way of visualizing the available information. The spatial understanding features of the device are further utilized in SpatialUnderstandingSpawner, an implementation of Environment-Adaption which uses predefined sets of rules and constraints from SpawnInformation to find suitable spots for instantiating objects, e.g. on a wall or on a floor, far from the player. EnvironmentRequirementsFromUnderstandingSpawner provides a bridge between the two previous mechanisms by generating simple requirements from the parameters of a SpatialUnderstandingSpawner. In first internal usability tests we could show the general functionality and operability of the patterns. In a next step, we aim to integrate the patterns into the general expertise-training framework of the WEKIT architecture [34].
EnvironmentRequirements: Consist of an environmental feature and an upper or lower limit; this is assessed against data from the HoloLens. EnvironmentRequirementGUI represents one way of visualizing the available information. Environment Adaptation: Spatial understanding features are utilized in SpatialUnderstandingSpawner, which uses predefined sets of rules and constraints from SpawnInformation to find spots for instantiating objects, e.g. on a wall/floor, far away. Information Filter: InformationFilter executes tasks based on data from InformationFilterMetric-derived classes, such as IFMetric_Distance (distance between object and player). The Information Filtering pattern performs actions according to different levels of proximity to the user, but can be extended to cover other metrics. Conclusions We presented an overview of AR definitions, approaches, and applications. We highlighted approaches towards specifying design patterns, created a framework of design patterns for AR games, provided ideas towards the construction of games based on these patterns, and exemplarily adapted a sample of them for Microsoft's HoloLens using the Unity game engine. The work reported here can be expanded in several ways. The framework only covers a fraction of AR interactions, as the scope was limited to user interaction and usability with the Microsoft HoloLens, and it is likely not complete. The results can be seen as a proof of concept. Although focused on learning games, the framework can be used in other AR contexts such as commercial applications, learning applications, or simulations, which aim to make use of AR. Consequently, we see the main contribution of this paper as a step towards broadening the understanding of AR interaction and application development based on design patterns. As next steps, we aim to apply the framework to the WEKIT AR training solution and to evaluate it in pilot application cases related to aircraft maintenance, medical equipment operations, and spacecraft subsystem integration [34]. Based on these evaluations, we aim to further develop the framework to cover a broad range of AR use cases and interaction scenarios.
Figure 1: Pattern framework classes. We implemented a prototype based on concepts of the open source ARLearn system for mobile serious games [35], adapted for HMDs as a proof of concept. The patterns have been developed for the HoloLens, using the Unity game engine and the HoloToolkit [36]. Some of the patterns (Directed Gaze, Directed Movement, Gaze Cursor, Gesture-based Interaction, and Voice Commands) are already available in HoloLens; others were newly implemented in our prototype (Figure 1, Table 3). Interactable, InteractableCheck_Gaze, and InteractableCheck_Position together form the basis for Point of Interest and Gaze Point of Interest. The scripts inheriting from InteractableCheck provide different ways of detecting Interactable objects. The EnvironmentRequirements class implements the pattern of the same name. Table 1. Basic Patterns. Table 2. Combined Patterns usable for learning games.
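The combined patterns above revolve around two pieces of logic: checking environment requirements against spatial-mapping data and filtering information by proximity to the player. The prototype itself is written in C# for Unity/HoloLens; purely as an illustration, the following Python sketch mimics that logic with invented class and attribute names (they are not the paper's actual API, and the scanned-environment data structure is hypothetical).

```python
# Illustrative sketch of the EnvironmentRequirements and Information Filter
# patterns described above. This is NOT the paper's Unity/C# implementation;
# names and the scanned-environment data structure are hypothetical.
from dataclasses import dataclass
from enum import Enum


class Limit(Enum):
    LOWER = "lower"   # scanned amount must be at least the required amount
    UPPER = "upper"   # scanned amount must be at most the required amount


@dataclass
class EnvironmentRequirement:
    feature: str      # e.g. "wall_area_m2", "floor_area_m2"
    amount: float
    limit: Limit

    def is_met(self, scanned: dict) -> bool:
        value = scanned.get(self.feature, 0.0)
        return value >= self.amount if self.limit is Limit.LOWER else value <= self.amount


def information_filter(player_pos, objects, max_distance):
    """Distance-based filter: only objects close to the player trigger actions."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return [o for o in objects if dist(player_pos, o["pos"]) <= max_distance]


# Hypothetical data standing in for a HoloLens spatial-mapping result.
scanned_environment = {"wall_area_m2": 6.5, "floor_area_m2": 12.0}
requirements = [
    EnvironmentRequirement("floor_area_m2", 8.0, Limit.LOWER),
    EnvironmentRequirement("wall_area_m2", 20.0, Limit.UPPER),
]
print(all(r.is_met(scanned_environment) for r in requirements))  # True

objects = [{"name": "clue", "pos": (1.0, 0.0, 2.0)}, {"name": "far_prop", "pos": (9.0, 0.0, 9.0)}]
print(information_filter((0.0, 0.0, 0.0), objects, max_distance=3.0))
```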
2018-01-23T22:45:21.767Z
2017-12-05T00:00:00.000
{ "year": 2017, "sha1": "a0b62061a00b48be5da881baba6fbcbdbdaa59e1", "oa_license": "CCBYNCSA", "oa_url": "https://research.ou.nl/files/10719598/Emmerich-Klemke-Hummes-Patterns-in-AR-preprint.pdf", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "a0b62061a00b48be5da881baba6fbcbdbdaa59e1", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
267847522
pes2o/s2orc
v3-fos-license
Myopathy due to carnitine palmitoyltransferase II deficiency: updating genetic aspects of the first publication in Brazil Carnitine palmitoyltransferase II (CPT II) deficiency is an autosomal recessive inherited disorder related to lipid metabolism affecting skeletal muscle. The first cases of CPT II deficiency causing myopathy were reported in 1973. In 1983, Werneck et al published the first two Brazilian patients with myopathy due to CPT II deficiency, where the biochemical analysis confirmed deficient CPT activity in the muscle of both cases. Over the past 40 years since the pioneering publication, clinical phenotypes and genetic loci in the CPT2 gene have been described, and pathogenic mechanisms have been better elucidated. Genetic analysis of one of the original cases disclosed compound heterozygous pathogenic variants (p.Ser113Leu/p.Pro50His) in the CPT2 gene. Our report highlights the historical aspects of the first Brazilian publication of the myopathic form of CPT II deficiency and updates the genetic background of this pioneering publication. INTRODUCTION The carnitine palmitoyltransferase enzymes (CPTs) catalyse the transfer of long-chain fatty acids from the cytoplasm into mitochondria, where the oxidation of fatty acids takes place.1 There are two sub-forms of CPT: CPT I, at the outer mitochondrial membrane, and CPT II, located in the inner membrane.[1][5][6][7][8] In 1983, Werneck et al. reported the first publication of myopathy due to CPT II deficiency in Brazil, describing two brothers (►Figure 1A) aged 25 ('case 1') and 19 ('case 2') years, who presented with muscle pain and decreased strength after prolonged exercise, which was made worse by colds.9 At that time, one of the patients developed recurrent myoglobinuria and episodic renal failure; creatine kinase levels were normal between the crises but increased 100 times during myoglobinuria episodes; needle electromyography suggested denervation; and muscle biopsy showed increased lipid droplets on the 'oil red O' stain and increased activity on the succinic dehydrogenase histochemical reaction. In the 70s and 80s, the way to confirm CPT deficiency was by biochemical analysis of muscle enzymatic activity, using a variety of biochemical methods that were not easily available. In the early years, one of the Brazilian authors had attended the 'Houston Merritt Clinical Research Center for Muscular Dystrophy and Related Disorders' of Columbia University (New York, USA), which had expertise in muscle diseases of fatty acid metabolism. The proximity of Brazilian researchers to the leadership of this centre (Prof Salvatore DiMauro) allowed for the exchange of knowledge and, therefore, with the consent of the patients, muscle enzymatic analysis was performed in New York. The biochemical analysis of muscle samples showed decreased carnitine-palmityl-transferase activity, with normal values for carnitine-octanoyl-transferase and carnitine-acetyltransferase, confirming the deficiency of CPT in the muscle of 'cases 1 and 2' (►Table 1). Due to this contribution, one of the authors of the first worldwide publication (Prof Salvatore DiMauro)2 also shares the authorship of the first Brazilian publication in this field.9 The article published by Werneck et al. also discussed the metabolic pathway of fatty acid utilisation as an energy source for muscle during exercise in normal and pathological conditions, with the knowledge of the time (►Figure 1B).
9 Over the past 40 years since this publication, clinical phenotypes and genetic loci in the CPT2 gene have been described, and pathogenic mechanisms have been better elucidated.7 The myopathic form of CPT II deficiency (MIM #255110), also known as the 'adult form', is the most common presentation; it is relatively mild and considered benign, but difficult to diagnose. The patients reported by Werneck et al. had this phenotype.9 This form presents most frequently in childhood or early adulthood and is usually characterised by attacks of myalgia, weakness, and/or rhabdomyolysis, precipitated mainly by exercise, but also by cold, fever, infection, or prolonged fasting.1,3,6,8 In severe cases, rhabdomyolysis may lead to myoglobinuria and, consequently, renal failure.1,5,6 However, persistent weakness is uncommon in patients with this phenotype.1,3 There is a male predominance in CPT II deficiency, but the mechanism of this predominance is not clear.1 The creatine kinase levels are markedly elevated during attacks, but generally remain within the reference range or are slightly elevated between attacks.1,3,4 Although it is useful for differential diagnosis, muscle histological investigation in CPT II deficiency shows only unspecific myopathic changes with slight lipid accumulation.3,4 In other words, the muscle histology cannot establish CPT II deficiency, as there is no myopathological hallmark, in contrast with other muscle lipid disorders (i.e. carnitine deficiency).3,4 In all patients, muscle CPT II deficiency must be confirmed biochemically or genetically.3,6,7,9 A definitive diagnosis requires the identification of mutations in the CPT2 gene (OMIM *600650). Several mutations in the CPT2 gene have been published, with a correlation between genotype, metabolic dysfunction, and clinical presentation.1,6,7 However, the biochemical consequences of the distinct mutations remain controversial. In this publication, we update the genetic background of the original publication by Werneck et al in 1983.9 In the original publication, genetic analysis of the CPT2 gene was not available. However, 'case 2' is still attending medical appointments in the outpatient clinic of the Neuromuscular Centre at the 'Hospital de Clínicas da Universidade Federal do Paraná' (Curitiba, Brazil). At 60 years of age, he presents recurrent episodes of weakness and myoglobinuria after exercise, cold, or prolonged fasting (currently: one episode/month), as well as renal disease (glomerular filtration rate: 32.6 mL/min/1.73 m²) due to recurrent rhabdomyolysis and gout crises. He has been taking L-carnitine 2 g/day since infancy and, recently, some supplements (coenzyme Q10 100 mg, vitamin C 400 mg, NADH 10 mg, L-valine 500 mg, L-leucine 1 g, and L-isoleucine 500 mg per day). His neurological examination is normal, except for the presence of bilateral 'pes cavus' (similar to the findings at 19 years of age).9 Additionally, serum creatine kinase varied from 435 to 1754 U/L (normal: <145 U/L) between the attacks; also, electrophysiological tests, such as needle electromyography and nerve conduction studies, were normal. The genetic analysis
showed one heterozygous variant (p.Ser113Leu) in the CPT2 gene by PCR/RFLP and compound heterozygosity for p.Ser113Leu/p.Pro50His in the CPT2 gene by next-generation sequencing. The maternal and paternal allelic origin of these variants has not been determined. More than 90 pathogenic variants have been identified in the CPT2 gene; the majority are predicted to produce amino acid substitutions or small deletions. Among Caucasian patients, there is a high frequency of a few mutations, especially p.Ser113Leu and p.Pro50His, which were found in our patient.4,5,6,7 The p.Pro50His mutation is also associated with a mild subset of CPT II deficiency and has been observed in about 6.5% of mutant alleles of patients with the myopathic form.3,4,6,7 Despite these steps focused on identifying the diagnosis and management strategies of CPT II deficiency, this muscle disorder is still a mystery for many medical doctors, clinical geneticists, and even patients. In summary, our report highlights the historical aspects of the first publication of the myopathic form of CPT II deficiency and updates the genetic background of the pioneering publication in Brazil. Table 1. The enzymatic muscle activity described in the original article. ** pmoles of acetylcarnitine; *** nmoles/mg.
2024-02-25T05:23:25.385Z
2023-09-12T00:00:00.000
{ "year": 2024, "sha1": "929388e962ddd75e9f04a065d42f7760bb3e7088", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "5e5f5f46edf7cf4dad9eca06d17a29cf9787a6cf", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
218728006
pes2o/s2orc
v3-fos-license
Long Noncoding RNA DLGAP1-AS1 Promotes the Aggressive Behavior of Gastric Cancer by Acting as a ceRNA for microRNA-628-5p and Raising Astrocyte Elevated Gene 1 Expression Purpose: The long noncoding RNA DLGAP1 antisense RNA 1 (DLGAP1-AS1) plays well-defined roles in the malignant progression of hepatocellular carcinoma. The purpose of this study was to determine whether DLGAP1-AS1 affects the aggressive behavior of gastric cancer (GC). Methods: DLGAP1-AS1 expression in GC tissue samples and cell lines was determined by reverse-transcription quantitative PCR. GC cell proliferation, apoptosis, migration, invasion, and tumor growth in vitro as well as in vivo were examined by the Cell Counting Kit-8 assay, flow-cytometric analysis, transwell migration and invasion assays, and xenograft model experiments, respectively. Results: DLGAP1-AS1 was overexpressed in GC tissue samples and cell lines. Among patients with GC, the increased level of DLGAP1-AS1 correlated with tumor size, TNM stage, lymph node metastasis, distant metastasis, and shorter overall survival. The knockdown of DLGAP1-AS1 suppressed GC cell proliferation, migration, and invasion in vitro, as well as promoted cell apoptosis and hindered tumor growth in vivo. Mechanistically, DLGAP1-AS1 functioned as a competing endogenous RNA for microRNA-628-5p (miR-628-5p) in GC cells, thereby increasing the expression of the miR-628-5p target astrocyte elevated gene 1 (AEG-1). Functionally, the recovery of the miR-628-5p/AEG-1 axis output attenuated the effects of DLGAP1-AS1 knockdown in GC cells. Conclusion: DLGAP1-AS1 is a pleiotropic oncogenic lncRNA in GC. DLGAP1-AS1 plays a pivotal part in the oncogenicity of GC in vitro and in vivo by regulating the miR-628-5p/AEG-1 axis. DLGAP1-AS1, miR-628-5p, and AEG-1 form a regulatory pathway to facilitate GC progression, suggesting this pathway as an effective target for the treatment of GC. Introduction Gastric cancer (GC) is the fourth most common cancer and the third major cause of cancer-associated deaths globally. 1 Approximately 850,000 new GC cases and 650,000 associated deaths are registered every year. 2 Currently, surgical resection followed by chemoradiation and adjuvant chemotherapy is the first-line therapeutic strategy for patients with GC. 3 Tremendous advances in the diagnosis and management of GC have been made in the past several decades; unfortunately, the therapeutic efficacy of the existing modalities is still not ideal, with an overall 5-year survival rate of only 20%. 4,5 Recurrence and metastasis are the major obstacles for the curative treatment of GC. 6 In addition, chemoresistance contributes to the poor therapeutic outcomes for patients with GC diagnosed at an advanced stage. 7 Multiple factors, including Helicobacter pylori infection, diet, smoking, and obesity, play important roles in gastric carcinogenesis and GC progression; however, the detailed molecular events underlying GC pathogenesis are not well understood. Hence, an in-depth understanding of the mechanisms underlying GC initiation, progression, and chemoresistance is urgently needed for identifying promising diagnostic options and therapeutic interventions.
Long noncoding RNAs (lncRNAs) belong to a cluster of transcripts over 200 nucleotides in length and lacking protein-coding capacity. 8 They can modulate gene expression at the epigenetic, transcriptional, and post-transcriptional levels, and these regulatory roles are carried out through various mechanisms, including interactions with RNA, proteins, and DNA. [9][10][11] Intriguingly, lncRNAs have attracted much attention due to their significant correlations with carcinogenesis and cancer progression. [12][13][14] An increasing number of studies have shown that numerous lncRNAs are abnormally expressed in GC. [15][16][17] Notably, there is increasing evidence supporting a close relationship between lncRNA dysregulation and malignant characteristics in GC. 18,19 MicroRNAs (miRNAs, miRs) are classified as singlestranded noncoding short RNAs approximately 19-25 nucleotides in length. 20 MiRNAs serve as major posttranscriptional regulators of gene expression by directly interacting with the 3′ untranslated regions (3′-UTRs) of their target mRNAs, which can result in the subsequent degradation of a target mRNA or suppression of its translation. 21 MiRNAs are implicated in nearly all known physiological and pathological processes, including carcinogenesis and cancer progression. 22 Accordingly, comprehensive research into the involvement of lncRNA and miRNAs in GC progression may facilitate the development of promising treatment options, and thereby improve clinical outcomes among patients with this disease. A lncRNA termed DLGAP1-AS1 performs welldefined functions in the malignant progression of hepatocellular carcinoma. 23 Nonetheless, it is not known whether DLGAP1-AS1 plays a role in the regulation of GC oncogenicity. In this study, we attempted to quantify DLGAP1-AS1 expression in GC and determine the clinical relevance of DLGAP1-AS1 in GC. We further aimed to investigate the role of DLGAP1-AS1 in the malignant characteristics of GC and clarify the underlying molecular events. MiR-628-5p is weakly expressed in pancreatic ductal adenocarcinoma, 24 epithelial ovarian cancer 25 and glioma, 26 and inhibits the malignancy of these cancer types. On the contrary, miR-628-5p is highly expressed in osteosarcoma and promotes cancer progression. 27 AEG-1 is upregulated in GC, which is correlated with adverse clinical features and poor prognosis. [28][29][30] Functionally, AEG-1 performes cancer-promoting actions in gastric carcinogenesis and cancer progression, and is involved in multiple aggressive phenotype. [31][32][33][34][35] Yet, as far as we know, there has been no study that has explored the issue of DLGAP1-AS1, miR-628-5p, and AEG-1 in GC. Herein, we also attempted to address the functions and associations between DLGAP1-AS1, miR-628-5p, and AEG-1 in GC. Tissue Samples and Cell Lines Sixty-three pairs of samples of tumor tissues and the corresponding adjacent non-tumor tissues were collected from patients with GC at Gaomi People's Hospital. All these patients underwent surgical resection and had not been treated with chemotherapy, radiotherapy, or other anticancer modalities. The experimental protocols of our current study were approved by the Ethics Committee of Gaomi People's Hospital and were performed in accordance with the Declaration of Helsinki. In addition, all participants provided written informed consent prior to surgical resection. GC patients were followed-up, ranging for 60 months. 
All tissue samples were snap-frozen in liquid nitrogen after collection and then transferred to a -80°C cryogenic freezer. Five human GC cell lines, MKN-45, HGC27, SNU-1, AGS, and MGC-803, were purchased from the Type Culture Collection of the Chinese Academy of Sciences (Shanghai, China). A human gastric epithelial cell line, GES-1, was obtained from American Type Culture Collection (Manassas, VA, USA). Dulbecco's modified Eagle's medium (DMEM; Gibco, Thermo Fisher Scientific, Inc., Waltham, MA, USA) containing 10% of fetal bovine serum (FBS; Gibco, Thermo Fisher Scientific, Inc.), 100 U/mL penicillin, and 100 μg/mL streptomycin was utilized for cell culture. The cells were grown at 37°C in a humidified incubator supplied with 5% of CO 2 . Reverse-Transcription Quantitative PCR (RT-qPCR) TRIzol Reagent (Invitrogen, Thermo Fisher Scientific, Inc.) was used for total RNA extraction from tissue samples or cells. After the extraction, the quantity and purity of total RNA were determined on a NanoDrop spectrophotometer (ND-1000; Nanodrop Technologies, Thermo Fisher Scientific, Inc.). To measure miR-628-5p expression, cDNA synthesis was carried out using a miRcute Plus miRNA First-Strand cDNA Synthesis Kit, and the synthesized cDNA was then subjected to PCR amplification using the miRcute Plus miRNA SYBR Green qPCR Kit (both form Tiangen Biotech Co., Ltd., Beijing, China). U6 small nuclear RNA acted as the control for miR-628-5p. All gene expression levels were calculated using the 2 −ΔΔCq method. Subcellular Fraction Extraction About 1 × 10 7 cells were harvested and used for separating nuclear and cytoplasmic RNA by means of a Cytoplasmic and Nuclear RNA Purification Kit (Norgen, Ontario, Canada). The nuclear and cytoplasmic fractions were analyzed using RT-qPCR to determine the distribution of DLGAP1-AS1 expression in GC cells. GAPDH and U6 served as the cytoplasmic and nuclear controls, respectively. Cell Counting Kit-8 (CCK-8) Assay Transfected cells were collected at 24 h post-transfection and resuspended in the culture medium. Hundred microliters of the cell suspension, containing an estimated 2,000 cells, was inoculated into wells of 96-well plates. Six replicate wells were set for each group. The CCK-8 assay was performed to analyze cellular proliferation at four time points: 0, 24, 48, and 72 h after inoculation. At every time point, 10 μL of the CCK8 solution (Dojindo, Kumamoto, Japan) was added into each well prior to incubation at 37°C with 5% CO 2 for an additional 2 h. The absorbance was read at a 450 nm wavelength on the spectrophotometer. Growth curves were drawn accordingly. Flow-Cytometric Analysis of Apoptosis Transfected cells were collected after 48 h of incubation, washed twice with ice-cold phosphate-buffered saline, and then used for measurement of the apoptosis rate using the Annexin V-Fluorescein Isothiocyanate (FITC) Apoptosis Detection Kit (BioLegend, Inc., San Diego, CA, USA). The transfected cells were resuspended in 1× binding buffer and transferred to a 5 mL culture tube, followed by incubation with 5 µL of Annexin V-FITC and 5 µL of the propidium iodide solution provided with the kit. Following 15 min incubation at room temperature in darkness, the proportion of apoptotic cells was measured on a flow cytometer (BD Biosciences, Franklin Lakes, NJ, USA). Transwell Migration and Invasion Assays Twenty-four-well transwell chambers (8 μm pore size; BD Biosciences, Franklin Lakes, NJ, USA) were used to determine the migratory and invasive abilities of the cells. 
For the migration assays, 5 × 10 4 transfected cells resuspended in FBS-free DMEM were seeded in the upper compartments. For the invasion assay, the chambers were precoated with Matrigel (BD Biosciences, Franklin Lakes, NJ, USA) prior to cell seeding. The upper compartments were loaded with the same number of cells as that used in the migration assay. For both assays, DMEM containing 20% of FBS was employed as a chemoattractant in the lower compartments. Transfected cells were incubated at 37°C in a humidified incubator with 5% of CO 2 . After 24 h, the cells that passed through the pores in the membrane were fixed with 4% polyformaldehyde and stained with 0.5% crystal violet. After extensive washes, images were captured using a light microscope (200× magnification; Olympus Corporation, Tokyo, Japan). Six fields of view were randomly chosen, and the average cell number was determined. Xenograft Model Experiment Lentiviral vectors carrying DLGAP1-AS1 short hairpin RNA (shRNA; sh-DLGAP1-AS1) and negative control shRNA (sh-NC) were generated by Shanghai GenePharma Co., Ltd. AGS cells growing in the logarithmic growth phase were collected and seeded into 6-well plates. To obtain cells with stable DLGAP1-AS1 silencing, AGS cells were transfected with lentiviral vectors carrying sh-DLGAP1-AS1 or sh-NC and were then selected by incubation with puromycin. The animal experiments were approved by the Animal Ethical Committee of Gaomi People's Hospital. All experimental steps were performed in accordance with the Animal Protection Law of the People's Republic of China-2009 for experimental animals. Female BALB/c nude mice (weighing 19-21 g, aged 5-7 weeks) were bought from Shanghai Lingchang Biotech Co., Ltd. (Shanghai, China) and were maintained under specific pathogen-free conditions. AGS cells stably transfected with either sh-DLGAP1-AS1 or sh-NC were harvested and injected into the flank of nude mice through subcutaneous inoculation. Starting at 2 weeks after inoculation, the length and width of the growing tumor xenografts were measured every 4 days using calipers. Finally, all the nude mice were euthanized by means of cervical dislocation. The tumor xenografts were excised, photographed, and weighed. The volume of tumor xenografts was calculated via the following formula: volume=0.5 × width 2 × length. The fragment of the wild-type (wt) 3′-UTR of AEG-1 predicted to interact with the miR-628-5p and mutant (Mut) AEG-1 3′-UTR was produced by Shanghai GenePharma Co., Ltd., and inserted into the pmirGLO vector (Promega Corporation, Madison, WI, USA). The constructed luciferase reported plasmids were named Wt-AEG-1 and Mut-AEG-1, respectively. The same experimental procedures were applied to synthesize Wt-DLGAP1-AS1 and Mut-DLGAP1-AS1. Cells were seeded in 24-well plates 1 day before transfection. Cotransfection of either the miR-628-5p mimic or miR-NC and either Wt or Mut reporter plasmids was performed using the Lipofectamine ® 2000 Reagent. Finally, the transfected cells were collected at 48 h posttransfection, and luciferase activity was analyzed using a Dual-Luciferase Reporter Assay System (Promega Corporation, Madison, WI, USA). Firefly luciferase activity was normalized to Renilla luciferase activity. RNA Immunoprecipitation (RIP) Assay The RIP assay was carried out using a Magna RIP™ RNA-Binding Protein Immunoprecipitation Kit (Millipore; Bedford, MA, USA) to evaluate the interaction between DLGAP1-AS1 and miR-628-5p in GC cells. 
GC cells were lysed in pre-cooled complete RIP lysis buffer, and the cell lysate was incubated with magnetic beads conjugated with human anti-Argonaute 2 (AGO2) or control anti-immunoglobulin G (IgG) antibody. Subsequent to proteinase K treatment, the enrichment of DLGAP1-AS1 and miR-628-5p by AGO2 was examined via RT-qPCR. Western Blot Analysis A radioimmunoprecipitation assay (RIPA) kit containing proteinase inhibitors (Beyotime Institute of Biotechnology, Haimen, China) was utilized for total-protein isolation from cells. The concentration of isolated total protein was assessed using a Bicinchoninic Acid Kit (Beyotime Institute of Biotechnology, Haimen, China). Equivalent amounts of protein were loaded onto each lane and separated by SDS-PAGE in a 10% gel, followed by transfer onto polyvinylidene fluoride membranes. After blocking with 5% defatted milk powder diluted in Tris-buffered saline containing 0.5% Tween 20 (TBST), the membranes were incubated overnight at 4°C with a primary antibody against AEG-1 (cat. No. ab124789, 1:500 dilution in TBST; Abcam, Cambridge, UK) or GAPDH (cat. No. ab128915, 1:500 dilution; Abcam). After three washes with TBST, a goat anti-rabbit horseradish peroxidaseconjugated secondary antibody (cat. No. ab205718, 1:5000 dilution; Abcam) was added and incubated at room temperature for 2 h. Enhanced Chemiluminescence Reagent (Bio-Rad Laboratories, Hercules, CA, USA) was employed to measure the protein signals. Statistical Analysis All the results are presented as the mean ± standard deviation from at least three independent experiments. The relationship between DLGAP1-AS1 and the clinical features of patients with GC were evaluated using the χ 2 test. Comparison of the differences between two groups was carried out using Student's t-test. One-way analysis of variance followed by Tukey's post hoc test was conducted to examine differences among multiple groups. The expression correlation between DLGAP1-AS1 and miR-628-5p was tested via Spearman correlation analysis. The Kaplan-Meier method was utilized to plot survival curves, followed by the log rank test to compare survival outcomes. All the data were analyzed using the SPSS 20.0 statistical software (SPSS, Inc., Chicago, IL, USA), and differences were considered statistically significant when the P value was less than 0.05. DLGAP1-AS1 Is Upregulated in GC To characterize the expression profile of DLGAP1-AS1 in GC, 63 pairs of GC tissue samples and corresponding adjacent non-tumor tissues were collected, and DLGAP1-AS1 expression was determined via RT-qPCR. DLGAP1-AS1 was highly expressed in GC tissue samples compared with the corresponding adjacent non-tumor tissue samples ( Figure 1A). Furthermore, analysis of DLGAP1-AS1 expression in GC cell lines (MKN-45, HGC27, SNU-1, AGS, and MGC-803) and the human gastric epithelial cell line GES-1 was performed through RT-qPCR. DLGAP1-AS1 was upregulated in all five GC cell lines relative to the normal cell line GES-1 ( Figure 1B). The correlation between DLGAP1-AS1 expression and the clinical characteristics of patients with GC was elucidated in detail. All patients with GC were subdivided into either DLGAP1-AS1 low or DLGAP1-AS1 high expression groups based on the DLGAP1-AS1 median expression level among the GC tissue samples. This analysis revealed that the expression of DLGAP1-AS1 significantly correlated with tumor size (P = 0.023), TNM stage (P = 0.011), lymph node metastasis (P = 0.017), and distant metastasis (P = 0.027) among the patients with GC (Table 1). 
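Before turning to the survival analysis, the quantification and statistical procedures described in the Methods can be made concrete with a brief, illustrative Python sketch. All Cq values, tumor dimensions, and expression vectors below are hypothetical toy data, not measurements from this study; the snippet only demonstrates the 2^-ΔΔCq relative-expression calculation, the xenograft volume formula, and a Spearman correlation of the kind reported for DLGAP1-AS1 and miR-628-5p.

```python
# Illustrative sketch of three calculations described in the Methods.
# All numbers below are hypothetical toy values, not data from this study.
import numpy as np
from scipy import stats

# --- Relative expression by the 2^-(ddCq) method ---
cq_target_tumor, cq_ref_tumor = 24.1, 18.0     # e.g. lncRNA vs GAPDH, tumor sample
cq_target_normal, cq_ref_normal = 22.3, 18.1   # adjacent non-tumor sample
d_cq_tumor = cq_target_tumor - cq_ref_tumor
d_cq_normal = cq_target_normal - cq_ref_normal
fold_change = 2 ** -(d_cq_tumor - d_cq_normal)
print(f"relative expression (fold change): {fold_change:.2f}")

# --- Xenograft volume: 0.5 x width^2 x length ---
width_mm, length_mm = 6.0, 9.0
volume_mm3 = 0.5 * width_mm ** 2 * length_mm
print(f"tumor volume: {volume_mm3:.1f} mm^3")

# --- Spearman correlation between two expression vectors ---
lnc_expr = np.array([2.1, 3.4, 1.2, 4.0, 2.8, 3.9])
mir_expr = np.array([0.9, 0.4, 1.5, 0.3, 0.7, 0.5])
rho, p_value = stats.spearmanr(lnc_expr, mir_expr)
print(f"Spearman rho = {rho:.2f}, P = {p_value:.3f}")
```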
Kaplan-Meier survival curve analysis indicated that patients with GC and high DLGAP1-AS1 expression had significantly shorter overall survival compared to patients with low DLGAP1-AS1 expression ( Figure 1C; P = 0.032). These results suggest that DLGAP1-AS1 may be involved in the progression of GC. Silencing DLGAP1-AS1 Expression Inhibits GC Cell Proliferation, Migration, and Invasion and Promotes Cell Apoptosis The GC cell lines SNU-1 and AGS demonstrated relatively higher DLGAP1-AS1 expression compared with the other three GC cell lines; hence, for our following experiments, these two cell lines were selected as the models to investigate the role of DLGAP1-AS1 in the malignancy of GC. si-DLGAP1-AS1 was transfected into SNU-1 and AGS cells to decrease endogenous DLGAP1-AS1 expression, which was confirmed by RT-qPCR ( Figure 2A). The results of the CCK-8 assay revealed that the proliferative ability of SNU-1 and AGS cells significantly decreased following DLGAP1-AS1 downregulation ( Figure 2B). Flow-cytometric analysis was performed to determine the apoptosis rate of DLGAP1-AS1-deficient SNU-1 and AGS cells. Downregulation DLGAP1-AS1 increased the apoptotic rate of SNU-1 and AGS cells ( Figure 2C). We further evaluated whether DLGAP1-AS1 affects the migration and invasiveness of GC cells in vitro using transwell migration and invasion assays. The knockdown of DLGAP1-AS1 impaired the migratory ( Figure 2D) and invasive ( Figure 2E) abilities of SNU-1 and AGS cells. These results clearly suggest that DLGAP1-AS1 plays oncogenic roles in the malignant phenotype of GC. DLGAP1-AS1 Serves as a Sponge of miR-628-5p in GC Cells To explore the mechanisms involved in the oncogenic actions of DLGAP1-AS1, subcellular fraction extraction was performed to investigate the localization of DLGAP1-AS1 expression in GC cells. The data confirmed that DLGAP1-AS1 was mostly distributed in the cytoplasm of SNU-1 and AGS cells ( Figure 3A). Recent studies revealed that cytoplasmic lncRNAs act as competing endogenous RNAs (ceRNAs) to directly interact with miRNAs and reduce their expression, resulting in the upregulation of their target mRNAs. [36][37][38] Hence, we hypothesized that DLGAP1-AS1 may work as a ceRNA in GC. Based on the results of bioinformatics prediction, miR-628-5p ( Figure 3B) was selected for further evaluation due to its crucial functions in the oncogenicity of multiple human cancers. [25][26][27] A luciferase reporter assay was carried out to confirm the targeting relationships between DLGAP1-AS1 and miR-628-5p in GC cells. In SNU-1 and AGS cells, exogenous miR-628-5p expression effectively decreased the luciferase activity of Wt-DLGAP1-AS1; however, the luciferase activity of Mut-DLGAP1-AS1 was unaffected in response to miR-628-5p overexpression ( Figure 3C). In addition, DLGAP1-AS1 and miR-628-5p were vastly enriched in the AGO2 antibody-treated group relative to the IgG antibodytreated group, as determined through RIP assay ( Figure 3D). RT-qPCR was performed to detect miR-628-5p in 63pairs of GC tissue samples and corresponding adjacent non-tumor tissues. MiR-628-5p was downregulated in GC tissues compared with adjacent non-tumor tissues ( Figure 3E), and the expression levels of DLGAP1-AS1 and miR-628-5p were inversely correlated in the 63 GC tissues ( Figure 3F; r = −0.5472, P < 0.0001). We then knocked down DLGAP1-AS1 expression in SNU-1 and AGS cells and detected the expression of miR-628-5p to further assess the interaction between DLGAP1-AS1 and miR-628-5p. 
Transfection with si-DLGAP1-AS1 led to a significant upregulation of miR-628-5p in SNU-1 and AGS cells ( Figure 3G). These results suggest that DLGAP1-AS1 may act as a molecular sponge for miR-628-5p in GC cells. AEG-1 Is a Direct Target Gene of miR-628-5p in GC Cells and Is Positively Regulated by DLGAP1-AS1 After verifying the downregulation of miR-628-5p in GC, we next studied the specific roles of this miRNA in GC cells. After miR-628-5p mimic was introduced into SNU-1 and AGS cells, miR-628-5p was remarkably upregulated compared with cells transfected with miR-NC ( Figure 4A). CCK-8 assay and flow-cytometric analysis demonstrated that ectopic miR-628-5p expression resulted in a significant decrease in cell proliferation ( Figure 4B) and increase in cell apoptosis ( Figures 4C and D) in SNU-1 and AGS cells. Furthermore, transwell migration and invasion assays revealed that the migratory ( Figure 4E) and invasive ( Figure 4F) abilities of SNU-1 and AGS cells were greatly reduced after miR-628-5p overexpression. Identification of the direct targets of miR-628-5p is an essential step toward a better understanding of its participation in gastric carcinogenesis and GC progression. To elucidate the mechanism by which miR-628-5p suppressed GC progression, bioinformatics analysis was performed for miR-628-5p target prediction. AEG-1 was selected for further analysis as it is known to be closely associated with the progression of GC, [28][29][30][31][32][33][34][35]39 and the 3′-UTR of the AEG-1 mRNA was predicted to directly interact with miR-628-5p ( Figure 4G). To test this hypothesis, the luciferase reporter assay was carried out to evaluate the direct interaction between miR-628-5p and the 3′-UTR of AEG-1. Transfection of the miR-628-5p mimic reduced the luciferase activity of the plasmid harboring the wild-type miR-628-5pbinding sites (1 and 2). By contrast, the luciferase activity barely changed in SNU-1 and AGS cells after cotransfection with the plasmid carrying the mutant AEG-1 3′-UTR (Mut-AEG-1) plus the miR-628-5p mimic ( Figure 4H). To further investigate the association between miR-628-5p and AEG-1 in GC, RT-qPCR analysis was performed to measure AEG-1 expression in the 63 pairs of GC tissue samples and corresponding adjacent non-tumor tissue samples. The mRNA expression of AEG-1 was higher in the GC tissue samples than in the corresponding adjacent non-tumor tissues ( Figure 4I). In addition, Spearman correlation analysis revealed an inverse correlation between the expression levels of miR-628-5p and AEG-1 mRNA among the GC tissue samples ( Figure 4J; r = −0.5365, P < 0.0001). Furthermore, the mRNA ( Figure 4K) and protein ( Figure 4L) expression levels of AEG-1 were dramatically lower in SNU-1 and AGS cells overexpressing miR-628-5p, as evidenced by RT-qPCR and Western blotting. Collectively, these results clearly identified AEG-1 as a direct target gene of miR-628-5p in GC cells. DLGAP1-AS1 functioned as a molecular sponge for miR-628-5p in GC cells, and AEG-1 functioned as a direct target of miR-628-5p. Accordingly, we further investigated whether DLGAP1-AS1 may influence the expression of AEG-1 in GC cells. The mRNA and protein levels of AEG-1 in SNU-1 and AGS cells after si-DLGAP1-AS1 or si-NC transfection were determined through RT-qPCR and Western blotting, respectively. As expected, depletion of DLGAP1-AS1 decreased the AEG-1 mRNA ( Figure 4M) and protein ( Figure 4N) expression levels in SNU-1 and AGS cells. 
These results demonstrated that DLGAP1-AS1 functioned as a ceRNA for miR-628-5p and consequently raised the expression of AEG-1 in GC cells. The AEG-1 overexpression plasmid (pcDNA3.1-AEG-1) or empty pcDNA3.1 plasmid was cotransfected with si-DLGAP1-AS1 into SNU-1 and AGS cells. The efficiency of pcDNA3.1-AEG-1 was determined by Western blotting (Figure 6A). pcDNA3.1-AEG-1 or the empty pcDNA3.1 plasmid was transfected into DLGAP1-AS1-deficient SNU-1 and AGS cells. Then, functional experiments were performed in these cells, and the results revealed that the recovery of AEG-1 expression abrogated the effects of DLGAP1-AS1 downregulation on the proliferation (Figure 6B), apoptosis (Figure 6C), migration (Figure 6D), and invasiveness (Figure 6E) of SNU-1 and AGS cells. The above results provided additional evidence that DLGAP1-AS1 worked as a ceRNA to facilitate the malignancy of GC cells at least partly by increasing the output of the miR-628-5p/AEG-1 axis. DLGAP1-AS1 Knockdown Hinders GC Tumor Growth in vivo To explore the effects of DLGAP1-AS1 on the tumor growth of GC cells in vivo, AGS cells stably transfected with either sh-DLGAP1-AS1 or sh-NC were inoculated into the flanks of nude mice to establish a transplanted tumor model. The tumor xenografts grew more slowly (Figures 7A and B), and the resultant tumor weight was significantly lower (Figure 7C) in the sh-DLGAP1-AS1 group than in the sh-NC group. Further analysis revealed that DLGAP1-AS1 was still decreased (Figure 7D) and miR-628-5p was increased (Figure 7E) in the tumor xenografts derived from DLGAP1-AS1-downregulated AGS cells. Furthermore, Western blotting showed that the amount of AEG-1 protein was decreased in the tumor xenografts obtained from the sh-DLGAP1-AS1 group (Figure 7F). These results suggest that the depletion of DLGAP1-AS1 expression inhibited the tumor growth of GC cells in vivo. Discussion Multiple lncRNAs have been found to be aberrantly expressed in GC, and this abnormal expression is strongly involved in the initiation and progression of GC. [40][41][42] It is therefore important to explore the biological functions of dysregulated lncRNAs in GC, as this may contribute to the development of effective therapeutic strategies and the improvement of clinical outcomes for patients with GC. In this study, we evaluated DLGAP1-AS1 expression in GC and investigated the effects of DLGAP1-AS1 on the malignancy of GC in detail. To the best of our knowledge, this is the first study on the expression pattern and involvement of DLGAP1-AS1 in GC. DLGAP1-AS1 expression is upregulated in hepatocellular carcinoma. 23 DLGAP1-AS1 downregulation inhibits the proliferation and promotes the apoptosis of hepatocellular carcinoma. 23 Nonetheless, whether DLGAP1-AS1 is deregulated in GC and, if so, whether its deregulation is closely related to the malignant characteristics of GC had not been elucidated. In this study, we demonstrated that DLGAP1-AS1 is overexpressed in GC tumors and cell lines. Increased DLGAP1-AS1 expression significantly correlated with tumor size, TNM stage, lymph node metastasis, and distant metastasis among our patients with GC.
In addition, patients with GC and high DLGAP1-AS1 expression had shorter overall survival compared with patients who had low DLGAP1-AS1 expression. Figure 6 Restored astrocyte elevated gene 1 (AEG-1) expression reverses the effects of DLGAP1-AS1 knockdown in gastric cancer (GC) cells. (A) SNU-1 and AGS cells were transfected with the AEG-1 overexpression plasmid pcDNA3.1-AEG-1 or empty pcDNA3.1 plasmid. Western blotting was conducted to evaluate AEG-1 protein expression. (B-E) The pcDNA3.1-AEG-1 was introduced into DLGAP1-AS1 small interfering RNA (si-DLGAP1-AS1)-transfected SNU-1 and AGS cells. Negative control siRNA (si-NC) was also transfected into SNU-1 and AGS cells as the control. The proliferation, apoptosis, migration, and invasiveness of the aforementioned cells were investigated using CCK-8 assay, flow-cytometric analysis (propidium iodide; PI), and transwell migration and invasion experiments, respectively. *P < 0.05 and **P < 0.01. Functionally, interference with DLGAP1-AS1 expression suppressed GC cell proliferation, migration, and invasion in vitro, as well as induced cell apoptosis and impaired tumor growth in vivo. Investigation of the molecular mechanisms underlying the tumor-promoting effects of DLGAP1-AS1 in GC may help identify effective targets for anticancer therapies. As a factor considerably affecting post-transcriptional modulation, lncRNAs competitively decrease the binding of miRNAs to their target mRNAs by "sponge" adsorption, positively regulating the expression of oncogenic or tumor suppressive genes. 43 Our bioinformatics prediction indicated that miR-628-5p has a putative DLGAP1-AS1 binding site. Further experimental validation found that DLGAP1-AS1 could directly interact with miR-628-5p in GC cells. In addition, miR-628-5p was weakly expressed in GC tissues and demonstrated a negative correlation with DLGAP1-AS1 expression. Furthermore, knockdown of DLGAP1-AS1 resulted in a notable increase of miR-628-5p expression in GC cells. After identifying AEG-1 as a target of miR-628-5p, we next investigated the regulatory relationship among DLGAP1-AS1, miR-628-5p and AEG-1. AEG-1 was positively regulated by DLGAP1-AS1 in GC cells, and the regulatory influence was exerted through miR-628-5p sponging. In addition, rescue assays revealed that increasing the output of the miR-628-5p/AEG-1 axis neutralized the DLGAP1-AS1 deficiency-mediated inhibition of GC progression. All in all, our study identified a ceRNA regulatory pathway in GC involving DLGAP1-AS1, miR-628-5p, and AEG-1. Upregulation of miR-628-5p in osteosarcoma is correlated with poor clinical outcomes in patients. 27 MiR-628-5p acts as an oncogenic miRNA in osteosarcoma and is involved in the control of cell proliferation, migration and invasion. 27 On the contrary, miR-628-5p is downregulated in epithelial ovarian cancer 25 and glioma, 26 and it performs anti-oncogenic roles in the progression of these malignancies. In this study, we first confirmed that miR-628-5p expression is low in GC. Exogenous miR-628-5p expression played an inhibitory role in the aggressive behavior of GC cells. Ethical Approval and Informed Consent The experimental protocols used in the current study were approved by the Ethics Committee of Gaomi People's Hospital and were performed in accordance with the Declaration of Helsinki. In addition, all participants provided written informed consent prior to surgical resection. The animal experiments were approved by the Animal Ethical Committee of Gaomi People's Hospital.
All experimental steps were performed in accordance with the Animal Protection Law of the People's Republic of China-2009 for experimental animals.
2020-04-30T09:07:06.783Z
2020-04-01T00:00:00.000
{ "year": 2020, "sha1": "0347a1219291d148c7f396dacf9437e8a22c5f7b", "oa_license": "CCBYNC", "oa_url": "https://www.dovepress.com/getfile.php?fileID=74107", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "61a5a1cabddf1e557d252e835c41c3554e509dc9", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
268161230
pes2o/s2orc
v3-fos-license
Two-Phase Crude Oil–Water Flow Through Different Pipes: An Experimental Investigation Coupled with Computational Fluid Dynamics Approach The present study deals with two-phase non-Newtonian pseudoplastic crude oil and water flow inside horizontal pipes simulated by ANSYS. The study helps predict velocities and velocity profiles, as well as the pressure drop during two-phase crude-oil–water flow, without complex calculations. Computational fluid dynamics (CFD) analysis is very important in reducing the experimental cost and the effort of data acquisition. Three independent horizontal stainless steel pipes (SS-304) with inner diameters of 1 in., 1.5 in., and 2 in. were used to circulate crude oil with 5, 10, and 15% v/v water for simulation purposes. The entire length of the pipes, along with their surfaces, was insulated to reduce heat loss. A grid size of 221,365 was selected as the optimal grid. Two-phase flow phenomena, pressure drop calculations, shear stress on the walls, the rate of shear strain, and phase analysis were studied. Moreover, the velocity changes from the wall to the center, causing a velocity gradient and shear strain rate, but at the center no velocity variation (velocity gradient) was observed between the layers of the fluid. The precision of the simulation was investigated using three error parameters: mean square error, Nash-Sutcliffe efficiency, and the ratio of RMSE to the standard deviation of observations. From the simulation, it was found that the CFD analysis holds good agreement with the experimental results. The uncertainty analysis demonstrated that our CFD model is helpful in predicting the rheological parameters very accurately. The study aids in identifying and predicting fluid flow phenomena inside horizontal straight pipes in a very effective way. INTRODUCTION Transporting crude oil from remote sources to refineries poses significant challenges, primarily when dealing with heavy crude oil. This transportation relies on pipelines with powerful pumps, but it encounters issues such as pressure loss and friction-induced deceleration or acceleration. 1 To mitigate these challenges, a common approach involves mixing the heavy crude oil with water, which reduces viscosity and aids in transportation. However, this introduces complexity through multiphase oil−water flow, where various parameters like the velocity of the mixture, transport pipe diameter, temperature, volume fraction, and pressure significantly impact the flow behavior. Furthermore, the substantial viscosity difference between crude oil and water complicates this process, necessitating careful engineering considerations for efficient and safe transportation of crude oil over long distances from its remote sources to processing facilities. 2 Computational modeling and analysis of two-phase crude oil−water systems may enhance our present knowledge related to the transport mechanisms of oil−water mixtures in pipelines. The mainstream experimental and research work on two-phase oil−water flow focuses on flow pattern identification, such as intermittent, stratified, dispersed, core-annular, and the amalgamation of all the above. 3,4 However, computational modeling and analysis, like 3D computational fluid dynamics (CFD), have grown over the years as very important simulation tools due to their flexibility and cost-effectiveness for in-house studies.
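The three agreement metrics named in the abstract above (mean square error, Nash-Sutcliffe efficiency, and the ratio of RMSE to the standard deviation of observations) are straightforward to compute; a minimal Python sketch with placeholder arrays is shown below. The pressure-drop values are invented for illustration only and do not come from this study's experiments or simulations.

```python
# Minimal sketch of the CFD-vs-experiment agreement metrics mentioned above.
# The observed/simulated pressure-drop arrays are placeholders, not study data.
import numpy as np

observed = np.array([1.20, 1.45, 1.80, 2.10, 2.60])     # e.g. measured pressure drop (kPa)
simulated = np.array([1.15, 1.50, 1.72, 2.20, 2.55])    # e.g. CFD-predicted pressure drop (kPa)

mse = np.mean((observed - simulated) ** 2)               # mean square error
rmse = np.sqrt(mse)                                      # root mean square error
nse = 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)
rsr = rmse / observed.std(ddof=0)                        # RMSE / standard deviation of observations

print(f"MSE = {mse:.4f}, NSE = {nse:.3f}, RSR = {rsr:.3f}")
```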
5The application of the CFD-based solver ANSYS for single and two-phase crude oil flow through pipes has been utilized by Kumar et al., 6 Parvini et al., 7 and Walvekar et al. 8 Pouraria et al. 9 applied CFD-based models for determining the flow patterns of oil−water.They utilize a general k-epsilon turbulence model in conjunction with the Eulerian−Eulerian method.The numerical results acquired were juxtaposed with experimental data found in the literature, focusing on either the in situ Sauter mean diameter or water volume fraction.The comparison between the numerical findings and the previously published experimental data revealed a satisfactory level of agreement.Similarly, Bannwart 10 developed a suitable model for investigating aspects of oil−water core annular flow.In that study, mass and momentum-balanced phenomenological models were exploited to investigate volume fractions and changes in pressure.The predicted outcomes were weighted against vertical and horizontal oil−water core annular flows.The study demonstrated that predicted values followed the actual ones with high accuracy.Bandyopadhyay and Das 11 investigated non-Newtonian fluid flow and proposed empirical correlations for different piping components.They conducted experiments to estimate the frictional pressure drop across various piping components for non-Newtonian pseudoplastic flow under laminar conditions.They proposed an empirical correlation model for pronouncing frictional pressure drop in physical and dynamic variable forms.Castro-Gualdroń et al. 12 incorporated CFD techniques for simulating the homogenization of crude oil on a small scale.The effect of the mesh size and time step size was studied since, in this type of simulation, computational effort became a major parameter that had to be reduced to a minimum.Experimental data taken from two different points in the tank at regular time intervals was available to compare the results of the simulations, concluding in good agreement.Desamala et al. 13 conducted a detailed CFD study of twophase oil−water flow characteristics inside horizontal pipes.The simulation successfully predicted slug, stratified wavy, stratified mixed, and annular flow, with the exception of the dispersion of oil in water and the dispersion of water in oil.Simulation results were validated with horizontal literature data, and good conformity was observed.Abed and Auda 14 conducted an experimental and simulation study of oil−water flow inside the horizontal pipe to investigate heat transfer effects on the same.Yuan et al. 15 suggested a novel real-gas model for characterizing and predicting gas leakage in pipelines where gas is flowed at high pressures.They derived thermodynamic formulas based on the basic governing equations and concluded that the newly derived model worked better than the previous models in the same fields of study.Wang et al. 3 simulated flow behaviors of heavy oils in pipelines and predicted mesoscopic flow with the drag force model and KTGF models.From their experimental and modeling results, the filtered model was adopted for the flow as it provided better accuracy.Zhang et al. developed a VOF-DEM model for studying the particle dynamic behaviors in fractured-vuggy reservoirs.They predicted that the injection velocity, particle volume fraction, and diameter may alter the accumulation of particle plugging agents and affect channel flow control. 16aleh et al. 
17 investigated the flow of heavy crude oil through pipelines using CFD.They used Ansys Fluent software for the study of the rheological flow behavior of Iraqi oil and found that the developed model was on par with the experimental results.Meriem-Benziane et al. 18 used CFD to study oil−water flow in a pipeline and their boundary layer separation investigation.Their analytical analysis was in good agreement with the numerical investigations.Songyi et al. 19 suggested a novel model for predicting the dependence of the shape of the interface between oil and water on their stratified flow.To solve the momentum equations, they used contact angle theory and the minimum energy method.Ballesteros et al. 20 used CFD to study the liquid holdup characteristics in a two-phase low liquid-loaded flow.Their investigation implied that smooth flow was possible using pipe inclination in the downward direction, while a lesser liquid holdup was seen for the same.Alade 21 introduced two new models, namely Cross-Logistic and Logistic, along with CFD, to study the complex flow behavior of crude oil and bitumen-solvent mixtures for using them to flow through porous media.Shadloo et al. 22 used an artificial neural network to estimate the pressure drop for the 2-phase flow of crude oil through long horizontal pipes using the experimental data for the same.Their findings suggested that their model predicted the pressure drop with much more accuracy than other empirical models employed for the same.Zheng et al. 23 used response surface methodology and sensitivity analysis to predict and optimize the viscosity of nano-oil containing ZnO2 nanoparticles. The mentioned studies cover a broad spectrum of investigations on fluid flow and transport phenomena, offering valuable insights into various aspects of crude oil flow.Among them, the CFD model can offer detailed insights into crude oil behavior within pipelines, aiding researchers in understanding the impact of factors like viscosity, temperature, pressure, and flow rate on flow patterns, turbulence, and mixing.However, studies on the velocity profile, pressure drop, phase analysis, and flow pattern of crude oils through pipelines have not reached widespread research.In this regard, the current research investigates the phenomenon of non-Newtonian pseudoplastic crude oil−water mixture flow through horizontal pipes.The study includes the presence of water as a secondary phase in crude oil, which gives rise to complex phenomena like emulsion formation, which may affect the crude oil quality.The ANSYS Fluent CFD solver has been used to solve the flow structure, static pressure, pressure drop, friction factor, shear strain, and wall shear stress.These simulated results can be used for the optimum selection of design parameters for the horizontal pipelines used for transporting non-Newtonian crude oil−water flow.Information regarding the pumping cost of the crude oil−water flow through the pipeline can be gathered from the pressure drop analysis.The finding can contribute to the efficient identification and prediction of heavy crude flow phenomena within horizontal straight pipes. 
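To make the pumping-cost point above concrete, the short sketch below converts a measured pressure drop and volumetric flow rate into an approximate pump power requirement. It is only an illustration in Python: the pump efficiency and the example numbers are assumptions of the same order as the drops reported later, not values taken from the study.

```python
# Illustrative sketch (not from the paper): turning a measured pressure drop into
# a rough pumping-power estimate, as suggested by the pressure-drop analysis above.

def pumping_power(delta_p_pa: float, flow_rate_lpm: float, pump_efficiency: float = 0.65) -> float:
    """Shaft power (W) needed to push the mixture against delta_p, for an assumed pump efficiency."""
    q_m3s = flow_rate_lpm / 1000.0 / 60.0   # LPM -> m^3/s
    hydraulic_power = delta_p_pa * q_m3s     # W = Pa * (m^3/s)
    return hydraulic_power / pump_efficiency

if __name__ == "__main__":
    # e.g. a ~125 kPa drop at 40 LPM (placeholder numbers, order of magnitude of the smallest pipe)
    print(f"Required pump power: {pumping_power(125e3, 40):.1f} W")
```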
EXPERIMENTAL DESCRIPTION

The experimental setup comprises an oil bath of 30 L volume containing a temperature regulator and mechanical impeller, a stainless steel double pipe heater, a gear pump (DFD), valves for the regulation of crude oil flow, and thermocouples for temperature reading. The test segment comprises three horizontal independent stainless steel pipes (SS-304) of 0.0508 m or 2 in. (relative roughness (ε/d) = 0.000295), 0.0381 m or 1.5 in. (relative roughness (ε/d) = 0.000393), and 0.0254 m or 1 in. diameter (relative roughness (ε/d) = 0.00059), each 2.5 m long (Figure 1). The relative roughness was obtained by dividing the roughness ε of the pipe by the diameter. The value of ε was provided by the manufacturer, while the internal diameter was measured using calipers. The entire pipeline and exposed surfaces are fully insulated to minimize heat loss. The crude oil, as already mentioned, is collected from the western oilfield of India. Circulation of the crude oil through the pipes is driven by a gear pump (Moyno 500 Pumps, 600 series). The flow rate of crude oil through the different pipelines is measured by a digital flow meter (Everest EMAG series). Pressure drop was measured by a pressure transducer (Rosemount Co., series 3051s) connected between the inlet and outlet over a distance of 2.5 m. The fluid was kept in circulation until the loop reached a stabilized state for each flow rate and temperature. Temperature was maintained by a shell and tube heat exchanger. A thermocouple (J-type, ThermoSensors) was placed in the middle of the flow line to measure temperature. All experiments were run at four different temperatures between 25 and 40 °C. The relay section comprised flow meters, pressure transducers, and temperature sensors, all relaying to a control panel through which the set values were adjusted. Water was added to the crude oil in the oil bath in different proportions and mixed thoroughly using a stirrer. A de-emulsifier was added to the mixture to prevent emulsion formation. After the completion of each experiment, heated water from the water bath was circulated inside the jackets to remove any waxes or crude oil stuck to the inner walls of the pipelines.

COMPUTATIONAL FLUID DYNAMICS

CFD is an approach that uses algorithms to obtain highly accurate outcomes by considering various phenomena, such as fluid dynamics and mass transfer. 24,25 In this study, a fully developed length is assumed for model development to reduce costs. Optimal computer memory has been utilized for grid generation, which reduces convergence time during the simulation process. Initial investigations indicated laminar flow, since the Reynolds number was found to be less than 2300. Non-Newtonian pseudoplastic crude oil−water flow through a horizontal pipe is a highly multifaceted problem and, in laminar flow, is governed by the continuity and momentum equations. The continuity and momentum equations can be written as eqs 1 and 2, respectively:

∂(α_i ρ_i)/∂t + ∇·(α_i ρ_i u_i) = Σ_j ṁ_ij   (1)

∂(α_i ρ_i u_i)/∂t + ∇·(α_i ρ_i u_i u_i) = −α_i ∇p + ∇·τ̿_i + α_i ρ_i g + Σ_j R_ij + F_i + F_lift + F_vm   (2)

In eq 1, α_i = fraction for the ith phase; ρ_i = density of the ith phase; ∇ = nabla operator, defined as ∇ = ∂/∂x i + ∂/∂y j + ∂/∂z k; u_i = velocity of the ith phase, m s−1; ṁ_ij = mass flow rate between the phases, kg s−1. In eq 2, p = pressure, Pa; g = acceleration due to gravity, m s−2; τ̿_i = stress−strain tensor of the ith phase; R_ij = interaction force between the phases, N; F_i = external body force, N; F_lift = lift force, N; F_vm = virtual mass force, N.
The interphase exchange force can be defined by eq 3:

R_ij = K_ij (u_j − u_i)   (3)

where K_ij = fluid−fluid exchange coefficient. The flow being laminar, the power law model was selected as the viscous model, and the mixture model was selected as the multiphase model for the CFD simulation. The rheology of crude oil−water flow is found to depend on the apparent viscosity of the fluid flowing through the horizontal pipe. From the study, it has also been found that the apparent viscosity depends on velocity and shear rate. The apparent viscosity of the crude oil−water mixture reduces with the increase in flow velocity and shear strain rate. The fluid inside the horizontal pipe follows the non-Newtonian pseudoplastic power law model. The apparent viscosity of a non-Newtonian power-law fluid is given as eq 4:

μ_app = k (du/dy)^(n−1)   (4)

where μ_app is the apparent viscosity, expressed in Pa s, k is the consistency index, and n is the flow behavior index. The boundary conditions were that the inlet and outlet of the pipes were considered to be a velocity inlet and a pressure outlet, respectively. Figure 2 shows the general steps involved in the CFD simulation.

Assumptions. To simulate the flow inside the pipe, the following assumptions and concepts were taken into consideration for the current study:
• The temperature of the crude oil−water mixture through the horizontal pipes was maintained at 25 to 40 °C.
• Crude oil−water is assumed to be an incompressible, isothermal, non-Newtonian pseudoplastic fluid (the composition of crude oil makes it non-Newtonian, as it includes suspended particles, saturates, aromatics, resins, asphaltenes, etc.; moreover, the shear stress−shear rate plot confirmed the crude oil as a pseudoplastic fluid).
• The model that has been developed is limited to a flow model (not a density model or a segregation model, etc.).
• The model is assumed to follow a two-phase laminar, non-Newtonian pseudoplastic power law model. We assume no-slip conditions at the walls (to capture the minimum velocity adjacent to the wall as a result of the high adhesive force between the pipeline wall and the fluid molecules, as opposed to the center of the pipe).

3.2. Computational Approach, Convergence Criteria, and Grid Independency Test. Gambit 2.4.6 was utilized to generate a 3D tetrahedral grid geometry due to its excellent modeling flexibility and its ability to easily perform mesh skewness tests with minimal deviation. The boundary conditions and the continuum are specified within the geometry. The skewness of the tetrahedral grid was thoroughly examined and found to be below 0.9. To obtain the simulated results, the geometries are exported to the CFD pressure-based solver (Fluent 6.3). To streamline the CFD procedures, a first-order upwind scheme is employed for the solution, and SIMPLE pressure−velocity coupling is selected with relaxation. Convergence criteria were set to 10−5 for all the equations except the transport equation, for which 10−3 was used. Solution initialization included activating residual plotting during the calculation and enabling the default convergence criterion of 10−5 for all residuals except the transport equation, which was selected to be 10−3.
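As a rough numerical illustration of the power-law model in eq 4 and of the laminar-flow assumption stated above, the Python sketch below evaluates the apparent viscosity at a few shear rates and a Metzner−Reed generalized Reynolds number, which is one common definition for power-law pipe flow; the paper does not state which Reynolds definition it used. The consistency index k, flow behavior index n, and density used here are placeholders, not the fitted properties of this crude oil.

```python
# Power-law apparent viscosity, eq 4: mu_app = k * (du/dy)^(n-1).
def apparent_viscosity(shear_rate: float, k: float, n: float) -> float:
    return k * shear_rate ** (n - 1.0)

# Metzner-Reed generalized Reynolds number for power-law pipe flow; one common
# way to check the laminar (Re < 2300) assumption stated in the text above.
def metzner_reed_reynolds(rho: float, velocity: float, diameter: float, k: float, n: float) -> float:
    return (rho * velocity ** (2.0 - n) * diameter ** n) / (
        8.0 ** (n - 1.0) * k * ((3.0 * n + 1.0) / (4.0 * n)) ** n
    )

if __name__ == "__main__":
    # Placeholder rheology (k, n) and density; not the study's fitted values.
    k, n, rho = 0.35, 0.8, 900.0
    for gamma in (1.0, 10.0, 100.0):
        print(f"shear rate {gamma:6.1f} 1/s -> mu_app = {apparent_viscosity(gamma, k, n):.4f} Pa*s")
    re_mr = metzner_reed_reynolds(rho, velocity=0.62, diameter=0.0508, k=k, n=n)
    print(f"generalized Reynolds number at 0.62 m/s in the 2 in. pipe: {re_mr:.0f}")
```

With n < 1 the apparent viscosity falls as the shear rate rises, which is the shear-thinning trend the paper describes for this mixture.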
Effect of Diameter on Pressure Drop. Important parameters that affect the pressure drop are the pipe diameter and the relative roughness of the pipe (ε/d). Higher relative roughness leads to an increase in the skin friction factor. Local turbulence near the surface of the pipeline changes with the pipe relative roughness, and this affects the thickness of the viscous sublayer. When the thickness of the viscous sublayer becomes equal to or greater than the pipe roughness, the pipe is considered hydrodynamically smooth. The experimental results on the effect of pipe diameter on the pressure drop for the flow of crude oil through the different pipelines are shown in Figure 4A−C. The results indicate that the pressure drop decreases with increasing pipe diameter when the other parameters are fixed. The pressure drop was reduced from about 125 to 42.5 kPa at a flow rate of 40 LPM under the same temperature conditions. This is attributed to the fact that large eddy currents exist in pipes with larger diameters, absorbing more energy from the main flow. In smaller pipelines, several small eddy currents are formed, whereas for larger-diameter pipes, the number of large eddies is greater.

Effect of Velocity. The velocity profiles of crude oil flow through the pipelines were calculated using the results obtained from the pressure drop analysis during the experimental investigations. Figure 5A,B illustrates the contour of static pressure (Pa), which increases with the increase in velocity of the crude oil−water through a horizontal pipe of constant diameter (2 in.) operating at a constant temperature (25 °C). Pressure drop analysis showed that the change in pressure across the pipe increased with the inlet velocity of the fluid. The static pressure inside the pipe gradually decreases along the length of the pipe. Figure 6A,B illustrates the velocity of the crude oil−water flow inside the horizontal straight pipe at 25 °C at every 0.5 m distance from the inlet. In this study, all parameters such as pipe diameter (2 in.), temperature, and concentration are kept constant, and the only parameter varied is velocity. The study has been conducted for two dissimilar velocities (low = 0.11 m/s and high = 0.62 m/s). The results show that the velocity of the fluid flow is at its maximum at the center and gradually decreases toward the wall. The adhesive force near the wall is larger than the cohesive force, 26 which makes the velocity near the wall a minimum.

Figures 7 and 8 illustrate the contours of wall shear stress and shear strain rate for the crude oil−water mixture through the horizontal pipe at 25 °C at every 0.42 m interval from the inlet. They indicate that shear stress and strain rate are maximum near the wall and gradually decrease toward the center of the pipe. The velocity of the fluid at the wall is lower because of friction between the pipe wall and the fluid, and hence drag is generated near the wall. The velocity of the adjacent fluid layers changes from the wall up to the center, causing a velocity distribution (shear strain rate = velocity gradient = du/dy). Therefore, velocity changes from the wall to the center, producing a velocity gradient/shear strain rate. However, at the center, no velocity variation (velocity gradient) between the layers of the fluid is seen; this means the velocity at the center is the maximum. According to Newton's law of viscosity generalized to a power-law fluid, the wall shear stress is τ = μ_app (du/dy) = k (du/dy)^n, where n = flow behavior index and μ_app is the apparent viscosity. Hence, the shear stress of the fluid layer is high near the wall and low at the center.
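For readers who want to reproduce the qualitative picture described above (maximum velocity at the center, maximum shear stress at the wall), the following Python sketch uses the classical analytic solution for fully developed laminar power-law pipe flow. It is an idealized check rather than the CFD model itself, and the k and n values are again placeholders.

```python
import numpy as np

def velocity_profile(r: np.ndarray, radius: float, v_avg: float, n: float) -> np.ndarray:
    """u(r) = v_avg * (3n+1)/(n+1) * [1 - (r/R)^((n+1)/n)] for laminar power-law pipe flow."""
    return v_avg * (3 * n + 1) / (n + 1) * (1.0 - (r / radius) ** ((n + 1) / n))

def wall_shear_stress(v_avg: float, diameter: float, k: float, n: float) -> float:
    """tau_w = k * ((3n+1)/(4n) * 8*V/D)^n for the same flow."""
    return k * ((3 * n + 1) / (4 * n) * 8.0 * v_avg / diameter) ** n

if __name__ == "__main__":
    R = 0.0508 / 2.0                               # 2 in. pipe radius, m
    r = np.linspace(0.0, R, 6)                     # sample points from centre to wall
    u = velocity_profile(r, R, v_avg=0.11, n=0.8)  # placeholder n
    print("r (mm):", np.round(r * 1e3, 1))
    print("u (m/s):", np.round(u, 4))              # decays from the centre to zero at the wall
    print(f"wall shear stress ~ {wall_shear_stress(0.11, 0.0508, k=0.35, n=0.8):.2f} Pa")
```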
Effect of Temperature. The effect of temperature on the non-Newtonian pseudoplastic crude oil−water flow inside the 2 in. horizontal straight pipe was investigated. The study also revealed that the static pressure and pressure drop inside the horizontal straight pipe decrease with an increase in pipe diameter. The static pressure in the 1 in. horizontal straight pipe is found to be the maximum, and in the 2 in. horizontal straight pipe it is the minimum.

Figure 11A−C shows the contours of the velocity (m/s) inside the horizontal straight pipes of various diameters. From the contours, it has been found that pipe size affects the flow pattern inside the pipe. The velocity of the crude oil−water flow decreases with an increase in pipe diameter, because at a constant flow rate, velocity is inversely related to the cross-sectional area.

4.5. Phase Analysis. Figure 12 shows the phase analysis of crude oil−water flow through a 2 in. horizontal straight pipe at 25 °C at four equal intervals from the inlet. In this study, velocity, diameter, and temperature are kept constant, while the concentration of water is varied. The crude oil mostly accumulates in the midsection of the pipe due to the density difference, as seen in the figure. Since crude oil is less dense than water, the crude oil remains in the midsection of the horizontal straight pipe. The bulk velocity of the fluid mixture strips the denser component (water), relative to the lighter component (crude oil), from the mixture toward the wall as an exchange of momentum. This results in the water being more concentrated toward the wall and the crude oil at the center.

Comparison of Experimental and CFD Results. The results obtained from CFD were compared with the experimental results for each pipeline and are depicted in Table 1 and Figure 13A−C. As seen in the figures, pressure drop increased with velocity in both the experimental and CFD work for each pipeline. Similarly, Table 1 shows only a small difference between the experimental and simulated results (Figure 14). Maximum deviations were observed in the 2 in. diameter pipeline, while minimum deviations were observed in the 1 in. diameter pipeline. The deviation between the experimental and simulated data is very small (i.e., <5%), which means that the simulated results and the flow equations used for the simulation studies can properly predict the flow behavior of crude oil with minimal error. The results for the velocity profile and flow patterns were also compared with various data reported in the literature, as shown in Table 2. It is noteworthy that our CFD predictions exhibit a close match with the trends reported in the literature, affirming the predictive capability of our simulation methodology. This consistency lends credibility to our numerical approach and supports the validity of our findings.

4.7. Uncertainty Analysis. The precision of the developed CFD model was evaluated by implementing three error parameters: Nash−Sutcliffe efficiency (NSE), the RMSE−standard deviation of observation ratio (RSR), and mean square error (MSE). These error parameters can be estimated using eqs 5−7: 33,34

NSE = 1 − [Σ_i (y_i,act − y_i,prd)^2] / [Σ_i (y_i,act − y_i,mean)^2]   (5)

RSR = RMSE/STDEV_obs = [Σ_i (y_i,act − y_i,prd)^2]^(1/2) / [Σ_i (y_i,act − y_i,mean)^2]^(1/2)   (6)

MSE = (1/N) Σ_i (y_i,act − y_i,prd)^2   (7)

In eqs 5−7, the experimental values and CFD-predicted values are denoted by y_i,act and y_i,prd, respectively. The mean of the experimental values is represented by y_i,mean. The recommended best values of NSE, RSR, and MSE are 1, 0, and 0, respectively. 35
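A minimal Python sketch of eqs 5−7 is given below; the arrays are illustrative stand-ins, not the study's measured and simulated values.

```python
import numpy as np

def error_metrics(y_act: np.ndarray, y_prd: np.ndarray) -> dict:
    """Nash-Sutcliffe efficiency, RMSE-to-observation-standard-deviation ratio, and mean square error."""
    residual = y_act - y_prd
    spread = y_act - y_act.mean()
    nse = 1.0 - np.sum(residual ** 2) / np.sum(spread ** 2)
    rsr = np.sqrt(np.sum(residual ** 2)) / np.sqrt(np.sum(spread ** 2))
    mse = np.mean(residual ** 2)
    return {"NSE": nse, "RSR": rsr, "MSE": mse}

if __name__ == "__main__":
    experimental = np.array([42.5, 61.0, 88.0, 125.0])   # e.g. pressure drops, kPa (illustrative)
    simulated = np.array([41.2, 63.0, 85.5, 121.0])
    print(error_metrics(experimental, simulated))        # best possible values: NSE=1, RSR=0, MSE=0
```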
The calculated values are considered acceptable if they are found close to these best values. Table 3 shows the uncertainty analysis. The estimated values of all three error parameters are close to the best values, demonstrating the model's acceptability and accuracy. Moreover, the models for the three pipe diameters (1 in., 1.5 in., and 2 in.) show a similar trend in their total error values, and the values demonstrate very little deviation.

CONCLUSIONS

This extensive investigation delved into the complex dynamics of non-Newtonian pseudoplastic mixtures of crude oil and water as they traverse horizontal pipes. The study provided valuable insights into critical factors such as pipe diameter, velocity, and temperature, elucidating their profound impact on flow patterns and pressure drop. Utilizing CFD, specifically the ANSYS Fluent solver, proved essential for simulating and analyzing these intricate fluid behaviors. The research revealed key findings: larger pipes exhibited reduced pressure drop due to the formation of more substantial eddies; velocity profiles displayed a peak at the pipe center diminishing toward the wall, influenced by adhesive forces; and temperature fluctuations affected static pressure, with higher temperatures resulting in lower static pressure. Phase analysis uncovered water's affinity for the wall and crude oil's concentration in the center. Comparison with experimental data confirmed the CFD model's accuracy, with deviations below 5%, and uncertainty analysis affirmed the model's reliability. Additionally, the predicted flow pattern from CFD aligned well with the existing literature. Future studies in this domain could focus on exploring the impact of varying parameters such as pipe roughness, fluid rheology, and composition on the observed phenomena. Investigating the behavior of multiphase flow under different operational conditions, such as varying pressures and temperatures, could provide a more comprehensive understanding. Furthermore, studies assessing the scalability of the findings to real-world pipeline systems and the development of improved modeling approaches for even more accurate predictions would contribute to the continued advancement of knowledge in crude oil−water transport dynamics.

■ ASSOCIATED CONTENT

The authors express their gratitude to the School of Biotechnology and Chemical Technology, KIIT University, and the Chancellor, Vice-Chancellor, and Management of Mahindra University, Hyderabad, India, for the required infrastructure and visionary leadership.

Figure 1. Schematic of the experimental setup for pipeline studies.
Figure 3. Tetrahedral mesh of the horizontal straight pipe.
Figure 5. Contour of static pressure (Pa) for crude oil−water flow through a 2 in. horizontal pipe at 25 °C and velocity: (A) 0.11 m/s and (B) 0.62 m/s.
Figure 6. Contour of velocity (m/s) for crude oil−water flow at every interval of 0.5 m from the inlet through a 2 in. pipe at 25 °C and velocity: (A) 0.11 m/s and (B) 0.62 m/s.
Figure 7. Contour of wall shear stress (Pa) for crude oil−water flow through a 2 in. horizontal pipe at every 0.42 m interval from the inlet at 25 °C and velocity: (A) 0.11 m/s and (B) 0.62 m/s.
Figure 8. Contour of shear strain rate (s−1) for crude oil−water flow inside the 2 in. horizontal straight pipe at every 0.42 m interval from the inlet at 25 °C and velocity: (A) 0.11 m/s and (B) 0.62 m/s.
Figure 10. Contour of static pressure (Pa) of non-Newtonian pseudoplastic crude oil−water flow at 25 °C and a velocity of 0.11 m/s for horizontal straight pipe diameters of (A) 1 in., (B) 1.5 in., and (C) 2 in.
Figure 11. Contour of velocity (m/s) of non-Newtonian pseudoplastic crude oil−water flow at every 0.5 m interval from the inlet at 25 °C and a velocity of 0.11 m/s for horizontal straight pipe diameters of (A) 1 in., (B) 1.5 in., and (C) 2 in.
Figure 12. Contour of the crude oil−water fluid flow through a 2 in. straight horizontal pipe at every 0.625 m interval from the inlet at 25 °C and a velocity of 0.11 m/s.
Figure 13. Comparison of pressure drop and velocity between experimental and CFD data: (A) 1 in., (B) 1.5 in., and (C) 2 in.
Figure 14. Comparison of experimental and CFD pressure drop.

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (no. RS-2023-00219983). Moonis Ali Khan acknowledges the financial support through Researchers Supporting Project number RSP2024R345, King Saud University, Riyadh, Saudi Arabia. One of the authors (R. Kumar) acknowledges the financial support through the National Research Foundation (NRF) of the Republic of Korea under the Creative and Challenging Research Program (2021R1I1A1A01060846).

■ NOMENCLATURE
α_i, fraction for the ith phase; ρ_i, density of the ith phase; ∇, nabla operator; u_i, velocity of the ith phase, m s−1; ṁ_ij, mass flow rate, kg s−1; g, acceleration due to gravity, m s−2; τ̿_i, stress−strain tensor of the ith phase; R_ij, interaction force between the phases, N; F_i, external body force, N; F_lift, lift force, N; F_vm, virtual mass force, N; K_ij, fluid−fluid exchange coefficient; μ_app, apparent viscosity, Pa s; du/dy, shear strain rate or velocity gradient; n, flow behavior index; y_i,act, experimental values; y_i,prd, CFD predicted values

Table 1. Comparison between Experimental and Simulated Data
Table 2. Prior Experimental Investigations Concerning the Flow of Oils in Horizontal Pipes with Oil−Water Mixtures
2024-03-03T16:11:41.701Z
2024-03-01T00:00:00.000
{ "year": 2024, "sha1": "139bd9dfa45b4e0a7c78e51633c1573cbfbc2cbf", "oa_license": "CCBYNCND", "oa_url": "https://pubs.acs.org/doi/pdf/10.1021/acsomega.3c05290", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5445ef856f2ccab21bd9f77601b2582aa429c30e", "s2fieldsofstudy": [ "Engineering", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
234212937
pes2o/s2orc
v3-fos-license
Identification of Pathways and Genes Associated With Meniscus Degeneration Using Bioinformatics Analyses

There are few studies on the genetic changes of meniscus degeneration. We used anterior cruciate ligament resection of Wuzhishan pigs to prepare a meniscus degeneration model, and applied gene chip technology to detect differentially expressed genes in degenerative meniscus tissue. Then we applied GO analysis, pathway analysis, core gene network analysis and relevant miRNA analysis to discover relevant regulatory networks of meniscus degeneration. As a result, we detected 893 differentially expressed genes, mainly involving hormone, apoptosis, inflammation and other mechanisms, and obtained MUC13, inflammatory mediator regulation of TRP channels, MDFI, miR-335-5p and other candidates that may play a key role. In summary, we have established a reliable animal model of meniscus degeneration and found that meniscus degeneration involves several possible molecular mechanisms, which will provide molecular targets for further research on the disease in the future.

Introduction
The meniscus has the functions of buffering load, absorbing impact, decompressing and improving joint stability. However, the blood circulation of the meniscus is poor, its healing ability is limited, and it degenerates easily. The degenerative meniscus not only cannot fully protect the knee joint, but also tends to tear itself, leading to knee cartilage damage, pain, and restricted knee movement 1,2 . In addition, meniscus degeneration can lead to osteoarthritis 3 . Therefore, how to protect the meniscus and delay its degeneration has important scientific significance for preventing meniscus injury in young and middle-aged patients and alleviating the symptoms of elderly patients with knee arthritis. Meniscus and articular cartilage are very similar in tissue composition, so many researchers speculate that meniscus degeneration may be similar to cartilage degeneration, with an imbalance between anabolism and catabolism leading to degeneration 4,5 . But the specific mechanism is not clear. Trauma, chronic inflammation, apoptosis and other processes may be involved singly or in combination, and there are countless genes that may play a role, which brings great difficulty to the research of the meniscus degeneration mechanism 6,7 . With the advantages of a large amount of information, reliable operation and repeatability, gene chip technology has been successfully used in gene expression detection, DNA sequencing, the search for new genes, the diagnosis of diseases and other research fields 8 . This study used gene chip technology to screen differentially expressed genes (DEGs) between normal meniscus and degenerated meniscus, and tried to analyze the mechanism of meniscal degeneration to provide a theoretical basis for the diagnosis, prevention and treatment of meniscal degeneration.
Gross morphological observation
In the normal meniscus group, the meniscus was smooth and complete, with a white and shiny surface, without any signs of degeneration. In the degenerative meniscus group, the surface of the meniscus was rough, the color was dim, the elasticity was poor, and there were some small defects and erosion (Fig. 1).

Figure 1. (a) In the normal meniscus group, the meniscus was crescent shaped, with a complete and smooth surface, no tear, white color and good elasticity; the medial part of the meniscus was thin and the lateral part was thick. (b) In the degenerative meniscus group, the color of the meniscus was light yellow, the elasticity was worse than that of the normal meniscus, the surrounding synovial membrane was congested and edematous, the inner part was thinner with uneven wear, and the free edge was incomplete and cracked.

Histological examination
In the normal meniscus group, HE staining of the meniscus showed normal meniscus tissue. In the degenerative meniscus group, HE staining of the meniscus showed disordered, unevenly stained and sparsely arranged collagen fibers; disordered, reduced and swollen chondrocytes; reduction or disappearance of cartilage lacunae; and increased local fibers and hyaline changes, which were consistent with meniscal degeneration (Fig. 2).

Figure 2. (a, b) HE staining of normal meniscus tissue. The chondrocyte nucleus was large and round, the cell distribution was regular, the collagen fibers were abundant, and the fiber bundles were thick and neat. (c−f) HE staining of degenerative meniscus. The arrangement of collagen fibers was disordered, the staining was uneven, and the arrangement was sparse; the chondrocytes were arranged disorderly and reduced in number, the visible cells were swollen, the cartilage lacunae were reduced or had disappeared, and the local area was fibrotic and hyalinized. a: x40, b: x100, c: x40, d: x100, e: x200, f: x400.

Quality control
For RNA quantification and quality assurance performed by NanoDrop ND-1000, the A260/A280 ratio for all samples was close to 2.0, and the O.D. A260/A230 ratio was greater than 1.8. The results of the Agilent 2100 Bioanalyzer showed that all RINs were ≥ 6.3, indicating that the integrity of total RNA was good and there was no obvious degradation, so follow-up experiments could be carried out (Table 1).

There were 893 DEGs between the two groups, including 537 up-regulated genes and 356 down-regulated genes. The result is shown in the volcano plot and the dendrogram (Fig. 3). The top 10 most significant genes were cyp2c33, gcnt7, ncdn, exd3, MUC13, ppp1r3d, nphp3, upb1, CD81 and prph (Table 2).

GO analysis
A total of 55 biological processes enriched by the DEGs were obtained. The top 10 biological processes were cellular response to hormone stimulus, response to hormone, sex determination, muscle fiber development, mesenchyme morphogenesis, cellular response to endogenous stimulus, C21-steroid hormone metabolic process, neuropeptide signaling pathway, negative regulation of reactive oxygen species metabolic process, and regulation of nitric oxide biosynthetic process, respectively (Table 3).

Core gene network and relevant miRNA analysis
Core network analysis yielded 40 core genes, of which the gene MDFI had the largest number of connections, indicating that it was in the most central position (Fig. 4, Table 5). Relevant miRNA analysis yielded 101 related miRNAs, among which miR-335-5p was the most connected related miRNA (Fig. 5, Table 6).
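Enrichment results of the kind summarized above (GO terms and pathways ranked by p-value) are typically scored with a hypergeometric test. The study relied on the GCBI platform and does not state its exact statistic, so the short Python sketch below, with toy counts, is only meant to illustrate how such an enrichment p-value is usually obtained.

```python
from scipy.stats import hypergeom

def enrichment_p(total_genes: int, term_genes: int, deg_count: int, deg_in_term: int) -> float:
    """P(X >= deg_in_term) when deg_count genes are drawn from total_genes,
    of which term_genes are annotated to the GO term or pathway of interest."""
    return hypergeom.sf(deg_in_term - 1, total_genes, term_genes, deg_count)

if __name__ == "__main__":
    # toy numbers: 20,000 background genes, 150 genes annotated to a term,
    # 893 DEGs overall (as reported above), 18 of them falling in the term
    print(f"enrichment p-value = {enrichment_p(20000, 150, 893, 18):.3e}")
```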
Real-time RT-PCR
In order to verify the reliability of the microarray results, the expression levels of the seven mRNAs (GCNT7, NCDN, EXD3, MUC13, PPP1R3D, NPHP3 and CD81) detected by RT-PCR were shown to be equivalent to the microarray data. The difference in mRNA expression between the degenerative meniscus group and the normal meniscus group was statistically significant (P < 0.05) (Table 7).

Discussion
MUC13 promotes NF-κB activity through NF-κB-dependent pathways and increases IL-8 production. OA synovial fluid contains high levels of IL-6 and IL-8. The increased expression of IL-8 has a chemotactic effect on inflammatory cells and stimulates the secretion of IL-6, thereby promoting joint inflammation 11 . Therefore, MUC13 may play a role in degenerative meniscus tissue through the anti-inflammatory effect of MUC13. According to the GO analysis results, 55 BPs were significantly enriched. After comprehensive analysis, DEGs participated in cellular response to hormone stimulus, response to hormone, sex determination, cellular response to endogenous stimulus, C21-steroid hormone metabolic process, neuropeptide signaling pathway, negative regulation of reactive oxygen species metabolic process, regulation of nitric oxide biosynthetic process, response to endogenous stimulus, negative regulation of reproductive process, nitric oxide biosynthetic process, nitric oxide metabolic process, reactive nitrogen species metabolic process, regulation of hormone levels, regulation of reactive oxygen species metabolic process, and regulation of interleukin-4 production. This suggests that hormones, apoptosis, oxidation and other mechanisms are involved in the meniscus degeneration process. Studies [12][13][14] have reported that the inhibitory effect of sex hormones seems to be related to cystic degeneration of meniscus tissue, and that growth hormone and parathyroid hormone have effects on the meniscus and chondrocytes. NO is the main cause of human articular chondrocyte apoptosis [15][16][17][18] . These studies are consistent with our finding. The regulation of the nitric oxide biosynthetic process is one of the most enriched biological processes in the GO analysis. Studies have shown that NO induces chondrocyte apoptosis, and NO is the main cause of human articular chondrocyte apoptosis 15,[19][20][21][22][23] . Our study shows that the biological processes involved in meniscus degeneration involve NO, suggesting that NO-induced apoptosis also occurs in meniscus cells, which could eventually lead to the degeneration of meniscus tissue. The consistency between our finding and previous findings provides strong support for our hypothesis that degenerate meniscus cells are different from normal meniscus cells and may play an active role in the development of OA; in order to verify this finding, further research is still needed. Inflammatory mediator regulation of TRP channels is one of the pathways with the 10 lowest p-values in the pathway analysis. As a family of cellular signal receptors, TRP channels play a very important role in the inflammatory response 24,25 . This study found that inflammatory mediator regulation of TRP channels was upregulated in the meniscus of swine OA, suggesting that degeneration of the meniscus involves TRP channels and is associated with inflammatory responses. Our research identified MDFI and miR-335-5p, both of which regulate the WNT signaling pathway 26,27 , which is the core signaling pathway of OA synovitis [28][29][30] . This shows that synovitis and meniscal degeneration may have similar mechanisms of action.
Synovial and meniscal tissues are also involved in the pathogenesis of OA. Pilar Tornero-Esteban's 31 study shows a correlation between miR-335-5p expression and OA. Our study found a strong correlation between miR-335-5p and meniscus degeneration. MiR-335-5p could play an important role in meniscus degeneration, indicating that miR-335-5p may become a new target for the clinical prevention and treatment of meniscus degeneration. The specific mechanism, and how it mediates meniscus degeneration, are the key questions for the next stage of research.

This study has shortcomings. The experimental study uses a Wuzhishan pig meniscus degeneration model. The pig knee joint differs from the human knee in biomechanics, so the animal model research cannot be completely equivalent to studying meniscus degeneration in patients. In addition, this model is more similar to traumatic meniscus degeneration, such as human meniscus and cruciate ligament injury, rather than primary meniscus degeneration. However, compared with the many shortcomings of studying human specimens, such as the ethical issues involved, the lack of sample size, the obvious heterogeneity of specimens, and the difficulty of obtaining specimens from particular sites, this animal model is an ideal research tool.

In summary, it is necessary to better understand the pathogenesis of meniscus degeneration, identify potential drug targets, and regulate gene expression in meniscus degeneration. This study is the first whole gene expression profile analysis of meniscus tissue performed in the Wuzhishan pig. Our findings expand the current understanding of the genetic mechanisms of meniscus degeneration. Therefore, these data provide a basis for future studies on the function of the large amount of newly discovered genetic material in the pathogenesis of meniscal degeneration and on its suitability as a drug target for meniscal degeneration.

Gross morphological observation and HE staining
The medial meniscus was taken out, and the smoothness, gloss and color of the surface were observed. After a brief cleaning, it was fixed with 10% formalin. After 72 hours, it was decalcified, dehydrated, embedded in paraffin, and then sectioned. After HE staining, the morphological and structural changes of the tissue were observed under a microscope.

RNA extraction
According to the manufacturer's requirements, TRIzol RNA isolation reagent (Invitrogen, Carlsbad, CA, USA) was used to isolate total RNA from meniscal tissue. Quality control was performed with the NanoDrop ND-1000 and Agilent 2100 Bioanalyzer.

Microarray
The Pig Gene Expression 4x44K Microarray V2 (Agilent Technologies, Santa Clara, CA, USA) was used to compare mRNA expression profiles in normal and degenerative meniscus tissue. Microarray analysis was performed on the GCBI analysis platform (Shanghai, China, https://www.gcbi.com.cn). The data have been deposited in the NCBI Gene Expression Omnibus and are accessible through GEO Series accession number GSE156132.

Strategy
The flow chart of the analysis is shown in Fig. 6. First, the differentially expressed gene analysis used the two-sample Welch's t-test (unequal variances) to identify significantly different mRNAs, which were screened by p < 0.05, fold change > 1.5 and FDR < 0.05 and sorted by p value. Second, the gene ontology system was used to classify these differentially expressed genes according to their biological functions. Similarly, pathway analysis was used to identify affected KEGG pathways.
Third, the differentially expressed genes were further analyzed by core gene network analysis and related miRNA analysis. The core gene network analysis displays the interaction relationships between the input genes, making it easier to identify the core genes among them. The core network is based on gene−gene relationships that have a database basis and literature basis in PubMed, MeSH, KEGG and other databases. Related miRNA analysis shows at least two miRNAs that interact with each input gene. Related miRNAs are based on gene−miRNA relationships documented in PubMed, miRBase and other databases, which are used to construct a network of related miRNAs. All of the data mentioned above were analyzed on the GCBI analysis platform.

Real-time RT-PCR
Changes in the expression of selected genes (GCNT7, NCDN, EXD3, MUC13, PPP1R3D, NPHP3 and CD81) were confirmed by RT-PCR. These 7 genes were chosen because they had the lowest p-values. TRIzol reagent (Invitrogen, Carlsbad, CA, USA) was used to extract total RNA from meniscus tissue, and then 500 ng of RNA was used to synthesize cDNA. PCR was performed on a ViiA 7 Real-time PCR System (Applied Biosystems, Foster City, CA, USA) using a 2X PCR master mix (Arraystar, Rockville, MD, USA). The primers were chemically synthesized by Kangcheng, Shanghai, China, and are listed in Table 7. GAPDH was used as the internal control to determine the relative expression of the target mRNAs. All reactions were repeated three times.

Statistical Analysis
The RT-PCR data were analyzed using SPSS 19.0 software, and the data were expressed as mean ± standard deviation (x̄ ± s). One-way ANOVA was used for comparison between the two groups, and P < 0.05 indicated a statistically significant difference.
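The DEG screening criteria described in the Strategy section (Welch's t-test, fold change > 1.5, FDR < 0.05) can be outlined in a few lines of Python. The analysis in the study was run on the GCBI platform, so the sketch below is only a generic illustration of the same logic applied to an arbitrary expression matrix, not the platform's implementation.

```python
import numpy as np
from scipy import stats

def bh_fdr(p: np.ndarray) -> np.ndarray:
    """Benjamini-Hochberg adjusted p-values."""
    p = np.asarray(p, dtype=float)
    order = np.argsort(p)
    adj = p[order] * len(p) / np.arange(1, len(p) + 1)
    adj = np.minimum.accumulate(adj[::-1])[::-1]   # enforce monotonicity of the step-up procedure
    out = np.empty_like(p)
    out[order] = np.clip(adj, 0.0, 1.0)
    return out

def screen_degs(expr_degen: np.ndarray, expr_norm: np.ndarray,
                p_cut: float = 0.05, fc_cut: float = 1.5, fdr_cut: float = 0.05):
    """expr_* are (genes x samples) arrays of linear-scale intensities."""
    _, p = stats.ttest_ind(expr_degen, expr_norm, axis=1, equal_var=False)   # Welch's t-test per gene
    fc = expr_degen.mean(axis=1) / expr_norm.mean(axis=1)                     # degenerative vs normal
    fdr = bh_fdr(p)
    keep = (p < p_cut) & (fdr < fdr_cut) & ((fc > fc_cut) | (fc < 1.0 / fc_cut))
    return np.flatnonzero(keep), fc, p, fdr

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    normal = rng.lognormal(mean=5.0, sigma=0.3, size=(1000, 3))               # toy expression matrix
    degen = normal * rng.lognormal(mean=0.0, sigma=0.4, size=(1000, 3))
    idx, fc, p, fdr = screen_degs(degen, normal)
    print(f"{idx.size} candidate DEGs out of {len(p)} probes (toy data)")
```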
2021-05-11T00:03:44.023Z
2021-01-18T00:00:00.000
{ "year": 2021, "sha1": "b13cb74413e4f59abd84912cf7ff91dc6973d835", "oa_license": "CCBY", "oa_url": "https://www.researchsquare.com/article/rs-145945/latest.pdf", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "e1ba382b8ecffb473d8a18fd03988e35e3b2e0a0", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology" ] }
221014054
pes2o/s2orc
v3-fos-license
Allergic rhinitis and asthma symptoms in a real-life study of MP-AzeFlu to treat multimorbid allergic rhinitis and asthma Background Asthma affects up to nearly 40% of patients with allergic rhinitis (AR). Poor control of AR symptoms is associated with poor asthma control. The goal of this study was to evaluate the effect of AR treatment with MP-AzeFlu on symptoms of AR as well as symptoms of asthma. Methods This prospective study used a visual analog scale (VAS) to assess symptoms of AR and asthma before and after treatment with MP-AzeFlu (Dymista®; azelastine hydrochloride plus fluticasone propionate; 1 spray in each nostril twice daily for 2 weeks). Participants suffered from moderate-to-severe AR according to Allergic Rhinitis and its Impact on Asthma criteria, with acute AR symptoms (AR-VAS scores ≥ 50 mm) on inclusion day. In addition to symptom assessment, patients recorded the impact of AR symptoms on quality-of-life measures before, during, and at the conclusion of the treatment period (approximately 14 days). Patients self-reported change in frequency of their usage of asthma reliever medication on the last day of treatment. Results Of 1103 study participants, 267 (24.2%) had comorbid asthma. These participants reported using a mean of 5.1 puffs of asthma reliever medication in the week before treatment with MP-AzeFlu. A total of 81.8% of patients with comorbid asthma responded to AR therapy (AR-VAS < 50 mm on at least 1 study day). Among patients with AR and comorbid asthma, MP-AzeFlu was associated with improved VAS scores across all study parameters, including AR symptom severity, asthma symptom severity, sleep quality, daily work or school activities, daily social activities, and daily outdoor activities. Asthma symptom severity decreased from a mean of 48.9 mm to 24.1 mm on the VAS. Self-reported frequency of asthma reliever medication use was reduced for 57.6% of participants (n = 139/241). Conclusion MP-AzeFlu used to relieve AR symptoms was associated with reduced asthma symptom VAS scores and frequency of asthma reliever medication usage. Changes in overall symptoms of AR and asthma were correlated. Background Globally, allergic rhinitis (AR) is a common, systemic allergic disease, with a prevalence of up to 25% in children and 40% in adults [1]. Among patients with AR, other allergic disorders are frequently comorbid [2]. Between 15% and nearly 40% of patients with AR have comorbid asthma, whereas asthma prevalence in the general population is approximately 7% [1,3]. Open Access Clinical and Molecular Allergy *Correspondence: dprice@opri.sg 1 Centre of Academic Primary Care, Division of Applied Health Sciences, University of Aberdeen, Aberdeen AB25 2ZD, UK Full list of author information is available at the end of the article Among patients with AR who visited a general practitioner, the majority-more than 90%-have moderateto-severe intermittent or persistent disease [4]. Many patients with moderate-to-severe AR have poorly controlled asthma [3], which may be attributable in part to lower airway inflammation [3]. In a survey of 520 patients with asthma, asthma was significantly less likely to be controlled in patients with moderate-to-severe, persistent AR compared with those with intermittent AR (65.7% vs 20.4%; P < 0.01) [4]. Furthermore, patients with AR and asthma comorbidity have higher healthcare resource utilization, including clinic visits, hospitalizations, and pharmacy costs over a 12-month time period [5]. 
Direct costs of AR are significantly higher for patients with mild-persistent asthma (€719) or moderate-persistent asthma (€799) than for the general population with AR (€554) [6]. Treatment of AR may concurrently improve AR and asthma symptoms in patients with comorbid disease [7][8][9]. In a past study, failure to manage AR symptoms was associated with increased use of asthma medications [7]. When patients with moderate-to-severe AR forgot to use their AR medication, more than half of those patients reported having to increase use of asthma reliever medications and 19.5% reported a need to increase asthma controller medication use [7]. Furthermore, in observational studies, AR treatment has been shown to improve upper and lower airway outcomes and decrease the risk for asthma-related hospitalization and emergency department visits by half [8,9]. Treatments for AR include oral H 1 antihistamines, intranasal corticosteroids (INCS), or intranasal antihistamines (INAH) [1]. Despite the wide variety of medication options available, many patients are dissatisfied with their AR treatment, resulting in poor adherence [10]. Therefore, combination therapies may improve satisfaction by reducing medication burden in patients with moderateto-severe, persistent AR symptoms. In the Allergic Rhinitis and its Impact on Asthma (ARIA) 2016 guideline update, combination treatment with INCS and INAH is recommended for patients with AR [1]. Azelastine hydrochloride has been formulated with fluticasone propionate in a single intranasal spray (MP-Aze-Flu; Dymista ® ) for the treatment of AR [11]. Compared with fluticasone propionate or azelastine hydrochloride alone, MP-AzeFlu resulted in significantly greater improvements in AR symptoms, including nasal congestion, one of the most bothersome and prevalent symptoms of AR, [12][13][14] nasal cell inflammation [15], loss of smell [16], and nasal hyperreactivity [17]. Relative to individual dosing of both an INAH and INCS, MP-Aze-Flu was also associated with lower pharmacy costs and total costs in a prior database analysis [18]. The purpose of this analysis of a real-world study was to evaluate the effect of MP-AzeFlu on asthma symptoms and frequency of use of asthma reliever medication in patients with comorbid AR and asthma. Study design This was a multinational, multicenter, prospective, noninterventional, real-life study conducted in 6 European countries: Austria, Germany, Czech Republic, Hungary, Netherlands, and Ireland. The study ran from February 21, 2018, to April 30, 2019. Ethics approval was obtained according to guidelines and procedures of the respective countries. Physicians who were usually involved in the management of AR and routinely used a visual analog scale (VAS) for symptom assessment in patients with AR were invited to participate in the study. Participating physicians included general practitioners, allergists, otorhinolaryngologists, pulmonologists, dermatologists, and pediatricians. The study consisted of an inclusion visit (day 0) and a control visit after about 14 days, allowing for some flexibility depending on usual clinical practice. Patients received patient cards at the inclusion visit to record AR symptoms, asthma symptoms, and other outcomes using a VAS. Physicians collected patient cards at the control visit, on or around day 14 or by mail. Participants Physicians enrolled patients with moderate-to-severe seasonal or perennial AR according to ARIA criteria, for whom MP-AzeFlu was prescribed for the first time. 
Decisions to include patients in the study were made by the physicians independently from and after the decision to prescribe MP-AzeFlu to the patient. Inclusion criteria included first-time prescription of MP-AzeFlu according to the summary of product characteristics, age 12 years or older, moderate-tosevere AR according to ARIA criteria [19], acute symptoms of AR on the day of inclusion (AR symptoms VAS ≥ 50 mm), written informed consent by the patient and (if applicable) caregiver for patients younger than 18 years, ability to understand the instructions for use of MP-AzeFlu according to the summary of product characteristics and patient leaflet, and ability to return the completed patient card. Exclusion criteria included known allergic reactions to MP-AzeFlu or any of its ingredients, pregnancy or planned pregnancy, breastfeeding, inability to provide informed consent, or missing consent. Study treatment All patients received MP-AzeFlu. MP-AzeFlu was dosed as outlined in the country-specific summary of product characteristics: 1 spray in each nostril twice daily (total daily dose: 548 µg azelastine hydrochloride and 200 µg fluticasone propionate) for 2 weeks. Physicians ensured that the patient properly understood the instructions for use, as specified in the summary of product characteristics and patient information leaflet. Study measures/outcomes On day 0, the physician documented patient demographics, AR symptoms, and previous treatments of AR in an electronic case report form. Patient recollections of their AR symptoms over the past 24 h were measured using a printed single-line VAS (AR-VAS) in the patient card, ranging from "not at all bothersome" (0 mm) to "extremely bothersome" (100 mm). AR symptom severity VAS scores and, for patients who suffered from asthma, asthma symptom severity VAS scores, were documented on the patient's card on days 0, 1, 3, 7, and ~ 14. Response was defined as an AR-VAS rating < 50 mm (indicating controlled AR) [20] at least once during the study. On days 0, 7, and ~ 14, patients assessed their sleep quality and troublesomeness in daily activities over the past 7 days, from "not at all troubled" (0 mm) to "extremely troubled" (100 mm). For patients who suffered from asthma, information on frequency of use of asthma reliever medication was collected at baseline. At the end of the documentation period (day ~ 14), the self-reported change in the frequency of use of asthma reliever medication was recorded as significantly reduced, reduced, equal, increased, or significantly increased. All suspected adverse drug reactions were documented in the case reports. Statistical methods Subpopulation analyses were performed for patients with AR but no asthma and for patients with AR and asthma comorbidity. The responder rate was calculated for the study population. Statistical analyses were performed using the statistical software package SAS (SAS Institute Inc.; Cary, NC, USA) version 9.4 or higher. Study population Of 1154 enrolled patients, 51 were excluded from data analysis because their data had not been confirmed by the investigator. The 1103 remaining patients were included in the safety analysis. A total of 267 patients listed asthma as a comorbidity. Patient demographics and baseline characteristics are detailed in Table 1. AR symptom response In the total study population, all 1103 patients were included in the responder rate analysis. 
Among the 915 patients reporting previous AR treatment, the most commonly used symptomatic AR treatments were oral, nonsedating Treatment response was defined as an AR-VAS score < 50 mm, the cutoff that differentiates controlled AR from uncontrolled AR [20], on any 1 day. A total of 944 patients [86.6%; 95% confidence interval (CI), 84.5-88.5%) met the response criteria, including 728 patients without asthma (88.1%; 95% CI 85.8-90.2%) and 216 patients with asthma (81.8%; 95% CI 76.7-86.0%). Over the course of treatment, the mean [standard deviation (SD)] VAS decreased by 46.2 (23.3) mm from baseline to the last day (Fig. 1). The mean (SD) change in AR-VAS over the study period for patients with AR without asthma was − 46.4 (22.9) mm; for AR with comorbid asthma, it was − 45.3 (25.2) mm. For patients with and without comorbid asthma, the AR-VAS change from baseline was significant at every time point (P < 0.0001). Furthermore, no significant differences were observed between patients with and without asthma in AR-VAS change at any time point. Asthma symptom response Among the subpopulation of patients with asthma, patients rated their asthma symptoms on a VAS. The mean (SD) asthma-VAS score decreased from 48.9 (29.3) mm at baseline to 24.1 (21.9) mm on the last day, resulting in a mean change of − 25.7 (26.0) mm (Fig. 2). Changes from baseline for AR symptoms and asthma symptoms were moderately correlated (Pearson correlation coefficient, 0.47; P < 0.0001). Participants with asthma reported using reliever medication a mean of 5.1 times during the week before treatment. Self-reported data regarding frequency of asthma reliever medication use during the study period were available for 241 patients (85.0%). A total of 139 patients (57.6%) reported that the frequency of asthma reliever use was either considerably reduced or reduced. In addition, 93 patients (38.6%) reported no change, and 9 patients (3.7%) reported an increased frequency of asthma reliever medication use. Troublesomeness of sleep Changes in quality-of-life measurements are reported in Fig. 3 through Fig. 6. In the whole study population, mean (SD) troublesomeness with sleep quality VAS score significantly decreased by − 33.7 (28.1) mm from day 0 to the last day (P < 0.0001). Similarly, among the subpopulation of AR with asthma, mean (SD) troublesomeness with sleep quality VAS score decreased by 34.6 (29.1) mm from day 0 to the last day. Among the population without asthma, mean (SD) troublesomeness with sleep quality VAS score decreased by − 32.7 (28.6) mm from baseline (Fig. 3). Troublesomeness of daily activities The mean (SD) troublesomeness of daily activities at work or school VAS score significantly decreased by 35.2 (25.6) mm in the whole study population (P < 0.0001). In the subpopulation of AR with asthma, the mean (SD) troublesomeness of daily activities at work or school VAS score decreased by 34.3 (27.5) mm. For patients without asthma, the mean (SD) decrease from baseline in troublesomeness of daily activities was 34.7 (26.3) mm (Fig. 4). Furthermore, in the whole study population, the mean (SD) troublesomeness with daily social activities VAS score significantly decreased by 33.2 (25.8) mm from baseline to the last day (P < 0.0001), whereas the mean (SD) decrease in the asthma population was 32.6 (29.2) mm. Among patients with no asthma, the mean (SD) change in social activities VAS score was − 32.7 (26.1) mm (Fig. 5). 
Finally, mean (SD) troublesomeness with daily outdoor activities VAS scores significantly decreased by 40.0 (27.2) mm, 40.2 (30.9), and 39.2 (28.0) mm in the general study population (P < 0.0001), asthma subpopulation, and no asthma subpopulation, respectively (Fig. 6). Discussion This was the first multicenter, prospective, noninterventional, real-life study to evaluate the effect of AR treatment with MP-AzeFlu on asthma symptom severity and reliever medication use. A total of 24% of patients with AR in this study reported comorbid asthma, which is comparable with literature rates of 15% to 38% [1]. We showed that patients with moderate-to-severe AR and comorbid asthma treated with MP-AzeFlu had similarly improved AR symptom severity compared with patients with AR alone. Patients with and without comorbid asthma also experienced improved quality of life with MP-AzeFlu treatment. For patients with asthma, asthma symptom severity and asthma reliever medication use decreased from baseline. In general, improvements in AR-VAS scores and quality-of-life measures were comparable for patients with and without asthma. This is particularly notable given that the subpopulation with comorbid asthma had numerically higher rates of more severe AR symptoms. Although significance testing was not performed across populations, the AR with asthma group had a higher rate of both perennial and seasonal AR and allergic sensitization to more than 5 allergens. Baseline AR-VAS scores, however, were only modestly higher in the group with asthma. These data suggest MP-AzeFlu treatment may have similar effectiveness in populations with and without asthma and with varying levels of AR severity. VAS scores were used to assess AR symptom severity, asthma symptom severity, and quality-of-life measures in this study. Advantages of VAS measurements include a high degree of resolution, with repeat measures offering the opportunity to identify even small changes within and among individual patients and groups of patients [21]. In addition, VAS scores are good tools for measuring continuous variables, such as AR and asthma symptoms [21]. In past studies, VAS scores have been shown to correlate well with the severity of AR according to ARIA guidelines [22,23]. A cutoff variation of 23 mm for VAS was shown to correlate well with the established cutoff of 0.5 for the Rhinoconjunctivitis Quality of Life Questionnaire [23]. Moreover, a change of 30 mm was always correlated with positive changes in quality-of-life parameters [23]. In the present study, changes in AR-VAS scores from baseline to the last day exceeded this cutoff for all endpoints in the safety population and the comorbid asthma population, suggesting meaningful changes in symptoms and quality of life. VAS scores are not only useful in clinical practice for stratifying patients and monitoring response; they have also been used as evaluation parameters in randomized controlled trials of AR treatment. In 2 studies of AR evaluating treatment with INAH, VAS scores discriminated between placebo and treatment groups better than total symptom scores [24,25]. In this study, the mean change in AR-VAS from baseline to the last day suggests a shift from uncontrolled to controlled AR and severe to mild AR. Although VAS scores are less commonly used for evaluation of asthma symptoms, they have nonetheless been shown to be valid measures for predicting asthma control and lung function [26][27][28]. 
When VAS was evaluated in the morning and evening in adolescent patients with asthma, average VAS scores were significantly correlated with both asthma control (r = 0.65, P < 0.001) and FEV 1 (r = − 0.38, P = 0.029) [26]. In a study of Japanese patients, Global Initiative for Asthma-defined control levels were discriminated by VAS score cutoff points of 1.50 cm (controlled), 4.79 cm (partly controlled), and 7.19 cm (uncontrolled) [28]. According to these cutoffs, the asthma severity VAS scores were suggestive of uncontrolled asthma at baseline, which is further supported by the use of more than 5 puffs of reliever medication on average in the week before treatment with MP-AzeFlu. With MP-Aze-Flu treatment, asthma control improved by the last day to partly controlled in the majority of patients (median 20.0 mm) and to controlled in at least 25% of patients (low quartile 5.0 mm). These data are further supported by the reduced use of asthma reliever medication at study conclusion. Several studies have shown that the use of INCS can improve asthma symptoms in patients with comorbid AR through the treatment of upper airway inflammation, which indirectly decreases bronchial hyperreactivity [29,30]. Therefore, improvement in asthma symptoms with MP-AzeFlu treatment could be attributed to the improved control of AR symptoms, decreased airway inflammation, or, most likely, a combination of the two, which is supported by the moderate correlation between AR symptom severity improvement and asthma symptom severity improvement. The "one airway, one disease" hypothesis suggests joint management of AR and asthma leads to better control of both diseases [31,32]. Evidence for the "one airway, one disease" hypothesis includes epidemiologic data that suggest the high frequency of comorbid asthma and AR, heritability of allergic diseases (e.g., AR, asthma, and atopic dermatitis), and the overlapping roles of inflammatory mediators in AR and asthma, which are supported by the clinical effectiveness of corticosteroids and antihistamines for both conditions. In this study, the moderate correlation between change of general AR-VAS and asthma-VAS scores lends additional credence to the "one airway, one disease" hypothesis. Study limitations included the observational design and lack of a control group for comparative purposes. This limits comparison with previous studies, during which data were collected under different circumstances. Furthermore, we have limited data surrounding clinically relevant features of asthma, including the method by which asthma was diagnosed and current asthma medications. However, because of the multinational, noninterventional study design, information about a variety of patients' allergy characteristics could be obtained, and comparison through a preintervention and postintervention design was recorded. Although the 2-week study period was sufficient for documenting a substantial improvement in AR and asthma symptom severity, monitoring of AR symptom control over a longer period of time would better inform long-term outcomes with MP-AzeFlu treatment. Conclusion MP-AzeFlu use was associated with improved AR symptoms, asthma symptoms, and quality-of-life measures in patients with concomitant asthma. Change in overall AR symptoms and change in asthma symptoms were correlated. The results support the "one airway, one disease" therapy approach for asthma and AR management.
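As a closing illustration, the asthma-VAS control bands cited in the discussion (1.50 cm for controlled and 4.79 cm for partly controlled asthma) can be applied to the summary scores reported above. The banding below is one plausible reading of those cutoff points, consistent with how the discussion uses them; the exact convention of the cited Japanese study may differ.

```python
# One plausible reading of the asthma-VAS cutoff points cited above, applied to
# the summary scores reported in this study (scores reported in mm, cutoffs in cm).
def asthma_control_level(vas_cm: float) -> str:
    if vas_cm <= 1.50:
        return "controlled"
    if vas_cm <= 4.79:
        return "partly controlled"
    return "uncontrolled"

summary_scores_mm = {
    "baseline mean": 48.9,
    "last-day mean": 24.1,
    "last-day median": 20.0,
    "last-day lower quartile": 5.0,
}
for label, mm in summary_scores_mm.items():
    print(f"{label}: {asthma_control_level(mm / 10.0)}")
# baseline mean: uncontrolled; last-day mean and median: partly controlled;
# last-day lower quartile: controlled
```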
2020-08-07T13:25:25.408Z
2020-08-06T00:00:00.000
{ "year": 2020, "sha1": "90025aae8c761a78566e3f3f00508d39ad7b8bd1", "oa_license": "CCBY", "oa_url": "https://clinicalmolecularallergy.biomedcentral.com/track/pdf/10.1186/s12948-020-00130-9", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4adfd3f57d9278e7269b41b09712bf07969531c1", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
17129059
pes2o/s2orc
v3-fos-license
Drug use as boundary play: a qualitative exploration of gay circuit parties. Research findings have revealed that gay circuit parties may be locations that are disproportionately responsible for the increasing rates of many STIs/HIV among gay/bisexual men. Theories have been put forth that this may be the case because circuit parties are locales of prevalent drug use and unsafe sex. To explore the relationship between these two phenomena, in-depth qualitative interviews were undertaken with 17 men who (1) have sex with other men, (2) attended gay circuit parties in Montréal, Canada, in 2007. These revealed that drugs (including alcohol) were used intentionally to engage in unsafe sex, and then to justify this behavior after the fact. This process we called boundary play. INTRODUCTION As a starting point, because both gay circuit paries (GCPs) and raves manifest some similarities, the latter type of party can be used to situate the former type of party. For example, they both typically take place in large venues and have repetitive and loud, "drum 'n bass" music played at a fast tempo in conjunction with intricate and elaborate light and laser shows. In addition, at both GCPs and raves, huge crowds dance, and often consume drugs-a fact that situates them as assemblies of nonmainstream participants and practices. Please note, in this context, alcohol was described, and thus coded, as a drug. Beyond these similarities, however, raves and GCPs differ quite significantly: unlike raves, GCPs were created and defined in relation to the celebration of/by a particular sexual orientation. This means that while raves and GCPs are both parties that have been created by nonmainstream populations and both involve music, dancing, and drug use, the two nonmainstream populations are markedly dissimilar. GCPs were designed by and for a subset of men who have sex with other men (MSM). Raves, in contrast, seem to attract a more undifferentiated, nonmainstream group and are primarily attended by younger, often heterosexual, youth; GCPs are most often attended by 20-to 40-year-old, self-identified, gay and bisexual men who are above average in education and socioeconomic status. In addition, drug use differs: ravers prefer ecstasy; GCP attendees use a wider variety of drugs, including ketamine, crystal meth, gamma-hydroxybutyric acid (GHB), amphetamines, ecstasy and alcohol. These differences thus raise questions about the validity of using data about raves to understand GCPs. In fact, the small quantity of research that addresses GCPs has often identified the participants as a distinguishable subculture, which is different from other dance-drug-party cultures. This suggests that many of the descriptions about drug use that were derived from other populations may not be valid for explaining the drug-using practices of GCP participants. Compounding the severity of inappropriately conflating GCPs and raves is that research (Ghaziani & Cook, 2005;Kurtz, 2005) suggests that GCPs may be disproportionately responsible for the recently observed increases in HIV rates among MSM. On the basis of the foregoing points, the purpose of this research was thus to revisit the notion of drug use within the contexts of GCPs to determine how these activities in this specific milieu could be understood from a non-postpositivistic, nonpsychoanalytic perspective. The goals of doing this were twofold. 
On the one hand, the intention was exploratory: to see what results two researchers who are guided by the theories of Deleuze (O'Byrne) and Foucault (Holmes) would produce about GCP-related drug/alcohol use. On the other hand, the aim was to use the information that arose from such a nonmainstream position to inform HIV prevention work. This latter goal arose because while the traditional explanations of these phenomena are empirically based, rigorously developed, and logical in nature, they do not seem to be yielding the information that is needed to develop successful HIV prevention initiatives. Indeed, HIV rates continue to rise among MSM, and some authors (Ghaziani & Cook, 2005; Kurtz, 2005) suggest that GCPs and their associated practices are some of the possible reasons for this situation. As such, it seemed logical to explore this topic from different perspectives to see what information would arise (or, more accurately, be produced), while simultaneously delving into the larger task of identifying how this information could be used to produce novel HIV prevention initiatives for this target population. Regrettably, this second goal is not addressed within the scope of this article. Therefore, because of the lacunae in the knowledge about GCPs and their potential role in HIV transmission, we undertook a qualitative, exploratory study to understand drug use from the perspective of the GCP attendees by means of direct observation of two circuit parties, questionnaires, and in-depth, semi-structured interviews. In this article, however, only the data from the 17 interviews will be presented because such a narrow focus allows for more in-depth discussion of the precise insights that the research participants revealed about drug use within the context of GCPs. Following this, the data will be explained using the concept of boundary play.

CONCEPTUAL FRAMEWORK: BOUNDARY PLAY

To ground our explanation of drug use at GCPs, we employed the concept of boundary play, which was neither initially described in relation to drug use at GCPs nor originally called boundary play. Moreover, this concept was adopted during data analysis, not beforehand. In other words, while we were aware of the concept of edgework prior to data collection/analysis, it was not originally included in the theoretical framework that guided this work. Instead, it was incorporated as data analysis ensued because this idea provided an excellent structure for explaining our results. However, this necessitated that the concept of edgework be described as boundary play because this new term better reflected the scope, nature, and details of the collected data. Boundary play can best be understood as a process during which individuals navigate a variety of edges, or "play" with various boundaries, including the limits between sanity and insanity, legality and illegality, safety and danger, chaos and order, and life and death. The boundary is the dividing line between two opposing states of existence, and playing with such boundaries is the act of approaching or treading on these lines. An essential component of such acts is that in flirting with danger, individuals must demonstrate the ability to safely navigate perilous edges, boundaries, or limits without falling off/over the edge. During boundary play, individuals must be ready and able to negotiate and navigate extreme situations rapidly despite the odds being against successfully doing so.
They must be able to avert the extreme and often irreversible loss, damage, or destruction to which they have exposed themselves. In short, individuals who engage in boundary play are driven to what some may call extremes. In saying this, however, the term extreme must be nuanced. It is not a point that is too far, but rather the maximum point, or apex. It is the furthest point that one can reach without being/becoming unable to return and while such practices may seem self-destructive, this is not the case. Boundary play does not occur as a manifestation of inherent desires for self-harm. Instead, the behavior is seemingly paradoxical and manifests the desire to remain safe within otherwise dangerous situations that one has intentionally entered. Thus, boundary play is not an act of recklessness, a method by which to commit suicide, or a sign of personal disregard. Ultimately, it is not a manifestation of underlying psychopathology. Instead, it is the expression of pure desire. Design This project was undertaken as a qualitative-based, exploratory research into sexuality and drug use and gathered information about history, culture, gender, etc. As part of this, attention was paid to the environment, social interactions, and the culmination of all the physical and nonphysical connections that produced the overall ambience and experience of the GCP. Data collection occurred through direct observation, autoadministered surveys, and formal interviews. As noted earlier, only the interview data results will be discussed in this article. Participant Recruitment Recruitment for this study was not restricted to the target GCPs; it also occurred via posters (with a phone number and an e-mail address printed on them) in bathhouses, gay bars, clubs, and gyms, and sexual health clinics in three of Canada's biggest cities, which host the largest GCPs in Canada and have the largest urban Anglophone and Francophone populations. In addition, snowball sampling was used-interviewees were given the researcher's contact information, and asked to pass it on to other individuals whom they believed would be interested in participating. This recruitment method has been shown to be effective for infiltrating a group that engages in marginal practices (Platzer & James, 1997). The inclusion criteria were as follows: self-identifying gay or bisexual man who attends GCPs, has/does use(d) drugs at GCPs, and engages in sex with partners at or from GCPs. All potential participants were screened prior to meeting and only those who met all three criteria were formally interviewed. Data Collection: Formal Interviews The principal data collection method was formal interviews, which occurred in the offices of the research team. During these interviews, participants first completed a self-administered questionnaire in which they reported their sociodemographics and sexual/drug use behavior. This information was obtained to gather a rich description of the interview sample. Following this, the participants took part in a taped, in-depth, semi-structured, openended interview, which lasted approximately 1 hour. For this process, a feminist approach to interviewing was used, with its corresponding engagement in emotional issues and development of trust. 
This personalization of the interview process helped redefine the interview context and equalize the power differential between the interviewer and the interviewee, thus reducing the power disparity and the unidirectional structure present in the standard interview (Fontana & Frey, 2003). This also allowed the interview to wander into areas that the interviewees chose, and it continued until the interview material no longer provided new information. Data Analysis: Epistemological and Step-by-Step Considerations The analytic methods that were employed as part of this study could most readily be called a schema, or latent content, thematic analysis. This means that the analysis of the interview data occurred with the goal of unearthing the underlying latent content of the participants' interview data. However, this should not be understood in a psychoanalytic sense. Instead, as the underlying theoretical orientation of the authors who undertook this study is ultimately Deleuzian (O'Byrne) and Foucauldian (Holmes), latent analysis is not aimed at understanding the inner workings of the human mind, but rather is focused on identifying the structures of desire and power, which permeate both the undertaking and the description of a given event/phenomenon. Furthermore, such theoretical orientation also sheds light on the epistemological perspective within which this project took place-a poststructuralist orientation in which reality is the outcome of various competing discourses, interpretations, and interactions. An awareness of such information is imperative as one shifts through the data analysis and discussion, which follows shortly because it situates the constructivist method of data interpretation that occurred. Such a theoretical orientation also explains another idea that underpins this research: the notion that researchers cannot remove, eliminate, or even bracket their personal assumptions, beliefs, or values (what some might call, their "biases"). As such, while it is stated earlier that the conceptual framework of boundary play was selected during the data analysis, this should not be understood to mean that this conceptual understanding emerged from the data. Indeed, to use such language is to deny the active process that occurred as part of this analysis: the interpretation and scrutiny that ensued according to the researchers' underlying life histories. The after-the-fact selection of boundary play thus occurred because both authors came to the conclusion that it encapsulated the points that the research participants raised. Nevertheless, owing to slight inconsistencies (as are noted earlier in the section describing the conceptual framework), this concept was changed slightly from edgework to boundary play. Steps of Data Analysis. The specific methods by which this thematic analysis occurred were as follows. First, the authors familiarized themselves with the data through multiple readings of the interview transcripts, and then by engaging in numerous discussions with one another about the meaning and significance of the data. This involved repeated reviews of the interview data material to reflect and validate our initial interpretations. Second, and not necessarily a distinct phase from the first step, the authors began to generate a list of initial codes. Because this thematic analysis was latent content, or schema-based, it involved the identification of metaphors within the text. 
Codes were written directly into the margins of the printed transcripts, and were also compiled in an Excel spreadsheet. At this point, the codes were not ranked, sorted, or filtered. Instead, they were simply listed along the vertical axis of the spreadsheet. Along the horizontal axis, each interview participant was listed, and a corresponding mark was made to indicate that this participant had mentioned the content of this code. As part of this mark, the transcript line numbers were recorded to (1) verify, at a later point, the empirical data from which these codes arose and (2) ensure a strong audit trail. As this second step began to finish, the third phase started: the aggregation of these codes into themes. This was the step wherein similar codes were identified and placed together under a unifying heading. Thereafter, the fourth step started, which involved reviewing the themes by returning to the actual coded interview material to ensure that the codes did in fact belong together. Here, we also sought to ensure that the themes were both internally homogenous (i.e., the codes that had been combined within each theme were sufficiently coherent) and externally heterogeneous (i.e., themes were identifiably distinct from one another). As the final phase, the themes were defined and named. This involved clearly and concisely articulating the content of each theme, both as an identifiable topic and in relation to the other themes. This was the process of producing each theme's narrative and the overall story of the themes as a whole. At this point, the written text that arose from this process was transformed into formalized manuscripts.

Drugs and Their Users: Understanding GCP-Related Drug Use

In total, 17 interviews were carried out. Each interview participant also completed a self-administered questionnaire regarding their age, ethnicity, socioeconomic status, and sexual and drug-using practices. These data are presented in the following sections.

Sample Description

In order to provide a detailed overview of the sample, a series of descriptors is presented in Table 1; Table 2 adds further details about this group.

"I tend to explore. I need to know." "I mean, people say, 'oh have you tried this?' 'No, but I'll try once,' and then I'll know if I like it or not."

The above quotations illustrate that exploration is a primary activity of the research participants. They wish to experience/experiment with many new sensations. However, this exploration is not without limits. Instead, it can be described as the process of standing as close to an edge as possible, but without falling off. It is a highly regulated and controlled activity that involves reaching the limits of self-control, without exceeding them. The following participant clearly articulates this point about how pushing limitations gives him "everything," but only up to a certain level. At this undefined point, the following participant realizes that he has gone too far, and stops pushing. Indeed, he decides that he is "not playing anymore." He states: "What I was saying is that you're pushing; you have everything when you're pushing too far, but there's time when I'll say I'm not playing anymore." (Ott-14)

In the foregoing quotation, Ott-14 exemplifies what many other participants described: a process of approaching personal limits in order to explore them, but without going too far.
At the point where his exploration does go too far, he states that that is the "time when [he'll] say [he's] not playing anymore." In this quotation, that which stops him from "playing" is not reported; perhaps (and as will be suggested in theme two) this is because it is only post hoc (after the exploration, that is) that he can explain what caused him to stop "playing." Nevertheless, further investigation with Ott-14 confirms that his goal is to "push [him]self to the limit," but not to destroy himself in the process. "Exhaustion is the goal," not death, destruction, or devastation. The bounded nature of this "push[ing]," which we call "exploration" here, is of particular interest because it demonstrates that the participants (as exemplified by the text of Ott-14) did not intentionally desire their own demise. Indeed, they did not seem to be intentionally attempting to inflict harm upon themselves. Notice how Ott-14 explains this process: "I'd push myself to the limit of exhaustion." But not destruction? "No. For example, three months ago, I was ready to die and I decided let's go in detox and just give yourself another chance, but I was ready to commit suicide, so I had at least that survival that keeps me."

In this second statement, Ott-14 provides an example of his "push[ing]" (i.e., exploration) that exceeded the "limit of exhaustion" and indeed went to the point of suicidal ideation. However, at the point where he has gone too far, a withdrawal occurs ("detox" and "that survival [instinct] that keeps me [alive]" kick in). Our interpretation of this process is that both Ott-14 and the other participants who described such a process withdraw when their explorations cause them to surpass a certain point (i.e., their limits). Again, we posit that this may be because the goal is to flirt with danger, not to be destroyed. That is, pushing beyond personal limits represented a "risk" that this participant did not want to take. Ott-14 further illustrates how he would push until he was about to "lose everything," but then draw back: "To the limit of losing everything, yeah, I would do it completely. And that's what I'm thinking these days is the excess. Now I have to rehabilitate myself in finding pleasure without excess, but I still need my highs."

As revealed in this quotation, Ott-14's ability to experience pleasure seems to be inextricably linked with "excess" and pushing "to the limit of losing everything." His desires, one could suggest, involve a form of bounded exploration, an investigation into everything that can be experienced within a given set of limits. The caveat, however, is that this participant also desires to retain control of himself by placing parameters onto his exploration. However, no other participants reported such severe outcomes (detoxification or suicidal ideation) related to their substance use. In contrast, most participants reported that both GCPs and substance use served as the means by which desired experiences of exploration could be realized. In all of this, the sequence remains unaltered: the participants aimed not to lose control, but rather to maximize personal experiences by reaching this limit. In the following two quotations, participants describe this dynamic. One states: "After four days of partying most people, unless they were popping E or something crazy like that, their energy lowers, their mood and the atmosphere starts to die down."
In the same vein, another participant relates that he uses ecstasy to offset fatigue: "[Because of fatigue], usually at around four o'clock in the morning it will probably be ecstasy and usually when you come down from ecstasy you use marijuana so you don't come down harder."

In the previous two quotations, the participants push their bodies to their limits. This demonstrates the deliberate use of drugs to achieve a specific form of exploration: not too much to overdose, but not too little so that they feel fatigued. In the second quotation, Ott-11 reports using additional substances to dynamically counteract the effects of previously consumed substances, the "use [of]" marijuana so as not to "come down harder." This participant consumes drugs, but not to a point that he feels is excessive. His description of exploring his limit indicates an absolute measure that should not be exceeded. As previously noted, the analysis of the interviews identified that each individual possessed different limits, but to each participant, personal limits were reported as objective ("THE limit"): "But know what you're doing and know your limits, or know THE limits." This quote demonstrates that despite Ott-8's substance-induced explorations, his goal is to remain cognizant of his limitations at all times. There is, thus, a point that the participants do not wish to pass, but at times, accidentally do. In these situations, drugs serve a secondary role: they justify transgression.

Theme Two: Drug Use as Justification.

In addition to deliberately consuming drugs to experience the effects that these substances can produce, the participants identified that these substances can also serve as an excuse for some of the actions they undertook while intoxicated. This means that while the participants used drugs to overcome some of their limitations (subtheme one), they also used these substances to justify their transgressions if they did not appraise them favorably once they had again become sober. Ott-10 illustrates this process when he indicates that alcohol affects his decision-making processes: substance-induced intoxication, he notes, causes him to "make decisions" that he "wouldn't normally make." In making such a claim, Ott-10 is attempting to relinquish, or at least diminish, personal responsibility for his own actions. In effect, his statement can be interpreted to mean that Ott-10's sober self is unaccountable for his drunk self based on the rationale that alcohol diminishes one's ability to make sound, logical decisions, and thus this substance is the cause of people's behavior. Other participants, such as TO-1, however, refute Ott-10's claim by stating that "if you know your own limitations, then you can't turn around and blame the drug for it." TO-1 continues: "Get stoned so that you can do it [i.e., pursue one's desires], but don't get so stoned that you lose control" (TO-1). In both these quotations, TO-1 maintains the idea that drugs can cause a loss of control, but this loss of control is an extreme state; it is not the usual outcome. Indeed, TO-1 argues, as did many other participants, that the usual outcome of drug consumption is to render one more likely to pursue otherwise inhibited or repressed desires. When Ott-10's claim that alcohol (as a drug) diminishes the hold he has on himself is interpreted through the lens of TO-1's statement, we can begin to understand that recreational drug use serves an important role in distancing the user's intoxicated self from his sober self.
Further exploration of this topic/idea with other participants revealed that the ex post facto relationship between alcohol and drug use and behavior is one of justification. Ott-11 reported that he uses drugs to justify the actions he undertook while intoxicated when he feels that he needs to. An example of such a situation is when he blames sexual contact with an unattractive partner on drug-induced visual impairment. He states: It's not usually what you did, it's usually who you did it with. That's what they're referring to. It's like, "oh, I can't believe I took that monster home." That's probably what it is, their vision was really impaired. I think that's what they mean by it. My mother can tell me that she doesn't like getting fucked, and I call her a liar, because it's a human bodily function that we enjoy. So, for getting fucked, it's referring to whom you've been doing it with. (Ott-11) According to Ott-11, a person's claim, including his own, that specific sexual contacts would not have occurred without intoxication is only required when that individual perceives his actions to be unacceptable (e.g., sexual contact with an unattractive partner). In such cases, drugs serve a dual purpose: (1) to permit individuals to push their limits and (2) to absorb the blame if the outcome of such behavior is deemed inappropriate. By extending Ott-11's claim to its logical limit, we can interpret the idea that if one woke up to find an attractive person in one's bed, one would be unlikely to blame the situation on drugs. From the perspective of Ott-11's statement, if one were to engage a partner who continued to appear attractive after the drug-related visual impairment had passed, one's intent to satisfy what Ott-11 calls "a human bodily function that we enjoy" might be more openly acknowledged as intentional and purposive. In all other cases, however, Ott-11 suggests that substance use will continue to be blamed. Ott-11's insight also helps to explain the seeming inconsistencies that arose in other participants' interview data, such as when they adamantly reported that drugs did not produce changes in their behavior, but then described situations in which the exact opposite occurred and drugs were the identified cause of their behavior. Here, Ott-7 describes what seems to be an inconsistent story: one in which drugs both do and do not change his behavior. In the first three statements, Ott-7 describes how he planned his nightly drug use, which might have included undertakings that were not part of his original plan. In the fourth quotation, Ott-7 identifies the point at which he suspects he became HIV positive, and then proceeds to discuss how drugs, due to their having taken his "personality into darkness, into doing riskier things," were the reason that he engaged in the sexual practices that ultimately caused him to become HIV positive. As discussed, Ott-11's statement helps clarify that Ott-7 is likely using drugs as justification for an unwanted outcome. This reconciles the differences between his initial statements that drugs did not affect his behavior and his later assertions that drugs had taken him to "the dark side." Another example of dissociating intoxicated-self behavior from sober-self behavior can be seen when the following participant at first relates that he is a specific type of person (i.e., he outlines his sober self), then how alcohol (as the drug he uses) changes his behavior (i.e., his drunk self), and lastly, that his actions are actually planned. 
In these three statements, Ott-8 describes an inconsistency that is similar to the one reported by Ott-7. Ott-8 describes himself as being a specific type of person, then blames his out-of-character actions on substance use, and finally admits that the behavior he previously blamed on alcohol is actually part of a plan that he does not acknowledge. In the third and final excerpt, Ott-8 enhances our understanding of what Ott-11 states is the use of intoxicants as justification ex post facto. It protects the sober-self's self-esteem. To acknowledge that one knowingly made decisions that can be evaluated as unwise or illogical is to put one's intelligence and rationality in question. However, to pretend that these actions are unintentional to the point that they are the result of external agents (i.e., drugs) protects the rational/logical/intelligent sober self from its seemingly irrational desires. Thereby, drugs serve as both personally and socially acceptable excuses for behavior.

DISCUSSION

A summary of the foregoing interview data reveals that the interview participants reported that they deliberately consumed drugs (which in this context includes alcohol) to push themselves to their limits and to excuse themselves, after the fact, for having done so. In other words, an in-depth analysis of the data revealed that the research participants were intentionally and actively involved in the exploration of their personal limits and boundaries, both in relation to approaching and retreating from them. This sequence of approaching and receding from limits via drug use corresponds quite evidently to our description of boundary play: the process of exploring, experimenting, and approaching one's limits without exceeding personal boundaries. The goal of the participants was to move toward the event horizon of their identities, to navigate this dangerous border, and then to return to their routine lives and selves unscathed. They wanted to explore and experience life, but without irreversible damage to their everyday life/self. The first step in this process was the use of drugs to diminish fatigue, negate pain, and override psychological inhibitions while attempting to avoid any form of irreparable or irreversible harm. In each case, the result of drug use was an uninhibited expression of desire, an indulgence in sought-after pleasures, that which we called an exploration of life. That is, through the consumption of these substances, the research participants were able to approach the extreme limits of their usual parameters of behavior and self. Stated differently, these men were able to engage temporarily in a process of pure becoming without being, an exercise of opening themselves up to endless possibilities of change and movement (Deleuze & Guattari, 1980/1987). Once such thresholds had been reached or explored, however, the participants reported that they sometimes blamed these substances (after the effects of the drugs had worn off) for any activities that they had engaged in during their boundary explorations. This occurred, primarily, when they later evaluated something that they had done during their period of intoxication as antithetical to their ideations of a sober self. Thus, if these men determined that they had pushed a boundary too far, they would explain this occurrence as a consequence of drug/alcohol ingestion rather than one of personal desires, or a lack of self-restraint.
In explaining their actions and practices in such a way, the participants positioned themselves as victims of their own intoxicated selves; they became the innocent casualties of the substances they consumed. However, further exploration of this point indicated that the use of drugs served more precisely as an excuse for behavior that an individual does not wish to take ownership of. This latter point is of central importance in our understanding of GCP-related drug use among the men involved in this study. In fact, this second function of substance use guided our understanding of this practice as a form of boundary play. This was because the ex post facto use of drugs as excuses illustrated that the participants in the study were not attempting to destroy previous conceptualizations of themselves; rather they were, for a designated period of time (i.e., the duration of the ingested substances), seeking to explore everything that could be part of their existence, from the unknown and unexplored to the known, but apprehensively desired. This signifies that the participants in this study did not have desires for self or social destruction; they simply wanted to explore their boundaries. When the two functions of GCP-related substance use are considered simultaneously, drugs can be understood as transport mediums-agents which both permit and excuse an individual's unconventional behavior. From a boundary play perspective, drugs allow the individual to move to the edge of their boundaries and then permit them to return to the safe, central zone of their behavioral limits. These substances transport their users from nucleus to fringe and then back again-a process wherein individuals carry out their desires to the limit, and then argue that such actions are the result of intoxicants and not personal desires. As part of this process, the intoxicated self is thus separated from the sober self. The significance of these findings is that they illustrate a previously undocumented explanation of GCPrelated drug use. More specifically, these findings add to what previous authors have clearly identified as an almost incontestable link between drugs and unsafe practices. For example, (1) Mattison, Ross, Wolfson, Franklin, and HNRC Group (2001), who, based on their nonrandomly distributed 1,169 three-minute surveys at three GCPs across North America between 1998 and 1999, identified that most GCP attendees attend GCPs to fulfill their desires for "community, enjoyment, and celebration" (p. 125); (2) Ross, Mattison, and Franklin (2003), who used the same data set as Mattison and colleagues (2001); and (3) Colfax and colleagues (2001), who, in their telephone-based survey of 295 men in San Francisco, all reported that GCP-related drug use strongly correlated with unsafe sexual practices. Mansergh and colleagues (2001), who used the same data set as Colfax and colleagues, also found this relationship between drug use and unsafe behavior, and Lee and colleagues (2003), who also nonrandomly administered surveys to 173 men on-site at GCPs, illustrated this quite explicitly: "MDMA [ecstasy] use was also associated with significantly more receptive anal intercourse" (p. 47). This quotation summarizes the findings of the previously undertaken research about the strong relationship between drug use and unsafe behavior at GCPs. The present study found similar results. 
In addition to this similarity, however, the results of our study and those of the previously undertaken quantitative studies involving GCPs differ quite substantially-particularly in relation to the proposed explanations for the relationship between drug use and unsafe practices. Considering Mattison and colleagues' (2001) work, this difference relates to the interpretation of why the consumption of multiple drugs correlates with unsafe sex. In contrast to the findings presented in this article, Mattison and colleagues (2001) proposed that drug interactions render users more disinhibited and amnesic and this causes them to engage unknowingly in unsafe sexual practices. Note as evidence, the following quotation from these authors: "It is probably reasonable conjecture that not only is it likely that users of multiple drugs are less likely to be able to predict or control drug interactions but also that as the number of drugs used simultaneously increases, disinhibition and amnesia may increase" (p. 125). However, on the basis of the actual data that these researchers collected, the explanations of Mattison and colleagues (2001) should be approached with caution because, on the one hand, they failed to collect the necessary data on sexual intent to make such assertions, and on the other hand, when data that specifically explored the relationships between drugs and unsafe sex were collected, these findings were refuted. Therefore, we suggest that Mattison and colleagues' (2001) theory about multiple drug consumption and unsafe practices is based on personal assumptions that no individual would intentionally engage in unsafe practices. In doing so, these researchers maintain what Bataille (1962) calls an outsider perspective-one in which practices are interpreted exclusively in relation to the biases of the nonparticipant observer. This is particularly relevant when Mattison and colleagues arrived at the earlier assertion after having claimed that "it is interesting that the reasons for party attendance appeared to also predict unsafe sexual behaviour, with attending 'to have sex, ' to be 'uninhibited and wild,' and 'to look and feel good' all predicting higher levels of unsafe behaviour" (p. 124). Here, the seeming conflict in Mattison's research findings may be explained using the idea put forth in this article about substance use and boundary play. Indeed, when in-depth qualitative interviews are used to explain the relationship between GCP participants' drug use and unsafe practices, the results reveal that many of the outcomes related to drug use are not necessarily unintended. These substances are part of a deliberate act of using drugs to engage in boundary play. In their foregoing quotes, the participants of this study clearly indicated that drugs do not change their behavior. Rather, consequences such as HIV acquisition or sex with an unattractive partner are unintended by-products of boundary play for which substance use can easily take the blame. Such a distinction, while seemingly inconsequential, may be of the utmost importance for frontline HIV prevention workers who wish to design effective HIV/STI prevention initiatives for this group. (This point is discussed further in the section on limitations and research/practice implications.) Unfortunately, similar theories were created to explain substance use and unsafe behavior when Colfax and colleagues (2001) extended their analysis beyond the data they had gathered as part of their study. 
In doing so, these authors interpreted their data (which revealed a strong correlation between drug use and unprotected anal sex with a partner of either sero-unknown or sero-discordant HIV status) to mean that "drug use is influencing participants' decisions to have unprotected sex" (p. 376). They then proposed that drug use induces such outcomes because it makes "participants unknowingly engage in riskier behaviour than they would without drugs" (p. 376). However, a detailed review of Colfax and colleagues (2001) article reveals that they did not collect any data on preexistent sexual intent. Therefore, while their claims are based on empirical research, it may be that these authors incorrectly applied mainstream ideas about drug use to frame their explanations. Unfortunately, this practice, while able to produce solutions that may be valid in some situations, does not necessarily advance current understandings about drug use and its relationship to unsafe behavior at GCPs. Moreover, in an almost identical manner, Ross and colleagues (2003) applied explanations about drug use that were developed based on groups other than GCP participants. This resulted in a declaration that drug use induces cognitive escape. Although this interpretation seems to resemble the research findings presented in this article, further inspection reveals that this is not the case. On the basis of the findings of our research, drugs are not used to escape because "escaping" inherently requires an overcoming of constraining boundaries. Escape is an act in which one "break[s] free from confinement or control" (New Oxford American Dictionary). The participants in our study did not describe using drugs to escape any form of confinement or control. In fact, for these men, escaping one's boundaries meant failing in their attempts at boundary play. If they escaped, and thus moved beyond their personal boundaries, these men, first, invoked the excuse that drugs were the reason for adverse or unwanted outcomes, and then "quit playing" as one participant described it, despite the fact that all participants reported that both drugs and alcohol do not fundamentally change their behavior. This reveals that for the men in this study, escape is not a valid explanation for their behavior. They wanted to explore, not escape, and to move within their boundaries, not overcome them. Furthermore, in the qualitative literature on GCPs, deficit-based explanations were most often invoked to explain the occurrence of drug/alcohol use and unsafe behavior. Lewis and Ross (1995), the first researchers to address GCPs, for example, indicated that GCP-related drug use in Sydney, Australia, was a means by which the 17 GCP attendees who were interviewed escaped from the daily hardships and rejections of that gay men experience within a predominately heterosexual society-a seemingly valid point. However, this raises the question: Are these two authors suggesting that GCPs and their associated drug use would become obsolete if gay men were socially accepted? Kurtz's study (2005) involving four focus groups of 3 to 4 men for a total of 15 men raises the same question. He found that his research participants also used drugs to overcome negative emotions resulting from the difficulties of their daily lives. Next, Westhaver wrote two articles based on an ethnographic study of 35 GCPs across North America between 1998 and 2002. 
On the basis of this extensive data collection, Westhaver (2006) first argued that risky behavior at GCPs occurs as a result of individuals wanting to gain a sense of recognition that "is immanent in, but lacking from, current heteronormative social conditions" (p. 366). Westhaver (2006) then argued in a second article that GCPs should be understood as bodily, not cognitive, experiences of empowerment wherein men can express their homosexuality without reproach. Although this second article is different from the first one, it too was predicated on the belief that risky behavior was the manifestation of underlying deficits. This is somewhat paradoxical because Westhaver (2006) stated in his first article that he rejected the idea that risky practices can be understood as signs of "irrationality or moral depravity" (p. 347). Nevertheless, in this article, the author returned to a deficit-based explanation of risky behaviour-just one that did not label risky practices as either irrational or morally impure. Although the results of these other research projects differ from the results presented here, this by no means serves to diminish the importance or validity of these previously undertaken studies. In contrast, these comparisons were made simply to highlight the differences that were found in this research and in others. As for explanations about why such differences may have occurred, one must remember that, first, a different theoretical orientation was used in this project. This different perspective, while being highly critical, was also both accepting of illicit drug use and sceptical of the notion that unsafe sex is a pathological activity. As a second point, some of the differences that arose could also be explained on the basis of the methodological approaches that were used. This study was an indepth qualitative study, which employed thematic analysis (described earlier), rather than a quantitative study. In addition (and this point applies to the qualitative research as well), differences might have arisen because of the stated study goals. In this study, the explicit purpose was to expound as much detail as possible about the relationship between drug use and unsafe behavior-a goal that did not seem to exist in most of the other research projects. Thus, in contrast to the quantitative studies, which aimed to identify if a relationship existed between these two items, this study produced understandings about the relationship between drug use and unsafe behavior based on semidirected participant input, rather than from the imported use of findings from other, non-GCP-based qualitative studies about substance use and unsafe behavior. This, by no means, diminishes the importance of this previous research, as this previously undertaken quantitative research both justified and identified the need to undertake this project. LIMITATIONS AND HIV PREVENTION/RESEARCH IMPLICATIONS Although the generalizability of the results of this project is limited by the participant recruitment and data collection methods (i.e., that the data arose from only 17 purposively recruited men who attend GCPs, who use drugs, and who engage in unsafe sex as part of their GCP-partying experience), these results nevertheless reveal an interesting narrative about this group of men who may be particularly vulnerable to HIV acquisition/transmission. 
What is important about such specified understandings of this group is that a large quantity of research has demonstrated that general population HIV interventions are not only a poor use of resources, but also are unlikely to effect desired population-level decreases in HIV transmission (see Aral, Lipshutz, &Douglas, 2007 andFenton &Bloom, 2007, for further explanation about this point). The reason for this is that HIV is not a prevalent enough infection across North America and most of Europe to warrant general interventions. Instead, tailored and targeted interventions that address those groups that experience the greatest burden of HIV have consistently demonstrated successful public health outcomes. In other words, because HIV is not distributed evenly throughout the population at high enough levels, HIV prevention strategies yield their greatest impact when they are targeted specifically at those individuals who are most likely to encounter this infection-whether this is due to their sexual, occupational, or drug-using practices (Aral et al., 2007). Bearing this in mind, these results highlight a few points of interest regarding GCP-related HIV prevention. First, these results indicate that some men seem to purposively consume drugs to explore their limits and then to excuse themselves for having done so. From an HIV prevention perspective, this could be important in relation to the underlying assumptions that guide substanceuse-related HIV prevention strategies. Indeed, instead of assuming that all unsafe outcomes that follow substance use are accidental, it is important that HIV prevention workers determine whether this is actually the case among the individuals who they wish to target. This follows the work of Halkitis, Shrem, and Martin (2005), who found that crystal meth did not cause risky sexual practices, but rather men who frequently engaged in unsafe sex regardless of drug use were attracted to crystal meth. This raises an important point about the need for more indepth research, which addresses substance use and unsafe sex. In addition, these results raise the second point that HIV prevention workers working in STI clinics, public health or community-based workers who design HIV prevention interventions, or sociobehavioral researchers and students, should not accept the statement, "drugs made me do it" without further investigation. This does not mean, however, that such statements should be rejected; rather, the findings here simply indicate that, at times, for some individuals, this statement is used as an excuse and should be approached with a critical scepticism. For frontline clinicians and researchers who engage with individuals in a one-on-one basis about their behavior, this simply means that more questions are needed. Dig deeper to differentiate between genuine drug-induced behavior and which drugs are simply excusing. Third, from a methodological standpoint, the results presented herein stress how important it is not to simply stop asking questions because the data correspond to socially held, socially acceptable, or personally held ideas that drugs cause unsafe behavior. Although our critical poststructuralistic theoretical position means that we most certainly do not recommend adopting bracketing as a method to address this potentially confounding factor, the findings here do support the idea that researchers need to be forthcoming in their assumptions and opinions about the topics that they investigate. 
This holds true for both qualitative and quantitative research. Nevertheless, it is important to remember that because of the small and focused nature of this research study, the foregoing results and suggestions should not be generalized to all drug and alcohol use. Indeed, they, as is always the case, should be used with extreme caution.

FINAL REMARKS

In conclusion, the results of this research indicate that GCP-related drug use can be understood as a form of boundary play, a process of movement between the sober self and the intoxicated self. Indeed, it is possible to claim that drugs (including alcohol) permitted the 17 men in this study to deliberately explore their own boundaries and, later, to excuse themselves for having gone to extremes. As such, boundary play should not be confused with an intentional destruction of limits to the point of irrevocable negative changes. It is important to emphasize that drugs can be mechanisms that individuals use to diminish problems that may result from actions undertaken while intoxicated. That is, boundary play is an attempt to remain safe despite having intentionally placed oneself in potentially unsafe conditions. Such results, while conflicting with the results of the 10 research studies that have addressed GCPs (because these other results, from a Deleuzian and Foucauldian perspective at least, seem to have concluded that drug use and its associated risky practices are the outcomes of underlying psychopathology), did not describe drug use as the result or outcome of underlying negative feelings. Rather, these practices were viewed as a method by which the research participants explored and experienced new sensations, and then excused this behavior ex post facto. In the language of this article, drug use thus permitted the participants to engage in boundary play. Nonetheless, returning to the fact that this finding arose from the interview data of only 17 participants, it is important that researchers further explore if/how this conceptual framework can be used to explain/predict the behavior of other groups that both consume drugs and engage in unsafe practices while intoxicated with these substances.

Declaration of Interest

The authors report no conflicts of interest. The authors alone are responsible for the content and writing of the article.

Deliberate drug consumption as a prelude to risky sexual practices: a qualitative exploration of gay circuit parties

Current research findings maintain that gay circuit parties are environments conducive to the transmission of STIs and HIV among gay/bisexual men. Some researchers add that drug taking and risky sexual practices are common behaviors in these environments. To explore the relationships between these two behaviors,
2014-10-01T00:00:00.000Z
2011-06-21T00:00:00.000
{ "year": 2011, "sha1": "c1591005c2a6b09385aa03740fc2a675886aad95", "oa_license": "CCBY", "oa_url": "https://www.tandfonline.com/doi/pdf/10.3109/10826084.2011.572329?needAccess=true", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "0af1ce429dea40a04c9937aef5128afbef37ccfd", "s2fieldsofstudy": [ "Sociology" ], "extfieldsofstudy": [ "Psychology", "Medicine" ] }
258199995
pes2o/s2orc
v3-fos-license
Utilizing Robust Design to Optimize Composite Bioadhesive for Promoting Dermal Wound Repair

Catechol-modified bioadhesives generate hydrogen peroxide (H2O2) during the process of curing. A robust design experiment was utilized to tune the H2O2 release profile and adhesive performance of a catechol-modified polyethylene glycol (PEG) containing silica particles (SiP). An L9 orthogonal array was used to determine the relative contributions of four factors (the PEG architecture, PEG concentration, phosphate-buffered saline (PBS) concentration, and SiP concentration) at three factor levels to the performance of the composite adhesive. The PEG architecture and SiP wt% contributed the most to the variation in the results associated with the H2O2 release profile, as both factors affected the crosslinking of the adhesive matrix and SiP actively degraded the H2O2. The predicted values from this robust design experiment were used to select the adhesive formulations that released 40–80 µM of H2O2 and evaluate their ability to promote wound healing in a full-thickness murine dermal wound model. The treatment with the composite adhesive drastically increased the rate of the wound healing when compared to the untreated controls, while minimizing the epidermal hyperplasia. The release of H2O2 from the catechol and soluble silica from the SiP contributed to the recruitment of keratinocytes to the wound site and effectively promoted the wound healing.

Introduction

Rapid dermal healing requires a balance of redox control [1,2]. During the early phases of the wound healing process, macrophages and neutrophils are attracted to the wound site and release reactive oxygen species (ROS), such as hydrogen peroxide (H2O2), at concentrations within the micromolar range. H2O2 induces vascular endothelial growth factor (VEGF) expression in keratinocytes [3], which stimulates angiogenesis in wounds [4]. ROS are also necessary in the differentiation of M2 macrophages [5], which promotes tissue regeneration and anti-inflammatory responses [6,7]. The application of ROS to chronic ulcers (e.g., the direct application of H2O2, hyperbaric treatment to enhance ROS concentration, and the application of honey) has been found to accelerate healing [2,8]. Additionally, ROS are a natural disinfectant and can prevent bacterial infection [9]. However, high levels of ROS can severely damage healthy tissues, which can lead to the formation of chronic wounds and tumor initiation [10,11]. Biomaterials supplemented with antioxidants have been found to accelerate wound healing, reduce chronic inflammation, and increase biocompatibility [12,13]. However, the complete removal of ROS delays wound healing [2]. As such, tuning the ROS release from a bioadhesive is necessary for promoting rapid dermal wound healing.

Catechol-modified bioadhesives have been widely adopted as biomaterials for various applications, ranging from soft tissue repair to tissue engineering [14][15][16][17]. Catechol mimics the crosslinking and interfacial bonding chemistry of the amino acid 3,4-dihydroxyphenylalanine (DOPA), which is the main adhesive molecule found in mussel adhesive proteins. To activate a catechol-based adhesive for curing and adhesion, an oxidant such as tyrosinase or sodium periodate (NaIO4) is typically added to initiate the catechol's oxidation and crosslinking [18][19][20].
During the process of catechol oxidation, micromolar amounts of H2O2 are generated as by-products [21,22]. To modulate the amount of released H2O2, we have previously incorporated highly porous and micron-sized silica particles (SiP) into a catechol-functionalized branched polyethylene glycol (PEG) [23]. The silanol (SiOH) surface of these particles absorbs H2O2 through complexation with water molecules and facilitates the decomposition of H2O2 to water and oxygen [24]. Additionally, SiP provides cellular binding sites to enhance the bioactivity of the bioinert, synthetic PEG-based adhesive [23]. While these previous studies have demonstrated the ability of SiP to control the H2O2 production of catechol and improve the biocompatibility of catechol-based bioadhesives [23,25], these materials are not specifically tailored for a given application. Given that the ideal concentrations of H2O2 needed to promote successful wound healing outcomes are different between tissue types, it is necessary to modulate the release of H2O2 specifically toward dermal wound healing. In this study, we sought to simultaneously tune the H2O2 release profile and adhesive performance of a PEG-catechol adhesive incorporated with SiP (Scheme 1) and utilize the composite adhesive for dermal wound repair. However, there are a large number of parameters that may simultaneously affect both the H2O2 release profile and the functional performance of the adhesive. This yields a multitude of potential adhesive formulations, which can be time consuming to screen. For example, increasing the number of arms in the PEG architecture and PEG concentration effectively increases the crosslinking density of the adhesive, which will enhance the rate of the adhesive curing and adhesive strength [26]. The level of H2O2 released from a polymer network will depend on the crosslinking density of the matrix, which will trap the generated ROS [22]. Similarly, increasing the SiP concentration in a composite will increase the crosslinking density of the adhesive matrix and enhance both the curing rate and adhesive strength [23]. The concentration of the SiP will also affect the extent of the H2O2 degradation [25]. Finally, the H2O2 can acidify the surrounding media [27], which can reduce the rate of the catechol crosslinking and adhesive strength [19]. The concentration of the phosphate-buffered saline (PBS) will be used to minimize the effect of the released H2O2 while maintaining the buffering capacity of the PBS. The main objective of this paper is to optimize a catechol-based composite adhesive for dermal wound healing. To this end, the contribution of different parameters to the H 2 O 2 release profile and adhesive performance, such as the SiP content and adhesive polymer's architecture and concentration, was evaluated. To screen a large library of adhesive formulations more efficiently and effectively, a robust design experiment was employed. A robust design experiment uses an orthogonal array to make pairwise comparisons between factor levels and permits the investigator to reliably estimate the factor effects with fewer experiments [28,29]. An L 9 orthogonal array was used to determine the relative contributions of four factors (the PEG architecture, PEG concentration, PBS concentration, and SiP concentration) to the performance of the composite adhesive, as measured by its gelation time, adhesive strength, and the concentration of the H 2 O 2 generated. 
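Because the four factors at three levels each are screened through an L9 orthogonal array rather than a full factorial, only nine formulations need to be mixed and tested. The sketch below (Python) lays out the standard L9(3^4) array as an illustration; only the PEG-architecture labels (4-, 6-, 8-arm) are taken from the text, the remaining level labels are placeholders for the values listed in Tables 1 and 2, and the row ordering is the conventional one, chosen because it matches the groupings mentioned later (Formulations 1–3 share the 4-arm PEG, and Formulations 1, 4, and 7 share the lowest PEG concentration).

```python
# Sketch: the standard L9(3^4) orthogonal array used to plan nine composite
# formulations from four factors at three levels each.
L9 = [  # rows = formulations 1..9, columns = factors A, B, C, D (levels 1..3)
    (1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
    (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
    (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1),
]

factor_names = ("PEG architecture", "PEG concentration", "PBS concentration", "SiP concentration")
levels = {
    "PEG architecture": {1: "4-arm", 2: "6-arm", 3: "8-arm"},
    # Placeholder labels; substitute the concentrations listed in Table 1.
    "PEG concentration": {1: "B1", 2: "B2", 3: "B3"},
    "PBS concentration": {1: "C1", 2: "C2", 3: "C3"},
    "SiP concentration": {1: "D1", 2: "D2", 3: "D3"},
}

for i, row in enumerate(L9, start=1):
    desc = ", ".join(f"{name}={levels[name][lvl]}" for name, lvl in zip(factor_names, row))
    print(f"Formulation {i}: {desc}")
```

Each pair of columns in this array contains every combination of levels exactly once, which is what allows the factor effects to be separated with only nine runs.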
The candidate adhesive formulations were selected based on the robust design experiment and their efficacy in promoting dermal wound healing was further evaluated in a full-thickness dermal wound model in mice. Robust Design Experiment An L9 orthogonal array was used to determine the relative contributions of four factors to the performance of the composite adhesive (Table 1) [28][29][30]. These factors were: (A) the PEG architecture, (B) the PEG precursor concentration, (C) the PBS concentration, and (D) the SiP concentration. Each factor was tested at 3 levels (e.g., A1, A2, and A3 for the PEG architecture, corresponding to 4-arm, 6-arm, and 8-arm, respectively). To determine the effect of the 4 factors, each at 3 levels, the orthogonal array required the testing of nine adhesive formulations (Table 2). The gelation times, lap shear adhesion strengths, and maximum H2O2 concentrations of these nine adhesive formulations were determined, and these experimental values were utilized to determine the % relative variation, or the relative contribution of each factor to the measured outcomes. Additionally, these results were further used to predict the performance of all 3^4 = 81 possible formulations. Preparation of the Composite Adhesive In total, nine adhesive formulations were prepared based on the desired factors and factor levels shown in Table 2. Polymer precursor solutions were prepared by dissolving the corresponding PEG adhesives with the corresponding PBS solutions, according to Table 2. The composite adhesives were prepared by mixing equal volumes of the polymer precursor and NaIO4 (11.6 mg/mL in deionized water) solutions [25]. NaIO4 was used to oxidize the catechol and initiate the adhesive curing [18,19]. After mixing, the final concentrations of the PEG, PBS, and SiP in the composite adhesive would be reduced by half. As such, it was necessary to double the concentrations of the PEG, PBS, and SiP in the precursor solutions so that their final concentrations were reduced to the desired concentrations shown in Table 2. Characterization of the Composite Adhesive The time it took for the composite adhesive to cure was determined by using the vial tilting technique, as described in previous publications [18,19]. Briefly, 100 µL of the polymer precursor and 100 µL of the 11.6 mg/mL NaIO4 dissolved in deionized water were mixed in a vial. The concentrations of the adhesive polymer and the SiP in the polymer precursor solution were prepared based on Table 2. The moment that the adhesive mixture ceased to flow in a tilted vial was recorded as the gelation time. A lap shear adhesion test was performed using strips of bovine pericardium (2.5 cm × 2.5 cm) as the test substrate, following ASTM standard F2255-05 [31]. In total, 100 µL of the polymer precursor and 100 µL of 11.6 mg/mL of the NaIO4 solutions were mixed in a glass vial and quickly added onto a piece of pericardium tissue. A second piece of pericardium tissue was then applied over the adhesive to create an adhesive joint with an overlapping area of 1 cm × 2.5 cm. The adhesive joint was weighted down using a 100 g weight for 15 min and further incubated in the PBS (pH = 7.4) at 37 °C overnight. The dimensions of the overlapped area were measured using a digital caliper for each sample before the adhesion testing. The samples were pulled to failure using an Electroforce® machine (Bose Electroforce Group, Eden Prairie, MN, USA) at a speed of 0.1 mm/s.
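The reported lap shear strength is the maximum load at failure divided by the measured overlap area (ASTM F2255), as stated in the next section; a minimal sketch of that calculation, with a hypothetical failure load, is shown below.

```python
# Minimal sketch of the lap shear calculation (ASTM F2255):
# adhesive strength = maximum load at failure / overlapped bond area.
def lap_shear_strength_kpa(max_load_n: float, width_mm: float, length_mm: float) -> float:
    """Return lap shear strength in kPa from load (N) and overlap dimensions (mm)."""
    area_m2 = (width_mm / 1000.0) * (length_mm / 1000.0)
    return max_load_n / area_m2 / 1000.0  # Pa -> kPa

# Hypothetical numbers: a 1 cm x 2.5 cm overlap failing at 1.5 N corresponds to
# 1.5 N / 2.5e-4 m^2 = 6000 Pa = 6 kPa.
print(round(lap_shear_strength_kpa(1.5, 10.0, 25.0), 2))  # -> 6.0
```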
The lap shear adhesive strength was calculated by dividing the maximum load by the overlapped area of the adhesive joint. A rheological analysis was performed using a Discovery Hybrid Rheometer (TA Instruments, New Castle, DE, USA), using cure adhesive samples that were cut to a disc shape (diameter = 10 mm, thickness = 5 mm, and n = 3). Amplitude sweep experiments (0.1-100% at 0.1 Hz) were performed using parallel plates at a gap distance that was set to be 87.5% of the thickness of the individual sample, as measured by a digital caliper. A PerkinElmer Spectrum One spectrometer was used to perform a Fourier transform infrared (FTIR) spectroscopy analysis on the freeze-dried adhesive samples. FOX assay was utilized to quantitatively measure the amount of H 2 O 2 generated from the composite adhesive [22]. The adhesives were cured in the shape of a disc with a diameter of 10 mm and incubated with 1.5 mL of Dulbecco's modified eagle medium (Corning Cellgro, Manassas, VA, USA) DMEM with 10% (v/v) fetal bovine serum (FBS) and 0.5% (v/v) Penicillin-Streptomycin with phenol red (pH = 7.4) for 6 h at 37 • C. The concentration of the H 2 O 2 was determined by mixing 20 µL of the hydrogel extract with 200 µL of the FOX reagent, incubating the mixture at room temperature for 20 min, and then analyzing the absorbance of the mixture via a plate reader (SynergyTM HT, BioTek, Santa Clara, CA, USA) at 590 nm. Full Thickness Dermal Wound Repair Model The ability of the adhesive to promote wound healing in a full-thickness wound model was examined using the published protocols with minor modifications ( Figure S4, Supplementary Materials) [32][33][34]. The protocol (Board Ref# L0270) was approved by the Institutional Animal Care and Use Committee (IACUC) at Michigan Technological University on 12/14/2015. Briefly, 17 healthy female wild-type C57BL/6J mice (#000664, the Jackson Laboratory; age 9-10 weeks, weight 20 g) were anesthetized with isoflurane. The hair of the animals was removed from the potential dorsal wound sites with an electric shear and hair removal cream. The next day, the mice were anesthetized and 2 wounds were created bilaterally on the back of the mice using a 5 mm tissue punch. A medical-grade silicon ring (outer diameter = 10 mm and inner diameter = 6 mm) was fixated around each wound using cyanoacrylate glue and 5-0 nylon sutures. The ring served as a splint to minimize the skin movement and a reduction in the wound size as a result of skin contraction. The wound was either left untreated (control) or was treated with one of four adhesive formulations. The number of repetitions per treatment per time point was three. The adhesive was left undisturbed for 2 min to enable it to solidify before the wound was covered with a non-adhering dressing (Adaptic ® ) and then a breathable adhesive film (Hydrofilm ® ). A larger piece of Hydrofilm ® was further utilized to seal the wounds from their surroundings. Buprenorphine (1 mg/kg) was administered for three days to ensure the animal's comfort. Images of the wound were taken to determine the size of the wound using an Olympus stereo microscope with a video capture module. On days 7 or 14, the mice were euthanized via CO 2 asphyxiation and the tissues surrounding the wounds were collected for further analyses. Histological and Immunological Analysis of Dermal Wounds The harvested tissue samples from the wound site were fixed in Polyfreeze ® , flashfrozen in liquid nitrogen, and stored for up to 4 weeks at −80 • C. 
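For the FOX assay described above, the absorbance read at 590 nm is converted to an H2O2 concentration through a calibration against known H2O2 standards; the sketch below assumes a simple linear standard curve, and all calibration numbers are hypothetical.

```python
import numpy as np

# Sketch: converting FOX-assay absorbance (590 nm) to H2O2 concentration with a
# linear standard curve. Standard concentrations/absorbances are hypothetical;
# a real calibration would use freshly prepared H2O2 dilutions.
std_conc_uM = np.array([0.0, 25.0, 50.0, 100.0])   # known H2O2 standards (µM)
std_abs = np.array([0.05, 0.21, 0.37, 0.69])        # measured A590 (hypothetical)

slope, intercept = np.polyfit(std_conc_uM, std_abs, 1)  # A = slope*C + intercept

def h2o2_concentration_uM(absorbance_590nm: float) -> float:
    """Invert the linear standard curve to estimate H2O2 concentration (µM)."""
    return (absorbance_590nm - intercept) / slope

print(round(h2o2_concentration_uM(0.45), 1))  # ~62.5 µM for A590 = 0.45 with this hypothetical curve
```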
The tissue sections with a thickness of 10 µm were obtained using a cryomicrotome and further mounted onto Histobond® slides. A total of 10 mounted tissue slides, each containing 2 slices of tissue, were produced for each tissue sample. A histological analysis was performed using Masson's trichrome staining to evaluate the wound morphology, epidermis thickness, and collagen content [23]. Additionally, keratin 6 staining was used to identify keratinocytes and to determine the wound maturity, using a previously established protocol with minor modifications [35]. Specifically, toluidine blue was replaced with hematoxylin. The samples were rinsed with tap water, immersed for 1 to 5 min in the hematoxylin solution, and further washed using running tap water until the water became colorless. The samples were then dipped 10 times in an acetic acid solution (2 mL glacial acetic acid in 98 mL deionized water), 10 times in cool tap water, 5 times in a bluing solution (0.3 mL NH4OH in 100 mL tap water), and 20 times in tap water. The samples were mounted using a permanent mounting medium, stored at 4 °C overnight, and imaged using an EVOS microscope under polarized light. The overlaid images were processed using the auto stitching module in Adobe Photoshop (version 24.1.1, Adobe, San Jose, CA, USA) and analyzed using the wound healing tool macro in ImageJ [36]. The dermal wound tissue slides were stained with an anti-CD68 antibody and Alexa Fluor 488 to visualize CD68-positive macrophages, and with DAPI to visualize all cell nuclei and determine the overall macrophage population at the wound site [23]. The samples were submerged in 100% ethanol for 2 min and washed 3 times in the PBS with Tween 20 (PBST; 5 min each time). A hydrophobic marker was used to draw a circle around each sample. The samples were incubated in 10% goat serum diluted with 1% bovine serum albumin for 60 min, in a 1/100 dilution of the primary anti-CD68 antibody for 12-14 h at 4 °C in a humidified chamber, in a 1/200 dilution of the secondary antibody (goat anti-rabbit IgG H&L) for 60 min at room temperature, and in DAPI (1/1000 dilution) for 3 min. After each incubation step, the samples were washed using PBST 3 times, for 5 min each time. The samples were mounted using an aqueous mounting solution and imaged immediately after the staining with an Olympus fluorescence microscope. Statistical Analysis The statistical analysis was performed using JMP Pro 13 (SAS Institute, NC, USA). A one-way analysis of variance (ANOVA) with a Tukey-Kramer HSD analysis was performed using a p value of 0.05. Robust Design Experiments An L9 orthogonal array was used to determine the relative contributions of four factors (the PEG architecture, PEG concentration, PBS concentration, and SiP concentration) to the performance of the composite adhesive [28,29]. To examine the effect of these four factors, each at three factor levels, the robust design experiments required the testing of nine formulations (Table 2). The gelation times, adhesive strengths, and the amounts of H2O2 generated for each of the nine formulations were determined (Table S1, Supplementary Materials). The effect of each factor on the performance of the adhesive can be observed in Figure 1. In these plots, the experimental values are plotted against the corresponding factor levels. For example, factor level A1 corresponds to the three data points associated with the four-arm PEG (Table 1), which included data from Formulations 1, 2 and 3 (Table 2).
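The factor-effect plots in Figure 1 are built by grouping the nine measured responses according to the level of one factor and averaging them; a minimal sketch of that grouping is shown below, with placeholder response values standing in for the measurements reported in Table S1.

```python
import numpy as np

# Sketch of the factor-effect grouping behind Figure 1: average the nine
# responses over the three formulations that share a given factor level.
L9 = np.array([
    [1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
    [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
    [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1],
])  # columns: A = PEG architecture, B = PEG conc., C = PBS conc., D = SiP wt%

gelation_time_s = np.array([300, 120, 60, 600, 90, 45, 540, 75, 50])  # hypothetical values

def level_means(factor_column: int, response: np.ndarray) -> dict:
    """Mean response at each level (1-3) of one factor."""
    return {lvl: response[L9[:, factor_column] == lvl].mean() for lvl in (1, 2, 3)}

print(level_means(1, gelation_time_s))  # effect of the PEG concentration (factor B)
```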
The slope of the linear trend lines indicated how each factor contributed to the performance. For example, the gelation time decreased with an increasing PEG concentration factor level (from B1 to B3). As expected, an increasing polymer concentration increased the rate of curing. However, increasing the SiP wt% had an opposite effect. This is somewhat unexpected, given that the incorporation of SiP as a filler increases the matrix crosslinking density, which should theoretically result in an increased rate of curing [23]. These results may be skewed due to the fact that two of the slowest curing formulations (Formulations 4 and 7) consisted of 75 mg/mL of the PEG, and the low polymer concentration contributed to a slower rate of curing. For the adhesive property (Figure 1B), the adhesive strength increased with the PEG branching and PEG concentration. Increasing the level of branching increased the crosslinking density and bulk mechanical property of the adhesive, which contributed to a higher lap shear strength [26]. Catechol is responsible for strong interfacial bonding and the catechol concentration increased with an increasing PEG concentration. For the formulations with the lowest PEG concentration of 75 mg/mL (Formulations 1, 4, and 7), no measurable adhesive strength was obtained, which may be due to the low catechol content in these formulations. The H2O2 concentration decreased with an increase in the PEG branching and SiP wt% (Figure 1C). Increasing the degree of the PEG branching increased the crosslinking density of the adhesive network, which could potentially trap the generated H2O2 within the adhesive network and limit the amount of H2O2 released from the adhesive [23]. Similarly, increasing the SiP wt% also increased the crosslinking density of the composite adhesive. Additionally, the porous SiP actively decomposed the H2O2 [25]. To determine the relative degree to which a particular factor affected the performance of the composite adhesive, the measured results were used to determine the statistical coefficient signal-to-noise ratio (S/N) (Figure S5, Supplementary Materials), which is a logarithmic function of the experimental values (Equation (S1)) [28,29].
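The exact S/N definition used by the authors is Equation (S1) in the Supplementary Materials, which is not reproduced here; the sketch below assumes the standard Taguchi forms, with the smaller-the-better criterion for responses to be minimized (e.g., gelation time) and the larger-the-better criterion for responses to be maximized (e.g., adhesive strength).

```python
import numpy as np

# Sketch of the signal-to-noise (S/N) transform, assuming the standard
# Taguchi definitions (the paper's exact form is Equation (S1)).
def sn_smaller_is_better(y):
    """S/N = -10*log10(mean(y^2)); used when a small response (e.g., gelation time) is desired."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

def sn_larger_is_better(y):
    """S/N = -10*log10(mean(1/y^2)); used when a large response (e.g., adhesive strength) is desired."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

# Hypothetical replicate measurements for one formulation:
print(sn_smaller_is_better([62, 58, 65]))    # gelation time in seconds
print(sn_larger_is_better([5.8, 6.1, 5.5]))  # lap shear strength in kPa
```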
As such, the change in the S/N values, as a function of the changing factor levels, mirrored those of the experimental values reported in Figure 1. S/N ratios were further used to determine the % relative variation or the relative contribution of each factor to the measured outcomes: the gelation time, adhesive strength, and H 2 O 2 concentration (Table 3). The PEG concentration contributed the most to the gelation time and adhesive strength (78.6% and 93.8%, respectively) of the composite adhesive. This indicates that the PEG concentration explains the largest portion of the variation in these two data sets. Similarly, the PEG architecture contributed the most to the measured H 2 O 2 concentration (65.6%). The SiP wt% was also a minor contributor to the measured gelation time and H 2 O 2 concentration, with percent relative variation values of 18.2% and 20.7%, respectively. The contribution of the PBS concentration was less than 6% for the three adhesive performances measured, indicating that its contribution was insignificant. Prediction Based on Robust Design Experiment The S/N ratios were further utilized to make adhesive performance predictions for the 81 possible formulations (Tables S2-S4) [28]. These predicted values were utilized to select the suitable formulations for the subsequent dermal wound healing model in mice (Tables S5 and S6). All the formulations were selected with the highest PEG concentration of 150 mg/mL, as this factor level yielded the lowest gelation time and the highest adhesive strength. The chosen four formulations also had similar predicted adhesive strength values, ranging from 4.5 to 6.2 kPa. Given that the PBS concentration contributed minimally to the adhesive performance, a 1× PBS concentration was chosen. To evaluate the effect of the H 2 O 2 concentration on the wound healing, we selected three composite adhesive formulations with increasing branching within the PEG architecture (PEG-D4-Si, PEG-D6-Si, and PEG-D8-Si). The predicted values of the H 2 O 2 concentration decreased from 86 to 39 µM, with increased branching. Formulations with 10wt% SiP were also chosen, as these formulations released a H 2 O 2 concentration (50-100 µM) that was in the range that was previously determined to be suitable for wound healing [10,37]. All three formulations contained the same concentrations of PEG and SiP to minimize the effect of the composition on the dermal wound healing. Additionally, PEG-D6 was included as a control and chosen to be compared with PEG-D6-Si, in order to determine the effect of the SiP on the dermal wound repair. FTIR spectra of the composite adhesive exhibited characteristic peaks of Si-OH at 960 cm −1 and Si-O-Si at 1089 cm −1 , which confirmed the presence of SiP within the PEG adhesive ( Figure 2). Additionally, oscillatory rheometry confirmed that the adhesives were fully solidified, as the storage modulus (G ) values were higher than those of the loss modulus (G") values (Figure 3). The G values for the different adhesive formulations averaged around 30 kPa. Validating Results from Robust Design Experiment The adhesive performances and amounts of H 2 O 2 generation of the four chosen adhesive formulations were determined and compared with their predicted values ( Figure 4). For both the gelation time and adhesive strength, the predicted values matched the experimental values for PEG-D6, which did not contain SiP (a percentage difference of 0.2 and 1.0%, respectively). 
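The prediction step mentioned above, in which the nine runs are extrapolated to all 81 factor-level combinations, is commonly done with an additive main-effects model over the S/N ratios; the sketch below assumes that approach and uses hypothetical S/N values, since the measured ones are in Table S1.

```python
import numpy as np

# Sketch of the additive (main-effects) prediction used to estimate the
# performance of all 3^4 = 81 factor-level combinations from the nine runs.
L9 = np.array([
    [1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
    [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
    [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1],
])
eta = np.array([-35.0, -28.0, -24.0, -40.0, -26.0, -22.0, -39.0, -25.0, -23.0])  # hypothetical S/N values

grand_mean = eta.mean()
level_mean = {(f, l): eta[L9[:, f] == l].mean() for f in range(4) for l in (1, 2, 3)}

def predict_sn(levels):
    """Additive-model prediction for a combination such as (A, B, C, D) = (3, 3, 1, 2)."""
    return grand_mean + sum(level_mean[(f, l)] - grand_mean for f, l in enumerate(levels))

print(predict_sn((3, 3, 1, 2)))
```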
However, the predictions associated with the SiP-containing formulations were generally poor. The predicted gelation times for the SiP-containing formulations were 4-5 times higher than the experimental values, with a percentage difference of 120-135%. Similarly, the predicted adhesive strength decreased with an increase in the number of PEG arms, which contradicted the actual experimental data (a percentage difference of 20-45%). When testing the nine adhesive formulations during the robust design experiment, the gelation time increased unexpectedly with an increasing SiP content (Figure 1A), and the adhesive strength decreased unexpectedly with an increasing SiP content (Figure 1B). Both these findings contradict the prior reported findings, where filler concentrations have increased the curing rates and adhesive performances of composite adhesives [23,25]. The observed discrepancy between the predicted and experimental values may be due to the unexpected effect of the formulations that were chosen for the robust design. Formulations 4 and 7 exhibited the highest gelation times and lowest adhesion strengths that were measured, which was most likely due to the low PEG concentration (75 mg/mL) in these formulations, rather than a higher concentration of the SiP. The combination of low adhesive polymer concentrations and high SiP concentrations limited the adhesive's ability to cure and form the strong and cohesive polymer network that is needed to achieve a strong adhesion. However, at a higher adhesive polymer concentration, the SiP contributed to forming a more cohesive network, which enhanced the curing rate and adhesive strength. As such, a higher PEG concentration may be needed for future robust design experiments. On the other hand, the predicted H2O2 concentrations matched the experimentally determined values very well, with a percentage difference of 4-14% for the four formulations tested. As expected, the H2O2 concentration decreased with an increasing number of PEG arms, as a more densely crosslinked network trapped the generated H2O2 [22]. The PEG-D6-Si also generated less H2O2 when compared to the PEG-D6, due to the presence of SiP that could decompose the generated ROS. Although the robust design experiment was not accurate in predicting the gelation time and adhesive strength of the composite adhesive, it provided useful guidance in selecting the adhesive formulations based on the amounts of H2O2 generation. Dermal Wound Closure The ability of the composite adhesive to promote dermal wound healing was evaluated using a full-thickness wound healing model in mice. A circular wound with a wound area of 0.24 cm2 was created (Figure 5). By day 7, the wound sizes of the adhesive-treated wounds were significantly smaller when compared to the control wound, which was left untreated (Figure 6). Particularly, the wound sizes were reduced by 33-37% for the SiP-containing adhesives, PEG-D6-Si and PEG-D8-Si.
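Wound closure is reported as the fractional reduction of the initial wound area; a minimal sketch of that calculation, using the reported 0.24 cm2 starting area and a hypothetical day-7 measurement, is shown below.

```python
# Sketch: percent wound closure relative to the initial wound area.
def percent_closure(initial_area_cm2: float, current_area_cm2: float) -> float:
    return 100.0 * (initial_area_cm2 - current_area_cm2) / initial_area_cm2

initial = 0.24        # cm^2, initial punch-wound area as reported above
day7_treated = 0.15   # cm^2, hypothetical measurement from a stereo-microscope image
print(round(percent_closure(initial, day7_treated), 1))  # -> 37.5 (% closure)
```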
By day 14, the wounds treated with the PEG-D4-Si and PEG-D6-Si were found to have the smallest wound sizes, with a reduction in the wound area that was greater than 90%. Masson's trichrome histological staining of the wounds was used to evaluate the wound morphologies, determine the epidermal thicknesses, and determine the collagen contents (Figures 7 and 8). From the images captured on day 7, the wounds appeared to be irregular in shape, resulting from dermal contractions [38]. Among the SiP-containing adhesives, the PEG-D4-Si-treated wounds exhibited the most prominent level of granulation tissue. This observation may be attributed to the elevated level of H2O2 released by the PEG-D4-Si (~80 µM), when compared to those released by the PEG-D6-Si and PEG-D8-Si. Additionally, the adhesive-treated wounds exhibited a thicker epidermis when compared to the untreated control (Figure 9A). This epidermal hyperplasia, or thickening of the epidermis, is likely due to the application of H2O2 to the wound site [10,39]. Among the composite adhesives, PEG-D4-Si released the highest amount of H2O2 and resulted in the thickest epidermis layer that was measured. The PEG-D6-treated wound also exhibited a thicker epidermis, but this increase was not significantly different from the other adhesive-treated wounds. This indicated that the increase in the thickness of the regenerated epidermis was not only affected by the level of H2O2, but was also affected by the SiP, which can release soluble silica.
Additionally, an accumulation of hyaluronic acid at the wound site could be attributed to the observed thickening of the epidermis during the early phase of wound healing [40]. Hyaluronic acid promotes keratinocyte activation and the proliferation that is necessary for rapid re-epithelialization [41]. By day 14, the tissue samples exhibited wound remodeling for all the adhesive-treated samples. There was a reduction in the wound size and an increase in the granulation tissue for all the adhesive formulations (Figure 8). The wounds treated with the composite adhesives all demonstrated a reduction in their collagen content (Figure 9B), with the PEG-D6-Si-treated wounds demonstrating the lowest collagen content. This suggests that the wound was remodeled successfully by day 14, with reduced granulation tissue [32,42]. Most importantly, treatment with an SiP-containing adhesive resulted in a reduction in the epidermal thickness, with PEG-D6-Si demonstrating the thinnest epidermal layer of around 25 µm. The epidermal thicknesses in these treatment groups were similar to those of healthy mice (~20 µm) [43]. This indicates that treatment with an SiP-containing adhesive regenerates new tissues, closely resembling those of a healthy epidermis. On the other hand, both the untreated control and the treatment with the SiP-free PEG-D6 resulted in an increase in the epidermal thickness, indicating prolonged epidermal hyperplasia. The wounds were further stained with keratin-6 (Figures 10 and 11) and CD68 (Figures 12 and 13) to evaluate the presence of keratinocytes and macrophages at the wound site. On day 7, the percentage of the keratin-6-positive cells in the untreated controls was significantly lower (~20%) when compared to the adhesive-treated wounds (Figure 14A). The release of H2O2 from the adhesive likely recruited immune cells and served as a chemotaxis source for recruiting the keratinocytes. For PEG-D6-Si and PEG-D8-Si, the percentages of the keratin-6-positive cells were around 80%.
This indicated that treatment with an adhesive increased the keratinocyte recruitment to the wound site at an early time point. The presence of keratinocytes indicated the maturation of the skin. The maturation of keratinocytes leads to skin cornification, which provides a protective outer layer for the underlying dermal tissue [44]. Additionally, these keratinocytes were concentrated in the epidermis and its surroundings, resembling the structure of healthy skin tissue [42,45]. The released H2O2 likely recruited the keratinocytes to the wound site and promoted their proliferation as a response to oxidative stress [10,37]. Similarly, soluble silica has previously been demonstrated to induce keratinocyte migration and proliferation [25,46]. By day 14, the controls exhibited elevated keratin-6-positive cells compared to the adhesive-treated wounds. For the wounds treated with the composite adhesives, although the average keratin-6-positive cells in the area surveyed was around 40%, this value was over 80% near the epidermis. This indicates that the adhesive treatment promoted an early keratinocyte recruitment and the maturation of the healed wound [47,48]. In the untreated control, the percentage of the CD68-positive cells was found to be at around 18% on day 7, which was later reduced to around 4% by day 14 (Figure 14B). On day 7, both the PEG-D6- and PEG-D6-Si-treated wounds exhibited significantly lower CD68-positive cells when compared to the control, indicating a reduced macrophage recruitment. Conversely, on day 14, the percentages of the CD68-positive cells for all the adhesive-treated wounds were significantly higher than that of the control. Although this increase in the macrophage population may suggest a prolonged immune response, CD68 does not distinguish between the types of macrophages that are present. There are two types of macrophages: M1 macrophages, which are involved in the inflammatory response, and M2 macrophages, which are involved in matrix remodeling, the suppression of inflammatory responses, and tissue regeneration [5]. Although additional work may be required to distinguish these two macrophage types, the combined results of the reduced collagen content and the regeneration of the epidermal thickness to a value similar to that of healthy tissues suggest that the inflammatory response ceased by day 14 for the composite adhesive-treated wounds. Taken together, a catechol-containing bioadhesive that generates H2O2 as a by-product, in combination with SiP, can be used to promote dermal wound healing without resulting in epidermal hyperplasia. Specifically, PEG-D6-Si was found to be the ideal formulation for accelerating this dermal wound healing.
PEG-D6-Si demonstrated a combination of fast wound closure and the regeneration of the thin epidermis that is found in healthy skin. PEG-D6-Si released around 60 µM of H2O2, which is consistent with previous findings that have indicated that this level of H2O2 is desirable for promoting wound healing [10,37,49]. In addition to H2O2, the incorporation of SiP also contributed to promoting this wound healing, as the wounds that were treated with the SiP-free PEG-D6 resulted in epidermal hyperplasia. The incorporated SiP can release soluble silica, which has been demonstrated to induce keratinocyte and fibroblast proliferation and migration, as well as epidermal formation and tissue remodeling [42,44,46]. The dermal wound healing was performed by using composite adhesives with the same composition (i.e., the same PEG and SiP concentrations), which release varying amounts of H2O2 (40-80 µM) while maintaining the performance of the adhesive. This experiment was uniquely designed to investigate the effect of the H2O2 concentration on wound healing, while minimizing the contributions from other parameters (e.g., the effect of the composition). The adhesive and filler combination that is reported here could potentially be further tuned to tailor an H2O2 release profile that may be more suited for the repair of other tissues. Specifically, the adhesive formulation could be chosen based on the predicted values from the results of the robust design experiment.
Conclusions The ability of a composite adhesive consisting of PEG-modified catechol and SiP to heal full-thickness dermal wounds was evaluated. Given the large number of factors and factor levels, robust design was utilized to select the adhesive formulations that released suitable amounts of H2O2 for wound healing. Although the prediction from the robust design experiment was generally poor for the gelation times and adhesion strengths, the predicted and experimental values for the H2O2 concentrations were in good agreement. The chosen adhesive formulations possessed the same compositions and mechanical properties, with the only varying parameter being the amount of H2O2 generated by each formulation. This experimental design enabled us to study the effect of H2O2 concentration on dermal wound healing, while minimizing the contributions of the other factors. From the dermal wound healing experiment on mice, all the adhesive-treated wounds increased the rate of wound closure when compared to the untreated control. Additionally, the composite adhesive promoted dermal wound healing without resulting in epidermal hyperplasia. The release of H2O2 from the catechol and soluble silica from the SiP contributed to recruiting keratinocytes to the wound site in order to effectively promote the wound healing. As a result, PEG-D6-Si is the optimal formulation for accelerating wound closure, wound remodeling, and the maturation of a skin wound. Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/polym15081905/s1, Methods used for data analysis associated with robust design experiment, results from the robust design experiment, 1H NMR spectra, formulation assignment for the animal study, and exemplary tissue sections of histological staining; Figure S1: Table S2. Values are plotted as mean and standard deviation of η values for the corresponding factor level; Table S1: Experimental results of the nine formulations used in the robust design experiment; Table S2: Predicted adhesive performance for PEG-D4; Table S3: Predicted adhesive performance for PEG-D6; Table S4: Predicted adhesive performance for PEG-D8; Table S5: Adhesive formulations chosen for dermal wound repair based on their predicted adhesive performance; Table S6: Control and treatment groups tested in the full thickness dermal wound model. References [28,29] are cited in the Supplementary Materials. Institutional Review Board Statement: The animal protocol was approved by the Institutional Animal Care and Use Committee (IACUC) at Michigan Technological University (Board Ref# L0270).
The protocol was initially approved on 12/14/2015 and subsequently renewed. Michigan Tech's Animal Facility complies with Public Health Service (PHS) policy for Humane Care and Use of Laboratory Animals and has PHS Animal Welfare Assurance approval. The facility employs stringent policies and controls to ensure animal health and meets National Institutes of Health/PHS and USDA requirements for rodent (mouse and rat) and rabbit housing. A licensed veterinarian of record is available as needed for consultation, training, animal health evaluations and animal emergencies. The availability of an individually ventilated cage (IVC) system and biosafety cabinets allows for work with immune-compromised animals and/or limited work with infectious agents or materials.
Optically-controlled long-term storage and release of thermal energy in phase-change materials Thermal energy storage offers enormous potential for a wide range of energy technologies. Phase-change materials offer state-of-the-art thermal storage due to high latent heat. However, spontaneous heat loss from thermally charged phase-change materials to cooler surroundings occurs due to the absence of a significant energy barrier for the liquid–solid transition. This prevents control over the thermal storage, and developing effective methods to address this problem has remained an elusive goal. Herein, we report a combination of photo-switching dopants and organic phase-change materials as a way to introduce an activation energy barrier for phase-change materials solidification and to conserve thermal energy in the materials, allowing them to be triggered optically to release their stored latent heat. This approach enables the retention of thermal energy (about 200 J g−1) in the materials for at least 10 h at temperatures lower than the original crystallization point, unlocking opportunities for portable thermal energy storage systems. P hase-change materials (PCMs), such as salt hydrates 1 , metal alloys 2 , or organics 3 , store thermal energy in the form of latent heat, above their phase-transition temperature, which is released via reverse-phase transformation 4 . Long-term storage of latent heat without loss to the environment remains a challenge 5 due to the sensitivity of phase-transition to temperature, which fundamentally prevents the deployment of thermally charged PCMs away from heat sources. One way to prevent this spontaneous heat loss is to install an energy barrier for the reverse-phase change from a high-energy phase to a low-energy phase. In some materials, intrinsic energy barriers exist, and the controlled heat release is feasible by applying external mechanical energy to overcome the barriers. For example, flexing metal clips in pockets of supersaturated salt hydrate and the application of external pressure on a ceramic material have been shown to trigger heat release from certain inorganic PCMs, but they suffer from inherent limitations. In the case of salt hydrates, the instability of the salt solution, decreasing the storage density upon cycling, and corrosiveness are the major issues that have not been resolved 6 . Ceramics, on the other hand, require a high-cost synthesis process (sintering metal oxides at T over 1000°C), and possess low-energy densities (50-60 J g −1 ) compared to conventional inorganic or organic PCMs (100-600 J g −1 ) 7 . Moreover, mechanical triggering for both cases is limited by the high cost for large-scale applications. Organic phase-change materials, such as low-cost paraffin waxes 8 , fatty acids 9,10 , polyethylene glycols 11 , and sugar alcohols 12 , generally exhibit large latent heat and solid-liquid phase transitions, covering a wide range of melting and crystallization points 13 . Since the phase changes are governed by intermolecular interactions, including van der Waals, dipolar, and hydrogen bonding, the phase-transition temperatures and thermal energy densities can be controlled by tuning these key interactions between constituents. Organic photoswitches that undergo reversible structural changes upon light irradiation have been integrated into various materials for applications, including light-driven actuation, drug delivery, sensing, optical memory, and so on 14,15 . 
Among photochromic molecules, such as spiropyran 16,17 , azobenzene [18][19][20][21] , fulvalene diruthenium 22 , and dithienylethene 23,24 , azobenzene has been widely explored due to its well-known shape (molecular length of 9 Å for trans, 5.5 Å for cis; planar geometry in trans, the benzene ring tilted at 56°from the other ring in cis) 25 and polarity changes (dipole moment of 0−1.2 D for trans, 3.1−4.4 D for cis) 26 upon photoisomerization. The planar-to-twisted conformational change of azobenzene upon UV irradiation and the reverse isomerization triggered by visible-light illumination can be utilized to alter the physical properties of surrounding molecules through the change in intermolecular interactions. Herein, we introduce azobenzene dopants into conventional organic PCMs as a way to change the intermolecular dynamics. These dopants, possessing activation energy barriers for switching between photoisomers, provide stability to the phase storing thermal energy and triggerabilty for energy release, thus allowing controllable, high-density energy storage in scalable organic composites. Specifically, the azobenzene dopants that change conformation upon illumination can be locked in the liquid phase of PCMs by lowering their crystallization temperature (T c ), retaining the thermal energy storage at cooler temperatures. Results Thermal energy storage and release in PCM composites. We prepared a composite of tridecanoic acid, as an example of n-fatty acids with high heat of fusion (177 J g −1 ), and an azobenzene dopant that is functionalized with a tridecanoic ester group to render high miscibility with the PCM molecules. Long aliphatic compounds such as n-fatty acids or n-paraffin waxes form lamellar structures through hydrocarbon side packing (i.e., van der Waals forces), and the CH 3 end-group interaction between the lamellae can affect the molecular arrangement as well, leading to polymorphism 27 . In the case of n-fatty acids with -COOH groups, the polar interaction and H-bonding between the acid groups can also impact the lamellar formation 28 . The azobenzene dopants possess strong π-π interactions 29 among adjacent aromatic cores and van der Waals interactions between alkyl chains. Ester linkers are also expected to contribute to intermolecular interactions. (1)-(4) stepwise process for the thermal energy storage and release cycle. T 1 and T 2 are crystallization points of charged and uncharged composites, respectively, and ΔT c is the difference between T 1 and T 2 . ΔH total represents the expected heat release from the optically discharged composite. c Chemical structures of PCM (tridecanoic acid) and the azobenzene dopant, functionalized with tridecanoic ester group, in each isomeric form The first step in the thermal storage cycle is the absorption of external thermal energy by the solid composite that is crystalline as prepared (Fig. 1a, i). When heated above the melting point (T m ) of the PCM (42°C), the composite becomes a mixture of molten PCM and crystalline aggregates of the azobenzene dopant, which has a higher melting point of 73°C (Fig. 1a, ii). Then UV illumination of the slurry switches the trans azobenzene dopants into cis, and the resulting cis-dopants with a twisted conformation become well dispersed in the liquid PCM (Fig. 1a, iii). We note that the temperature of the composite is maintained to be above 42°C during the UV-charging process, which is enabled by simultaneous heating and UV absorption processes (see Methods section). 
This liquid composite is storing the fractional latent heat of the PCM (177 J g −1 ), that of trans azobenzene dopants (118 J g −1 ), as well as the fractional isomerization energy of the metastable cis azobenzene (116 J g −1 ). Surprisingly, the liquid state of the composite can be conserved through subsequent cooling to a temperature below the original T c (38°C), while the latent thermal energy stored is fully maintained (Fig. 1a, iv). This striking heat storage ability of the composite is achieved by the metastable cis-dopants that can disrupt the packing of PCM molecules through steric repulsion and dipolar interactions, and require triggering to overcome the activation barrier for reverse isomerization to their ground-state trans form. Visible-light illumination rapidly switches the dopants and allows the PCM composite to crystallize and release the stored latent heat on-demand, recovering the original state of the composite (Fig. 1a, i). The PCM composites before and after UV/thermal charging possess different phase-transition temperatures and scales of latent heat (Fig. 1b). Enthalpy and temperature changes, during the thermal storage and release cycle, are depicted as follows: (1) rise of temperature and enthalpy during the PCM melting, (2) isothermal enthalpy increase by UV illumination, (3) temperature and enthalpy decrease during cooling to an arbitrary temperature between T 1 and T 2 , and (4) visible-light-triggered isothermal exothermic reaction. ΔT c , a figure of merit in this system, represents the degree of phase stabilization of the charged composite (PCM + cis-Azo) without losing heat. ΔH total , the expected exothermic energy density from the system, should be comparable to that of the pristine PCM for high-performance thermal storage applications. Figure 1c shows the basic structures of the PCM and the azobenzene dopants, selected for this proof-of-concept study, and we also explored other dopant derivatives that are systematically functionalized on the para-position of the azobenzene core to alter the intermolecular interactions in the resulting composites. Optical control of PCM composite phase. Consistent with the schematic cycle shown in Fig. 1, the crystalline composite was partially molten when heated above the T m of the PCM, then charged by a UV (365 nm) lamp to become fully liquid, then able to be cooled below the T c of the pure PCM (38°C), and was finally discharged by a blue (450 nm) LED light (Fig. 2a). The azobenzene dopant exhibits typical optical absorption properties of the trans and cis isomers, before and after UV illumination in solution (Fig. 2b) 30 , and the relative ratio of each isomer in the uncharged compound (94% trans) and in the UV-charged compound in the photostationary state (94% cis) was measured by 1 H NMR (Supplementary Fig. 1a, b). (Fig. 2 caption fragments: the uncharged trans solution shows a π → π* absorption peak at 325 nm, while the UV-charged solution saturated with cis isomers exhibits an n → π* transition peak at 440 nm 48,49 ; DSC scans of charged and uncharged composites (35 mol% doped), obtained while cooling from 70°C at 5°C min −1 , illustrate the different crystallization points (T 1 and T 2 ) and the gap (ΔT c ), with small peaks at 20°C indicating the solidification of a minor polymorph 50 in tridecanoic acid; T 1 , T 2 , ΔT c , and the heat of fusion (crystallization enthalpy) were measured by DSC at 5°C min −1 on composites with varying additive ratios; error bars indicate standard deviations of data collected at least 5 times on each type of composite.)
The efficiency and kinetics of the solid-state trans → cis conversion of the azobenzene dopant in the PCM composites were also analyzed by 1 H NMR (Supplementary Fig. 1c), which indicates the saturation of cis isomers at 90% after 1 h of charging. The dispersion of cis azobenzene in liquid PCM facilitates the uniform charging of the composite, and the time scale estimated for the dispersion is consistent with the experimental time scale for cis saturation (see Supplementary Note 1). T c of the charged composite was measured by differential scanning calorimetry (DSC), and compared to the uncharged composite (Fig. 2c). The charged composite crystallizes at 28°C, followed by the crystallization of cis-Azo at 9°C, while the uncharged counterpart exhibits the first solidification peak of trans-Azo at 48°C, followed by the PCM crystallization at 38°C. We note that the crystallization temperature of trans-Azo (60°C measured as a pristine material) is significantly lowered in the PCM composites, as a result of the solvation by the liquid PCM molecules at temperatures higher than the PCM crystallization point. The melting point of trans-Azo (originally 73°C) is also lowered, due to the solvation effect, and the degree of the changes in phase-transition temperatures can be variable, depending on the doping level in the composite and cooling rate. Interestingly, T 1 and T 2 change significantly at varied doping levels (Fig. 2d) with the maximum ΔT c obtained at a doping level of 35 mol%. At low-doping levels (5-10 mol%), both T 1 and T 2 decrease slightly from the original T c of the PCM by a few degrees, resulting in negligible ΔT c (Supplementary Fig. 2a). Above 20 mol% doping, however, T 2 remains almost unchanged from 38°C, and the azobenzene dopants crystallize separately at higher temperatures, as seen in Fig. 2c. ΔT c generally increases with a higher doping level but decreases at a doping level of 45 mol%. T 2 is fixed at 38°C while T 1 increases up to 31°C, and the charged composite shows a minor exothermic peak at 36°C which can be assigned to the solidification of the trans-Azo dopant (Supplementary Fig. 2b). The relative crystallization enthalpy of the charged (ΔH 1 ) and uncharged (ΔH 2 ) composites was obtained by integrating the crystallization peaks (Fig. 2e). ΔH 1 and ΔH 2 are the sums of the crystallization enthalpies of the PCM (ΔH PCM ) and dopant (ΔH cis-Azo or ΔH trans-Azo ) weighted by their relative weight fractions (χ, between 0 and 1). The measured ΔH 1 and ΔH 2 are well aligned with the calculated values, indicating that the dopants are mobile in the PCM medium and prone to aggregation, regardless of the isomeric state, due to the strong dopant-dopant interactions. ΔH total is the heat released in the liquid (cis) → solid (trans) transition of the composite, so it is described as ΔH total = χ PCM ΔH PCM + χ Azo (ΔH trans-Azo + ΔH iso ), where ΔH iso is the isomerization enthalpy of the azobenzene dopant, measured by DSC during thermal reverse isomerization (cis → trans) occurring in the liquid state (Supplementary Fig. 3a); the ΔH iso of 116 J g −1 (46 kJ mol −1 ) is comparable to, and slightly improved from, that of pristine azobenzene (41.4 kJ mol −1 ) 31 .
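Using the per-component enthalpies quoted above, the expected heat release can be estimated as a weight-fraction-weighted sum; the sketch below implements that relation and reproduces the 234 J g−1 limit for the pure dopant. The mol%-to-weight-fraction conversion is left out because it requires the molar masses of the PCM and dopant.

```python
# Sketch: expected heat release of the composite as a weight-fraction-weighted
# sum, using the per-component values quoted above (all in J/g).
dH_pcm = 177.0        # tridecanoic acid latent heat
dH_trans_azo = 118.0  # trans dopant crystallization enthalpy
dH_iso = 116.0        # cis -> trans isomerization enthalpy

def dH_total(x_azo: float) -> float:
    """Expected heat release for the liquid (cis) -> solid (trans) transition."""
    x_pcm = 1.0 - x_azo
    return x_pcm * dH_pcm + x_azo * (dH_trans_azo + dH_iso)

for x in (0.0, 0.5, 1.0):
    print(f"x_azo = {x:.1f}: dH_total = {dH_total(x):.0f} J/g")
# x_azo = 1.0 reproduces the 234 J/g quoted for the pure dopant.
```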
Scale bar is 10 mm. Optical microscopic images of a charged part (iv, liquid) and discharged part (viii, crystalline solid) of composite films. Scale bar is 100 µm ARTICLE NATURE COMMUNICATIONS | DOI: 10.1038/s41467-017-01608-y improved from that of pristine azobenzene (41.4 kJ mol −1 ) 31 . A quantitative measurement of ΔH total was challenging due to the difficulty in decoupling the heat release from the composite and the heat absorption from the light source for optical triggering during the DSC measurement. However, assuming the complete cis → trans conversion, the expected heat release is considerable and increases with higher doping levels. When highly doped, ΔH PCM decreases, while the contributions of ΔH trans-Azo and ΔH iso increase. If it is 100% doped, the pristine azobenzene dopant has a potential to release 234 J g −1 of heat, but the requirement of heating it above its high T m (73°C) for the solidstate charging induces the thermal reverse isomerization of the cis isomer, preventing the control over thermal storage. Also, UV illumination of azobenzene in the solid state without heating is ineffective as a result of steric confinement of aromatic groups in the crystalline lattice 32 . Thus, the design of PCM and the azobenzene composite has a unique advantage to realize thermal storage at temperatures above the relatively low T m of the PCM. The dependence of ΔT c and ΔH on the doping level can be interpreted as a result of different degrees of nucleator formation and supercooling (Fig. 3a). At the low-doping level (5-10 mol%), both trans and cis dopants are well dispersed in the PCM and lower the T c of the composite by similar amounts, thus making ΔT c negligible (Fig. 3a, left). The low-doped composites are eutectic, showing no phase separation of PCM and dopants during solidification. However, with increased doping (20-35 mol %), the photo-switching action can influence ΔT c considerably due to the drastic differences in the aggregation dynamics of charged and uncharged dopants (Fig. 3a, center). The trans dopants first aggregate while the PCM composite is cooled and play a role as nucleating agents that facilitate the crystallization of PCM molecules on their surface. The addition of nucleators in the conventional PCMs for suppression of supercooling is well known, and the requirements for effective nucleators (high T m , an isomorphous structure to the PCM) 33,34 are fulfilled by the properties of the trans azobenzene dopant aggregates. In contrast, cis dopants have strong interactions with the polar PCM molecules as well, leading to significant supercooling in the absence of nucleators, and the values of ΔT c are as high as 10°C. If more dopants (45 mol%) are present, however, the solid-state UV charging of the composite is incomplete due to the relatively low fraction of the available molten PCM for solvating the trans-Azo dopants by overcoming the strong dopant-dopant interactions. (Supplementary Fig. 4) Therefore, the unconverted trans-Azo aggregates in the charged composites, although a minority of the total concentration, can act as nucleators and reduce the degree of supercooling and ΔT c (Fig. 3a, right). cis-cis Trans-trans Fig. 4 Influence of measurement conditions and dopant structures on thermal storage properties. a Impact of cooling rate on ΔT c at varied doping levels. ΔT c generally increases with higher cooling rates, and was measured by DSC through heating and cooling between 10°C and 70°C. b Impact of functionalization of azobenzene dopants on ΔT c . 
Compound 2 is functionalized with a long alkyl chain (decyloxy group), and compound 3 is decorated with a bulky substitution (tert-butyl group). Error bars indicate standard deviations of temperature measured at least 5 times on each type of composite. c DFTcalculated binding energy between two neighboring molecules (either as trans/cis isomer or PCM) varying with the functional group on the para-position of the azobenzene core. Dotted lines represent the head-to-head interaction between acid groups of the PCM (green) or the side-by-side interaction between alkyl chains of the PCM (blue). trans-Azo signatures, while that of the charged composite exhibits the major peaks of PCM and almost negligible peaks of trans-Azo that remains in the composite after solid-state charging ( Supplementary Fig. 1c). Selective optical charging and discharging was conducted on composite films (Fig. 3c). The demonstration of local crystallization and liquefaction of the composite confirms that optical manipulation of the azobenzene conformation is indeed the key to phase transformation and thermal energy storage in the composites. The patterns can be easily removed by either UV/thermal charging for 1 h or by visible-light discharging for 30 s, and can be recycled repeatedly. Intermolecular interactions controlling the phase change. In addition to dopant concentration, the cooling rate of UV/thermally charged composites also influences ΔT c , impacting T 1 more than T 2 . T 2 is fixed around 38°C, due to the facile formation of trans-Azo nucleators, while T 1 decreases significantly at fast cooling rates (Fig. 4a). When slowly cooled, the diffusion of cis dopants and the cis → trans conversion can reduce the impact of the cis dopants on disrupting the PCM arrangements, resulting in a higher T 1 . In particular, the heating and cooling between 10 and 70°C at the slowest rate of 1°C min −1 exposes the metastable cis isomers to high-enough temperatures to enable thermal reverse isomerization over a considerable period of time, which was confirmed by the observation of trans-Azo aggregates through DSC measurements, leading to underestimated ΔT c values (3-4°C). Beyond concentration and the cooling rate, the chemical structures of photo-switching dopants and the relative strength of intermolecular interactions between dopants and that with PCM molecules can vary ΔT c in remarkably different ways (Fig. 4b). Compound 2 is a derivative of compound 1 with a decyloxy group substituted on azobenzene at the para-position, designed to increase the van der Waals interactions between alkyl chains present in both the PCM and dopants. Compound 2 exhibits the highest T c of 68°C, indicating strong binding forces between the dopants through π-π stacking of aromatic cores and side packing between alkyl chains. Yet, compound 2 was observed to have a negligible impact on ΔT c compared to compound 1, because of the inefficient solid-state charging of trans aggregates in the PCM composites due to their strong packing ( Supplementary Fig. 4). The charged composites show both trans and cis dopant crystallization peaks (around 60 and 10°C, respectively), and the ratio of the trans/cis peak intensity increases with higher doping levels, confirming the difficulty in solid-state charging of strong aggregates. Compound 3, on the other hand, possesses a bulky tert-butyl substitutent that significantly diminishes the π-π interactions between aromatic groups, helping to prevent the formation of nucleating agents in the composites. 
Interestingly, due to the sterically hindering nature of tert-butyl groups, the trans dopant can effectively disrupt the PCM alignment, even better than the cis isomer, as the negative value obtained for ΔT c implies. The cis dopants exert a similar degree of supercooling (by 5-6°C) as the cis form of compound 1 at a doping level of 10 mol%, but the trans dopants induce a much larger supercooling (by 8-13°C, over varied cooling rates). When the low numbers of dopants are dispersed in the PCM, the interactions between a dopant and the surrounding PCM molecules become the predominant factor that predicts the crystallization dynamics. However, at higher doping levels, dopant-dopant interactions and aggregations become increasingly important, as the formation of nucleators can greatly reduce the supercooling. Despite the repulsion between tert-butyl groups, the trans isomers crystallize at 44°C ( Supplementary Fig. 3b), whereas the cis isomers remain in the liquid state upon cooling down to −20°C (Supplementary Figs. 3c, 5 and 6; Supplementary Movies 1 and 2) 35 , implying very little chance of forming any nucleator in the cooled composites. Indeed, the UVcharged composites are increasingly supercooled with more dopants, and the uncharged composites show a more slowly increasing degree of supercooling, competing with the nucleation process. At a doping level of 30 mol%, trans and cis isomers induce similar levels of supercooling, resulting in ΔT c closer to zero and similar ΔH 1 and ΔH 2 values (Supplementary Fig. 7). At 40 mol% doping, ΔT c is found to be positive, and ΔH 2 exceeds ΔH 1 , as was the case for compound 1. The X-ray diffraction patterns ( Supplementary Fig. 8) of low-doped (10 mol%) and high-doped (40 mol%) composites also corroborate these findings as follows: (1) higher impact of trans dopants on the PCM packing than that of cis isomers at a low level of doping and (2) larger disruption of PCM alignment by cis dopants at a high doping level. We explored the binding energy (E b ) between two molecules (either dopant or PCM) via ab initio calculations, and found that the dopant-dopant interaction is generally stronger than the PCM-PCM interaction, which explains the facile solidification of dopants in composites during the cooling process (Fig. 4c). We note that the binding energies of the cis dopants are generally overestimated due to the neglect of finite-temperature effects and ensemble environment. E b of compound 2 is much larger than that of other dopants, justifying its stronger tendency to aggregate and the negligible ΔT c of the composites. Consistent with the greater probability to form trans rather than cis aggregates, the binding between trans isomers is stronger than that between cis isomers for compound 1 and 2, even though the dynamic effects that may further reduce the stability of the cis crystal were not accounted for in the simulations. Upon functionalization with the tert-butyl group (compound 3), E b decreases in the trans while it increases in the cis dimer (configurations shown in Fig. 4d, compared to compound 1), which is in agreement with the higher concentration required for trans dopant aggregation observed in the experiment. In the low-doping region, both trans and cis dopants are dispersed, and thus, the interaction between an isolated dopant and the surrounding PCM molecules becomes the dominant factor that governs the crystallization dynamics. 
This situation was analyzed by considering E b between a trans isomer and a truncated PCM molecule in various energetically favorable configurations (Fig. 4e). In addition to the strongest binding between -COOH and ester groups, the results suggest that bound states are also feasible via the formation of hydrogen bonds with N atoms or van der Waals interactions between the -COOH group and aromatic rings (Fig. 4e). Compared to cis, an isolated trans may have a higher impact on disrupting PCM arrangement due to the planar structure that provides a larger space for binding on many different sites. The cis isomer, with such a bulky substituent, limits the interaction with another PCM molecule, for example, on the back side of the N = N group (the opposite configuration to Fig. 4e, top) because of the significant steric repulsion. Finally, the stability of thermal energy storage in a UV/thermally charged composite (30 mol% of compound 1) was measured by 1 H NMR analysis of the cis fraction and by monitoring solidification of the composite over a period of 24 h in the dark at the cooled temperature (36°C, 6°C below the T m ) (Fig. 4f). The half-life of the cis dopant in such conditions was about 24 h, and the liquid phase was conserved for at least 10 h. When the cis ratio drops from 70% (10 h) to 65% (19 h), the solidification of the PCM was observed, which was also consistent with optical triggering experiments ( Supplementary Fig. 9). Energy efficiency analysis. The concept of this study is fundamentally different from that of conventional solar thermal fuels (STFs) [36][37][38][39] , or molecular solar thermal (MOST) systems 22,40 , which convert photon energy into stored thermal energy. Instead, here, we demonstrate a unique approach that uses the photon energy as a way to control the phase-transition properties of traditional thermal storage materials such as PCMs. The energy efficiency of this type of energy-storage system will depend on the thermal energy input from a high-temperature heat source (ΔH 2 ) and the released thermal energy at a lower temperature upon optical triggering (ΔH total ). As described in equation (3), ΔH total is the sum of ΔH 2 and the reverse isomerization energy of azobenzene dopants (cis → trans), making it a greater value than the thermal storage density of each component (i.e., PCM molecule or the azobenzene dopant). We note that the photon energy required for operating an azobenzene switch is considerable, given the low quantum yield (ca. 10%, different depending on the measurement conditions 41,42 ) of its photoisomerization (trans → cis). For example, the PCM composite containing 30 mol% of azobenzene dopants requires UV (365 nm) photon energy of ca. 3.6 kJ per 1 gram of the composite for the photoinduced phase fixation, while the total heat storage in 1 gram of the composite is ca. 0.2 kJ, or about 5-6% of the input photon energy. This may seem to be inefficient, but the photon energy consumption for this system is significantly lower than that for conventional STFs. The energystorage capacity of pristine azobenzene as an STF is 0.23 kJ g −1 31 , while the input UV (365 nm) photon energy density is 18.0 kJ g −1 (1.3% efficiency). Other azobenzene derivatives that exhibit higher energy storage as a result of structural designs provide slightly improved efficiencies such as 2.7% 32 and 3.0% 43 , assuming approximately 10% quantum yield. 
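The efficiency figures quoted in the preceding paragraph can be reproduced with a short calculation. The sketch below assumes a ~10% quantum yield and molar masses of ~182 g/mol (azobenzene), ~395 g/mol (dopant 1) and ~214 g/mol (tridecanoic acid); these are assumptions of the sketch rather than values taken from the text.

```python
# Back-of-the-envelope photon-energy accounting: photoswitchable PCM vs. a pure azobenzene STF.
H = 6.626e-34          # Planck constant, J s
C = 2.998e8            # speed of light, m/s
N_A = 6.022e23         # Avogadro constant, 1/mol

E_photon = H * C / 365e-9                  # J per 365-nm photon
E_mol_photons = E_photon * N_A / 1000.0    # kJ per mole of photons (~328 kJ/mol)
phi = 0.10                                 # assumed trans -> cis quantum yield
kJ_per_mol_switched = E_mol_photons / phi  # photon energy needed to switch one mole of azobenzene

# Pure azobenzene solar thermal fuel (STF): ~0.23 kJ/g stored
stf_input = kJ_per_mol_switched / 182.0    # kJ of photons per gram of azobenzene
print(f"STF: {stf_input:.1f} kJ/g photon input, efficiency ~{0.23 / stf_input * 100:.1f} %")

# PCM composite with 30 mol% dopant: ~0.2 kJ/g stored (latent heat + isomerization energy)
x = 0.30
grams_per_mol_mix = x * 395.0 + (1 - x) * 214.0   # g of composite per mole of molecules
mol_dopant_per_gram = x / grams_per_mol_mix
composite_input = mol_dopant_per_gram * kJ_per_mol_switched
print(f"Composite: {composite_input:.1f} kJ/g photon input, "
      f"efficiency ~{0.2 / composite_input * 100:.1f} %")
```

With these assumptions the sketch returns approximately 18 kJ/g and 1.3% for the pure-azobenzene STF and approximately 3.7 kJ/g and 5.5% for the 30 mol% composite, matching the figures quoted above.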
Utilizing azobenzene photoswitches as a minor component in the PCM composite gives rise to the relatively low demand for photon energy input, compared to that for STFs that consist of 100% photoswitches. Both STFs and photoswitchable PCMs would benefit from further development of high-quantum-yield photo-switching molecules to increase the energy efficiency. However, we note that abundant solar energy and radiation are expected to replace the current UV lamps, used for the experiments at this stage, as demonstrated on STFs by Saydjari and coworkers 44 . We also note that this approach is fundamentally distinguished from the use of thermal insulation that is currently used for decreasing the cooling rate of thermally activated materials. While thermal insulation can reduce the loss of sensible heat, it cannot stop the spontaneous heat transfer between the thermally charged PCMs and the cooler surroundings (see Supplementary Fig. 10 for schematic energy diagrams comparing the two concepts) or the liquid-to-solid phase transition which leads to the loss of latent heat. In contrast, the photoswitchable PCM systems, which are able to still provide thermal storage even in the absence of thermal insulation, effectively preserve latent heat by changing the intrinsic properties (i.e., crystallization temperature) of the PCMs. As with traditional PCMs, thermal insulation can also be used with the photoswitchable PCMs to slow down the loss of sensible heat, but the advantage is the ability to hold on to the latent heat until it is released. Another difference between the two approaches is the triggerability of heat discharge. Optical triggering of the latent heat release from the photoswitchable PCMs is a newly introduced property to pristine PCMs with or without thermal insulation. Based on these differences, we can compare the energy efficiency of azobenzene-doped PCMs and that of conventional PCMs with thermal insulation (Supplementary Note 2). As a result of gradual sensible heat loss through an insulation layer, conventional PCMs need external thermal energy input to maintain the liquid state at a high temperature, which leads to the efficiency drop over time ( Supplementary Fig. 11). On the other hand, the efficiency of azobenzene-doped PCMs is determined by the UV photon energy input during the initial charging process that fixes the liquid state in the dark until a critical percentage of cis azobenzene undergoes reverse isomerization. As aforementioned, the use of photoswitches with higher quantum yields will significantly increase the energy efficiency that is constant over time in this system. Discussion The goal of our work is to present a unique method to modify the thermodynamic properties of conventional PCMs and to explore the design criteria of the photo-switching dopants. For the potential applications and practical devices, multiple aspects of the system need to be optimized, and the potential challenges should be addressed. First, we envision a device structure that involves a container with windows and covers needed for the controlled light absorption and dark storage. Thermal energy from various waste heat sources, including solar heat, is indirectly transferred to the PCM composite using a heat exchanger, and a separate UV source (an LED or a gas-discharge lamp) will be used for the simultaneous azobenzene-charging process. The thermally activated PCM will be simply carried to a heat outlet at room temperature with the covers closed to enable dark storage. 
To trigger heat release, the covers will be opened to expose the liquid composite to ambient light or blue LED light for a faster release. The PCM composite is designed to keep the stored latent heat even when it is cooled down to room temperature. Therefore, the facile heat loss through the glass windows or through the container without thermal insulation is considered as a part of the heat-transfer process to reach equilibrium with the cooler surroundings, while it would be a challenge in the pristine PCM system that solidifies at the original crystallization point. The common borosilicate glass that allows over 90% transmission of 365-nm UV and visible light will be suitable as the window material. The implementation of a stirring or flowing system in the container can solve multiple potential challenges associated with the UV charging and visible-light discharging processes, including the fixed penetration depths of light, the initial phase separation of PCM and trans-Azo (during UV charging), and the scattering of visible light by the solidified portion of the PCM (during discharging). In our system, the azobenzene dopants are suspended in the viscous liquid PCM when they are irradiated with UV, and solvated by the PCM as trans-to-cis conversion occurs. The dopants can diffuse in the viscous liquid phase of the PCM, which enables the facile charging of thick samples, similar to the complete charging of azobenzene solutions (in dichloromethane) in 20-mL vials that are 23-mm thick. The powder samples as shown in Fig. 2a are 100-200-μm thick, and the azobenzene dopants are successfully charged by UV irradiation (ca. 90%, Supplementary Fig. 1c). This indicates that the dynamics in the composite play an important role in increasing the actual penetration depth, despite the smaller calculated static penetration depth (23 µm, see Supplementary Note 3, Supplementary Fig. 12, and Supplementary Movie 3 for a detailed analysis). Therefore, we envision that the liquid PCM composite, once equipped with a stirrer or a flow system, will not be limited by the static light penetration depth or the initial phase separation, due to the uniform exposure to UV and the facile dissolution of trans-Azo aggregates. The stirring or flowing system will also effectively remove the scattering layer and disperse the nucleating sites in the bulk PCM during the discharging step, which allows for facile crystallization and propagation in a large-scale composite. In order to demonstrate the concept at a larger scale, we devised an experiment where 3 g of a UV-charged PCM composite is optically triggered to release heat which is transferred to 1 g of water and raise its temperature effectively (Fig. 5). The lowering of T c of PCM from 38 to 29°C by the addition of the azobenzene dopant and UV activation is clearly observed (Fig. 5a), and the triggering of crystallization above 29°C by optical irradiation is demonstrated (Fig. 5b), in accordance with the milligram-scale DSC measurements and thin-film-patterning experiments. The heat release from the optically triggered composite is larger than that from the UV-charged composite in the dark, due to the additional contribution of dopant crystallization and azobenzene isomerization energy, consistent with our DSC measurements shown in Fig. 2. The longer crystallization process and the longer heating of water, despite the continuous and spontaneous heat loss to the cooler surroundings, indicate significant heat release from the composite and transfer to water. 
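As a rough illustration of where a static penetration depth on the order of tens of micrometres comes from, the Beer-Lambert sketch below uses an assumed molar absorption coefficient at 365 nm and an assumed liquid density; neither value is given in this text, so the result is order-of-magnitude only and does not reproduce the detailed calculation in the Supplementary Note.

```python
# Order-of-magnitude estimate of the static 365-nm penetration depth in the neat liquid composite,
# using the Beer-Lambert law I(z) = I0 * 10^(-eps * c * z).
import math

eps = 200.0    # assumed molar absorption coefficient at 365 nm, M^-1 cm^-1 (illustrative)
rho = 0.95     # assumed liquid density of the composite, g/cm^3 (illustrative)
x = 0.30       # dopant mole fraction
M_mix = x * 395.0 + (1 - x) * 214.0       # g per mole of molecules in the mixture (assumed masses)
c = x * rho * 1000.0 / M_mix              # mol of dopant per litre

depth_cm = 1.0 / (math.log(10) * eps * c) # depth at which intensity falls to 1/e
print(f"c = {c:.2f} M, 1/e penetration depth = {depth_cm * 1e4:.0f} um")
```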
The impact of the blue light irradiation on the temperature of bulk materials, either PCM or water, was confirmed to be negligible, through a control experiment with pristine PCM and water. The LED lamp was placed 20 cm away from the PCM flask. As shown in Fig. 5c, pristine PCM can heat water to a higher temperature for a shorter period, as a result of its higher T c (38-39°C) and the faster heat loss to the surroundings at high temperatures. The crystallization process of pristine PCM is slower without contacting to a water container (Fig. 5d). In order to emphasize the impact of phase change and latent heat on water-heating dynamics, we conducted control experiments replacing the PCM by other substances which only transfer sensible heat to water within the temperature range of study. Octadecane was chosen as a type of paraffin which has a similar thermal conductivity and heat capacity to tridecanoic acid but presents a low T c of 28°C (Fig. 5e), and water as a common heating medium with a high heat capacity (Fig. 5f). In both cases, the heated water cools down rapidly, in contrast to water in a solidifying PCM composite bath, demonstrating the significant advantage of PCMs over heated liquid as a heating medium. Over 5 times longer heating of water enabled by the light-triggered phase change than by the sensible heat transfer from other heated fluids shows the significance and practicality of latent heat storage and release. Finally, the long-term stability and reversibility of the photoswitching composites needs to be addressed. High thermophysical stability of the azobenzene dopant (compound 1) at temperatures up to 200°C and the optical cycling stability over 100 cycles of charging and discharging for over 50 h were probed (Supplementary Fig. 13), and the invariable morphology of the composite after optical discharging was analyzed by the Fourier transforms of optical microscope images ( Supplementary Fig. 14). Further Xray diffraction studies, however, indicate that optically induced fast crystallization can lead to less aggregation and incomplete cisto-trans conversion, consistent with the molar fraction analysis of the cis isomer during optical discharging ( Supplementary Fig. 9). Although crystallinity of the PCM composite is slightly changed from the starting material after the initial optical discharging, the morphology and crystallinity of the composite in the subsequent cycles of UV charging and visible-light discharging will be fully reversible. In summary, a unique method is demonstrated to control the intermolecular interactions between phase-change materials and photochromic dopants for thermal energy storage and release in the composites by optical switching. This simple approach provides the foundation to add a functionality to conventional heatstorage materials and potentially to various other types of PCMs including inorganic materials by employing suitable photoswitches that can interact with relative PCMs effectively. This proof of concept may provide insights into thermal energy management for a wide range of potential applications such as waste heat recycling, solar thermal collection, and smart temperature systems for buildings. Methods Materials. Tridecanoic acid, oxalyl chloride, and the azobenzene starting materials were purchased from Sigma-Aldrich and were used without further purification. Dichloromethane, tetrahydrofuran, dimethylforamide, and methanol were purchased from VWR and were used as received. Synthesis of azobenzene dopants. 
To a solution of tridecanoic acid (385 mg, 1.8 mmol) in dichloromethane (5 mL), oxalyl chloride (231 µL, 2.7 mmol) was added dropwise at room temperature and stirred for 10 min. A catalytic amount (a drop) of dimethylformamide was added to the mixture, and the solution was stirred for 3 h generating CO 2 (g), CO (g), and HCl (g) through a bubbler. The reaction mixture was dried under reduced pressure (50 mTorr) for 2 h to obtain tridecanoic acid chloride (yellow oil) which was used for the next step without further purification. The acid chloride (417 mg, 1.8 mmol) was dissolved in dry THF (5 mL) which was added dropwise to the mixture of 4-phenylazophenol (534 mg, 2.7 mmol) and triethylamine (1.1 mL, 8.1 mmol) in dichloromethane (50 mL). After the gas evolution stopped, the mixture was stirred overnight, and the solvent was evaporated to a reduced volume (5 mL). Methanol (10 mL) was added to the reaction mixture to precipitate a yellow powder that was filtered, rinsed with methanol, and dried under reduced pressure. The clean product, compound 1 (567.4 mg), was obtained with 80.1% yield. Compounds 2 and 3 were synthesized with different phenylazophenol precursors, 4-(4-decyloxyphenylazo)phenol and 4-(4-tert-butylphenylazo)phenol, and were obtained at 89.1% and 62.5% yields, respectively. The 1 H and 13 C NMR spectra and the structural information including HRMS of the compounds can be found in Supplementary Figs. 15-20 and Supplementary Methods. Sample preparation and charging/discharging procedures. The composites were prepared by dissolving the relative fractions of the PCM and azobenzene dopants mixed in dichloromethane which was evaporated at 40°C under nitrogen flow. The solid samples were then transferred to glass substrates and were heated at 43°C while being irradiated by a UV lamp (365 nm, 100 W) that was placed 25 cm above the samples. The UV-charging station was covered by a container and aluminum foil to block ambient light exposure. After 1 h of charging, the samples were transferred to DSC pans in the dark for measurements. For discharging, the samples were illuminated by a blue LED lamp (450-460 nm, 15 LED chips, 12 W) placed 10 cm above the samples at 36°C for 30 s. For the patterning experiments (Fig. 3c), the composites were UV/thermally charged on glass substrates, and the liquefied samples were pressed under another glass slide to make films. For solution-state charging, the azobenzene derivatives were dissolved in dichloromethane and illuminated by the UV lamp while being stirred at room temperature. The charged solutions were then dried under reduced pressure in the dark to prepare DSC samples. Other measurements. 1 H and 13 C NMR spectra were taken on Varian Inova-500 spectrometers. Chemical shifts were reported in ppm and referenced to residual solvent peaks (CD 2 Cl 2 : 5.33 ppm for 1 H, 53.84 ppm for 13 C, CDCl 3 : 7.26 ppm for 1 H, and 77.16 ppm for 13 C). Bruker Daltonics APEXIV 4.7 Tesla Fourier transform ion cyclotron resonance mass spectrometer was used for high-resolution mass determination with an electrospray ionization (ESI) source. UV-Vis absorption spectra were obtained using a Cary 60 UV-Vis spectrophotometer (Agilent Technologies) in a 10-mm pathlength quartz cuvette. DSC analysis was conducted on a Q series DSC Q20 (TA Instruments) with the RCS40 component. 
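The reported yield of compound 1 can be back-checked from the quantities given in the synthesis above. The molar mass used below for compound 1 (~394.6 g/mol, taking it to be 4-(phenyldiazenyl)phenyl tridecanoate) is an assumption of this sketch.

```python
# Back-check of the reported yield of compound 1 from the quantities in the synthesis above.
# Molar masses (g/mol) are assumed: tridecanoic acid 214.3, compound 1 ~394.6.
M_acid = 214.3
M_product = 394.6                      # assumed for 4-(phenyldiazenyl)phenyl tridecanoate

mol_acid = 0.385 / M_acid              # 385 mg tridecanoic acid -> acid chloride (limiting reagent)
theoretical_mass = mol_acid * M_product
yield_pct = 0.5674 / theoretical_mass * 100.0   # 567.4 mg isolated
print(f"Limiting reagent: {mol_acid * 1000:.2f} mmol; "
      f"theoretical mass: {theoretical_mass * 1000:.0f} mg; yield = {yield_pct:.1f} %")
```

With these assumed molar masses the calculation returns roughly 80%, consistent with the 80.1% yield reported.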
Powder Xray diffraction (PXRD) patterns were recorded on Bruker D8 Discover diffractometer using nickel-filtered Cu-Kα radiation (λ = 1.5418 Å) with an accelerating voltage and current of 40 kV and 40 mA, respectively. Samples for PXRD were prepared by placing a thin layer of the appropriate material on a zerobackground silicon crystal plate. Computational methods. Standard ab initio calculations within the framework of density-functional theory (DFT) were performed to optimize the geometry and calculate the binding energies of molecular dimers with various configurations, using the Vienna Ab Initio Simulation Package (VASP v5.4) 45 . Plane-wave and projector-augmented-wave (PAW)-type pseudopotentials 46 were employed with a 400 eV kinetic-energy cutoff and the GGA-PBE exchange-correlation functional 47 . Van der Waals interactions were included by the DFT-D2 method of Grimme. A 20 Å vacuum along the direction involving π-π stacking and a 15 Å vacuum for all other directions were constructed to avoid artificial interactions between periodic images. The structures were relaxed until all forces were smaller than 0.02 eV Å −1 . In order to overcome the challenge of computing long molecules with a large vacuum, the truncated molecules with short tails containing only 4 C atoms were used as representatives in all simulations. This approximation is justified by considering the PCM dimers with various alkyl chains, wherein the binding energy increases linearly with increasing tail length at the rate of 0.055 eV per CH 2 unit. Then, the final binding energies between stacking dimers as shown in Fig. 4d were obtained by the summation over the binding energy of truncated dimers and the estimated contributions from alkyl chains based on the above relation. For the exploration of the possibility to bind an individual dopant with multiple PCMs at a low-doping level (Fig. 4e), the van der Waals interactions from long tails have been excluded because most of the surrounding PCMs will not be able to ideally stack with the dopant. Data availability. The data that support the findings of this study are available from the corresponding author upon reasonable request.
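A minimal sketch of the tail-length extrapolation described in the computational methods above (binding energy increasing by ~0.055 eV per CH2 unit of the alkyl tail). The truncated-dimer binding energy and the CH2 counts used here are placeholders, not the published DFT values.

```python
# Illustrative extrapolation of dimer binding energies from truncated models (4-C tails)
# to the full-length molecules, using the linear ~0.055 eV per CH2 increment reported above.
RATE_PER_CH2 = 0.055   # eV per additional CH2 unit in the tail

def full_binding_energy(e_b_truncated, n_ch2_full, n_ch2_truncated):
    """Add the estimated van der Waals contribution of the part of the tail omitted in the DFT model."""
    return e_b_truncated + RATE_PER_CH2 * (n_ch2_full - n_ch2_truncated)

# Example with placeholder values: a truncated stacking dimer with E_b = 0.40 eV,
# extrapolated from a short 4-carbon tail to the full tridecanoic-acid tail.
e_b_full = full_binding_energy(0.40, n_ch2_full=11, n_ch2_truncated=2)
print(f"E_b(full) = {e_b_full:.2f} eV")
```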
2017-11-24T20:36:04.952Z
2017-11-01T00:00:00.000
{ "year": 2017, "sha1": "d0c864c89d97db71d94854702fbb586667a9ff64", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41467-017-01608-y.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4829dd6dadbd86b3c2df1254c9266c482372a9e3", "s2fieldsofstudy": [ "Materials Science", "Physics" ], "extfieldsofstudy": [ "Materials Science", "Medicine" ] }
139802013
pes2o/s2orc
v3-fos-license
Asphalt concrete modified by rubber crumbs in transport construction. The high-temperature and low-temperature characteristics of a rubber-bitumen binder and of rubber asphalt concrete based on it are investigated. A method for determining the binder's low-temperature characteristics is proposed. The resistance of the binder and of the pavement to technological and operational aging is assessed, and the environmental and economic aspects of using rubber crumbs are evaluated. The results justify the use of rubber crumbs as a modifier of organic binders for the production of asphalt concrete. Introduction Given the vast territory of the Russian Federation, which spans a wide range of service temperatures, one way to improve the traffic and operational performance of road pavements is to modify bitumen with polymeric additives so as to extend the temperature interval over which asphalt concrete pavement operates reliably. The search for more effective modifiers, their testing to find the best formulations of modified bitumen and polymer-bitumen emulsions, and the analysis of their usefulness began in the 1950s and are still under way. Research into the use of rubber from shredded tires began in the 1960s in the USSR as a means of improving the performance of asphalt concrete pavement [1][2][3][4]. The rapid and continuing growth of motorization has made large-scale tire recycling increasingly relevant. The current regulatory requirements for rubber crumbs cover only the grain size (no more than 20% of particles larger than 0.63 mm) and the absence of cord admixtures [5]; they do not capture the main features of either the chemical composition or the granularity of rubber obtained from different tire manufacturers and by different shredding methods. Main Part In this work rubber crumbs are used as the modifier to obtain a compositional bitumen-rubber binder and to study the possibilities this modifier offers. The rubber crumbs are produced by high-temperature shear grinding, a method based on subjecting the material simultaneously to intensive compression, shear deformation and heat; the resulting particles are 5-50 microns in size. Comparative test results for the RBV rubber-bitumen binder modified with 20% crumbs (penetration of 40 dmm), assessed against the regulatory requirements and against the polymer-bitumen binder PBV 40 and the rubber-bitumen binder BITRACK 40/60, are shown in Table 1. The data show that the RBV binder exceeds the regulatory requirements set for the PBV binder, which are the same as those for the BITRACK 40/60 binder. A temperature requirement is not regulated for the rubber-bitumen binder because the rubber crumb-asphalt combination is heterogeneous. Rubber, unlike DST (a styrene-butadiene-styrene-type block copolymer), is a cross-linked polymer, but because of the large distance between cross-links its macromolecules retain the ability to straighten when stretched and to curl back into coils once the mechanical load is removed. Figure 1. Macrostructure of polymer-bitumen (а) and rubber-bitumen (b) binders. Bonds between the rubber macromolecules and the asphaltenes of the organic binder can create a cross-linked network. As a result, the properties of the resulting RBV are determined by this cross-linked structure, which can extend in the direction of the applied load and carry a significant part of it [6].
Since, for an identical grain-size composition, the properties of the binder largely determine the performance of the asphalt concrete, comparative studies of stone mastic asphalt concrete prepared with the original bitumen, the polymer-bitumen binder and the rubber-bitumen binder were carried out for the main characteristics that define reliable operation over a wide range of service temperatures. Heat resistance was characterized as follows: for the binder, by the upper temperature of normal operation in the hot season, determined by the «Superpave» method with a dynamic shear rheometer (DSR) in controlled-stress mode; this instrument characterizes the viscous and elastic behavior of organic binders by measuring the complex shear modulus (G*) and the phase angle (δ) (Tab. 2); for the asphalt concrete, by the strength properties at 20°С and 50°С and by the coefficient of internal friction and the shear cohesion. The ultimate compressive strength at 50°С of asphalt concrete on RBV is 43% higher than that on the original bitumen and 28% higher than that on the polymer-bitumen binder, which indicates more reliable operation of asphalt concrete pavements with RBV at elevated summer temperatures. The shear cohesion of SMA on RBV is 100% and 44% higher than that on the original and polymer bitumen respectively, which allows higher rutting resistance to be predicted for asphalt concrete pavements. The low-temperature characteristics of the binders were compared with the requirements of GOST 52056-2003 (Tab. 3) and, additionally, by the residual-deformation determination method proposed in this work, because this type of deformation is particularly dangerous for asphalt concrete pavements: once residual deformation after the passage of traffic reaches 5 mm, the dynamic impact of traffic loads increases 16-fold, and the accumulation of residual deformation severely reduces the endurance of the entire asphalt concrete pavement [7]. The experimental data indicate that RBV is highly resistant to both high-temperature and low-temperature effects. However, conclusions about the binder's deformability at low temperature, which characterizes the temperature limit of cracking resistance, can only be drawn once the technological and operational aging of the organic binder is taken into account, because the change in chemical group composition caused by aging increases the binder's brittleness and leads to the deterioration of asphalt concrete pavements. To establish these requirements, the characteristics of the compositional rubber-bitumen binder were studied in accordance with the technological regulations of the American «Superpave» system. A long-term plan to introduce such tests into road-building practice in Russia was put forward by RosAvtoDor in 2013. A distinctive feature of the «Superpave» specifications is that, unlike the conventional requirements of the current GOSTs, the «Superpave» method fully models the operating conditions of binders within the asphalt concrete. The specifications are based on testing organic binders at three critical stages of the binder's life. The first stage tests the original binder as transported and stored. The second stage reproduces the technological aging of the binder during batching of the asphalt concrete mix and construction of the pavement. The third stage simulates the operational aging of the binder.
The «Superpave» method implies that for road-climatic zones (DKZ) III and IV of the Russian Federation the binder should correspond to the American performance grade PG 64-28 (64°С being the highest temperature of zone IV and -28°С the lowest temperature of zone III). The actual characteristics of RBV were compared with the requirements of this specification. Note that, according to ASTM standards, when a binder with a temperature interval of more than 90°С is required, only modified bitumen should be used. To determine the low-temperature cracking point of the binder after operational aging, a bending beam rheometer (BBR) was used to test binder beams in bending: if the creep stiffness of the binder is too high, cracking occurs. Low-temperature cracking of RBV occurs at -43°С, whereas the regulations for the authors' region require -28°С [9]. Test results are provided in Table 4 and show that: - the compositional rubber-bitumen binder is almost unaffected by technological aging, and its rate of operational aging is significantly lower than the regulated limit; this can be explained by the fact that rubber in a highly oriented state strongly impedes the diffusion of oxygen from the environment and inhibits oxidative processes; - RBV has a wide temperature interval of reliable operation, from -43°С to +65°С, which exceeds the regulatory requirement by 16°С. The physical and mechanical properties of SMA-15 with an identical grain-size composition prepared with the compared binders are provided in Table 5.
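The quoted 16°С margin follows directly from the grade limits; a minimal check:

```python
# Check of the binder's useful temperature interval against the Superpave PG 64-28 target
# quoted for the region (values taken from the text above).
pg_high, pg_low = 64, -28            # required performance grade limits, degrees C
rbv_high, rbv_low = 65, -43          # measured limits of the rubber-bitumen binder, degrees C

required_span = pg_high - pg_low     # 92 C
measured_span = rbv_high - rbv_low   # 108 C
print(f"Required span: {required_span} C, measured span: {measured_span} C, "
      f"margin: {measured_span - required_span} C")   # margin of 16 C, as reported
```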
2019-04-30T13:07:16.384Z
2018-03-01T00:00:00.000
{ "year": 2018, "sha1": "56e3e488357c02638db2822d163dbda1b0d88336", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/327/3/032019", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "fbb18439e83e60ad591f968aee4de64bef044447", "s2fieldsofstudy": [ "Materials Science", "Engineering" ], "extfieldsofstudy": [ "Physics", "Materials Science" ] }
233277708
pes2o/s2orc
v3-fos-license
Emotion Regulation as a Mediator in the Relationship Between Cognitive Biases and Depressive Symptoms in Depressed, At-risk and Healthy Children and Adolescents Contemporary cognitive models of depression propose that cognitive biases for negative information at the level of attention (attention biases; AB) and interpretation (interpretation biases; IB) increase depression risk by promoting maladaptive emotion regulation (ER). So far, empirical support testing interactions between these variables is restricted to non-clinical and clinical adult samples. The aim of the current study was to extend these findings to a sample of children and adolescents. This cross-sectional study included 109 children aged 9–14 years who completed behavioural measures of AB (passive-viewing task) and IB (scrambled sentences task) as well as self-report measures of ER and depressive symptoms. In order to maximize the variance in these outcomes we included participants with a clinical diagnosis of depression as well as non-depressed youth with an elevated familial risk of depression and non-depressed youth with a low familial risk of depression. Path model analysis indicated that all variables (AB, IB, adaptive and maladaptive ER) had a direct effect on depressive symptoms. IB and AB also had significant indirect effects on depressive symptoms via maladaptive and adaptive ER. These findings provide initial support for the role of ER as a mediator between cognitive biases and depressive symptoms and provide the foundations for future experimental and longitudinal studies. In contrast to studies in adult samples, both adaptive as well as maladaptive ER mediated the effect of cognitive biases on depressive symptoms. This suggests potentially developmental differences in the role of ER across the lifespan. Introduction Depression is one of the most common psychiatric disorders with a lifetime prevalence of around 20% (Hasin et al., 2018). It affects more than 300 million people worldwide (World Health Organisation, 2018) and by 2030 is expected to be the leading cause of disease (World Health Organisation, 2004). Depression is characterised by persistent low mood combined with disturbances in cognition (e.g., hopelessness about the future) and physical symptoms (e.g., difficulties sleeping; American Psychiatric Association, 2013). Cognitive models underline the role of emotion regulation (ER) difficulties in the development and maintenance of depression. ER has been defined as "the processes by which individuals influence which emotions they have, when they have them, and how they experience and express these emotions" (Gross, 1998, p. 275). ER strategies can be cognitive (e.g., reappraisal of the cause of the emotion) or behavioural (e.g., distraction from the negative emotion via a different activity). A "vicious cycle" between negative emotions, cognitions and behaviours is thought to be responsible for the development and maintenance of depression. Supporting depressed individuals to use more adaptive ER strategies is a core component of many psychological interventions (Young et al., 2019). However, whereas these treatment approaches assume ER is under volitional control, there is a growing awareness of the role that automatic cognitive processes, which are not necessarily under patients' volitional control, play in ER (Joormann & Stanton, 2016;Wadlinger & Isaacowitz, 2011). The focus of this article is on the extent to which cognitive biases and ER interact in youth depression. 
Depression most commonly onsets during adolescence (Hankin et al., 1998) and adolescent depression is associated with greater chronicity (Kovacs, 1996;Lewinsohn et al., 1999;Weissman et al., 1999) and suicide risk (Harrington et al., 1990;Weissman et al., 1999). Childhood and adolescence is a time of ongoing social, cognitive and neurobiological development (Pfeifer & Blakemore, 2012), questioning the appropriateness of directly applying adult aetiological models to youth (Lakdawalla et al., 2007). Whereas negative emotions are regulated by the caregiver in early childhood, adolescence sees a reduction in the reliance on caregivers for ER (Young et al., 2019). Nevertheless, adaptive ER has limited efficacy in modifying negative emotions until early adulthood (Zimmermann & Iwanski, 2014). As such, understanding the association between cognitive biases and ER in youth depression may help to inform improved prevention and treatment methods. Emotion Regulation When faced with events which elicit negative emotions depressed individuals show systematic differences in the ER strategies they habitually use (for reviews see Sheppes et al., 2015;Aldao et al., 2010;Johnstone et al., 2007;D'Avanzato et al., 2013). Depressed individuals are more likely to ruminate (dwell) on the causes of emotional distress (e.g., "what's wrong with me?"; Nolen-Hoeksema, 2009) and less likely to reappraise the valence, cause or meaning of the event (e.g., "this could give me a chance to pursue other interests and friendships"; Gross, 1998). The fact that the former strategies are positively associated and the latter are negatively associated with depressive symptoms has led to ER strategies being dichotomised as "maladaptive" versus "adaptive". This dichotomisation is nevertheless controversial (Bonanno & Burton, 2013) since the effectiveness of ER strategies depends partly on the intensity of the stressor (Sheppes & Meiran, 2008;Troy et al., 2013) or its perceived controllability (see Haines et al., 2016). Meta-analyses demonstrate that the effect size (ES) for habitual ER strategies in depressed versus healthy adults is very large: g' = 1.12 to 2.10 for maladaptive and g' = -0.70 to -1.04 for adaptive strategies (Visted et al., 2018). Experimental and longitudinal studies of adults suggest that ER strategies predict future symptoms of depression and are not simply a by-product of the disorder (Berking et al., 2019;Ochsner et al., 2004). A meta-analysis of studies of unselected youth found that those with increased depressive symptoms more frequently used maladaptive (and less frequently used adaptive) ER strategies (Schäfer et al., 2017). Increased use of maladaptive ER strategies has also been demonstrated in clinically-depressed youth (Kullik & Petermann, 2013;Sfärlea et al., 2019a). One meta-analysis found that rumination prospectively predicted depressive symptoms in unselected youth (Rood et al., 2009). However, these effects disappeared when baseline symptoms of depression were controlled for (Rood et al., 2009). Cognitive Biases The term cognitive biases refers to quick, automatic cognitive processes which are typically assessed via indirect measures in behavioural paradigms and are biased towards negative (mood-congruent) information (Mathews & MacLeod, 2005). These can occur at the level of attention (attention biases; AB), interpretation (interpretation biases; IB) or memory (Mathews & MacLeod, 2005). 
For example, when shown an array of emotional stimuli (e.g., faces) depressed individuals and those with elevated symptoms of depression selectively attend to the negative rather than neutral or positive stimuli (Armstrong & Olatunji, 2012;De Raedt & Koster, 2010;LeMoult & Gotlib, 2019). Similarly, depressed individuals also draw more negative interpretations of emotionally ambiguous information. Meta-analyses have shown medium ES for negative AB (Hedge's g' = 0.46 -0.80; Armstrong & Olatunji, 2012;Peckham et al., 2010) and IB (g' = 0.72; Everaert et al., 2017aEveraert et al., , 2017bEveraert et al., , 2017c in adult depression. Studies of non-depressed adults (Beevers & Carver, 2003;Sanchez-Lopez et al., 2019), adults with a current (Beevers et al., 2015) or past episode (Browning et al., 2012) of depression suggest negative AB predict future negative mood and depressive symptoms. Similarly, experimental studies support the predictive role of negative IB in explaining negative mood and depressive symptoms (Jones & Sharpe, 2017;Koster & Hoorelbeke, 2015;MacLeod, 2012). Combined-cognitive bias models of depression (Disner et al., 2011;Everaert et al., 2014) argue that negative AB causally influence negative IB. When attention is easily trapped by negative stimuli, e.g., frowning faces during a talk (AB), this may directly lead to an increase in the negative interpretation of ambiguous scenarios e.g., "they do not like my talk" versus "they cannot hear me well" (IB). This has been supported by experimental studies (Sanchez et al., 2016). Individual studies show cross-sectional associations between cognitive biases and depressive symptoms in unselected and depressed youth (Kertz et al., 2019;Platt et al., 2017). However, no studies have tested for associations between AB and IB in youth. Integrative Models of Depression Contemporary cognitive models posit a causal effect of cognitive biases on ER (Gross, 1998). For example, according the process model of ER (Gross, 1998), attentional deployment (an antecedent form of ER similar to AB) determines which aspects of an emotional situation will be focused on and thus influences response-focused ER. More specifically, an AB towards negative information and/or difficulties disengaging from negative information may kindle rumination (repetitive negative thinking; Linville, 1996). Furthermore, negative IB may influence cognitive reappraisal (ER) by consuming the cognitive resources necessary for positively reappraising the event (ERJoormann & D'Avanzato, 2010; Mogg & Bradley, 2018). The hypothesis that cognitive biases influence ER is supported by a number of empirical studies. Firstly, studies which measure participants' gaze patterns (AB) when they are given a specific ER strategy demonstrate an association (cross-sectional correlation) between AB and the ER strategies reappraisal, suppression and distraction (Bebko et al., 2011;Manera et al., 2014;Strauss et al., 2016;van Reekum et al., 2007). Note that two studies found that the effects of cognitive reappraisal on emotion reactivity were independent of attentional processes (Bebko et al., 2014;Urry, 2010). As Sanchez et al. (2016) note, this may be because reappraisal is influenced by multiple factors (Morris et al., 2014). Secondly, adults who report more frequent rumination also show a negative AB (Duque & Vázquez, 2015;Joormann et al., 2006;Owens et al., 2016) or IB (Everaert et al., 2020;Mor et al., 2014). 
Thirdly, unselected students who were trained to adopt a positive AB later showed more frequent cognitive reappraisal and more positive mood (Sanchez et al., 2016). Finally, two studies have found that children who ruminate showed a negative AB (Hilt & Pollak, 2013;Romens & Pollak, 2012). Models of adult depression go one step further by proposing that ER mediates the effect of cognitive biases on depression, i.e. that the effect of cognitive biases in ER contributes to depressive symptoms (Joormann & Stanton, 2016;Wadlinger & Isaacowitz, 2011). This hypothesis is supported by cross-sectional studies of unselected adults (Everaert et al., 2017a(Everaert et al., , 2020, a longitudinal study of unselected adults and a longitudinal study of adults with a self-reported history of depression (Yaroslavsky et al., 2019). Together they found that rumination does mediate the effect of cognitive biases on depressive symptoms. Furthermore, Everaert et al. (2017a) found evidence to support the model AB IB ER depressive symptoms in unselected adults. The mediating role of ER is yet to be investigated in currently depressed adults. The hypothesis that ER mediates the effect of cognitive biases on depressive symptoms is also yet to be investigated in youth samples. The relation between cognitive biases, ER and depression may be different in youth compared to adults. Ongoing neurobiological development of brain regions associated with ER (Pfeifer & Blakemore, 2012) may mean that ER plays less of a role in mediating the effects of negative AB on depressive symptoms in childhood and adolescence (Kindt & Van Den Hout, 2001). On the other hand, ER may play an even more crucial role in determining the effects of automatic cognitive biases on depressive symptoms during this period due to the enhanced emotional sensitivity for negative information (Paus et al., 2008). The Current Study The aim of this cross-sectional study was to extend the finding that cognitive biases have their effect on depressive symptoms via alterations in ER to a well-characterized sample of youth. Although cross-sectional studies are limited in their ability to infer causal relationships, they provide important foundations for randomised controlled trials which are generally more time-and cost-intensive. Clinically-depressed youth, youth with an elevated familial risk of depression and youth with a low familial risk of depression were included in order to maximise variation (recommended by Everaert et al., 2017a). Since the psychometric properties of behavioural measures of cognitive biases are generally poor LeMoult & Gotlib, 2019), measures which had previously shown good reliability were selected (Platt et al., 2021;Sfärlea et al., 2019b). A measure of ER which included sub-scales for adaptive and maladaptive ER was chosen, since it is unclear whether mediating effects are specific to rumination (Everaert et al., 2017a) or also apply to adaptive ER strategies (Sanchez et al., 2016). The fourth hypothesis extended studies of unselected adults (Everaert et al., 2017a) and adults with a history of depression (Yaroslavsky et al., 2019) to youth, and predicted that maladaptive ER strategies would mediate the effects of AB and IB on depressive symptoms. Findings relating to the mediation of AB and IB on depressive symptoms via adaptive ER are mixed therefore this pathway was tested but no specific predictions made. 
Finally, in line with (Everaert et al., 2017a), the mediating role of IB and ER on the relationship between AB and depressive symptoms was tested ( Fig. 1). Methods Data were collected within a broader project on cognitive biases in youth depression (Platt, 2017). Data from the same project are reported elsewhere in relation to attention (Platt et al., 2021) and interpretation (Sfärlea et al., 2019b) biases in at-risk youth, as well as interpretation biases in clinicallydepressed youth (Sfärlea et al., 2020). Participants Inclusion criteria were participants who were 9-14 years old. Exclusion criteria were intelligence quotient (IQ) < 85 (CFT 20-R; Weiß, 2006), pervasive developmental disorders, attention deficit and hyperactivity disorder, and a history of schizophrenia or bipolar disorder. In order to maximise variability within the sample (see recommendation by Everaert et al., 2017a), youth had either i) current depression (MD; n = 27), ii) no psychiatric disorder but a high familial risk for depression (HR; n = 41), or iii) no psychiatric disorder and a low familial risk for depression (LR; n = 41). 1 Diagnoses according to DSM-IV (American Psychiatric Association, 2000) were assessed using standardized, semi-structured psychiatric interviews (K-DIPS; Schneider, Unnewehr, & Margraf, 2009) conducted by trained interviewers with participants and one parent. The K-DIPS has good interrater-reliability (accordance > 96% for all diagnoses; Neuschwander et al., 2013). Two thirds (72) were female, mean age 12.4 years (SD = 1.7) and mean IQ 109.2 (SD = 12.1). Three MD participants had recurrent MD, 13 had at least one comorbid anxiety disorder, and two were receiving selective serotonin reuptake inhibitors. The present data were collected within a study examining IB differences between HR and LR youth using a sample size calculation based on an effect size (ES) d = 0.6, α = 0.05 and power = 0.80 (one-tailed test). Accordingly, 36 participants were required per group. For practical reasons we did not quite achieve the required 36 MD participants, but the final sample (n = 109) reached the required total sample size and allowed us to detect a significant correlation of r > 0.27 at α = 0.05 and power = 0.80. The ethics committee of the LMU University Hospital Medical Faculty approved the study. Written informed consent was obtained from participants and parents and Fig. 1 Path model testing mediating role of ER between cognitive biases and depressive symptoms in youth. Notes: AB = Attention Bias; IB = Interpretation Bias; ER = Emotion regulation 1 Children at high familial risk for depression had at least one parent that had suffered from depression or dysthymia during the child's lifetime. Children were not included if the affected parent had a history of bipolar disorder, schizophrenia, or substance abuse but other current or past comorbidities were allowed. Of some children both parents were affected by depression but psychiatric diagnosis was not systematically assessed in the second parent. Children at low familial risk for depression had parents with no history of any mental disorder. Both parents' psychopathology was considered for these participants, whenever possible. Parental psychopathology was assessed with a standardized psychiatric interview (DIPS; Schneider & Margraf, 2011). participants were rewarded with €30 to €50. 2 MD participants were recruited though local mental health services. 
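The a-priori per-group sample-size figure described above (d = 0.6, α = 0.05, power = 0.80, one-tailed) can be reproduced approximately with standard power-analysis tools. The sketch below uses the statsmodels package and may differ slightly from the authors' software in rounding conventions.

```python
# Reproducing the a-priori power calculation described above: two-sample comparison,
# d = 0.6, alpha = .05, power = .80, one-tailed.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.6, alpha=0.05, power=0.80,
                                          alternative='larger')
print(f"Required n per group: {n_per_group:.1f} (reported: 36)")
```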
The majority of HR participants were recruited through a study of a preventive intervention for children of depressed parents (Platt et al., 2014) and LR participants were largely recruited through public advertisements. Measures Attention Bias (AB). Eye movements were recorded during a Passive Viewing Task (PVT; Harrison & Gibb, 2015). Stimuli were coloured photographs of children's faces displaying sad, angry, happy, and neutral emotional expressions from the NIMH Child Emotional Faces Picture Set (NIMH-ChEFS; Egger et al., 2011). The stimulus set comprised 24 models (50% male/female). Each trial began with a drift correction (small white circle in the centre of the screen). Upon fixation the experimenter initiated the trial. A fixation cross was presented for 1000 ms. Then the 2 × 2 stimulus array was presented for 15,000 ms. The task consisted of 16 emotional trails (corresponding to the minimum trial number suggested for ET research by Orquin & Holmqvist, 2018) and eight neutral trials (not analysed) that were presented in random order. In the emotional trials (Fig. 2), the stimulus array comprised four photographs of the same model displaying a sad, an angry, a happy, and a neutral facial expression. The position of each emotional facial expression was randomly assigned to one of the quadrants with each emotion being presented in each quadrant exactly four times. The neutral filler trails comprised four neutral photographs of the same person. Stimuli had a size of 9.5 × 7.5 cm and were presented with a distance of approximately 6.5 cm horizontally and 1 cm vertically between them. Participants were instructed to freely view the stimuli keeping their attention on the screen. Eye movements were registered with an EyeLink 1000 Plus desktop-mounted eye-tracker (SR Research, 2013b). Participants were seated in front of a 15inch monitor (1024 × 768 pixel resolution). The experiments were presented using Experiment Builder 1.10 (SR Research, 2013a). Viewing was binocular while eye movements were registered from the dominant eye with a sampling rate of 1000 Hz. A forehead and chin rest were used to minimize head movements and keep the viewing distance (65 cm) constant. The lighting in the room was constant. Before the task commenced, a 9-point calibration and validation procedure was conducted and required average error less than 0.5° of visual angle and maximum error less than 1° of visual angle. Saccades were defined as events with a velocity above 30°/s or an acceleration above 8000°/s 2 (e.g., Skinner et al., 2018;Waechter et al., 2014). Fixations were defined as gaze positions stable within 1° of visual angle for at least 100 ms (e.g., Duque & Vázquez, 2015). Trials in which the total dwell time was less than 75% of the presentation time (e.g., due to excessive blinks; Skinner et al., 2018) were excluded. The final sample excludes six participants who had poor performance on the AB task (< 70% valid trials; Duque & Vázquez, 2015) or systematic calibration errors. An average of 15.3 trials per participant (SD = 1.1; 96% of 16) were available for analysis. AB was defined as mean percentage of dwell time on sad faces due to its good split-half reliability (Spearman-Brown-corrected = 0.81). Interpretation Bias (IB). A computerized version of the Scrambled Sentences Task (SST; Wenzlaff & Bates, 1998) adapted by Everaert et al. (2014) was used to assess the tendency to form negative or positive statements out of ambiguous information. 
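Before the SST details, a minimal sketch of how the dwell-time AB index described above (mean percentage of dwell time on the sad face across valid free-viewing trials) can be computed; the input format and numbers are illustrative, not the study's data.

```python
# Minimal sketch of the attention-bias index: mean percentage of dwell time on the sad face.
# Input format is assumed: one dict per valid trial with total dwell time (ms) per expression.
trials = [
    {"sad": 4200, "angry": 3100, "happy": 3900, "neutral": 2800},   # illustrative values
    {"sad": 3600, "angry": 3500, "happy": 4100, "neutral": 3000},
]

def ab_index(trials):
    percentages = []
    for t in trials:
        total = sum(t.values())
        percentages.append(100.0 * t["sad"] / total)
    return sum(percentages) / len(percentages)

print(f"AB index (% dwell time on sad faces): {ab_index(trials):.1f}")
```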
Interpretation Bias (IB). A computerized version of the Scrambled Sentences Task (SST; Wenzlaff & Bates, 1998), adapted by Everaert et al. (2014), was used to assess the tendency to form negative or positive statements out of ambiguous information. The stimuli consisted of 30 emotional (e.g., "total I winner a loser am") and 20 neutral (e.g., "like watching funny I exciting movies") scrambled sentences (full stimulus set described in Sfärlea et al., 2019b). All sentences contained six words and had two possible solutions. In emotional trials, one solution was positive whereas the other was negative; in neutral trials, both solutions were neutral. The trial procedure is depicted in Fig. 3. Participants were instructed to read the words, mentally form a grammatically correct five-word sentence as quickly as possible, and click the mouse button as soon as they did so to continue to the response part of the trial. The scrambled sentence was presented for a maximum of 8000 ms; if no mouse click occurred during that time, the response part was omitted and the next trial began. In the response part, five boxes appeared below the scrambled sentence, and participants were required to build the sentence they had mentally formed by ordering the words into the five boxes via mouse click. The 50 trials were randomly divided into five blocks of ten, each containing six emotional and four neutral trials presented in random order. Before the first block, participants completed five practice trials to familiarize themselves with the task. Similar to earlier studies (e.g., Burnett Heyes et al., 2017; Everaert et al., 2014), a cognitive load procedure was included to prevent deliberate response strategies: before each block, a 4-digit number was presented for 5000 ms, which had to be memorized and recalled at the end of the block.

Participants' responses were rated as correct or incorrect. Trials in which no grammatically correct sentence was built (time-out or incorrect sentence) were excluded. The final sample excludes two participants who had poor performance (< 3 SD below the mean). An average of 26.2 correct emotional trials (SD = 2.9; 87%) per participant were analysed. The IB score was calculated as the proportion of negatively resolved sentences out of the total number of correctly resolved emotional sentences (Everaert et al., 2014). Split-half reliability of the task was assessed by correlating IB scores based on odd versus even trials (see e.g., Van Bockstaele et al., 2017) and was excellent (Spearman-Brown-corrected: 0.94). The task was administered during ET in order to simultaneously assess AB (Everaert et al., 2014), but the AB index had unacceptable split-half reliability in our sample and was therefore not analysed.
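The score itself is a simple ratio over the correctly resolved trials; a minimal sketch (hypothetical 0/1 response matrices, not the authors' code) follows, and the odd/even Spearman-Brown reliability can be computed exactly as in the AB sketch above.

```python
# Minimal sketch (not the authors' code) of the IB score. `negative` is a
# hypothetical 0/1 matrix (participants x emotional trials; 1 = sentence
# resolved negatively) and `correct` a boolean mask of grammatically correct
# trials; time-outs and incorrect sentences are excluded via the mask.
import numpy as np

def ib_score(negative, correct):
    """Proportion of negatively resolved sentences among correct trials."""
    return np.where(correct, negative, 0).sum(axis=1) / correct.sum(axis=1)

rng = np.random.default_rng(1)
negative = rng.integers(0, 2, size=(109, 30))    # 30 emotional trials
correct = rng.random(size=(109, 30)) < 0.87      # ~87% correct, as reported
print(ib_score(negative, correct).mean())
```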
Emotion Regulation (ER). The German questionnaire FEEL-KJ (Grob & Smolenski, 2005) contains 90 items assessing self-reported habitual use of seven adaptive and five maladaptive strategies to regulate anxiety, fear, and sadness in youth. Each item is rated on a five-point scale according to how often the strategy is applied to regulate each of the emotions. Sum scores for adaptive and maladaptive strategies across all emotions were calculated; the sum score for adaptive strategies can take values from 42 to 210 and the sum score for maladaptive strategies from 30 to 150. The questionnaire authors report good internal consistency of the adaptive (Cronbach's α = 0.98) and maladaptive (Cronbach's α = 0.82) sub-scales in a sample of unselected children and adolescents (Grob & Smolenski, 2005). Internal consistency in our sample was excellent (Cronbach's αs ≥ 0.91). Good external validity has been indicated by strong correlations between depressive symptoms and the adaptive (r = -0.40, p < 0.001) and maladaptive (r = 0.35, p < 0.001) sub-scales in a large sample of unselected Belgian children and adolescents aged 8 to 18 years (Cracco et al., 2015).

Experimental Procedure

Tasks were administered in random order. As cognitive models of depression suggest that cognitive vulnerabilities such as negative biases are activated by negative mood (e.g., Disner et al., 2011; Scher et al., 2005), a mood induction procedure was administered twice during the experimental session. Participants watched a 2-min scene from the movie The Lion King (Hahn et al., 1994) known to successfully induce unpleasant mood in children (von Leupoldt et al., 2007). Participants in this study reported significantly worse mood after watching the movie scene (ts ≥ 8.0, ps < 0.001). The exact course of the experimental session, including the results of the mood induction, is reported in Sfärlea et al. (2019b; Supplement 5).

Data Analysis

Zero-order correlations were calculated to test the first three hypotheses, assessing direct relationships across the whole sample between i) each of the cognitive variables (AB, IB, maladaptive ER, adaptive ER) and depressive symptoms, ii) AB and IB, and iii) each of the biases and each form of ER. ES interpretations are based on Cohen's (1992) recommendations for r values: a small effect for r ≥ 0.10, medium for r ≥ 0.30, and large for r ≥ 0.50.

A path model (see Fig. 1) was generated to test the fourth and fifth hypotheses, which assumed a mediating role of ER in the relationship between iv) cognitive biases and depressive symptoms and v) AB → IB (mediator) → ER (mediator) → depressive symptoms across the whole sample. The path model was estimated using AMOS (Version 25) with maximum likelihood estimation. Model fit was assessed with the Comparative Fit Index (CFI) and the Root Mean Square Error of Approximation (RMSEA); a model is regarded as having good fit for CFI > 0.95 and RMSEA < 0.05. As we did not have a priori hypotheses for the direct effects of AB or IB on depressive symptoms, we performed model selection according to the information criteria, Akaike's information criterion (AIC) and Bayesian information criterion (BIC), with a lower value indicating better model fit. We started the model selection with the perfect mediation model, in which we assumed no direct effect of AB or IB on depressive symptoms. In search of the best-fitting model, we added either or both of the direct effects (Table 1). If an added direct path improved the model fit, we concluded that it made a significant contribution to explaining the observed data (and thus should be included in the model). The results of the model selection revealed that the addition of the direct path from IB to depressive symptoms improved the model fit. On this best-fitting model with the direct effect of IB, we tested the statistical significance of the individual path coefficients as well as the indirect effects of IB and AB on depressive symptoms via ER. Error intervals of the indirect effects were estimated through bias-corrected bootstrapping with 1000 iterations.
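Outside of AMOS, the same kind of interval can be obtained by hand. The sketch below is our illustration (not the authors' code) of a bias-corrected bootstrap for a single indirect effect a·b, estimated from two plain OLS regressions:

```python
# Minimal sketch (our illustration; the reported analysis used AMOS) of a
# bias-corrected bootstrap for a single indirect effect a*b, e.g.
# IB -> maladaptive ER -> depressive symptoms. Variable names are illustrative.
import numpy as np
from scipy import stats

def indirect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                        # slope of M on X
    design = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(design, y, rcond=None)[0][2]  # slope of Y on M, given X
    return a * b

def bc_bootstrap_ci(x, m, y, n_boot=1000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    est = indirect(x, m, y)
    boots = np.empty(n_boot)
    n = len(x)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)                   # resample participants
        boots[i] = indirect(x[idx], m[idx], y[idx])
    z0 = stats.norm.ppf((boots < est).mean())         # bias-correction term
    zc = stats.norm.ppf([alpha / 2, 1 - alpha / 2])
    lo, hi = np.quantile(boots, stats.norm.cdf(2 * z0 + zc))
    return est, (lo, hi)                              # significant if CI excludes 0

rng = np.random.default_rng(2)
x = rng.normal(size=109)
m = 0.4 * x + rng.normal(size=109)   # simulated mediation for the demo
y = 0.5 * m + rng.normal(size=109)
print(bc_bootstrap_ci(x, m, y))
```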
Results

Table 2 displays means, SDs and zero-order correlations for the analysed variables. (Table 2 notes: AB attention bias, ER emotion regulation, IB interpretation bias, M mean, SD standard deviation. Correlations remain significant after Bonferroni-Holm (Holm, 1979) correction for multiple testing. *p < 0.05; **p < 0.01; ***p < 0.001.)

Hypotheses 1-3

Each of the cognitive vulnerability factors was associated with symptoms of depression (hypothesis 1). Whilst maladaptive ER was positively correlated with AB, IB and depressive symptoms (more negative biases were associated with more maladaptive ER and depressive symptoms), adaptive ER was negatively correlated with AB, IB and depressive symptoms. AB and IB were positively associated (hypothesis 2). AB and IB were positively associated with maladaptive ER and negatively associated with adaptive ER (hypothesis 3).

Path Analysis - Direct Effects

The estimated path coefficients (direct effects) are shown in Fig. 4. As previously stated, the model selection procedure found that the best-fitting model did not include the direct effect of AB on depressive symptoms, suggesting that the zero-order correlation observed between AB and depressive symptoms (Table 2) is likely to represent an indirect effect (mediated by another factor). The path model also suggested no direct association between AB and adaptive ER.

Path Analysis - Mediation (Indirect) Effects

The estimated indirect (mediation) effects testing hypotheses four and five are presented in Table 3, with statistically significant effects indicated when zero was not included in the 95% CI. Maladaptive ER mediated the effect of AB on depressive symptoms and of IB on depressive symptoms (hypothesis 4). Adaptive ER also mediated the effect of IB on depressive symptoms, but not of AB on depressive symptoms. The full path effect (AB → IB → ER → depressive symptoms) was significant for both adaptive and maladaptive ER (hypothesis 5).

Summary of Results

Cognitive models of depression propose that negative AB and IB increase depression risk by promoting maladaptive ER. The aim of the current study was to extend empirical findings supporting these models in adults to a sample of 109 youth, including those with a diagnosis of depression. The data supported a mediating role for both maladaptive and adaptive ER. There was also evidence of a direct effect of IB on depressive symptoms.

Interpretation of Findings

In line with our first hypothesis, all four cognitive constructs were significantly associated with depressive symptoms. The strongest association with depressive symptoms was observed for IB (large ES), supporting a meta-analysis of IB in adult depression (Everaert et al., 2017c). Associations between adaptive and maladaptive ER and depressive symptoms were also of a large ES, supporting meta-analyses of ER in adults (Aldao et al., 2010; Visted et al., 2018) and youth (Schäfer et al., 2017). The medium ES for AB fits with previous literature (Peckham et al., 2010), although our path analysis suggested that the direct effect of AB on depressive symptoms is likely to be minimal. Our second hypothesis was also supported: in line with the combined cognitive bias models of adult depression (Disner et al., 2011; Everaert et al., 2014), we showed for the first time that AB and IB are positively related in youth. Finally, in line with our third hypothesis, we found associations between cognitive biases and adaptive and maladaptive ER.
This supports previous (adult) studies of AB and cognitive reappraisal (Bebko et al., 2011; Manera et al., 2014; Strauss et al., 2016; Van Reekum et al., 2007), emotional suppression (Bebko et al., 2011), distraction (Strauss et al., 2016) and rumination (Duque et al., 2014; Joormann, Dkane, & Gotlib, 2006; Owens et al., 2016). Similarly, it supports studies in adults showing associations between IB and rumination (Everaert et al., 2020; Mor et al., 2014) and cognitive reappraisal (Everaert et al., 2020).

Empirical research testing the hypothesis that ER mediates the effect of cognitive biases on depression has been limited to cross-sectional studies in adults. These studies suggest that rumination (but not cognitive reappraisal) mediates the effect of negative AB and IB on depressive symptoms (Everaert et al., 2017a; Everaert et al., 2020; Sanchez-Lopez, Koster, Van Put, & De Raedt; Yaroslavsky et al., 2019). Our data also support a mediating role for maladaptive ER. In line with Everaert et al. (2017a), we also observed a direct effect of IB on depressive symptoms, suggesting that negative interpretations both contribute directly to depressed mood and support maladaptive ER. Taken together with findings from studies in adults, our study suggests one subtle difference in the mediating role of ER between adult and youth samples: we found that negative cognitive biases also had an effect on depressive symptoms by hampering adaptive ER. As previously stated, in early adolescence children are less reliant on their parents to regulate their emotions, yet their own adaptive ER is still ineffective (Zimmermann & Iwanski, 2014). As a result, they may be particularly sensitive to the effects of negative cognitive biases.

Can ER be considered a mediator in the relationship between cognitive biases and depressive symptoms in youth? The current data give some initial indication to support this idea, but further research including longitudinal designs should follow in order to establish causality and the extent to which bi-directional relationships between cognitive biases and ER are in operation. As Kazdin (2007) notes, temporal precedence of the cause (cognitive bias) and mediator (ER) over the outcome (depressive symptoms) is needed to truly establish mediation. No longitudinal study has investigated whether changes in cognitive biases predict subsequent changes in ER and depressive symptoms. A second requirement is experimental manipulation of the cause and proposed mediator (Kazdin, 2007). Adult studies show that modifying ER is associated with improvements in depressive symptoms (e.g., Berking et al., 2019) and that modifying AB results in increased use of cognitive reappraisal (Sanchez et al., 2016). Similar studies in youth samples are needed. Experimental studies may also be helpful in testing bi-directional relationships between the variables. For example, some authors have argued that a tendency to draw negative interpretations directs future attention (Sanchez et al., 2015) and that maladaptive forms of ER may lead to more negative IB (Hilt & Pollak, 2013). Since consistency of findings across multiple studies and samples is another requirement (Kazdin, 2007), our study provides a valuable contribution to the literature by extending findings in adults to a sample of youth including those with clinical depression. The current findings also fulfil the requirement for strong relationships (Kazdin, 2007) between IB and ER and between ER and depressive symptoms.
One additional finding which deserves attention is that IB partially mediated the effect of AB on depressive symptoms. This finding replicates a similar cross-sectional study in adults (Everaert et al., 2017a). More broadly, the finding also supports combined cognitive bias models of depression, which suggest that when attention is trapped by negative stimuli, these stimuli promote negative interpretations of ambiguous scenarios (Disner et al., 2011; Everaert et al., 2014). As far as we are aware, this is the first study to demonstrate a significant association between AB and IB in youth and the first study to suggest that IB partially mediates the effects of AB on depressive symptoms in youth.

Strengths and Limitations

A strength of the current cross-sectional study is that it provides the foundations for future experimental studies designed to determine causal relationships between cognitive biases, ER and depressive symptoms in youth with depression. A major strength of the study was the use of measures with good to excellent psychometric properties, particularly given the generally poor reliability of behavioural and ET measures (LeMoult & Gotlib, 2019). The sample also included participants with a clinical diagnosis of depression as well as those with an elevated familial risk of depression. This increases the power of the study to detect effects, due to increased variability in the data, and improves the external validity of the study. A final strength is the use of standardised psychiatric interviews which had good inter-rater reliability.

As previously mentioned, the cross-sectional nature of the study is a limitation since it prevents causal inferences from being drawn. In addition to experimental studies, longitudinal studies will be helpful in determining the directionality of effects, for example, whether IB influences ER or vice versa. Of note, one cannot rule out the possible influence that top-down processes, such as ER, had on our measures of AB and IB. Although both included a time limit and the SST included a cognitive load procedure, ER processes may nevertheless have influenced participants' scores on the AB and IB measures. Indeed, we are also unable to rule out the influence that AB and IB had on the scores participants obtained on the self-report ER measures. The relatively modest sample size is a further limitation of the current study. We also cannot be sure that our findings are specific to symptoms of depression versus anxiety. Thirteen of the 27 MD patients had comorbid anxiety, which is comparable with expected rates of comorbidity (Essau, 2005). When recalculating the path analysis excluding the participants with comorbid anxiety disorders in an exploratory post-hoc analysis, all direct effects between the study variables remained significant except the association between AB and IB. However, we refrain from interpreting this post-hoc finding, since the statistical power of the analysis was compromised (> 10% reduction in sample size) and the variability in depressive symptoms was reduced (just 14 MD patients were left in the sample). Future studies with larger sample sizes might investigate whether this is an artefact or whether the AB-IB relationship is indeed dependent on anxiety. However, studies seeking to disentangle the effects of depression and anxiety by directly comparing samples of youth with pure depression versus anxiety are limited, since comorbid anxiety is considered the rule rather than the exception in youth depression (Essau, 2005; Nottelmann & Jensen, 1999).
Clinical Implications

Since most psychotherapeutic interventions for youth depression include elements of ER (Young et al., 2019), the current findings highlight the need for psychotherapists to be aware of the powerful influence that cognitive biases have on ER. Combined with data from studies in adult samples (Everaert et al., 2020; Yaroslavsky et al., 2019), they suggest that if underlying cognitive biases are not addressed in psychotherapy, effective ER may be hampered. Studies which address the extent to which traditional therapeutic approaches modify cognitive biases could be helpful in this endeavour. Furthermore, therapeutic approaches which combine explicit therapeutic techniques (e.g., cognitive restructuring) with techniques targeting automatic cognitive biases (e.g., cognitive bias modification) may be particularly effective. Finally, if negative cognitive biases have their influence on depressive symptoms by hampering adaptive ER (Joormann & D'Avanzato, 2010), then training cognitive biases during situations that evoke negative emotions (and therefore require adaptive ER) may be more effective than training cognitive biases in non-emotional situations.

Future Research

As previously mentioned, future experimental and longitudinal studies will be helpful. Across study designs, we encourage researchers to select measures based on psychometric properties. One important construct in cognitive models of depression which may be useful to include in future studies is cognitive control. People with (high risk of) depression have deficits in cognitive control (Grahek et al., 2018), which may contribute to increased use of rumination, hamper cognitive reappraisal (Cohen & Ochsner, 2018; Joormann & Siemer, 2011; LeMoult & Gotlib, 2019) and heighten negative cognitive biases (Everaert et al., 2017b) by failing to remove irrelevant negative information from working memory (Joormann et al., 2007). Since the relation between cognitive biases, ER and depression may vary across the lifespan, future developmental studies which specifically address age-related changes could be particularly informative.

Conclusions

Contemporary cognitive models of depression posit that negative cognitive biases influence depression by altering ER processes. We provide the first empirical evidence to support this model in youth. In contrast to the adult literature, in which negative cognitive biases increase the use of maladaptive ER, we found evidence that for youth, negative cognitive biases also hamper adaptive ER. Our findings require replication in experimental and longitudinal designs but provide preliminary evidence that interventions designed to improve ER for youth with depression are likely to be enhanced if they also address underlying negative cognitive biases.
2021-04-18T06:16:15.802Z
2021-04-16T00:00:00.000
{ "year": 2021, "sha1": "cec0bc27469ef569d3f5be71be0c088b6f568001", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s10802-021-00814-z.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "31d21e04f81ecc25300e32919120a7366d649616", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
222284288
pes2o/s2orc
v3-fos-license
Biopolymers as sustainable metal bio-adhesives

We describe the use of biopolymers, such as sodium alginate, as a sustainable adhesive binder for several metals and high-density polyethylene. Standard pull tests and peel tests were performed with disks made of different materials and sizes. Adhesive failure was investigated by varying the amount of applied alginate solution, drying time, drying temperature, surface area, and the nature of the adherend. Alginate adhesion was remarkably strong, relatively general, and sensitive to the presence of water. A brief comparison with other biopolymers is also provided.

(Binoy Maiti and Alex Abramov contributed equally to this work.)

| INTRODUCTION

During the last decade, great emphasis has been placed on the need to improve the sustainability of industrial chemical products and processes. Bio-based materials are increasingly attractive because of their enhanced environmental footprint. 1 This is as true for adhesives as for any other class of materials. 2 Adhesives usually contain a polymer that either covalently links to the material or uses extensive physicochemical attraction forces to enable a connection. Often the adhesive is applied in solution, and the adhesive joint is formed during or after solvent evaporation. A good example is provided by epoxy-diamine materials that are used in a wide variety of applications, including metal adhesives and paints. [3][4][5] Liquid monomers, or unreacted epoxide groups in the polymers, can react with metallic surface oxide/hydroxide groups to form chemical bonds between the polymer and the surface during curing, thus enhancing adhesion. Roughening of the metal surface, such as by etching, anodization, plasma treatment, or acid treatment, enhances the production of a metal oxide layer and thereby covalent and noncovalent interactions with the adhesive polymer. This was nicely illustrated by Yoshida and Ishida in their exploration of the cure behavior, on copper, steel, and aluminum surfaces, of commercially available Epon 828 epoxy resin, consisting of a typical diglycidyl ether from epichlorohydrin and bisphenol A. [6][7][8] Polyurethanes constitute another important class of metal adhesives. 9,10 In both cases, a switch from organic solvents to water-soluble or high-solid adhesives would avoid harmful evaporation during production as well as maintain lifetime and minimize disposal costs and environmental problems.

Natural products are a versatile source of water-soluble materials that can function as adhesives. Well-known representatives include dextrin, gelatin, casein, and starches. 11 For example, Imam et al. formulated a replacement for phenol-formaldehyde resin using starch, polyvinyl alcohol, and hexamethoxymethylmelamine with citric acid as catalyst. 12 The advantages of this system included its formaldehyde-free nature, low cost, and lack of environmental footprint, since starch could be obtained in large amounts from commodity crops. Other examples of water-soluble bio-based adhesive materials include the scleroprotein collagen, which is collected from animal tissue and has been used for thousands of years. 13 We focus here on alginates as inexpensive, biodegradable, and biocompatible anionic polysaccharides with low toxicity.
Alginate is widely used as a food additive and has also been employed in such varied materials applications as binders for composite wood- and cotton fiber-based building insulation materials, 14 additives to improve the adhesive strength of polyamide adhesive (Fix™), 15 and hydrogel adhesives for cell encapsulation. 16,17 Inspired by these reports, but noticing a lack of testing of simple alginate, we explored sodium alginate as an adhesive binder for several metals and high-density polyethylene (HDPE). Simple pull and peel tests (Figure 1) were performed with different disks while varying parameters such as the amount of applied alginate solution, drying time, drying temperature, surface area, and the material being adhered.

Figure 1: Illustration of adhesive strength measurements by (a) pull test and (b) peel test.

| Materials

Alginates having low molecular weight (viscosity: 4-12 cps, 1% in H₂O at 25 °C) and high molecular weight (viscosity: 1000-1500 cps, 1% in H₂O at 25 °C) were obtained from Sigma-Aldrich and Alfa Aesar, respectively. Methyl cellulose (MeCel, viscosity: 400 cps) and gelatin (from porcine skin, gel strength 300, type A) were obtained from Sigma-Aldrich. Dextran 40 (MW 40,000) was obtained from TCI. Sodium hyaluronate (Hyal) was a gift from Novozymes Biopharma, Denmark. The commercial metal glue used for comparison (brand name UHU-Metal) was obtained from UHU GmbH & Co, Bühl, Germany. Copper, brass, aluminum, titanium, steel, cast iron, and HDPE test samples were prepared by the workshop of Universität Regensburg, Germany. Each material was obtained in the form of 4 cm diameter rods, which were cut into equal 1.5 cm pieces, drilled to make a hole in the middle, and equipped with a hook for pull test measurements.

| Surface and sample preparation

Each metal surface was cleaned by brief sanding with aluminum oxide sanding paper (120 followed by 180 grit) to remove surface contaminants, followed by exposure to concentrated H₂SO₄ for 1 min and then rinsing with water, ethanol, and acetone. Alginate solutions in distilled water were freshly prepared before use. Previous experiments showed very similar results with phosphate-buffered saline buffer. The surfaces were glued by applying different concentrations of alginate solution to one of the two adherends, which was then immediately placed in contact with the other adherend, and the materials were then allowed to dry for the prescribed period of time. Failure load testing was performed in two modes (pull test and 90° peel test, Figure 1) by hanging weights from the hook for 1.5 min, increasing in increments of 2.5 kg from 1 to 28.5 kg.

| RESULTS AND DISCUSSION

A standard pull test was repeated multiple times with brass disks using different amounts of applied alginate solution (Table S1, Figure 2). When the amounts of adhesive were insufficient to cover all of the metal surface (25, 50, and 75 mg), the amount of weight tolerated before failure varied widely. In contrast, use of 100 mg of alginate solution (7.0 mg alginate + 93 mg water), sufficient to completely cover the adherent metal surfaces, provided consistently strong bonding. The drying time of this alginate adhesive after surface contact proved to be an important factor: poor results were observed when the brass plates were tested 6 or 24 h after application of the adhesive (Table S2, Figure 3). However, after 48 h, all the test samples were properly dried, and all exceeded the maximum load that could be measured with our apparatus.
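Given the disk geometry and the maximum hanging mass, the corresponding nominal adhesive stress is easy to estimate. The following minimal sketch is our illustration (the paper reports loads, not stresses) and assumes the load is carried uniformly over the full disk face:

```python
# Minimal sketch (our illustration, not from the paper) converting a pull-test
# failure load into a nominal adhesive stress, assuming uniform loading over
# the full disk face.
import math

def nominal_stress_mpa(load_kg, disk_diameter_cm):
    force_n = load_kg * 9.81                           # weight of the hung mass
    area_m2 = math.pi * (disk_diameter_cm / 200) ** 2  # disk face area
    return force_n / area_m2 / 1e6                     # Pa -> MPa

# The apparatus maxes out at 28.5 kg; on a 4 cm disk this is only a lower bound:
print(f"{nominal_stress_mpa(28.5, 4):.2f} MPa")        # approx. 0.22 MPa
```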
The drying time could be shortened by curing adhered disks in an oven at 60 °C for 12 h, giving the same maximum load-bearing performance. Alginate adhesion of brass disks retained full strength after standing for at least 2 weeks under ambient conditions, or at 90 °C for 3 days. However, the adhered blocks were released by delamination of the adhesive after 20 min of soaking in water. The use of low- versus high-molecular-weight alginate at 2 wt% (the maximum concentration for high-molecular-weight alginate in water) did not seem to make much difference in adhesive performance, but more studies are necessary to definitively explore this variable. All other experiments described here were performed with low-molecular-weight alginate.

As expected, the contact surface area was found to have a large impact on adhesion strength as measured by the peel test, which is more sensitive to failure by crack propagation. 18 We tested seven materials (six metals and HDPE; Figure 4, Tables S3 and S4) and found alginate to provide excellent resistance against the pull test for all of them over the larger surface area (4 cm diameter), and for all but polyethylene with the smaller surface contact (2 cm diameter). The peel test revealed significant differences: aluminum, brass, and copper provided strong adhesion (Tables S5 and S6). In all cases, pretreatment by light sanding and strong acid was necessary for effective adhesion (Figure S1). Additionally, we investigated the adhesive failure of other biopolymers (Figure S2) and found that alginate, Hyal, and gelatin provide excellent resistance against the pull test, whereas dextran and methyl cellulose showed comparatively lower adhesive strength. For comparative purposes, we also used commercially available UHU metal glue, which showed the same maximum load-bearing performance.

Examination of disk surfaces after adhesion and pull test separation revealed patterns of adhesive failure (characterized by separation of the adhesive from the disc surface), rather than cohesive failure (Figure 5).

Figure 5: Different material surfaces (4 cm diameter) after pull test failure.

Fourier transform infrared (FTIR) spectroscopy of adhesive material removed after this failure (Figure 6) showed only small variations in the position of the characteristic carboxylate asymmetric stretching band at approximately 1596 cm⁻¹, suggesting no significant difference from the expected metal (sodium or other) carboxylate moiety. 19

Figure 6: FTIR spectra of pure sodium alginate (alg) and scratched alginate samples from the different metal plates after failure.

Unfortunately, FTIR cannot provide meaningful information on the failure mechanism. While failure is likely to involve intermolecular interactions involving the carboxylate residues, only a tiny proportion of the total number of carboxylate groups would be affected during adhesive failure. Furthermore, a primary determinant of the observed peak position is the counterion, and in our case (unlike examples of cation exchange 19) the cation remains sodium throughout. Nevertheless, some plausible failure modes may include, at least, (a) rupture of intermolecular interactions, such as between surface metal-hydroxyls and adhesive carboxylate groups; and (b) plasticization of the adhesive layer under stress, presumably with the assistance of water vapor adsorbed from the air.
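For readers who wish to repeat the FTIR comparison on their own exported spectra, a minimal peak-location sketch is given below (the CSV file name and two-column format are hypothetical, not from the paper):

```python
# Minimal sketch (hypothetical CSV export; not from the paper) locating the
# asymmetric carboxylate stretch near 1596 cm^-1 in an FTIR spectrum.
import numpy as np
from scipy.signal import find_peaks

wavenumber, absorbance = np.loadtxt("alginate_ftir.csv", delimiter=",", unpack=True)
window = (wavenumber > 1500) & (wavenumber < 1700)  # search around the band
peaks, _ = find_peaks(absorbance[window], prominence=0.01)
print("carboxylate band candidates (cm^-1):", wavenumber[window][peaks])
```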
| CONCLUSION

These data show that simple solutions of sodium alginate serve as effective adhesives for a variety of metal surfaces, and are somewhat less powerful but still substantial adhesives for polyethylene. To our knowledge, this is the first description of such a phenomenon, although it is not surprising given the fact that alginate contains many functional groups able to interact noncovalently with oxidized or acid-etched surfaces. Adhesion requires drying, either slowly at room temperature or faster at elevated temperatures, and the adhesive interaction can be disrupted by treatment with water. Alginates warrant further study as potential inexpensive and strong metal adhesives when extended curing times can be tolerated.

SUPPORTING INFORMATION

Additional supporting information may be found online in the Supporting Information section at the end of this article.
2020-08-27T09:06:52.317Z
2020-08-20T00:00:00.000
{ "year": 2020, "sha1": "13e99333fc5708f21e9716ec6ae6d7a4c6fd0ef7", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/app.49783", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "289bda7d0d46286a71b03d8b369c58a6c322b651", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Materials Science" ] }
235635189
pes2o/s2orc
v3-fos-license
Experiences and lessons learned from two virtual, hands-on microbiome bioinformatics workshops

In October of 2020, in response to the Coronavirus Disease 2019 (COVID-19) pandemic, our team hosted our first fully online workshop teaching the QIIME 2 microbiome bioinformatics platform. We had 75 enrolled participants who joined from at least 25 different countries on 6 continents, and we had 22 instructors on 4 continents. In the 5-day workshop, participants worked hands-on with a cloud-based shared compute cluster that we deployed for this course. The event was well received, and participants provided feedback and suggestions in a postworkshop questionnaire. In January of 2021, we followed this workshop with a second fully online workshop, incorporating lessons from the first. Here, we present details on the technology and protocols that we used to run these workshops, focusing on the first workshop and then introducing changes made for the second workshop. We discuss what worked well, what didn't work well, and what we plan to do differently in future workshops.

Introduction

QIIME 2 (https://qiime2.org) is a widely used microbiome bioinformatics platform, with users around the world working across all areas of microbiome research. Since its initial conception, the developers and others have taught hands-on workshops around the world in a wide variety of formats. These have sometimes been held at universities or research institutions for local teams of microbiome researchers; sometimes timed with microbiome-focused conferences as either official conference events or unofficial sessions before or after a conference, such as the Soil Science Society of America (November 2, 2014) or the International Society for Microbial Ecology meetings (August 18, 2012 and August 30, 2014); and sometimes held in collaboration with nonprofit educational organizations such as the Foundation for Advanced Education in the Sciences (FAES) at the National Institutes of Health (NIH; February 22 to 23, 2018, December 12 to 14, 2018, and January 8 to 10, 2020). Workshop costs, such as instructor travel and cloud computing expenses, are often covered by the hosting institution or by the participants through a registration fee.

Components of recent in-person workshops

Our recent in-person workshops are composed of different components (bolded in the following text) that we hoped to reproduce in an online event. The schedule of our most recent in-person workshop is presented in S1 Fig. Lectures covering basic theory on QIIME 2 [1], microbiome research, and bioinformatics are a large component. These typically cover topics such as approaches for sequence quality control, diversity metrics, taxonomic assignment methods, differential abundance testing, and QIIME 2's semantic type and data provenance tracking systems. Interspersed with our lectures, we have hands-on tutorial sessions where we guide participants through running QIIME 2 on a single tutorial data set that is used throughout the workshop. These tutorial data are derived from an actual microbiome study but filtered to a small fraction (around 10%) of the full data set to enable quick analysis. We deploy a shared compute cluster on Amazon Web Services (AWS) for the workshop, and all participants are given their own login credentials on this server. After a topic is introduced in a lecture, we guide participants through applying that new knowledge and interpreting the results. For example, in a lecture on beta diversity metrics, we cover unweighted UniFrac [2], how it is related to other diversity metrics, and how it is computed, using an example simple enough to be worked with pencil and paper. We then have participants connect to the workshop cluster, and we guide them through using QIIME 2 to compute unweighted UniFrac on the tutorial data set and generate statistical and visual summaries of the results. They view the results that they generated using QIIME 2 View (https://view.qiime2.org), and we discuss the interpretation of those results as a group.
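For readers who want to reproduce this step outside the workshop cluster, a minimal sketch using QIIME 2's Python API follows; the workshop itself teaches the equivalent command-line interface, the file names follow the tutorial's conventions, and exact action names can vary between QIIME 2 releases.

```python
# Minimal sketch using QIIME 2's Python API; the workshop teaches the
# equivalent command-line interface. File names follow the tutorial's
# conventions, and exact action names can vary between QIIME 2 releases.
from qiime2 import Artifact
from qiime2.plugins import diversity

table = Artifact.load("table.qza")        # feature table of sequence counts
tree = Artifact.load("rooted-tree.qza")   # rooted phylogeny of the features
result = diversity.actions.beta_phylogenetic(
    table=table, phylogeny=tree, metric="unweighted_unifrac"
)
# Save the distance matrix; the .qza can then be summarized and visualized.
result.distance_matrix.save("unweighted-unifrac-distance-matrix.qza")
```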
In addition to the lectures and hands-on work, we host dedicated question and answer (Q&A) sessions on general topics in microbiome research or QIIME 2. These are often the sessions of our workshops that participants report as the most valuable. To expand on the success of the Q&A sessions, we have recently begun to facilitate "watercooler chats," where participants and instructors convene in small groups during coffee and lunch breaks to discuss topics of common interest that might not be relevant to all participants. An example of such a topic might be protocols for analyzing data generated with a new sequencing technology. Finally, in our longer workshops, we sometimes schedule a poster session, where attendees can present their own work, and a day of parallel sessions on topics of more specialized interest, such as analysis of fungal communities or developing QIIME 2 plug-ins. Parallel sessions allow participants to pick a path that suits their needs.

Initial workshop plan and modifications for COVID-19

In 2019, the Caporaso Lab at Northern Arizona University received a grant from the Chan Zuckerberg Initiative (CZI), which included support for hosting a co-convened user and developer workshop in Latin America in collaboration with CABANA (https://www.cabana.online/), a project focused on increasing bioinformatics capacity in Latin America. We planned to test a new model for our workshops that would serve in part to foster interactions between the QIIME 2 user and developer communities. We planned to run a 5-day workshop where all participants would be invited to attend for all 5 days. Days 1 and 2 would be targeted toward users of QIIME 2, following the schedule of our successful 2-day QIIME 2 workshops. Days 4 and 5 would be targeted toward software developers and would cover topics such as plug-in and interface development and software testing. Day 3 would bring these communities together with a poster session, small group projects such as data analysis sprints, and lectures on topics of general interest, such as teaching QIIME 2 (many workshop participants subsequently teach informal workshops at their home institutions). We expected that some participants would not attend for the full week, but we planned to only accept participants who could be present for at least 3 of the 5 days to ensure full participation on Day 3.

This plan was disrupted by the Coronavirus Disease 2019 (COVID-19) pandemic, and, in March of 2020, we worked with CZI to reenvision our workshop as an online event. We had never taught an online QIIME 2 workshop before, but we had received many requests for one.
We saw that this could still be viewed as an opportunity to develop a new model for our workshops that would let us reach a more diverse audience, such as individuals who couldn't travel to a workshop due to financial or family constraints, and to host a more sustainable event by reducing its carbon footprint. These are widely recognized benefits of virtual rather than in-person events [3][4][5]. Since teaching an online workshop was new for us, we chose to teach this as a user-focused workshop (as opposed to our original plan of integrating user and developer workshops) to avoid making too many changes at once. We continued our plan to host a 5-day workshop in collaboration with CABANA. We decided to offer 75 seats for students to enroll and participate interactively in the workshop. We additionally planned to stream the workshop publicly on YouTube, where other students could view the workshop in its entirety but not participate interactively. For clarity, we use the term participants to refer to the students who enrolled in the workshop and could participate interactively, and the term viewers to refer to the students who watched the livestream but were not enrolled, and so couldn't participate interactively. These terms and others are defined in Table 1.

Student recruiting and registration

The CABANA team recruited 25 participants from 7 different countries in Latin America. These participants were selected considering a balance in gender, geographical location, participation of minority groups, and how relevant the workshop would be to their work. The registration fee was US$50. In addition to the CABANA cohort, we opened up 50 additional seats to the public, charging a US$50 fee to participate in the workshop. Given the global target audience of the workshop, we recognized that US$50 might not be affordable for many people who may have been interested in attending. We therefore offered a purchasing power discount: prospective participants could email to suggest an accessible price for the workshop, and we issued discount codes that those individuals could use when registering. In order to not put any additional burden on these requestors, we did not solicit supporting material to evaluate their requests; instead, this was a "no questions asked" approach. We had approximately 140 total requests for purchasing power discounts. All discount requests were honored at the rate requested (including completely waiving the registration fee). Moreover, 15% of our 50-person public enrollment cohort of workshop participants enrolled with purchasing power discounts. We were inspired to adopt this model by The Pragmatic Studio (https://pragmaticstudio.com/about). Other groups have also used this approach to enable global participation in online bioinformatics events [4].

The US$50 fee for a 5-day workshop was considerably lower than a typical QIIME 2 workshop fee. Grant funding for this event enabled this, as we did not need to raise funds to cover our computing or personnel expenses. Because this was our first online event, we were also more comfortable running a lower-cost event, acknowledging that our students were a test cohort for a new approach to teaching QIIME 2. We still felt that it was important to charge for attendance, to ensure that students who registered would be invested enough to show up for the workshop (a no-show at the workshop would have meant that an available seat was empty). We received overwhelming demand for the workshop.
We had initially planned to space registration out across time zones and a 1-week period, to give prospective students the opportunity to enroll at a reasonable time of day in their location and on a day when they were available. Due to an error in our registration and payment system, however, all seats were made available at the first advertised time, and all available seats sold out in under 5 minutes. This is higher than the typical demand that we experience for our workshops and in line with other groups who have reported increased demand and reaching new audiences with virtual meetings [4,6]. Enrolled participants joined the workshop from at least 25 countries (Fig 1). In addition to the participants who enrolled in the workshop, we streamed the workshop live on YouTube for free.

Table 1. Definitions of terms used in this paper.

Participant: A student in the workshop who has full access to the event (including workshop server and Slack access). These are typically the paying attendees of the workshop.
Viewer: A student in the workshop who has view-only access through the YouTube stream. These individuals are attending for free and cannot access the workshop server or Slack, but can follow along with the workshop on their own deployment of QIIME 2.
Instructor: A teacher in the workshop. These individuals have contributed in one or more of the following ways: prerecording lectures or hands-on tutorial content, providing support to participants through Slack, joining live broadcast Q&A sessions, or assisting with the technical aspects of workshop delivery.
Technical leader: The instructor who is managing the broadcast software: queuing up prerecorded videos, transitioning title screens and overlays, and ensuring that the Skype-based speakers are connected to the broadcast.
Triage leader: The instructor who is leading moderation of the Slack workspace, including directing questions to other instructors or to queues to be addressed during live Q&A sessions.
Broadcast leader: The instructor who broadcasts live throughout the day, serving as a master of ceremonies. This instructor provides opening and closing notes each day, appears between all prerecorded videos to discuss upcoming events, breaks, etc., and acts as an "interviewer" during Q&A sessions.
Control room team: The technical leader, the triage leader, and the broadcast leader.
Workshop host: The organization that is providing the funding for the workshop event or who has otherwise coordinated the event and provided logistical and administrative support such as registration management, mediating student communications, evaluation development, and data analysis.
We set up spaces on the QIIME 2 Forum for viewers to discuss workshop content among themselves, and workshop instructors were reasonably successful in supporting workshop-related forum topics asynchronously while the CZI/CABANA workshop was in progress, although we prioritized supporting participants over viewers. Participant and instructor demographics We performed a preworkshop questionnaire to compile information about our participants before we began teaching. A total of 71 out of 75 participants completed the preworkshop questionnaire, and we describe the background of the participants based on those results. A total of 50% of our participants were graduate students, 20% were postdoctoral researchers, 18% were research staff, and 6% were faculty members. When asked to report their discipline or disciplines, 63% of participants reported Microbiology, 35% reported Agriculture or Environmental Sciences, 28% reported Bioinformatics, 23% reported Biomedical or Human Health Sciences, and smaller percentages of participants reported other disciplines including Chemistry, Planetary Sciences, and Education. This illustrates the reach of microbiome sciences and the associated bioinformatics tools across disciplines. Moreover, 73% of our participants reported using programming languages several times per year or more, but the majority reported never using databases such as SQL or Access (59.2%) or version control software (65%). A total of 65% reported using a command shell at least several times per year. Taken together, we interpret this as our cohort being relatively novice users of computing tools such as programming languages and command line software, and generally not comfortable with software engineering given the lack of experience with databases and revision control software. In addition, 56% of our participants reported using QIIME 2 at least several times per year, but less than 10% reported using QIIME 2 weekly or daily. Only 14% of our participants reported using QIIME 1 at least several times per year. QIIME 2 succeeded QIIME 1 on 1 January 2018. In general, we find that some of our workshop participants have experience with QIIME 1 and attend to update their skills. We do not have historical questionnaire data to compare this cohort to previous cohorts. However, based on our experiences, the skills and background seem comparable to our in-person workshops. We plan to incorporate this questionnaire in all future QIIME 2 workshops, because the knowledge we gained about the participants was helpful for targeting discussions to their background during the workshop. The preworkshop questionnaire that we provided is available as S1 Text. It was derived from The Carpentries, where it is made available under the Creative Commons Attribution (CC-BY) license. The original materials are available at https://github.com/carpentries/assessment. We reached out to instructors who had been involved in our recent workshops to recruit a team to co-teach our first online workshop. In addition to high demand from students, we had many instructors interested in co-teaching this workshop. A total of 22 instructors were involved in one or more of the following aspects of the workshop: prerecording lectures or hands-on tutorial content, providing support to participants through Slack, joining live broadcast Q&A sessions, or assisting with the technical aspects of workshop delivery. 
Our instructor team consisted of 1 undergraduate student, 9 graduate students, 3 postdoctoral scholars, 3 research staff, 4 assistant professors, and 2 associate professors. We allowed instructors to select which topic or topics they would like to teach, based on a draft workshop schedule that we derived from recent workshops. The most novice QIIME 2 instructors were given first choice of topics, so they could select those they were most comfortable teaching.

Technology stack

We purchased licenses of TechSmith Camtasia 2020 (https://www.techsmith.com/videoeditor.html) for instructors, and this was the primary software used to record videos. Camtasia allows for simultaneous video and screen recording and provides many helpful features for video editing. For example, its Freeze Region visual effect allows a video editor to freeze a region of a screencast for a portion of the video, which enables editing out pop-up notifications that may have occurred in the middle of a screen recording. Camtasia 2020 only includes auto-captioning of videos for an extra fee, so we used other software to add captions. Our team experimented with both Otter.ai and YouTube Studio for auto-captioning. Both worked surprisingly well, but the auto-generated captions are of course not perfect, and considerable time is still needed for copyediting. Due to time constraints, copyedited captions were not included in the videos presented in the first workshop. Some were added for the second workshop, and copyedited captions are being added to all videos being released on the YouTube channel.

Instructors were given a deadline by which they needed to share their video recordings for review. We purchased a license for TechSmith Video Review (https://www.techsmith.com/video-review.html), which facilitates reviewing videos by allowing reviewers to link comments and annotations to frames in the video timeline (among other conveniences). We found these TechSmith tools to be very convenient for content creation, review, and editing, and the learning curve for basic usage was minimal.

During the workshop, the tools used to support our livestream were Open Broadcaster Software (OBS) Studio (https://obsproject.com), Skype, and YouTube. One instructor (the technical lead) ran OBS Studio, which fed our livestream to YouTube, where it was broadcast publicly at a new URL for each day of the workshop. The technical lead maintained the daily schedule, which alternated between live broadcasts and prerecorded videos. Live broadcasts were conducted using Skype, in a Skype video conference that all instructors could join at any time. Skype was chosen over other video conferencing software (like Zoom) because it most easily integrated with OBS Studio through the NDI protocol (https://en.wikipedia.org/wiki/Network_Device_Interface), an industry-standard broadcast stream tool.

To facilitate hands-on work, we preloaded the tutorial data on the shared compute cluster. This server configuration was almost identical to the configuration we have used for our in-person workshops, except that some extra security precautions were taken, since the workshop was livestreamed on YouTube and therefore had a higher chance of being noticed by individuals who were not authorized to access the server. For this workshop, our cluster was composed of 13 m5.8xlarge machine instances (the compute nodes) and 1 m5.2xlarge machine instance (the login node). A total of 8 user accounts were assigned to each compute node.
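The paper does not include provisioning code, but a cluster of this shape is straightforward to launch programmatically. A minimal boto3 sketch follows; the AMI, key pair, and region are placeholders, and networking, storage, and user-account setup are omitted:

```python
# Minimal provisioning sketch (our illustration; the paper does not include
# code). The AMI and key-pair identifiers are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

def launch(instance_type, count, role):
    return ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder workshop image
        InstanceType=instance_type,
        MinCount=count,
        MaxCount=count,
        KeyName="workshop-admin",          # placeholder key pair
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "role", "Value": role}],
        }],
    )

launch("m5.2xlarge", 1, "login-node")      # SSH entry point for participants
launch("m5.8xlarge", 13, "compute-node")   # 8 participant accounts per node
```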
Participants are given their own login credentials to the system. We tend to include 1 or 2 extra compute nodes in these clusters so that, if a compute node becomes inaccessible during the workshop, we can move participants to another one; this has happened on multiple occasions during our in-person workshops. To connect, all participants SSH to the single login node of the cluster, which redirects them to the compute node they will work on for the workshop. This redirect is handled automatically, without participant interaction. Participants perform their work in directories that are hosted over HTTP, so they can access the data they generate in their web browser. To view interactive QIIME 2 results, participants copy the URL of a file they have generated from their web browser and paste it into QIIME 2 View, which allows them to interact with the results in their web browser.

Before the workshop, we have participants install Google Chrome and the Google Chrome Secure Shell App, which provides an SSH client that they can access through Google Chrome. This allows us to provide connection instructions that work across operating systems with a unified user interface, as our users may be working on macOS, Windows, or Linux. Participants sometimes ask if they can use their own SSH client. We always approve this, but we let them know that it may be harder for us to help them, because we may not be familiar with their SSH client (in practice, we find that participants who are experienced enough to have a preferred SSH client often don't need help navigating connection to the server). Using the Google Chrome Secure Shell App, QIIME 2 View, and the cluster web server's index page for their data directory, participants can perform all hands-on steps of the workshop in their web browser. Participants were provided with a PDF "cheat sheet" that included graphical instructions for connecting to the server by SSH and accessing their data (example provided as S2 Fig). This sheet included connection details (hostname, username, and password) which they could copy and paste.

Throughout the week of our online workshop, we had a paid Slack workspace, which all workshop participants were given access to. We configured the Slack workspace with several private instructor-only channels and several channels that all participants had access to. Additionally, we created Slack channels for subgroups of participants, which we referred to as "pod" channels; each participant was preassigned to 1 pod channel. We hoped to use the pod channels to capture the "neighborhood" dynamic that tends to arise in our in-person workshops, where clusters of individuals sitting near one another begin helping each other debug issues or understand concepts. Slack was used to facilitate all contact between participants and instructors, and we used instant messaging, video, and audio calls to communicate with participants either individually or in groups as needs arose. Details on how we specifically used Slack are presented below.

Instructor roles

In addition to delivering workshop content, 3 instructors had roles in workshop delivery: the technical leader, the broadcast leader, and the triage leader. These individuals formed the control room team. The technical leader was primarily responsible for running OBS Studio and, thereby, controlling the livestream.
The broadcast leader served as the master of ceremonies for the event, providing opening and closing remarks each day, reiterating the schedule, and serving as a consistent voice throughout the workshop. The triage leader was primarily responsible for monitoring Slack. This involved directing incoming questions either to instructors to handle right away or to queues to be addressed during open Q&A sessions.

Due to the pandemic, the instructors were not generally colocated. Only the technical leader and the triage leader were colocated, both working on campus at Northern Arizona University, enabling the livestream to be broadcast over the university's internet connection. These 2 instructors both received negative COVID-19 tests within the 48 hours preceding the workshop and worked in separate offices, with windows opened, in the same office suite to ensure appropriate social distancing. We chose to have 2 instructors working together, even though technically only one was needed to manage OBS Studio, in case backup was needed. These instructors did get together for about 10 minutes in the same office, with the window and door opened, so the triage leader could be trained on OBS Studio. Masks were worn during this interaction, and social distancing was maintained.

Workshop schedule

The full workshop schedule is presented in Fig 2. A workshop day typically proceeded as follows. All times noted here are in United States Mountain Standard Time, the time zone from which the workshop was delivered. The workshop ran from 8:00 to 14:00 on Days 1 to 4 and 8:00 to 13:00 on Day 5. At about 7:30, our livestream broadcast would begin, and we would share the link to the day's broadcast on Slack (for participants) and on the QIIME 2 Forum (for viewers). A static slide would be presented on the livestream, indicating that the workshop would begin shortly. At this time, the instructors' Skype video conference was initiated for the day. The control room team would join the Skype video conference, and other instructors might also join to say hello and to help with any last-minute tasks that needed to be accomplished for the day. At 8:00, the day's broadcast would begin with the broadcast leader opening the session to describe the schedule for the day and to share any announcements. The livestream would then transition to the first prerecorded video for the day. Prerecorded videos ranged in length from 4 minutes to 64 minutes, with a median length of 22 minutes. Between each pair of prerecorded videos, the broadcast leader would join the livestream. Depending on the schedule, this time would sometimes be used to answer questions about the content that was just presented. All available instructors, ideally including the instructor whose prerecorded video was just streamed, would join the livestream through the instructor Skype to answer questions. The day would proceed, presenting lecture content mixed with hands-on work. Two 30-minute breaks were included in the schedule: the first from 9:30 to 10:00 and the second from 11:30 to 12:00. We committed to sticking to our break times, sometimes pausing the livestream during a video; this is essential so participants and viewers can plan for their family, work, and other responsibilities during the workshop week. On the first 4 days of the workshop, the day would conclude with a 1-hour open Q&A session from 13:00 to 14:00.
During this time, all available instructors would join the instructor Skype, and questions about any of the content presented or general questions about QIIME 2 or microbiome research were fielded by the instructors. If at any point during the Q&A sessions there were fewer questions, the broadcast leader would ask questions of the instructors currently on the call-for example, all instructors who joined these sessions were asked at some point during the week to describe the path they took to getting involved with microbiome research and QIIME 2. This allowed us to present the diverse career paths that lead to work in biomedical research and bioinformatics, which we hope is useful for encouraging participants and viewers from diverse backgrounds to pursue their interest in microbiome bioinformatics. At the end of each day's Q&A session, the broadcast leader would conclude with any final announcements for the day. Within the hour following each day's events, links to the prerecorded videos for the next day were shared with participants through Slack, so they could watch those in advance if that was helpful for their schedule the following day. On the final day of the workshop, we concluded with a summary presentation and acknowledgments of all instructors who were involved in the workshop. All available instructors joined the livestream by chat to conclude and thank the participants and viewers for attending the workshop. Recreating in-person workshop online The lecture component of our courses was relatively straightforward to recreate for an online workshop. We planned for specific topics that we wanted to cover in this workshop and reached out to a team of individuals who have been involved in previous QIIME 2 workshops and invited them to prerecord lectures on these topics. Prerecording lectures rather than presenting live is a popular option for virtual meetings [6]. It facilitates participation across time zones for both instructors and participants, reduces chances for technical hurdles during the workshop (e.g., if an instructor's internet connection becomes unstable while presenting), and reduces the amount of work during the event so the hosts can focus their attention on serving the participants during that time. The prerecorded videos were made available to participants the day before they were to be covered in the workshop, allowing them to watch the videos when it was most convenient for them and at their own pace (enabling pausing of the video as needed to self-study or debug an interactive step). Then, during the live broadcast, we played back the same prerecorded videos for participants and viewers. In addition to providing flexibility for participants to view videos in advance if they had a conflict during a workshop day's events, this gave participants the opportunity to watch the video before and during the workshop. In our in-person workshops, the lines are often blurred between lectures and hands-on tutorial sessions. For example, an instructor might have the participants start a long-running command before they teach what it does so that it will complete by the time the lecture is completed. For the online workshop, we chose to separate lecture content and hands-on tutorial content into different prerecorded videos for a few reasons. First, this helped us to ensure continuity between hands-on sessions. Over the course of the workshop, the analyses that participants ran were built on previous steps of the analysis. 
So, if a file needs to exist at the beginning of one hands-on session, it needs to be generated (with the expected file path) in a previous section. Separating lecture and hands-on tutorial sessions kept the recordings shorter, making it quicker to rerecord tutorial sessions if errors were discovered during video review. Second, this approach allowed instructors who were recording lectures to focus on the content they wanted to teach and not the technical aspects of how participants would be using QIIME 2 during the workshop. Third, an auxiliary goal for our online workshop was to create video content that could be reused, and, in some cases, that could stand alone without the rest of the workshop content. Separating lecture content from tutorial content enabled this, since tutorial content wouldn't necessarily make sense to someone watching a single video without the context of the other videos. Similarly, keeping lecture and tutorial content separate facilitates the reuse of lecture content in future workshops. While both types of video will undoubtedly need to be rerecorded in the future to keep pace with the field, we expect that tutorial content likely needs more frequent updating to support new versions of QIIME 2 (which are released quarterly). Finally, splitting lecture content and tutorial content provided opportunities for less-experienced instructors to get involved. One undergraduate student, 2 graduate students, and 1 junior research software engineer in the Caporaso Lab (none of whom had taught a QIIME 2 workshop before) taught the majority of the hands-on tutorial sessions for this workshop. Q&A sessions are frequently noted as among the most valuable sessions by participants in our in-person workshops, so we put a lot of effort into trying to recreate that experience in our online workshop. In our in-person workshops, brief Q&A sessions are held after each lecture, and longer Q&A sessions are scheduled at the end of each day. We try to target general questions to the longer sessions and keep shorter sessions focused on content that was most recently presented. A frequent challenge that we encounter in the Q&A sessions at in-person workshops is keeping the discussion focused around topics that are likely to be of general interest, as opposed to delving into topics that are relevant only to a single person's analysis. We try to handle individualized questions through one-on-one discussions between an instructor and a participant, time permitting. If time is not available (e.g., a participant asks a very specific question at the end of the workshop), we direct participants to the QIIME 2 Forum. In our online workshop, we handled user questions through Slack. A flowchart illustrating how these questions were triaged and addressed is presented in Fig 3. Questions were asked on the #general Slack channel, which all participants had access to. A moderator on the instruction team would forward these questions to 1 of 3 instructor-only channels: #questions-end-of-day, #questions-broadcast, or #questions-individual. The #questions-end-of-day channel was intended for more general questions that would be covered in the general Q&A session at the end of the day. When a participant asked a question on Slack that we redirected to this channel, we would reply to them inline to let them know we had logged the question and would bring it up later.
The #questions-broadcast channel was intended for questions that should be addressed on the broadcast directly following a session because they were relevant to the most recent content. This channel also served as a broadcast-ready channel: A direct capture of the channel was overlaid into the live feed to show everyone watching the source of the question. If we ran out of time for one of these questions, we would move it to the #questionsend-of-day channel. The #questions-individual channel was intended for the questions that were very specific to a participant's own project and not likely to be of general interest. Instructors monitored this channel and followed up directly with the participants to answer their questions on Slack (either in chat or in a Slack video call). At all times, one instructor was monitoring the #general channel to triage incoming questions. The other type of question that would come up during sessions was technical support requests, for example, when a participant needed help with a step in the tutorial because they observed an error or had a technical issue. The triaging instructor would forward these questions to the #questions-individual channel, and an instructor would follow up with participants in their pod channel. In general, we always tried to answer technical support and individual questions in pod channels so others with the same question could see the discussion. This helps others learn from the discussion and reduces the support burden on the instructor team. This system worked well. We found that it was easier to keep the Q&A sessions on target in this format than in in-person workshops, primarily because triage didn't need to happen in real time in front of an audience. We additionally set up 3 #watercooler-chat channels that were intended for small-group discussions (by chat or video) around a topic of interest. These ended up being widely used for introductions among the participants and were a good space for instructors and participants to chat about topics not directly related to the workshop content. For example, people shared where they were attending the workshop and pictures of their pets. Small-group discussions about workshop-relevant topics didn't happen much in these channels, but they emerged as a way to get to know each other without the usual coffee breaks, lunches, and dinners that happen at our in-person workshops. We did not host parallel sessions in this workshop, primarily because we didn't have time to prepare the additional video content that would be required, and we didn't want to attempt too many new things on this first online workshop. These could be achieved by running multiple video streams at dedicated times throughout the workshop, and we may revisit this idea for future online workshops. Outcomes We performed a postworkshop survey of participants to evaluate the success of the workshop. A total of 41 participants (55% of those attending the event) responded to this survey. Because our surveys were performed on the last day of the workshop, it is possible that replies are biased by attendees who found the event valuable enough to stay until the end. We present survey responses in S1 Data and summarize our findings here. Based on postworkshop survey responses, we consider this to have been an extremely successful workshop. A total of 98% of respondents reported that the workshop helped them learn what they most hoped to learn from participating in the workshop. 
Moreover, 93% agreed or strongly agreed that they could immediately apply what they learned at the workshop. In addition, 100% agreed or strongly agreed that they felt comfortable learning in the workshop environment. Furthermore, 93% agreed or strongly agreed that they were able to get clear answers to their questions from the instructors. Additionally, 95% strongly agree that the instructors were enthusiastic about the workshop. A total of 95% agreed or strongly agreed that they felt comfortable interacting with the instructors. Moreover, 98% agreed or strongly agreed that the instructors were knowledgeable about the material being taught. And, finally, 98% reported that they were likely or highly likely to recommend this workshop to a friend or colleague. We additionally polled attendees on accessibility of the workshop and how much of the workshop they attended. A total of 93% of respondents reported no accessibility issues. Of the 7% who did report accessibility issues, the issues were related to their own internet connectivity problems or other work commitments distracting them from the workshop. Both of these are drawbacks to an online workshop that participants join from their own locations. Moreover, 78% of respondents reported being able to complete all of the interactive work, while 22% reported being able to complete some of the interactive work. No respondents reported not being able to complete any of the interactive work or not trying to complete the interactive work. This suggests that our AWS approach, paired with the Google Chrome Secure Shell client, was effective for allowing participants from around the world to simultaneously run QIIME 2 on the same infrastructure (something we were not certain of before the workshop). A total of 100% of the respondents reported attending all or some of the sessions each day, and none of the respondents reported attending none of the sessions on any day. (This question is particularly sensitive to bias, because if a participant didn't attend the workshop on Day 5, they would have been more likely to miss our request to complete this survey. They would have been able to access the survey still, however, since the link and the request was shared on Slack in addition to being discussed during the workshop.) The percentage of participants who reported attending all sessions in a given day declined slightly over the course of the workshop: On Day 1, 95% reported attending all of the sessions; on Day 2, 95%; on Day 3, 90%; on Day 4, 73%; and on Day 5, 78%. Our second online workshop In January of 2021, we presented a second online workshop. This workshop was hosted by the FAES at the NIH. FAES is a nonprofit organization that was created by NIH researchers to build a university-like environment at the NIH. FAES offers educational and training activities to the greater scientific community. This workshop followed a very similar approach as our first workshop, with a few modifications. In the remaining discussion, we refer to the first workshop as the CZI/CABANA workshop and the second workshop as the FAES workshop. FAES offered a tiered pricing structure to accommodate individuals representing different industries where discounts were factored into the tiers (e.g., members of the NIH community and those affiliated with other US government, US military, or academic institutions can attend the workshop at a lower cost than those not affiliated with any of these groups). 
We had 57 participants and 24 instructors in this workshop, and the livestream for this event was not shared publicly. While this workshop was available for global participation, the majority of our attendees were from NIH and North American universities (approximately 85% of participants were from institutions in the USA or Canada). The majority of the prerecorded content developed for the first workshop was reused in the second workshop. All live sessions in the first workshop were again performed as live sessions in this workshop. The daily schedules were largely the same as for the first workshop. We made a few changes to our technology and instructor roles for the second workshop. First, instead of a paid Slack workspace for this workshop, we hosted our own Zulip (https://zulip.com/) server. This was a lower cost option because the pricing plan for Slack involves paying a fee per user. Administration and creation of user accounts was much quicker in Zulip than in Slack. Zulip supports bulk invitation using a CSV file, while Slack requires filling form fields for each user account. Zulip also has a command line interface for administration, which was a natural fit for the programmers on our team. This interface allowed us to quickly customize Zulip to our liking. Zulip has a similar user interface to other chat platforms like Slack or Microsoft Teams, so even though many of the attendees told us they had not worked with Zulip before, they were able to become proficient relatively quickly. We hosted our Zulip server on a Digital Ocean droplet (2 GB/2 vCPUs/US$15 month) and provided a basic login and usage tutorial for users. Zulip uses a threaded discussion model where conversations are organized by theme in a Zulip "stream." Individual discussions occur within a theme's list of "topics," which are chat threads created by users and instructors. This is different from the traditional "chat room" approach where all of a room or theme's discussion occurs all at once, in a single location. We found that this model was not intuitive to many participants at first: We saw multiple parallel topics starting at once, which made supporting workshop participants slightly more difficult. Zulip does allow for moderator intervention in the form of moving and merging discussion topics, which helped manage the discussion (e.g., by allowing us to merge independent discussions of the same topic). After the first day of the workshop, we started preemptively creating new stream topics for each video/segment. We then sent notifications to all attendees asking them to post their questions about the current video inside the topic we created instead of creating their own. By the end of the second day, this approach appeared to work well for everyone. In future workshops using Zulip, we plan to dedicate more time in the beginning of the workshop providing guidance on how to use streams, topics, and notifications to reduce confusion. The second change we made in this workshop was that we excluded the role of triage manager, opting instead for all instructors to assist with this. We felt that this was not as well organized, and, in the future, will add the triage manager back. A total of 46 participants completed our preworkshop survey, which was identical to the preworkshop survey used for the CZI/CABANA workshop. Moreover, 24% of our participants were graduate students, 24% were postdoctoral researchers, 15% were faculty members, 7% were medical professionals, and 26% were research staff or government employees. 
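To give a concrete sense of how topics can be seeded programmatically (a sketch only: the stream and topic names below are hypothetical, and this is not necessarily the exact script we used), the Zulip Python API client can post one seed message per video segment, which implicitly creates the corresponding topic:

```python
# Sketch: preemptively create one Zulip topic per prerecorded video by posting
# a seed message to each. Topics in Zulip are created implicitly by the first
# message sent to them. Stream and topic names here are hypothetical examples.
import zulip

client = zulip.Client(config_file="~/.zuliprc")  # credentials for a bot account

videos = [
    "Day 2 - Importing data",
    "Day 2 - Demultiplexing and denoising",
]

for title in videos:
    result = client.send_message({
        "type": "stream",
        "to": "workshop-questions",   # hypothetical stream name
        "topic": title,               # one topic per video segment
        "content": f"Post your questions about '{title}' in this topic.",
    })
    print(title, result["result"])
```

Because Zulip topics are created by the first message posted to them, running a small script like this before each day's session is enough to give every video a predictable place for questions.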
In general, this suggests that our FAES workshop cohort were at later career stages than our CZI/ CABANA workshop cohort. Furthermore, 72% of our participants reported using programming languages several times per year or more, but the majority reported never using databases such as SQL or Access (59%) or version control software (70%). In addition, 54% reported using a command shell at least several times per year. This cohort was thus very similar to our CZI/CABANA workshop participant cohort in terms of their experience with advanced computing tools. Also, 24% of our participants reported using QIIME 2 at least several times per year, and less than 5% reported using QIIME 2 weekly or daily. Only 7% of our participants reported using QIIME 1 at least several times per year. These findings suggest that our FAES cohort was generally less experienced with QIIME 1 or QIIME 2 than our CZI/ CABANA cohort. We performed a postworkshop survey of participants to evaluate the success of the workshop. All 57 participants responded to this survey, and based on these responses, we also consider this workshop to have been very successful. We present survey responses in S1 Data and summarize our findings here. A total of 96% of respondents reported that the workshop helped them learn what they most hoped to learn from participating in the workshop. Moreover, 77% agreed or strongly agreed that they could immediately apply what they learned at the workshop. Also, 91% agreed or strongly agreed that they felt comfortable learning in the workshop environment. Furthermore, 93% agreed or strongly agreed that they were able to get clear answers to their questions from the instructors. In addition, 100% agreed or strongly agreed that the instructors were enthusiastic about the workshop. A total of 98% agreed or strongly agreed that they felt comfortable interacting with the instructors. Moreover, 100% agreed or strongly agreed that the instructors were knowledgeable about the material being taught. And, finally, 93% reported that they were likely or highly likely to recommend this workshop to a friend or colleague. We again polled attendees on accessibility of the workshop and how much of the workshop they attended. A total of 93% of respondents reported no accessibility issues. Of the 7% who did report accessibility issues, the issues were related to their own internet connectivity problems. Moreover, 91% of respondents reported being able to complete all of the interactive work, while the remaining 9% reported being able to complete some of the interactive work. No respondents reported not being able to complete any of the interactive work or not trying to complete the interactive work. In addition, 100% of the respondents reported attending all or some of the sessions each day, and none of the respondents reported attending none of the sessions on any day. The percentage of participants who reported attending all sessions in a given day varied slightly over the course of the workshop and was even higher than for the CZI/CABANA workshop: On Day 1, 100% reported attending all of the sessions; on Day 2, 98%; on Day 3, 93%; on Day 4, 88%; and on Day 5, 96%. This suggests to us that participants found considerable value in the workshop throughout the week. 
Lessons learned and improvements for future online workshops We asked open-ended questions of participants at both workshops to compile information on what they liked, what they didn't like, their suggestions for improvements, and what they found to be the most useful parts of the workshop. The features that respondents liked included the degree of organization, including our adherence to the schedule and providing regular breaks. Multiple respondents reported they liked having access to the prerecorded lectures before a given day's session, sometimes noting that it helped them to watch the videos ahead of time. The Q&A sessions were very popular, and a lot of respondents reported that Slack/Zulip worked well for this. We also received positive feedback on the quality of the lectures and the choice of content. Some individuals noted that they prefer the online format and that instant communication through Slack/Zulip made it feel as though "we were in the same room." One individual noted that "Everything was perfect"-we're glad it appeared that way! When polled on what aspects of the workshop they didn't like, or which didn't work well for them, the most common comment was that some of the lectures moved too fast at times (and some participants in the CZI/CABANA cohort reported that our accents posed a challenge to following the material). We did receive some consistent feedback on which specific lectures moved too quickly and which moved at a good pace, so that gives us guidance on which videos need attention. One respondent noted that the 30-minute breaks were long for individuals who were attending the workshop during the night in their local time. This is clearly a drawback of hosting a workshop for individuals from all over the world at the same time (other individuals noted that the breaks were well timed and important for allowing them to attend the workshop-this is likely reflective of how well aligned a participant's time zone is with the workshop schedule). Several respondents noted that they would have preferred to have QIIME 2 installed on their own computers, as they left the workshop confused on how to run QIIME 2 when they no longer had access to the workshop server. Several respondents also mentioned that they missed the opportunity to gather in person for the workshop, including the opportunity to meet the instructors and other participants in person: a sentiment that we empathize with a year into the pandemic. We received many excellent suggestions on how to improve future offerings. Several individuals reported that the workshop moved too fast, while others indicated that they wish we covered some more advanced usage of QIIME 2. This suggests to us that we should begin offering 2 tiers of workshop: a basic and advanced workshop. Our basic workshop could potentially spend more time on getting started with QIIME 2, including installation clinics (which are popular in parallel sessions at our in-person workshops), while our advanced workshop could include more challenges focused around understanding command line documentation to construct and run commands that complete some analysis challenge and working with noisier data. It was also suggested that daily quizzes could be added to help participants gauge whether they mastered the concepts we most hoped to teach in a given day. 
Several participants suggested that we provide instructions ahead of the workshop to optimize their learning environment, including perhaps using dual displays and suggestions for how to manage notifications from Slack/Zulip during the workshop. Several participants requested a companion book for the lecture content and a session on bioinformatics recordkeeping to facilitate reproducibility. Based on feedback from participants and discussion among the instructors, there are several additional changes that we expect to implement in future online workshops. First, we will provide clearer direction on the purpose of the pod channels and encourage their use by creating small group activities that pods can work on together. These could include providing icebreaker prompts on the first day and group exercises of increasing complexity on subsequent days (e.g., where participants must construct their own series of commands to complete an analysis task). Similarly, to support networking in online events, we are interested in experimenting with pairing individuals for brief (e.g., 5 minutes) one-on-one meetings, either by matching participants based on research interests (as has been reported for other online meetings [7]) or by providing discussion prompts such as "What could we collaborate on?," which require individuals to give brief "elevator pitches" on their background and interests and then explore how they complement each other. One change that we made during our first workshop was to create a short refresher video on how to connect to the workshop server. On Day 1 of the workshop, we presented a video that introduced the workshop server and took users through the steps of connecting to the server. We ultimately presented this video multiple times throughout the workshop to help users connect to the server, and, after showing it a few times, it began to feel very repetitive even though it was only 4 minutes long. We decided to edit the video to create a new version with introductory content removed. For technical steps such as this that are repeated throughout the workshop, in the future, we will create an introductory video and then a shorter "refresher video" (that doesn't, for example, have the instructor introduce themself). This refresher video will only repeat the information that needs to be repeated. Finally, before workshop registration, we had individuals interested in the CZI/CABANA workshop email to request our purchasing power parity discount. Managing these emails was very labor intensive. In the future, we'll handle these requests through an online form. Conclusions In response to the COVID-19 pandemic, we have moved the popular QIIME 2 workshop series online. There are undoubtedly drawbacks to holding these types of training events online. Spontaneous conversations often arise at in-person meetings, for example, during dinners or coffee breaks, which lead to new collaborations, employment relationships, and friendships. Teaching and learning can move quickly when a teacher and student can sit together for a few minutes and work on a challenging concept. There are also clear benefits to online events. They are more inclusive, enabling engagement by individuals who are unable or unwilling to travel, and can at least partially remove economic and political barriers to engagement. Travel visas and expensive plane tickets are not needed. Online events can also have a considerably smaller carbon footprint than in-person meetings. 
Our approach to hands-on bioinformatics instruction translated well to online delivery. Overall, we did not experience any more technical challenges than we do at similar in-person events. Based on the experiences presented here, we feel that online delivery of bioinformatics education workshops can be effective and empowering. After the pandemic, we expect to continue hosting online workshops. A recent study on multitasking during remote meetings found that shortening meeting duration, inserting breaks between meetings, and reducing the number of redundant meetings can improve the attendee's experience by reducing mental fatigue [8]. While our workshop's structure wasn't directly modeled after these recommendations, we were happy to find that we fostered several of these best practices. For example, by having clear visibility on the schedule and access to prerecorded lectures, workshop attendees would have the opportunity to skip a presentation if they already had familiarity with the topic or watch a lecture beforehand to accommodate scheduling conflicts. A benefit of online educational events with prerecorded content is that it prompts the development of video content that can be reused. In January of 2021, we launched the QIIME 2 YouTube channel (https://www.youtube.com/c/qiime2) and are releasing our prerecorded content with accompanying slides under the CC-BY license. To facilitate access to this content, we copyedit auto-generated captions before release, and we ultimately hope to integrate captions in languages other than English. We plan to continue releasing content on this channel with materials from future online workshops, providing new microbiome bioinformatics educational content for researchers and opportunities for QIIME 2 plug-in developers and others to disseminate their work.
2021-06-26T06:17:13.642Z
2021-06-01T00:00:00.000
{ "year": 2021, "sha1": "ee9e2cdf87d52bb991ab0d46b6ce372792db192b", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/ploscompbiol/article/file?id=10.1371/journal.pcbi.1009056&type=printable", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "0e4366050ec5f0f7e9f17cbdc9e0b3fccbdfbf66", "s2fieldsofstudy": [ "Biology", "Environmental Science", "Computer Science", "Education" ], "extfieldsofstudy": [ "Medicine" ] }
14067595
pes2o/s2orc
v3-fos-license
Towards semi-episodic learning for robot damage recovery The recently introduced Intelligent Trial and Error algorithm (IT&E) enables robots to creatively adapt to damage in a matter of minutes by combining an off-line evolutionary algorithm and an on-line learning algorithm based on Bayesian Optimization. We extend the IT&E algorithm to allow for robots to learn to compensate for damages while executing their task(s). This leads to a semi-episodic learning scheme that increases the robot's lifetime autonomy and adaptivity. Preliminary experiments on a toy simulation and a 6-legged robot locomotion task show promising results. I. INTRODUCTION Recent research on autonomous systems and robotics has achieved important progress in increasing the autonomy of robots, which makes it possible to operate robots for long periods of time in real-world scenarios. Nevertheless, as robots move from controlled and well-structured environments to more complex [1] and more natural ones [2], they must be able to react to unforeseen situations; in particular, they have to face the inevitable fact that they will be damaged [3], [4]. Current methods for robot damage recovery can be divided into two categories: (1) diagnosis-based approaches [5], and (2) learning methods, mostly Reinforcement Learning (RL) techniques [6], [7], [8]. Most of the techniques in the first category require anticipating the situations that the robot may have to face; an issue can be diagnosed only if the right sensors are present in the right place. These requirements make diagnosis-based techniques difficult to use in complex robotics systems and scenarios; typically they are only used at the lowest levels of control. At the same time, state-of-the-art RL approaches are also difficult to use for damage recovery because they require many iterations to converge. For example, many RL approaches require tens if not hundreds or thousands of iterations to learn problems with low-dimensional state spaces and fairly benign dynamics, like the mountain car [9]. The data efficiency of RL approaches is a critical aspect that limits their application in real-world robotics scenarios [10]. A promising approach is the Intelligent Trial and Error algorithm (IT&E), a recently introduced algorithm [8]. The intuition behind IT&E is that, before the mission, an off-line and computationally expensive evolutionary algorithm can be used to create a behavior-performance map that predicts the performance of thousands of different behaviors. While in mission, this map guides a fast and on-line search, based on Bayesian Optimization [11], to find a compensatory behavior. An important idea is that the behavior-performance map is created using a simulated intact robot, but the algorithm is able to find a working behavior on the damaged real robot because some behaviors from the map perform similarly on the intact and the damaged robot (typically, the behaviors that do not rely on the broken part). The most recent results showed that IT&E can allow various types of robots (a 6-legged robot and an 8-DOF manipulator) to compensate for many different types of injuries in a matter of minutes [8], [12].
Although the IT&E approach is promising, its main limitation is the purely episodic approach it has adopted: for each trial (episode), the robot has to begin in the same starting state (Figure 1a). This is limiting because recovery has to be achieved in two steps: first learn a compensatory behavior, and then use it to complete the task. By contrast, a wounded animal, for example, can perform trial and error "episodes" to learn how to walk again, while going back to its nest for protection. In this paper, we extend the IT&E algorithm by (1) using a generic reward of the outcome of each atomic behavior of the robot in the adaptation part, and by (2) adding a specialized reward selection layer that selects a specialized reward function at each episode. These additions allow for a semi-episodic learning scheme that improves the robot's long-term autonomy by allowing it to recover while attempting to achieve its task(s) (Figure 1b). A. Bayesian Optimization with Gaussian Processes Bayesian Optimization (BO) is a well-established strategy for finding the extrema of functions that are expensive to evaluate [11], [13]. It is applicable in cases where one does not have a closed-form expression for the objective function (the function is a "black-box"), but where one can obtain observations (possibly noisy) of this function. One of the distinctive features of BO is that it constructs a probabilistic model for the objective function and then exploits this model to make decisions about which point to evaluate next, while taking into account the uncertainty. There are two major choices that must be made when performing BO. First, one must select a prior over functions that will express assumptions about the function being optimized. Second, one must choose an acquisition function, u(x|D_{1:t}), which is used to construct a utility function from the model posterior, allowing us to determine the next point to evaluate. Many models could be used for the BO prior, but Gaussian Process (GP) priors are the most common choice [11]. A GP is an extension of the multivariate Gaussian distribution to an infinite-dimension stochastic process for which any finite combination of dimensions will be a Gaussian distribution [11]. A GP is a distribution over functions, completely specified by its mean function, m(·), and covariance function, k(·, ·): f(x) ~ GP(m(x), k(x, x')). If D_{1:t} = {(x_1, y_1), ..., (x_t, y_t)} is a set of observations and σ²_noise the sampling noise, the GP posterior (assuming, for simplicity, a zero prior mean) is computed as follows: p(f(x) | D_{1:t}, x) = N(μ_t(x), σ_t²(x)), where μ_t(x) = k^T K^{-1} y_{1:t} and σ_t²(x) = k(x, x) - k^T K^{-1} k, with K the kernel matrix with entries K_{ij} = k(x_i, x_j) + σ²_noise δ_{ij} and k = [k(x, x_1), ..., k(x, x_t)]^T. We used Upper Confidence Bound (UCB) as the acquisition function. We refer the reader to Brochu et al. [11] for a more detailed explanation. B. Intelligent Trial & Error Algorithm IT&E proposed a novel approach for robot damage recovery that consists of a 2-step process. An off-line evolutionary algorithm, MAP-Elites [14], [8], which generates many thousands of potentially good behaviors, is followed by a trial-and-error on-line adaptation part, based on BO (M-BOA), in order to find a compensatory behavior. MAP-Elites is an evolutionary illumination algorithm: instead of searching for a single, best solution, like optimization algorithms, MAP-Elites searches for the highest-performing individual for each point in a user-defined space. This user-defined space is often called the behavior space, because the dimensions of variation (behavior descriptors) usually measure behavioral characteristics. In IT&E, the authors made a slight modification to the classical BO scheme.
Their BO variation, called the Map-Based BO Algorithm (M-BOA), models the difference between a mean function and the actual performance, instead of directly modeling the objective function (P(·) is the mean function): μ_t(x) = P(x) + k^T K^{-1} (y_{1:t} - P(x_{1:t})). In the original work, the mean function was the prediction of the performance in the map generated from MAP-Elites. Algorithm 1 shows the pseudo-code for M-BOA: the GP is initialized for every x in the map; then, while the stopping criteria are not met, the next test point is selected as x_{t+1} = argmax_x u(x|D_{1:t}), the corresponding behavior is executed and its performance Y_{t+1} = performance(execute_behavior(x_{t+1})) is observed, and the GP is updated with the new observation. In the original IT&E paper, the GP modeled the performance of each atomic behavior given a task. In this paper, we suggest learning a mapping from the atomic behaviors to the resulting relative outcomes. We call it a Generic Reward (GR) of the outcome of each atomic behavior of the robot. We use one GP for each dimension of the GR. For example, imagine we have a robot moving in 2D space using a 1D continuous atomic behavior (the direction in which to move a 0.1-long step). A GR could be the relative position of the robot after executing a behavior, (x, y). Thus, we need 2 GPs: GP_x(θ) and GP_y(θ). If we query the GPs at the point θ_0, then we get a position, p_1 = (GP_x(θ_0), GP_y(θ_0)), as the prediction. In that way, we can now compute specialized rewards for different locomotion tasks, like the distance to different target points. Put differently, the GR is a description of the outcome of each atomic behavior of the robot that is generic enough to be independent from one task to another, but specific enough that the performance of the outcome of one atomic behavior given a task can be computed. The changes for M-BOA to work are: • define a Reward function that takes the GPs' prediction as input and returns the expected task performance; • define an Aggregator function (afun) that takes as input the execution of an atomic behavior and returns the GR. B. Specialized Reward Selection Layer We also augment the proposed algorithm by adding a layer responsible for selecting the Reward function defined above. We call it the Specialized Reward Selection Layer (RSL). Since we are modeling a GR of the outcome of each behavior of the robot and not the actual performance (given a task), we can change the Reward function as often as needed. This is true because only the acquisition function needs an actual reward to select a new test point. We suggest updating or selecting the Reward function at each iteration of M-BOA. For instance, if we consider the previous mobile robot example, at each iteration a planner algorithm chooses the next best point to reach. This point can then be used by the RSL in order to update the Reward function so that it outputs the Euclidean distance between the point selected by the planner and the prediction of the GPs. A. Toy Simulation As a toy example, we consider the mobile robot example introduced previously. This mobile robot is a point (no dimensions, no orientation) and can take a 0.1-long step in any direction. We represent each atomic behavior by a scalar value, θ: the direction of the corresponding move. This environment was inspired by Engel et al. [15]. The task of the robot is to reach a target point despite some damage. Because the example is too simple, but also to show the effectiveness of our method without relying on simulated data, we did not generate any behavior-performance map. We used the exact model of the intact robot as the mean function.
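To make the GR and Reward function ideas concrete for the toy example just described, the following sketch fits one GP per GR dimension and evaluates a specialized reward on the GP predictions. It is illustrative only: it uses scikit-learn's GaussianProcessRegressor with a default kernel, the observations are made up, and, for brevity, candidates are scored by the GP mean alone rather than by a full UCB acquisition function.

```python
# Illustrative sketch of the Generic Reward (GR) idea for the 2D toy robot:
# one GP per GR dimension maps a behavior parameter theta to the relative
# (x, y) displacement it produces; a specialized Reward function is then
# computed on the GP predictions (here: negative distance to a waypoint
# chosen by a planner). Kernel choice and data are placeholders.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

# Observations gathered while the (damaged) robot executes atomic behaviors:
thetas = np.array([[0.0], [0.8], [1.6], [2.4], [3.2]])         # behavior parameters
dxdy = np.array([[0.10, 0.00], [0.05, 0.06], [-0.01, 0.09],
                 [-0.07, 0.05], [-0.10, -0.01]])               # observed displacements

gp_x = GaussianProcessRegressor().fit(thetas, dxdy[:, 0])
gp_y = GaussianProcessRegressor().fit(thetas, dxdy[:, 1])

def predict_outcome(theta):
    """GR prediction: relative (x, y) displacement for behavior theta."""
    t = np.array([[theta]])
    return np.array([gp_x.predict(t)[0], gp_y.predict(t)[0]])

def reward(theta, current_pos, waypoint):
    """Specialized reward: negative distance to the planner's waypoint."""
    predicted_pos = current_pos + predict_outcome(theta)
    return -np.linalg.norm(waypoint - predicted_pos)

# The reward selection layer can change `waypoint` at every iteration
# without refitting the GPs, since the GPs model the task-independent GR.
candidates = np.linspace(0.0, 2.0 * np.pi, 64)
best = max(candidates, key=lambda th: reward(th, np.zeros(2), np.array([2.0, 2.0])))
print("best direction:", best)
```

The key point the sketch illustrates is that the two GPs encode the task-independent GR, while the waypoint, and hence the Reward function, can be changed at every iteration without touching the GPs.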
For the GR, we used the (x, y) relative end position of each behavior; for the Reward function, the Euclidean distance between the next target and the prediction of the GPs; and for the reward selection layer, an A* path planner. To evaluate our technique we used the following two control experiments: • learn the model of the robot (using GPs) via random babbling and then use it to complete the task; • solve the problem with the classic IT&E approach: we first learn with IT&E how to walk in 4 major directions (up, down, right, left) and then use these behaviors to reach the target. For both of the baseline approaches, we measure the number of iterations required to learn and the number of steps that they take to complete the task. We ran 50 replicates of each approach for the scenario: "Reach the target point (2.0, 2.0) starting from the origin despite a 0.5 radian angle offset for directions θ > 0". To make the task a little more realistic we added a small Gaussian noise (μ = 0, σ² = 0.01) to the position observations. Figure 2 shows the resulting performance (number of atomic behaviors taken to reach the target) for the different approaches. Our algorithm is able to reach the target with almost the optimal number of steps (i.e., the number of steps needed if we perfectly knew the model), that is, in far fewer steps than the other approaches. B. 6-Legged Simulated Robot Locomotion Task As a more realistic example, we consider a simulated 6-legged (hexapod) robot moving in space with the same task as in the toy simulation. See Figure 1b for the scenario and [8] for more details on the simulated hexapod. We evolved different atomic behaviors using the MAP-Elites algorithm with an 8D behavior descriptor (2 dimensions for space diversity + 6 dimensions for walking diversity), inspired by [16], [12]. The number of atomic behaviors evolved was approximately 1 million. We used this behavior-performance map as the mean function. All the other parameters were the same as in the toy simulation experiment. To evaluate our technique we used control experiments similar to those in the toy simulation experiment: • IT&E variant #1: we learn the outcome of the atomic behaviors (using GPs) by selecting the most uncertain behavior for N = 15 iterations. This can be considered as a uniform sampling of the behavior space. We then use what we learned to reach the target. • IT&E variant #2: we first learn with IT&E how to walk in 4 major directions (forward, backward, turn cw, turn ccw) and then use these behaviors to reach the target. (Figure 3 caption: Comparison between the baseline approaches and SELA for the 6-legged robot simulation.) For both of the baseline approaches, we measure the number of iterations required to learn and the number of steps that they take to complete the task. We ran 50 replicates of each approach for the scenario: "Reach the target point (2.0, 2.0) despite the middle right leg being removed". We also added a small Gaussian noise (μ = 0, σ² = 0.01) to the position observations. Figure 3 shows the resulting performance (number of atomic behaviors taken to reach the target) for the different approaches. Our algorithm is able to find solutions in fewer steps than the other approaches. C. 6-Legged Robot Locomotion Task We also applied our technique to a real 6-legged robot. Preliminary experiments show promising results 1 . V.
CONCLUSION AND FUTURE WORK We have introduced a semi-episodic learning scheme for robot damage recovery and a novel algorithm in this direction: Semi-Episodic Learning Algorithm. The intuition behind this scheme is that the robot can learn in a data-efficient way how to compensate for damages while completing its task(s). This is achieved by (1) shrinking the search space, using simulated or computed data as prior knowledge, and by (2) using a generic reward of the outcome of the atomic behaviors of the robot instead of their performance given a task. 1 https://www.youtube.com/watch?v=Gpf5h07pJFA Future work includes performing more experiments with the real robot as well as experiments with different robots. In addition, BO can be replaced by other techniques that scale better. What is more, we used a naive reward selection layer, but more efficient/sophisticated methods can be used. We are currently investigating in this direction. Additionally, theoretical guarantees and analysis should be investigated in detail. Overall, this work is a first step towards semi-episodic and life-long learning for robot damage recovery. APPENDIX For all experiments the following parameters were used: Error threshold for reaching goal: ǫ goal = 0.1
2016-10-05T13:21:43.000Z
2016-05-16T00:00:00.000
{ "year": 2016, "sha1": "daf3766f3a44c58f4befb91e2ae5d40bedf0bfda", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "71b7979c407d337e12d03db68308d61f011115d7", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Engineering" ] }
5899561
pes2o/s2orc
v3-fos-license
Identification of Farnesyl Pyrophosphate and N-Arachidonylglycine as Endogenous Ligands for GPR92* A series of small compounds acting at the orphan G protein-coupled receptor GPR92 were screened using a signaling pathway-specific reporter assay system. Lipid-derived molecules including farnesyl pyrophosphate (FPP), N-arachidonylglycine (NAG), and lysophosphatidic acid were found to activate GPR92. FPP and lysophosphatidic acid were able to activate both Gq/11- and Gs-mediated signaling pathways, whereas NAG activated only the Gq/11-mediated signaling pathway. Computer-simulated modeling combined with site-directed mutagenesis of GPR92 indicated that Thr97, Gly98, Phe101, and Arg267 of GPR92 are responsible for the interaction of GPR92 with FPP and NAG. Reverse transcription-PCR analysis revealed that GPR92 mRNA is highly expressed in the dorsal root ganglia (DRG) but faint in other brain regions. Peripheral tissues including spleen, stomach, small intestine, and kidney also expressed GPR92 mRNA. Immunohistochemical analysis revealed that GPR92 is largely co-localized with TRPV1, a nonspecific cation channel that responds to noxious heat, in mouse and human DRG. FPP and NAG increased intracellular Ca2+ levels in cultured DRG neurons. These results suggest that FPP and NAG play a role in the sensory nervous system through activation of GPR92. G protein-coupled receptors (GPCRs) 2 are the largest family of cell surface receptors and play a wide variety of roles in pathophysiological processes by transmitting extracellular signals to cells via heterotrimeric G proteins. Many members of this superfamily are major targets of pharmaceutical drugs (1). Completion of the human genome project revealed the structures of many novel GPCRs for which the natural ligands remain to be identified, so-called orphan GPCRs. Because identification of the ligands for orphan GPCRs is important for understanding the physiological roles of these receptors and promises a rich source of potential drug candidates, efforts have been made to identify the ligands of these receptors (2)(3)(4). The human GPR92 (also known as GPR93) gene was first discovered by customized GenBank TM searches (5).
It is located on chromosome 12, region p 13.31 and contains two noncoding and coding exons encoding 372 amino acid residues (6). GPR92 belongs to the rhodopsin-like GPCR family and is structurally related to GPR23/LPA 4 receptor, an atypical lysophosphatidic acid (LPA) receptor that shares only 20 -24% amino acid identity with conventional LPA receptors (LPA 1-3 ) (7). This structural relationship led to an identification of LPA as a GPR92 ligand (6,8). LPA increased cAMP levels and inositol phosphate (IP) production and induced intracellular calcium mobilization in cells expressing GPR92. This indicates that GPR92 is likely coupled to G s and G q/11 . Further LPA-induced neurite retraction and stress fiber formation in GPR92-expressing cells suggests that GPR92 is also coupled to G 12/13 (6). This LPA responsiveness of GPR92 along with its structural similarity to LPA 4 led to the suggestion that this receptor be renamed the LPA 5 receptor (6). It should be noted, however, that GPR92 has only a 35% amino acid homology with the LPA 4 receptor. Additionally EC 50 values for various LPA compounds in the production of cAMP and IP in GPR92-expressing cells range between 0.5 and 5 M. This indicates that LPA is a moderately potent activator of GPR92 (6,8). More recently, a luminal protein hydrolysate was found to activate GPR92 and possess a higher potency for cAMP production than LPA (9). These findings suggest the possibility that molecules other than LPA can serve as ligands for GPR92 and that perhaps reference to this compound as the LPA 5 receptor would be premature. While searching for orphan GPCR ligands using various chemical compounds, we found that many lipid-derived molecules are able to activate GPR92 with different potencies. Of these, farnesyl pyrophosphate (FPP) was more potent than LPA in activating GPR92 as revealed by E max and EC 50 values. N-Arachidonylglycine (NAG) was as potent as LPA in activation of GPR92. FPP and LPA activated both G s -and G q/11linked signaling pathways, whereas NAG was able to activate only the G q/11 -linked signaling pathway. Using computer-simulated molecular modeling combined with site-directed mutagenesis, we identified amino acid residues responsible for the interaction of GPR92 with FPP and NAG. Reverse transcription-PCR analysis revealed that GPR92 mRNA was present in high levels in the dorsal root ganglia (DRG) in the nervous system as well as in the spleen, stomach, intestine, and kidney. Further FPP and NAG induced Ca 2ϩ mobilization in cultured DRG cells. Plasmids-The cDNA for human GPR92 was constructed at HindIII and XbaI sites of pcDNA3, whereas cDNA for mouse GPR92 was constructed at EcoRI and XhoI of pcDNA3 (Invitrogen). Site-directed mutagenesis of human GPR92 was generated by the PCR overlapping extension method (11). All PCR-derived sequences were verified by automatic sequencing. pCMV ␤-Gal was purchased from Clontech. SRE-luc, containing a single copy of the serum response element (SRE; CCATATTAGG) followed by a c-fos basic promoter and luciferase, was constructed. Adenoviruses containing human GPR92 or an SRE-luc reporter were obtained from Neugex Co. (Seoul, Korea). Adenovirus Infection, Transfection, and Luciferase Assay-Adenovirus infection and luciferase assays were performed as described previously (4). CV-1 cells were maintained in DMEM in the presence of 10% fetal bovine serum. Cells (5 ϫ 10 3 ) were plated into a 96-well plate. 
Cells were coinfected with adenoviruses containing human GPR92 and an SRE-luc reporter with 50 multiplicity of infection (m.o.i.) under serum-free condi-tions for 3 h. Cells were then incubated with 10% fetal bovine serum-containing DMEM. For SRE-luc analysis, cells were maintained in serum-free DMEM for at least 16 h before treatment. Following treatment of cells with an agonist for 6 h, cells were harvested. Luciferase activity in the cell extracts was determined according to standard methods using a Wallac 1420 VICTOR 3 plate reader (PerkinElmer Life Sciences). For transfection of GPR92 mutants, CV-1 cells were plated into 24-well plates and transfected 24 h later with Effectene reagent (Qiagen, Valencia, CA). The total amount of DNA used in each transfection was adjusted to 1 g by adding appropriate amounts of pcDNA3. Forty-eight hours after transfection, cells were treated with ligands for 6 h. GTP␥S Binding Assay-Forty-eight hours after infection of CV-1 cells with adenoviruses containing human GPR92 or vehicle, cells were homogenized in 5 mM Tris-HCl, 2 mM EDTA (pH 7.4) and centrifuged at 48,000 ϫ g for 15 min at 4°C. The resulting pellets were washed in 50 mM Tris-HCl, 10 mM MgCl 2 (pH 7.4) and stored at Ϫ80°C until use. Agonist-stimulated [ 35 S]GTP␥S binding was determined as described previously (12). Briefly membrane fractions (15 g) were incubated for 15 min at room temperature in binding buffer (20 mM HEPES (pH 7.4), 100 mM NaCl, 3 mM MgCl 2 , and 3 M GDP) in the presence of various concentrations of FPP, LPA, or NAG. [ 35 S]GTP␥S (0.2 nM) was added. Then samples were further incubated for 30 min at 30°C. The incubation was stopped by centrifugation at 1000 ϫ g for 10 min at 4°C. Bound GTP␥[ 35 S] was counted in a scintillation mixture. Nonspecific binding was determined in the presence of 10 M GTP␥S to be less than 10% of total binding. Measurement of Inositol Phosphate Production-An IP production assay was performed as described previously (13). CV-1 cells were seeded into a 12-well plate and infected with adenoviruses containing human GPR92 with 100 m.o.i. After infection, cells were incubated in M199 medium (Invitrogen) in the presence of 1 Ci/ml myo-[ 3 H]inositol (Amersham Biosciences)/well for 20 h. Medium was removed, and cells were washed with 0.5 ml Buffer A (140 mM NaCl, 20 mM HEPES, 4 mM KCl, 8 mM D-glucose, 1 mM MgCl 2 , 1 mM CaCl 2 , and 1 mg/ml fatty acid-free bovine serum albumin). Cells were then preincubated for 30 min with Buffer A containing 10 mM LiCl followed by addition of 1 M FPP, 10 M NAG, or 10 M LPA at 37°C for 30 min with or without 15-min prior treatment with a phospholipase C inhibitor, U73122. Replacing incubation medium with 0.5 ml of ice-cold 10 mM formic acid terminated the reaction. After 30 min at 4°C, the formic acid extracts were transferred to columns containing Dowex anion-exchange resin (AG-1-X8 resin, Bio-Rad). Total IPs were then eluted with 1 ml of ammonium formate, 0.1 M formic acid. Radioactivity was determined using a Tri-Carb 3100TR scintillation counter (PerkinElmer Life Sciences). cAMP Assay-Cyclic AMP levels were determined by measuring [ 3 H]cAMP formation from [ 3 H]ATP (13). Twenty-four hours before transfection, HeLa cells were seeded into 12-well plates. Cells were transfected with the pcDNA3-GPR92 plasmid with Effectene reagent. One day later, cells were labeled with 2 Ci/ml [ 3 H]adenine (PerkinElmer Life Sciences) for 24 h. 
Cells were first washed in phosphate-buffered saline and then incubated at 37°C for 20 min in serum-free DMEM containing 1 mM 1-methyl-3-isobutylxanthine. Cells were stimulated with 1 M FPP, 10 M NAG, or 10 M LPA for 30 min with or without 15-min prior treatment with an adenylate cyclase inhibitor, MDL-12330A. The reactions were terminated by replacing the medium with ice-cold 5% trichloroacetic acid containing 1 mM ATP and 1 mM cAMP. [ 3 H]cAMP and [ 3 H]ATP were separated on AG 50W-X4 resin (Bio-Rad) and alumina columns as described previously (13). The cAMP accumulation was expressed as Ca 2ϩ Mobilization Assay-F11 rat embryonic neuroblastoma ϫ DRG neuron hybrid cells (generous gifts from Dr. Henning Otto, Freie Universität, Berlin, Germany) were transiently transfected with hGPR92-GFP and grown on poly-D-lysinecoated glass coverslips for 24 -48 h. Cells were then incubated in a physiological solution (138 mM NaCl, 6 mM KCl, 1 mM MgSO 4 , 2 mM CaCl 2 , 1 mM Na 2 HPO 4 , 5 mM NaHCO 3 , 5 mM glucose, 10 mM HEPES, and 0.1% bovine serum albumin) with 5 M fura-2/AM (Molecular Probes, Eugene, OR) at room temperature for 45 min. Cells were washed twice with the dye-free physiological solution and mounted in an acrylamide chamber that allows for perfusion of incubation medium. Fura-2/AM fluorescence was measured in GFP-positive cells at excitation wavelengths 340 and 380 nm and at emission wavelength 510 nm using an IX70 fluorescence microscope (Olympus, Tokyo, Japan) coupled to a digital cooled CCD camera (CoolSNAP fx CCD camera, Roper Scientific, Tucson, AZ). ERK Activation-CV-1 cells (2 ϫ 10 5 ) were plated on a 60-mm dish for 1 day before infection with adenoviruses containing GPR92. Cells were infected with 100 m.o.i. adenoviruses for 3 h in serum-free DMEM. Then the medium was changed to 10% fetal bovine serum-containing DMEM. Before protein preparation for ERK Western blotting, cells were incubated for at least 16 h in serum-free medium prior to agonist stimulation. After agonist stimulation in the presence or absence of 100 ng/ml pertussis toxin, cells were lysed by RIPA buffer (50 mM Tris HCl (pH 8.0), 150 mM NaCl, 1% Nonidet P-40, 0.1% SDS, 1 mM EDTA, and protease inhibitor mixture) followed by boiling at 95°C for 5 min. Equal amounts of cellular extracts were separated on 10% polyacrylamide gels and transferred to nitrocellulose membranes for immunoblotting. Phosphorylated ERK1/2 and total ERK1/2 were detected by immunoblotting with mouse monoclonal anti-phospho-p44/42 MAPK (Cell Signaling Technology, 1:2000) and anti-p44/42 MAPK (Cell Signaling Technology, 1:2000), respectively. Chemiluminescence detection was performed using the ECL reagent. Molecular Modeling-A homology model of GPR92 was built on the basis of the 2.2-Å crystal structure (Protein Data Bank code 1U19) of the bovine rhodopsin (14) using the homology modeling program molIDE (15). Hydrogen atoms and Kollman-all charges were added to the homology model of GPR92 using Sybyl v7.0 (Tripos Inc., St. Louis, MO). The three-dimensional structure of FPP and NAG was sketched and refined using the Sybyl program with the Gasteiger-Huckel charge method. The virtual docking of clomipramine was performed using GOLD v3.0, a program that applies stochastic genetic algorithms for conformational searching (16). The number of genetic operations was set to 1 ϫ 10 5 , and the population size was set to 1 ϫ 10 2 . All structural figures were prepared using PyMol v0.98 (DeLano Scientific LLC, San Francisco, CA). 
Reverse Transcription-PCR Analysis-Total RNA was isolated from mouse tissues using TRI Reagent (Sigma). Two micrograms of total RNA were reverse transcribed with Moloney murine leukemia virus reverse transcriptase (Invitrogen) according to the manufacturer's instructions. The cDNA was amplified by PCR using the following primers: mGPR92-F (5′-TGGCAGAGTCTTCTGGACACT-3′; 681-701) and mGPR92-R (5′-GCCAAAGGCCTGGTTTCAGCG-3′; 989-1010). The PCR cycling parameters were as follows: denaturation at 95 °C for 5 min followed by 35 cycles of denaturation at 95 °C for 30 s, annealing at 57.5 °C for 30 s, and extension at 72 °C for 45 s. PCR products were separated on a 1.5% agarose gel, stained with ethidium bromide, and photographed under a UV light source. Immunocytochemistry and Histochemistry-The mouse lumbar DRG was isolated from 6-week-old male C57BL/6 mice. The human DRG was isolated from fresh cadavers with the informed consent of the relatives of body donors and the approval of the ethics committee of the Korea University College of Medicine, Seoul, Korea. The isolated DRG was fixed for 4 h in 4% paraformaldehyde, washed with 1× phosphate-buffered saline, and cryoprotected overnight in 30% sucrose. The DRG was serially cut (10 μm) on a cryostat and mounted on gelatin-coated slides. Slides were incubated in 1× phosphate-buffered saline containing 5% bovine serum albumin and 0.1% Triton X-100 for 1 h at room temperature. Sections were incubated overnight in primary antibody at 4 °C and washed three times with 1× phosphate-buffered saline. The primary antibodies used in this study are as follows: mouse GPR92 (1:100, MBL International, Woburn, MA), human GPR92 (1:1000, Abcam), mouse and human TRPV1 (1:1000, Abcam), and neurofilament 200 kDa (NF200; 1:1000, Chemicon, Temecula, CA). After several washes, sections were incubated with fluorescence-labeled secondary antibodies (anti-rabbit-Alexa594 antibody for human and mouse GPR92, anti-guinea pig-Alexa488 antibody for TRPV1, and anti-mouse-Alexa488 antibody for NF200, Molecular Probes), and nuclei were counterstained with Hoechst 33342 (10 μg/ml). Preparation of DRG Neurons and Ca2+ Imaging-Cultured DRG neurons were prepared as described previously (17). Briefly, DRG were isolated from all levels of the lumbar and sacral spinal cord from Sprague-Dawley rats (100-150 g) and incubated with 0.15% collagenase (Sigma) for 20 min and then with 0.125% trypsin (Sigma) for 10 min in Ca2+- and Mg2+-free HEPES buffer solution at 37 °C. DRG neurons were then mechanically dissociated with Pasteur pipettes by trituration and plated on poly-L-lysine-coated coverslips. Cells were maintained in DMEM supplemented with 10% fetal bovine serum, 1 mM sodium pyruvate, 100 units/ml penicillin, and 100 μg/ml streptomycin under a humidified atmosphere of 95% air, 5% CO2 at 37 °C. For intracellular Ca2+ imaging recordings, cells were used within 1-2 days after plating. The diameter of each DRG neuron was defined as the average of the distances along the longest and shortest axes of the cell body. All chemicals used for cell preparation were purchased from Invitrogen. For intracellular Ca2+ imaging, the acetoxymethyl ester form of fura-2 (fura-2/AM, Molecular Probes) was used as the fluorescent Ca2+ indicator. DRG neurons were incubated for 60 min at room temperature with 5 μM fura-2/AM and 0.001% Pluronic F-127 in a HEPES-buffered solution composed of 150 mM NaCl, 5 mM KCl, 1 mM MgCl2, 2.5 mM CaCl2, 10 mM HEPES, and 10 mM glucose (pH adjusted to 7.4 with NaOH). Cells were then washed with HEPES-buffered solution and placed on an inverted microscope (Olympus).
Cells were illuminated using a xenon arc lamp. The required excitation wavelengths (340 and 380 nm) were selected with a computer-controlled filter wheel (Sutter Instruments, Novato, CA). Emitted fluorescence was reflected through a 515-nm long pass filter to a frame-transfer cooled CCD camera. The ratios of emitted fluorescence were calculated using a digital fluorescence analyzer. All imaging data were collected and analyzed using Meta Imaging Software (Molecular Devices Corp., Downingtown, PA). Cell Transfection-DRG cells were transfected after 1 day in culture using Lipofectamine 2000. Briefly, GPR92 siRNA (5′-CGUUUGCAUAUGGUGUdTdT-3′) or a nontargeting scrambled siRNA (negative control) and the Lipofectamine 2000 diluted in serum-free DMEM/F-12 were combined and incubated at room temperature for 20 min. After the culture media were removed and saved, cells were incubated with the siRNA/Lipofectamine 2000 mixture at 37 °C for 4 h. Cells were then incubated with the saved culture media for 48 h. To monitor transfection efficiency, the GFP expression vector pEGFP-N1 (Clontech) was co-transfected with each siRNA. For each Ca2+ imaging experiment, the transfected cells without the GFP vector were loaded with fura-2/AM after separately confirming transfection efficiency (>80%). Data Analysis-All assays were performed in triplicate and repeated three times. The data are presented as mean ± S.E. of at least three independent experiments. Data analysis was performed using nonlinear regression, and data are expressed using sigmoidal dose-response curves. Agonist concentrations inducing half-maximal stimulation (EC50) and the maximal fold increase (Emax) were calculated using GraphPad PRISM4 software (GraphPad, San Diego, CA). One-way analysis of variance followed by the Newman-Keuls post-test was used for data analyses. p < 0.05 was considered statistically significant. RESULTS Ligand Screening for GPR92-We previously established a widely applicable screening system to identify ligands for orphan GPCRs (4). In this system, cDNAs for an orphan GPCR and GPCR signaling-specific reporter genes are introduced into adenovirus. These reporter genes include a cAMP-responsive element-driven luciferase gene (CRE-luc) for Gs- and Gi/o-linked pathways and SRE-luc for Gq/11- or G12/13-linked pathways (4,13). We screened ligands for GPR92 using CV-1 cells that were doubly infected by adenoviruses containing human GPR92 and SRE-luc or CRE-luc. Cells were treated with various lipid-derived molecules. GPR92 activity was determined by measuring luciferase activity. To examine whether the agonist-induced SRE-luc activity is mediated by G protein coupling, a [35S]GTPγS binding assay was performed using cells expressing GPR92 in the presence of various concentrations of FPP, LPA, or NAG. FPP dose-dependently increased [35S]GTPγS binding to cell fractions containing GPR92 with a higher potency than NAG and LPA (Fig. 1B). No agonist-induced increases in [35S]GTPγS binding were observed in mock-infected cells. Signaling Pathways of GPR92-To investigate the signaling pathways of GPR92, we determined the levels of various second messengers after treatment with FPP, NAG, or LPA. Treatment of GPR92-expressing cells with these agonists resulted in a concentration-dependent increase in IP production. FPP showed the highest potency for IP production, followed by LPA and NAG (Fig. 2A and Table 2).
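The EC50 and Emax values reported in Table 2 come from the sigmoidal dose-response fits described under "Data Analysis" above. As a rough illustration of that procedure, here is a minimal four-parameter logistic fit using SciPy in place of GraphPad PRISM; the concentration-response numbers are invented for the example and do not reproduce the study's data:

import numpy as np
from scipy.optimize import curve_fit

def sigmoid(log_conc, bottom, top, log_ec50, hill):
    # Four-parameter logistic (sigmoidal) dose-response curve
    return bottom + (top - bottom) / (1.0 + 10 ** ((log_ec50 - log_conc) * hill))

# Invented luciferase fold-induction data over a concentration series (M)
conc = np.array([1e-9, 1e-8, 1e-7, 1e-6, 1e-5])
response = np.array([1.1, 1.6, 3.2, 5.8, 6.1])  # -fold over basal

params, _ = curve_fit(sigmoid, np.log10(conc), response,
                      p0=[1.0, 6.0, -7.0, 1.0], maxfev=10000)
bottom, top, log_ec50, hill = params
print(f"EC50 = {10 ** log_ec50:.2e} M, Emax = {top:.2f}-fold")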
Agonist-induced production of IP was completely blocked by U73122, a phospholipase C inhibitor, indicating that GPR92 is Gq-coupled (Fig. 2B). FPP and LPA, but not NAG, increased cAMP levels in a dose-dependent manner (Fig. 2C and Table 2). Agonist-induced increases in cAMP levels were suppressed by MDL-12330A, an adenylate cyclase inhibitor (Fig. 2D), indicating that GPR92 is able to couple with Gs. The -fold induction of IP and cAMP by FPP was higher than the induction levels produced by LPA and NAG. To further determine the involvement of a cAMP-linked pathway, we performed an additional experiment using a CRE-luc reporter. The cells were coinfected with adenovirus containing GPR92 and CRE-luc and treated with various concentrations of FPP, LPA, and NAG. FPP and LPA increased CRE-luc reporter activity, whereas NAG failed to do so (supplemental Fig. 1A). H89, a cAMP-dependent protein kinase inhibitor, was able to inhibit both FPP- and LPA-induced CRE-luc activity (supplemental Fig. 1B). This again demonstrated the involvement of the cAMP/cAMP-dependent protein kinase pathway in GPR92-mediated signaling.
TABLE 3. EC50 and Emax values of FPP and NAG for wild type and mutant GPR92. The plasmids containing the wild type and mutant human GPR92 were cotransfected with the SRE-luc reporter vector into CV-1 cells along with β-galactosidase as an internal control. Forty-eight hours after transfection, cells were treated with graded concentrations of FPP and NAG for 6 h, and luciferase activity was determined. NR, no response to ligands. Each value represents the mean ± S.E. of three independent experiments performed in triplicate.
We then examined whether FPP and NAG can increase intracellular Ca2+ levels in GPR92-expressing cells. F11 cells transfected with GFP-fused human GPR92 were treated with various concentrations of FPP or NAG. Dose-dependent inductions of intracellular Ca2+ concentration were observed only in GFP-positive cells (Fig. 2E). Treatment of cells with FPP or NAG for 5 min elevated phospho-ERK levels (Fig. 2F, middle panel), whereas these agonists did not elevate phospho-ERK levels beyond basal amounts in mock-infected cells. A slight increase in phospho-ERK levels by LPA in mock-infected cells is likely due to expression of other types of LPA receptors in these cells (Fig. 2F, upper panel). Increased phospho-ERK levels were maintained for 15 min. The increases were not blocked by pertussis toxin, a specific inhibitor of Gi (Fig. 2F, lower panel), indicating that the increase in phospho-ERK level is independent of Gi activation. Identification of Ligand-binding Residues of GPR92-Although FPP and NAG have different chemical structures, they are able to activate the same receptor, which provided the impetus to investigate the ligand-binding residues of GPR92 used in binding to different agonists. To address this question, we constructed a molecular model of GPR92 based on the rhodopsin structure and performed virtual docking of the ligands to the receptor to predict the putative binding sites of GPR92 for FPP and NAG. Molecular modeling results revealed that the negatively charged diphosphate moiety of FPP has an electrostatic interaction with Arg276 in the third extracellular loop of GPR92 (Fig. 3A). Gly98 and Phe101 at transmembrane domain 3 have close contacts with the hydrophobic carbon chain of FPP (Fig. 3B). As Gly has the smallest side chain, it may provide a space for docking of the hydrophobic carbon chain of FPP.
The glycine moiety of NAG interacts with Thr97 at transmembrane domain 3 of GPR92 (Fig. 3C). Like FPP, the hydrophobic carbon chain of NAG has close contact with Gly98 and Phe101 (Fig. 3D). Support for the modeling data was obtained by mutating these potential binding residues. EC50 and Emax values of FPP and NAG for wild type and mutant GPR92 are summarized in Table 3. Mutation of Thr97 to Ile (T97I), to Leu (T97L), and to Ala (T97A) significantly decreased the response to NAG. However, these mutants responded to FPP with potency similar to that of the wild type receptor (Fig. 4, A and B), indicating that Thr97 might be important for NAG interaction. Mutation of Gly98 to Phe (G98F) or to Lys (G98K) completely abolished responsiveness to both FPP and NAG (Fig. 4, A and B). Mutation of Gly98 to Ala (G98A) also reduced responsiveness to both FPP and NAG (supplemental Fig. 2, A and B). This result indicates that the small Gly98 may be important for forming a binding pocket. A tight steric hindrance caused by the bulky side chains of Phe and Lys may hamper binding to both FPP and NAG. Mutation of Phe101 to Trp (F101W) drastically decreased sensitivity to NAG, whereas it reduced sensitivity to FPP to a level 10-fold lower than wild type (Fig. 4, C and D). Mutation of Phe101 to Ala (F101A), however, did not change sensitivity to either NAG or FPP. Replacement of Arg276 with Ser (R276S) completely abolished the FPP response but resulted in retention of responsiveness to NAG. Lys substitution (R276K) significantly decreased sensitivity to FPP and NAG. This finding suggests that the positively charged side chain of Arg276 may participate in an electrostatic interaction with the negatively charged diphosphate group of FPP. To examine whether the reduced responsiveness of mutant receptors to agonists is due to aberrant surface expression of the receptor, the subcellular localization of GFP-fused mutant receptors was determined. Like the wild type receptor, all mutant receptors were localized to the plasma membrane (Fig. 4E). Further, we examined protein expression levels by Western blot using an anti-GFP antibody, showing no remarkable differences in expression levels except for G98F (24% of wild type expression) (Fig. 4F). This result, together with the membrane expression of the mutant receptors, suggests that reduced or complete loss of responsiveness to agonists is not mainly due to aberrant or reduced expression of the mutants, except for G98F. Expression Pattern of GPR92 in the Mouse Tissues and Human DRG-Regional distribution of GPR92 mRNA in the mouse brain and peripheral tissues was assessed by reverse transcription-PCR analysis (Fig. 5A and supplemental Fig. 3A). GPR92 mRNA was expressed broadly at low levels throughout the brain regions and was expressed at relatively high levels in the DRG, spleen, stomach, small intestine, kidney, and thymus. GPR92 protein expression was further evaluated by immunohistochemical staining. To examine whether antibodies for GPR92 were appropriate for immunohistochemistry, we first performed immunocytochemistry in cells expressing mouse or human GPR92. Antibodies for mouse and human GPR92 specifically recognized mouse and human GPR92, respectively (Fig. 5B). For DRG immunohistochemistry, GPR92 was doubly stained with the neuronal markers NF200 and TRPV1 to distinguish subgroups of DRG neurons. In both mouse and human DRG, the anti-NF200 antibody recognized large diameter neurons, whereas TRPV1 mainly stained medium and small diameter neurons.
In the mouse DRG, GPR92 was faintly labeled in NF200-negative cells, whereas some signals were observed in NF200-positive cells. GPR92 was strongly labeled in TRPV1-positive cells (Fig. 5C). In the human DRG, GPR92 was distributed in small diameter neurons but rarely detected in NF200-positive neurons (Fig. 5D). Double staining of human DRG sections with GPR92 and TRPV1 revealed that the majority of TRPV1-positive neurons also expressed GPR92 (Fig. 5D). No cross-reactivity between the GPR92 and TRPV1 antibodies was observed (supplemental Fig. 3, B and C).
FIGURE 5. Tissue distribution of GPR92 mRNA and immunohistochemistry of GPR92 in the mouse and human DRG. A, total RNA levels from 6-week-old C57BL/6 male mice in the indicated tissues were assessed by reverse transcription-PCR. Glyceraldehyde-3-phosphate dehydrogenase (GAPDH) was used as a loading control. B and C, for validation of the anti-GPR92 antibody, an immunofluorescence assay was performed in HeLa cells expressing mouse GPR92 and human GPR92. Mouse GPR92 and human GPR92 were detected using an anti-rabbit-Alexa594 secondary antibody and an LSM510 confocal microscope. C and D, immunohistochemistry of mouse (C) and human DRG (D) using anti-GPR92 in combination with anti-NF200 or anti-TRPV1 antibodies. Scale bars in B-D represent 20, 100, and 50 μm, respectively.
FPP- and NAG-induced Ca2+ Mobilization in Cultured DRG Neurons-We next examined the function of GPR92 by treating cultured rat DRG neurons with FPP or NAG. Agonist-induced Ca2+ responses were mainly observed in small diameter neurons. Some neurons responded to both FPP (1 μM) and NAG (10 μM), whereas others responded to only FPP or NAG (Fig. 6, A-D). Of the small diameter neurons tested (diameter <30 μm, n = 200), 15% responded to FPP, NAG, and capsaicin. Additionally, 10% responded to FPP and NAG but not to capsaicin. These FPP- or NAG-mediated Ca2+ increases were repeatedly observed upon a second application of the agonist. However, treatment of medium (30-40 μm) or large (>40 μm) diameter DRG neurons with FPP or NAG did not produce any changes in Ca2+ responses. To confirm that these agonist-induced Ca2+ increases occur via activation of GPR92, we used siRNA-mediated silencing of GPR92 in cultured rat DRG neurons. The selective knockdown of GPR92 was confirmed by Western blot analysis (supplemental Fig. 4A). To monitor transfection efficiency in cultured rat DRG neurons, the GFP expression vector pEGFP-N1 was co-transfected with each siRNA (supplemental Fig. 4B). For each Ca2+ imaging experiment, the transfected cells without the GFP vector were loaded with fura-2/AM after separately confirming transfection efficiency (>80%) (Fig. 6E). For capsaicin-sensitive cells, the NAG- and FPP-mediated [Ca2+]i increases were completely inhibited in GPR92 knockdown cells (Fig. 6F). DISCUSSION Human GPR92 is structurally most closely related to GPR23/LPA4, a recently identified receptor for LPA (7). Indeed, GPR92 binds to LPA. This binding induces Gq/11-, G12/13-, and Gs-mediated signaling pathways (6,8). Because the EC50 value for LPA in the production of IP and cAMP in cells expressing GPR92 is in the micromolar range, there is likely to be an agonist that is more potent than LPA. This possibility was further supported by the observation that a luminal extract is more potent than LPA in inducing cAMP production in GPR92-expressing cells (8).
While searching for potential GPR92 agonists, we observed that GPR92 is activated by various lipid-derived molecules with different affinities. Of these molecules, FPP demonstrated a higher potency and efficacy for activating GPR92 than LPA. FPP showed an approximately 10-fold lower EC50 value and 3-fold higher Emax value than LPA in agonist-induced SRE-luc activity in cells expressing GPR92. NAG showed a potency similar to that of LPA for inducing SRE-luc activity. One might suggest that FPP-induced SRE-luc activity is likely not due to G protein coupling, because FPP can contribute to transcriptional activation independent of GPCR-mediated G protein activation. FPP is a donor for isoprenylation of many proteins, including the βγ subunit of heterotrimeric G proteins and small GTPases such as Ras and Rho. Isoprenylation is necessary for translocation of these G proteins to the plasma membrane and subsequent activation (18-20). Because SRE can possibly serve as a downstream target of FPP-mediated Ras activation, it can be postulated that FPP-induced SRE-luc activity in this study is due to Ras activation. Further, FPP has been found to be a novel transcriptional activator for a subset of nuclear hormone receptors (21), suggesting possible G protein-independent effects of FPP on SRE-luc activity.
FIGURE 6. A and B are pooled data obtained from five DRG neurons, whereas C and D are sampled recordings showing differential responses of FPP and NAG on a cultured DRG neuron. The black dot (●) represents acute application of each indicated drug. The response to capsaicin (0.1 μM) was examined in all DRG neurons tested. E and F are pooled data obtained from 12-27 siRNA-transfected DRG neurons. To knock down GPR92, cultured rat DRG neurons were exposed to siRNA against GPR92; as a negative control, a nontargeting scrambled siRNA with no sequence homology to any known gene sequences was used. E and F represent agonist-induced [Ca2+]i signals in capsaicin-insensitive and -sensitive cells, respectively.
However, this FPP-induced SRE-luc activity is only observed in GPR92-expressing cells and not in mock-infected cells. In GPR92-expressing cells, FPP induced immediate responses, such as Ca2+ mobilization, accumulation of IP and cAMP, and elevation of phospho-ERK. Additionally, increased [35S]GTPγS binding indicates that FPP may have a direct interaction with GPR92, leading to activation of G proteins such as Gs and Gq/11. Most GPCRs respond to a single endogenous ligand. Some phylogenetically related receptors share a common ligand. However, some receptors respond to more than two different compounds, each with different affinities (2,3). GPR92 responds to various lipid-derived molecules including FPP, NAG, and LPA. These compounds possess a long hydrophobic carbon chain with a polar moiety. Our molecular modeling study, combined with site-directed mutagenesis, revealed that the hydrophobic moieties of FPP and NAG have common binding sites, Gly98 and Phe101, at transmembrane domain 3. Gly98 is likely important for forming a binding pocket that allows a hydrophobic interaction between Phe101 of GPR92 and the hydrophobic carbon chains of FPP and NAG. In addition to the common binding sites, GPR92 has specific binding residues for the charged moieties of FPP and NAG. The negatively charged diphosphate moiety of FPP and the glycine moiety of NAG interact with Arg276 in the third extracellular loop and Thr97 at transmembrane domain 3, respectively.
In this regard, molecular interactions between GPR92 and other molecules such as LPA and 2-AG remain to be elucidated. FPP and NAG are endogenously synthesized. FPP is a key intermediate in the biosynthesis of steroids, carotenoids, and polyisoprenoids. FPP is easily transported into plasma. The steady-state human plasma concentration of FPP is ~7 ng/ml (~15 nM) (22). As the mRNA for GPR92 is expressed in a variety of peripheral tissues, including spleen, thymus, stomach, small intestine, and kidney, it is possible that circulating FPP may exert its effects in these tissues through activation of GPR92. NAG is naturally produced in a variety of tissues including the spinal cord, small intestine, kidney, glabrous skin, and brain (10). These tissues also contain GPR92 mRNA. Recently, NAG was identified as an endogenous ligand for GPR18; it activates the Gi-mediated signaling pathway (23). This suggests that NAG plays a role in immune regulation, because GPR18 is highly expressed in lymphoid tissues such as peripheral blood leukocytes, spleen, and thymus (24,25). The biological functions of NAG with regard to GPR92, however, are not yet understood. However, as the tissue distributions of GPR92 and NAG are highly correlated (5,10), it is postulated that NAG has a role in these tissues through activation of GPR92. In good agreement with a previous observation (6), GPR92 was highly expressed in the DRG, where FPP and NAG are able to induce Ca2+ elevation. This indicates possible roles of FPP and NAG in the DRG. FPP and its synthase activities are present in the spinal cord (26), suggesting a role of FPP in sensory neurotransmission through activation of GPR92. NAG is highly concentrated in the spinal cord and glabrous skin (10), which indicates a physical interaction of NAG with GPR92 in sensory neurons. However, it is not known how FPP and NAG are regulated under physiological conditions and whether FPP and NAG have direct actions on DRG neurons. An interesting finding of this study is that GPR92 was expressed at high levels in small and medium diameter neurons of the DRG, which include primary sensory fibers associated with acute and neuropathic pain. Many GPR92-positive cells in the DRG coexpress TRPV1, which functions as a nociceptor (27). FPP- and NAG-induced Ca2+ elevation was observed only in small diameter DRG neurons. Some FPP- and NAG-responsive cells responded to capsaicin, a TRPV1 agonist. Of the capsaicin-responsive DRG neurons, ~38% of the cells responded to either FPP or NAG. This observation, together with the coexpression of GPR92 and TRPV1, indicates a possible role of GPR92 in TRPV1-mediated pain sensing. Recently, involvement of GPR92 in neuropathic pain has been suggested, because GPR92 knock-out mice display significantly less sensitivity in the spinal cord to noxious mechanical and thermal stimulation than wild type littermates (28). Several reports suggest involvement of NAG in nociception in the sensory nervous system. No evidence for a role of FPP in nociception has been provided. Peripheral administration of NAG in the hind paw is capable of suppressing the phase 2 response of formalin-induced pain behavior in the rat (10). Furthermore, recent studies suggest involvement of NAG in inflammatory and neuropathic pain. Intrathecal administration of NAG reduces the mechanical allodynia and thermal hyperalgesia induced by either intraplantar injection of Freund's complete adjuvant or partial ligation of the sciatic nerve (29,30).
NAG is a structural analog of the endogenous cannabinoid anandamide. However, the pain-suppressive effects of NAG are not likely mediated by activation of the cannabinoid CB1 receptor, because NAG has a very poor affinity for the CB1 receptor (31,32). One possible explanation for the effects of NAG is that NAG increases anandamide concentration by inhibiting the hydrolytic activity of fatty acid amide hydrolase on anandamide (33). Alternatively, the coexpression of GPR92 and TRPV1 in the DRG raises the possibility that NAG exerts its pain-suppressive effects through GPR92 in the sensory nervous system. As NAG acts as a partial agonist at GPR92, it is possible that it serves as an antagonist to FPP under physiological conditions, thereby reducing activation of GPR92, although this possibility should be further investigated. In summary, we have demonstrated that multiple lipid-derived molecules can activate GPR92 with different potencies. Of these, FPP was by far the most potent ligand and exerted effects with much greater efficacy than any of the other compounds tested, including LPA. However, we do not exclude the possible presence of other, as yet unknown, GPR92 agonists with even greater efficacy. The presence of multiple ligands for GPR92 portends a wide range of biological and medicinal roles for this receptor. The roles of FPP and NAG through activation of GPR92 in peripheral tissues and the DRG require further investigation.
2018-04-03T03:29:35.572Z
2008-07-25T00:00:00.000
{ "year": 2008, "sha1": "2dcd40c2aacbd645d621524e612cd3ec46bffa3c", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/283/30/21054.full.pdf", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "d01838a432e9390df8dffb206188fd1af9a7f346", "s2fieldsofstudy": [ "Chemistry", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
230524158
pes2o/s2orc
v3-fos-license
Generative Adversarial Networks with Physical Evaluators for Spray Simulation of Pintle Injector Due to their adjustable geometry, pintle injectors are especially suitable for liquid rocket engines that require a widely throttleable range. However, applying conventional computational fluid dynamics approaches to simulate the complex spray phenomena over the whole throttling range remains a great challenge. In this paper, a novel deep learning approach for simulating instantaneous spray fields under continuous operating conditions is explored. Based on one specific type of neural network and the idea of physics constraints, a Generative Adversarial Networks with Physical Evaluators (GAN-PE) framework is proposed. The geometry design and mass flux information are embedded as the inputs. After the adversarial training between the generator and discriminator, the generated field solutions are fed into the two physical evaluators. In this framework, a mass conservation evaluator is designed to improve training robustness and convergence, and a spray angle evaluator, composed of a down-sampling CNN and a theoretical model, guides the networks to generate spray solutions more consistent with the injection conditions. The characterization of the simulated spray, including the spray morphology, droplet distribution, and spray angle, is well predicted. The work suggests great potential for employing prior physics knowledge in the simulation of instantaneous flow fields. Introduction Due to a wider throttling range and greater combustion stability, pintle injectors are especially suitable for liquid rocket engines that require deep, fast, and safe throttling [1,2], such as the descent propulsion system in the Apollo program [3] and the reusable Merlin engine of SpaceX [4]. In practical throttleable engine applications, the pintle is movable to alter the injection area so that the mass flow rate of the injected propellants can be varied continuously according to an economical and safe thrust curve in a given situation [5]. However, in previous spray simulations of pintle injectors, changes were only considered under discrete condition combinations over a limited number of selected operating points [6,7,8]. With these traditional discrete methods, simulations have to be conducted repeatedly to vary the operating conditions, and the computational cost becomes prohibitively expensive [9]. Innovations in the spray simulation of the pintle injector are needed to address this issue. In contrast, machine learning approaches, especially neural networks (NNs), have demonstrated their efficiency in predicting flow fields under different conditions with a single surrogate model [10]. Previous research on flow field prediction using NNs has mainly focused on data-driven methods. Besides the indirect way of using closure models [11,12], the field solution can also be obtained directly from a network model trained with a large number of samples [13,14,10,15]. However, some predictive results obtained by data-driven methods may still exhibit considerable errors against physical laws or operating conditions [13,16,17]. Besides, in sparse data regimes, some machine learning techniques lack robustness and fail to provide guarantees of convergence [18]. To remedy the above-mentioned shortcomings of data-driven methods, physics-driven/informed methods have been proposed recently [19].
By providing physics information, NNs are able to directly obtain field solutions that obey physical laws and operating conditions. In these works, Partial Differential Equations (PDEs) were employed in the loss function to explicitly constrain the network training [20,21]. Among state-of-the-art neural network methods, Generative Adversarial Networks (GANs), proposed by Goodfellow et al. [22], are efficient at generating instantaneous flow fields [23,24]. In spite of their impressive performance on unsupervised learning tasks, the quality of solutions generated by GANs is still limited for some realistic tasks [25]. Besides, as shown in the training results later, the transient nature of the spray injection and liquid sheet breakup makes it extremely difficult for ordinary networks to quantify the location and intensity of the dominating spray characteristics. In this paper, based on one specific type of GAN and the idea of physics constraints, a novel Generative Adversarial Networks with Physical Evaluators (GAN-PE) framework is proposed. By introducing the mass conservation and spray angle models as the two evaluators, this framework achieves better training convergence and predictive accuracy. The trained model is able to simulate the macroscopic morphology and characterization of the instantaneous flow fields under different conditions. This paper is organized as follows. We first introduce the experimental settings and data set acquisition. Second, the architecture of GAN-PE and its detailed parts are described. Then, the learning results of numerical experiments are presented for validation. Finally, conclusions are drawn. Data Set from Experiments Our training data are extracted from the spray experimental results of the pintle injectors. Experimental facilities The non-reactive cold-flow experiments were conducted at atmospheric pressure. Dry air is used as the simulant for axial flows and filtered water as the simulant for radial flows. The schematic of the experimental facilities is shown in Figure 1a. A back-lighting photography technique is used for instantaneous spray image visualization. The image acquisition system consists of an LED light source, a high-speed camera, and a computer. The exposure time is 10 µs and the frame rate is 50k fps. The detailed gas-liquid pintle injector is featured in Figure 1b. In order to study the influence of the momentum ratio on the spray angle, the experimental device is designed to use replaceable parts. In the experiment, the height of the radial liquid jet outlet and the thickness of the axial gas sheet are adjusted by changing the height of the sleeve and the axial gap distance, respectively. When the liquid propellant is injected radially from the two sides of the pintle end through the manifold, liquid columns are formed. These columns are broken up by the axial gas propellant injected from the gap clinging to the pintle. Finally, due to impingement and collision, the liquid columns break and form a plane conical spray like that of a hollow-cone atomizer. This design induces vigorous mixing of the gaseous and liquid propellants, which yields a high combustion efficiency [26]. Data set acquisition The spray experiments are carried out with throttling levels L_t of 40% ∼ 80%. L_t is varied by linear adjustment of the height of the radial liquid jet outlet and the thickness of the axial gas sheet. The radial liquid jet outlet heights at throttling levels of 40%, 60%, and 80% are 2 mm, 3 mm, and 4 mm, respectively.
When L_t is fixed, the height of the radial liquid jet outlet is fixed and equal to the thickness of the axial gas sheet. Table 1 shows the operating conditions of the experimental campaign and the corresponding key specifications of the pintle injector. Table 1: Experimental operating conditions. L_open and T_gs are the injector opening distance and gas sheet thickness, respectively. ṁ_g and ṁ_l are the mass flow rates of the gaseous and liquid propellants, respectively. C_TMR is the momentum ratio of the two propellants. As shown in Figure 2, to measure the spray angle, the spray images obtained in the experiment are post-processed to clarify the spray boundary. The average of 10 images is used to measure the spray angle manually. Using this setup, the time-averaged images of the sprays are obtained and the spray angles, defined as θ, are manually measured. Then, the average images and the corresponding spray angles are used to train the spray angle estimator later. The resolution of the instantaneous spray images is 640 × 480. In order to reduce the training cost, the images are interpolated to a resolution of 128 × 128, while the measured angle values, which represent the nature of the spray phenomenon, are fixed in spite of the image scaling. Overview Here, a Generative Adversarial Networks with Physical Evaluators (GAN-PE) framework is proposed. As shown in Figure 3, GAN-PE is composed of four parts: the generator (G), the discriminator (D), and two physical evaluators. After the initial field solution is generated by G, three components are employed to guarantee that the output is an accurate field solution. GANs form the basis of the proposed network framework: G captures the real spray data distribution corresponding to the operating conditions, and D estimates the probability that a condition-sample pair came from the training data rather than from G. There are also two evaluators designed to improve the performance of the GANs. The first, the Mass Conservation Evaluator (E_MC), is used to improve generation robustness and training convergence. The second, the Spray Angle Evaluator (E_SA), is used to improve the predictive accuracy under the specific operating conditions. Fed with the outputs from G, the losses of D, E_MC, and E_SA are calculated respectively. After that, backpropagation is applied to adjust the U-net CNN of G to generate a new spray field that better satisfies the conditions and prior physics knowledge. After enough iterations, the network will be able to generate 'correct' spray fields. Generator From inputs towards outputs, the network of G consists of two parts, encoding and decoding [27], as sketched below. In the encoding process, the operating conditions L_open, T_gs, ṁ_g, and ṁ_l are resized as four matrices for progressive convolutional downsampling with corresponding kernels. In this way, the matrices with a size of 128 × 128 are reduced into one linear data pool consisting of 512 neurons. The decoding part then works in the opposite way, which can be regarded as an inverse convolutional process mirroring the behavior of the encoding part. Along with the increase in spatial resolution, the spray fields are reconstructed from the data pool by up-sampling operations. In addition, the feature channels are concatenated between the encoding and decoding paths. More details of the U-net architecture and convolutional blocks, including the activation functions, pooling, and dropout, are given in Ref. [21].
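To make the encoder-decoder just described concrete, here is a minimal PyTorch sketch of a conditional U-net generator. It is a toy illustration under stated assumptions, not the authors' configuration: the channel widths, depth, kernel sizes, and activations are invented, and only one skip connection is shown. The four operating conditions are broadcast to 128 × 128 planes and stacked as input channels:

import torch
import torch.nn as nn

class TinyUNetGenerator(nn.Module):
    # Toy conditional U-net: 4 condition channels in, 1 spray-field image out
    def __init__(self, ch=16):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(4, ch, 4, 2, 1), nn.LeakyReLU(0.2))        # 128 -> 64
        self.enc2 = nn.Sequential(nn.Conv2d(ch, 2 * ch, 4, 2, 1), nn.LeakyReLU(0.2))   # 64 -> 32
        self.dec2 = nn.Sequential(nn.ConvTranspose2d(2 * ch, ch, 4, 2, 1), nn.ReLU())  # 32 -> 64
        # Skip connection: decoder sees its own features concatenated with enc1's
        self.dec1 = nn.Sequential(nn.ConvTranspose2d(2 * ch, 1, 4, 2, 1), nn.Tanh())   # 64 -> 128

    def forward(self, lopen, tgs, mg, ml):
        # Broadcast each scalar condition to a 128x128 plane, stack as channels
        planes = [c.view(-1, 1, 1, 1).expand(-1, 1, 128, 128) for c in (lopen, tgs, mg, ml)]
        x = torch.cat(planes, dim=1)
        e1 = self.enc1(x)
        e2 = self.enc2(e1)
        d2 = self.dec2(e2)
        return self.dec1(torch.cat([d2, e1], dim=1))

gen = TinyUNetGenerator()
cond = [torch.full((2,), v) for v in (3.0, 3.0, 0.01, 0.09)]  # a batch of 2 conditions
print(gen(*cond).shape)  # torch.Size([2, 1, 128, 128])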
The weighted loss function considering the following discriminator and evaluators is written as

L_G = L_D + α L_EMC + β L_ESA, (1)

where L_D, L_EMC, and L_ESA are the loss terms calculated by D, E_MC, and E_SA, respectively. Also, α and β are constant hyperparameters which are tuned to adapt the scales of these loss terms. After proper training, the generator is able to map a spray sample from a random uniform distribution to the desired distribution that obeys the physical knowledge and conditions. Discriminator D is then used to feed back to G the probability that samples come from the training distribution rather than the generated one. We use the Least Squares Generative Adversarial Networks (LSGANs) setting to train D and G simultaneously [28]. This special type of GAN helps to remedy vanishing gradients by using a least-squares loss function instead of the sigmoid cross-entropy loss function [29]. Here, D is adapted from the encoder of G, which means the generations are down-sampled by convolutional calculation so that the spray field information is condensed into a linear 1-D tensor. This 1-D data pool is then trained to maximize the probability of assigning the correct label (real/fake) to both training targets and generated solutions. Similar to the work in Ref. [30], we feed D with input-output pairs instead of only G's outputs, as is done in random image generation tasks. The operating conditions and the outputs are concatenated as different feature channels in a unique 4-D data tensor. In this way, D helps to judge whether the outputs accord with the corresponding conditions, not merely whether they have the right spray morphology. The loss functions for LSGANs are defined as

min_D V(D) = (1/2) E_x[(D(x) − b)²] + (1/2) E_z[(D(G(z)) − a)²],
min_G V(G) = (1/2) E_z[(D(G(z)) − c)²], (2)

where x is the training data and z is the input variables. Also, a and b are the labels for fake data and real data, respectively, and c denotes the possibility that G wants D to believe for fake data. Here, we apply a = 0 and b = c = 1. So, L_D is equal to the second part of Equation (2). Mass conservation evaluator As shown in Figure 4, we assume that there are a few rings with different diameters that are tangent at the middle point of the upper boundary in both the generated images and the average images. Following the definition of the "L1 loss", which is widely used in the machine learning community, we define a mass conservation loss here. The difference is that the former measures the sum of absolute errors between each element in the generation and the target [31], whereas ours first calculates a gray value summation over every element in one concerning ring and then the summation of the absolute errors between the corresponding rings in the generation and the target. The idea comes from the fact that the spray phenomenon obeys the mass conservation law, i.e., the mass fluxes of droplets through one specific ring in every instantaneous frame are equivalent. Here, the "mass flux" is a broader concept which also involves the background shadow. The mass conservation error, i.e., the loss term from E_MC, is defined as

L_EMC = Σ_{i=1}^{m} | Σ_{j=1}^{n} x_{ij} − Σ_{j=1}^{n} y_{ij} |, (3)

where x and y are the gray values in the generated images and average targets, respectively. Also, m is the number of concerning rings and n is the number of data points in one concerning ring in the matrix. E is the error between the output and the average matrix and is defined as

E = (1/N) Σ_{k=1}^{N} | x_k − y_k |, (4)

where N is the number of all the data points in the matrix. To improve the generation randomness, and in view of the error caused by light transmission and reflection, a loss threshold E_thr is introduced herein: once the loss is less than the threshold, this loss term is ignored.
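A minimal sketch of the loss terms above, assuming the reconstructed forms of Eqs. (1)-(4) with the LSGAN labels a = 0 and b = c = 1. The ring masks, the weights alpha and beta, and the threshold handling are illustrative assumptions; PyTorch is used for the tensor arithmetic:

import torch

def lsgan_d_loss(d_real, d_fake):
    # Eq. (2), discriminator part: least-squares targets b=1 (real), a=0 (fake)
    return 0.5 * ((d_real - 1.0) ** 2).mean() + 0.5 * (d_fake ** 2).mean()

def lsgan_g_loss(d_fake):
    # Eq. (2), generator part (the L_D term fed back to G): target c=1
    return 0.5 * ((d_fake - 1.0) ** 2).mean()

def mass_conservation_loss(gen_img, avg_img, ring_masks, e_thr=0.0):
    # Ring-wise "mass flux" loss, Eq. (3): compare gray-value sums per ring.
    # ring_masks: (m, H, W) boolean masks for the m concerning rings.
    # Contributions below the threshold E_thr are ignored.
    loss = gen_img.new_tensor(0.0)
    for mask in ring_masks:
        diff = (gen_img[..., mask].sum(-1) - avg_img[..., mask].sum(-1)).abs().mean()
        if diff.item() > e_thr:
            loss = loss + diff
    return loss

def generator_total_loss(d_fake, gen_img, avg_img, ring_masks, l_esa,
                         alpha=1.0, beta=1.0):
    # Weighted generator loss, Eq. (1): L_G = L_D + alpha*L_EMC + beta*L_ESA
    return (lsgan_g_loss(d_fake)
            + alpha * mass_conservation_loss(gen_img, avg_img, ring_masks)
            + beta * l_esa)

# Tiny demo with random tensors and placeholder ring masks
gen = torch.rand(2, 1, 128, 128)
tgt = torch.rand(2, 1, 128, 128)
rings = torch.zeros(3, 128, 128, dtype=torch.bool)
rings[:, 0, :] = True  # stand-in for real ring-shaped masks
print(mass_conservation_loss(gen, tgt, rings))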
Spray angle evaluator This evaluator is composed of two parts: one is the theoretical model of the spray angle, and the other is a CNN encoder that estimates spray angles from the generated field solutions. The axial momentum equation of the liquid jet can be written as Eq. (5); with u_l = du/dt, integrating the axial momentum equation once with respect to time gives Eq. (6), and a second integration with respect to time gives Eq. (7). For the collision between the gas sheet and the rectangular liquid jet, the momentum ratio is given by Eq. (8). Equation (7) can then be expressed in terms of the momentum ratio C_TMR as Eq. (9), where the slope of the liquid jet θ at the thickness of the gas sheet can be written as Eq. (10). The theoretical model assumes that the liquid jet does not deform, but in reality it deforms under aerodynamic forces, which results in a reduction of the effective momentum of the liquid jet. Therefore, the liquid jet deformation factor γ, which is obtained from the experimental results, is introduced to modify the theoretical spray angle model, and Eq. (10) is rewritten as Eq. (11). In the field of medical image analysis, machine learning approaches, especially deep neural networks, have been employed for automated scoliosis assessment [32]. In those works, X-ray images are fed into a neural network estimator and the spinal Cobb angles are obtained [33,34]. Similarly, inside E_SA there is a well-trained spray angle estimator that outputs angle values from the predicted images. The architecture of this CNN encoder is like that of D, except that one linear layer is added at the end to output the estimated spray angle θ′. The loss term from E_SA is calculated as Eq. (12).
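How E_SA could be wired is sketched below. The exact form of Eq. (12) is not recoverable here, so an absolute-error penalty between the CNN estimate θ′ and the γ-corrected theoretical angle of Eq. (11) is assumed; angle_estimator stands for the pre-trained down-sampling CNN and theta_model for the theoretical angle, both taken as given:

import torch

def spray_angle_loss(gen_batch, angle_estimator, theta_model):
    # Hypothetical E_SA loss: batch-average the generated fields, estimate the
    # spray angle with a frozen CNN encoder, and penalize deviation from the
    # gamma-corrected theoretical angle theta_model (Eq. (11)).
    avg_img = gen_batch.mean(dim=0, keepdim=True)   # one image per batch
    theta_est = angle_estimator(avg_img)            # theta' from the CNN
    return (theta_est - theta_model).abs().mean()   # assumed form of Eq. (12)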
In some generated images, the background is not agree with the real target, the introduction of the L EMC helps G identifying the position and intensity of the droplet as well as the background shadow. In our framework, when getting the predicted spray field, all the output matrices in one batch will be averaged to one image. Then this average image will be used to feed the down-sampling CNN to obtain the spray angle of this batch. Here, we use the average images from experiments as test samples for the validation of the spray angle estimator. Table 2 Predictions Literature shown that the macroscopic morphology study is important to characterize a spray [35]. Here, the simulated spray morphology is analyzed and compared with the experimental results. For the test cases which are not in the learning domain, the results have a small deviation and all the predicted angle values are less than the manually-measured ones. It is because these test cases are not constrained by the targets so the prediction have a trend to approach the mean value of the adjacent operating points which is happened to be larger. Conclusion In this paper, we proposed a novel deep learning framework constrained by physical evaluators to directly predict spray solutions based on generative adversarial networks. The normal discriminator and the mass conservation and spray angle evaluators are used to constrained the CNN to generate the spray solution, including macroscopical morphology and spray angle. The former evaluator is able to improve the training convergence and the latter one helps obtaining more accurate solutions that are consist with the operating conditions. It is noteworthy that the related network architecture and spray problem are generic and the proposed framework is potentially suitable for other fluid field simulations which have proper prior physics knowledge. Further research will be carried out for spray droplet size analysis and prediction with the present network framework.
2021-01-06T02:15:52.068Z
2020-12-27T00:00:00.000
{ "year": 2020, "sha1": "9e9b2c1270b4b9940d8e0f15197e4c7adad058b7", "oa_license": "CCBY", "oa_url": "https://aip.scitation.org/doi/pdf/10.1063/5.0056549", "oa_status": "GOLD", "pdf_src": "ArXiv", "pdf_hash": "cd5e10883c1928e06632733cabe79b2c2fd9dfeb", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
248700758
pes2o/s2orc
v3-fos-license
Mediating Role of Resourcefulness in the Relationship Between Illness Uncertainty and Poststroke Depression Objectives To examine the associations between illness uncertainty, resourcefulness, and poststroke depression (PSD) and to identify whether stroke patients' resourcefulness plays a mediating role in the relationship between illness uncertainty and PSD. Methods A cross-sectional study was conducted from September 2020 to April 2021. A convenience sample of 355 stroke patients was recruited. A general characteristics questionnaire, the Mishel Uncertainty in Illness Scale, the Resourcefulness Scale (RS), and the Patient Health Questionnaire-9 (PHQ-9) were used to obtain data. Descriptive analysis, Student's t-test, Mann-Whitney U-test, chi-squared test, hierarchical regression analyses, Pearson correlation analysis, and mediation analysis with the PROCESS macro were used to analyze the data. Results Illness uncertainty, resourcefulness, and PSD were significantly related to each other. Resourcefulness partially mediated the relationship between illness uncertainty and PSD. Conclusion Illness uncertainty and resourcefulness were significantly associated with PSD, and resourcefulness played a mediating role between illness uncertainty and PSD. Interventions designed to reduce illness uncertainty and enhance resourcefulness may contribute to the prevention and improvement of PSD. INTRODUCTION Poststroke depression (PSD) is the most common mental health issue found in stroke patients. According to authoritative reports, approximately 33% of stroke survivors worldwide suffer from PSD (Towfighi et al., 2017). Patients with PSD mostly present with mood swings, a lack of interest, sleep disorders, changes in appetite, etc. (Zhao et al., 2018). PSD is associated with numerous negative outcomes, such as disability, severe cognitive impairment, and poor quality of life (Kim et al., 2018;Blöchl et al., 2019;Medeiros et al., 2020). Moreover, PSD contributes to high rates of recurrence and mortality in stroke patients (Cai et al., 2019). Therefore, it is very important to identify the modifiable factors related to PSD and the mechanisms by which these factors lead to PSD. Illness uncertainty refers to an inefficient cognitive state due to an individual's lack of ability to understand the meaning of illness-related events (Johnson Wright et al., 2009). For stroke survivors, the severe clinical manifestations and a lack of disease-related knowledge make it difficult for patients to predict the prognosis of their disease; this in turn may result in uncertainty. It has been reported that stroke patients experience moderate levels of illness uncertainty. Illness uncertainty, as a stressor for patients, seriously endangers their mental health (Chen et al., 2020;Verduzco-Aguirre et al., 2021). Studies have found that illness uncertainty is significantly positively correlated with depression in patients with heart failure (Chen et al., 2020) and older adults with advanced cancer (Verduzco-Aguirre et al., 2021). Although studies have shown that illness uncertainty in stroke patients may be related to PSD (Peng et al., 2016;Wei et al., 2018), the relationship between them still requires further research to be confirmed. In addition, there is evidence that not all individuals who experience illness uncertainty report the appearance of depressive symptoms. Some protective factors may buffer the impact of illness uncertainty on depression.
Specifically, patients' illness uncertainty not only directly promotes the occurrence of depression but also indirectly influences depression through positive psychological resources, such as hope (Cui et al., 2021) and coping and adaptation ability (Wu et al., 2020). Resourcefulness is also an important positive psychological resource for individuals, but whether it plays a mediating role in the relationship between illness uncertainty and depression has not been verified. Resourcefulness is defined as an individual's capability to deal with difficulties using cognitive and behavioral skills, including personal and social resourcefulness (Zauszniewski, 2016). Personal resourcefulness refers to the skill of solving problems alone, and social resourcefulness refers to the skill of solving problems by seeking help from others (Zauszniewski, 2016). With the development of positive psychology, positive psychological resources and qualities have come to play a major role in helping individuals cope with difficult situations. Research has revealed that resourcefulness is an important psychosocial resource for protecting an individual's wellbeing, such as in terms of adaptability (Bekhet and Zauszniewski, 2016), mental health (Zauszniewski and Burant, 2020), and quality of life (Yu et al., 2019). Previous studies have reported that resourcefulness is closely related to depression, and individuals with higher resourcefulness have fewer symptoms or lower levels of depression (Choi et al., 2013;Lin et al., 2017). Moreover, resourcefulness, as a person's ability to cope with adversity, can buffer the negative impact of stress on mental health (Zauszniewski and Burant, 2020). When individuals experience stress, they may use resourcefulness, such as self-help or help-seeking strategies, to manage their psychological reactions. Although no research has directly explored the relationship between illness uncertainty and resourcefulness, there is evidence that illness uncertainty may reduce individuals' psychosocial adjustment ability (Knobf, 2011;Kazer et al., 2013), thereby aggravating depressive symptoms (Wu et al., 2020). In addition, a patient's illness uncertainty is positively correlated with perceived stress (Moreland and Santacroce, 2018). A previous study showed that resourcefulness plays a mediating role in mitigating the impact of perceived stress on PSD (Wang et al., 2015). Therefore, we have sufficient reason to hypothesize that resourcefulness might function as a mediator in the relationship between illness uncertainty and PSD. Based on the above literature review, this study aims to test the relationships between illness uncertainty, resourcefulness, and PSD, and whether resourcefulness plays a mediating role between illness uncertainty and PSD. Identifying the relationships among these variables is essential for developing corresponding interventions to improve PSD. Study Design and Participants A cross-sectional survey was conducted in a tertiary grade A hospital in southeast China from September 2020 to April 2021. Convenience sampling was used to recruit participants during their hospitalization. The participants' inclusion criteria were as follows: (1) patients ≥18 years of age; (2) patients diagnosed with stroke; and (3) patients who had experienced a stroke more than 7 days prior.
Participant exclusion criteria were as follows: (1) patients with a history of depression or anxiety prior to stroke; (2) patients with severe aphasia; (3) patients with serious physical conditions who were unable to cooperate with the investigation; and (4) patients with impaired consciousness. A total of 370 stroke patients were invited to participate in the study, of whom 12 refused and three did not complete the questionnaire. Finally, 355 patients completed the questionnaire, and their responses were valid (effective response rate: 95.9%). Procedures This study was approved by the Ethical Review Board of the data collection hospital. Face-to-face interviews were used to collect data. First, eligible participants were screened by reviewing the electronic medical records. Then, the researchers explained the purpose and content of the study to eligible participants and confirmed their willingness to participate. Before data collection, participants signed an informed consent form. After the questionnaires were distributed, participants with reading and writing skills completed the questionnaire by themselves. For participants with limited education or those unable to fill out the questionnaire by themselves due to disease, the researchers helped them understand and complete the questionnaire by reading the items aloud, without any leading language. Finally, the researchers collected and checked the completeness of the questionnaires on site. If there were missing data, participants were asked to complete the missing items on the spot. Participants' activities of daily living ability (ADL) scores were obtained from the electronic medical records; these were assessed by nurses on the day of data collection. General Characteristics Questionnaire The general characteristics questionnaire covered the participants' sociodemographic characteristics, including sex, age, spousal status, level of education, place of residence, and self-evaluated economic pressure, and clinical characteristics, including duration of stroke, type of stroke, location of stroke, and ADL score. Mishel Uncertainty in Illness Scale Illness uncertainty was measured by the Chinese version of the MUIS-A for adults. The scale was originally developed by Mishel (1981) and was translated into Chinese by Xu and Huang (1996). The scale contains 25 items divided into two dimensions: ambiguity (15 items) and complexity (10 items). Each item is rated on a scale of 1 (strongly disagree) to 5 (strongly agree). The total score ranges from 25 to 125, where the higher the score is, the greater the level of uncertainty. Cronbach's alpha was measured as 0.889 for the Chinese version of the MUIS-A in a previous study (Li et al., 2020). Cronbach's alpha of the MUIS-A was calculated as 0.860 in this study. Resourcefulness Scale The Chinese version of the Resourcefulness Scale (RS) was used to assess patients' levels of resourcefulness. The scale was developed by Zauszniewski et al. (2006) and translated into Chinese by Ke et al. (2015). The scale comprises 28 items in two subscales: personal (16 items) and social resourcefulness (12 items). Ratings for each item range from 0 (extremely non-descriptive of one's behavior) to 5 (extremely descriptive of one's behavior). The total score ranges from 0 to 140, where the higher the score is, the higher the level of resourcefulness. Cronbach's alpha of the Chinese version of the RS was previously measured as 0.824 (Guo et al., 2019), and Cronbach's alpha was measured as 0.771 in this study.
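The Cronbach's alpha values quoted for each scale follow the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A minimal sketch in Python with invented item scores (not study data):

import numpy as np

def cronbach_alpha(items):
    # items: (n_respondents, k_items) array of item scores
    items = np.asarray(items, float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Invented 5-point ratings from 6 respondents on 4 items
scores = np.array([[4, 4, 5, 4], [2, 3, 2, 2], [5, 5, 4, 5],
                   [3, 3, 3, 4], [1, 2, 1, 2], [4, 5, 4, 4]])
print(round(cronbach_alpha(scores), 3))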
Patient Health Questionnaire-9 The Patient Health Questionnaire-9 (PHQ-9), developed by Kroenke et al. (2001), was used to assess PSD. The PHQ-9 contains nine items in two domains: cognitive/affective (four items) and somatic symptoms (five items; Johnson Wright et al., 2009). Each item is scored on a four-point scale ranging from 0 (never) to 3 (nearly every day). The total score ranges from 0 to 27, with higher scores indicating more severe depression. A total score of 5 or higher was considered to denote depression. The PHQ-9 has good internal consistency in stroke patients, with a Cronbach's alpha of 0.900. Cronbach's alpha of the PHQ-9 was measured as 0.801 in this study. Statistical Analysis SPSS version 23.0 was used for data analysis. Descriptive statistics were used to analyze the general characteristics of the participants and the main variables. Specifically, continuous variables were described as mean and SD or median and interquartile range (IQR), and categorical variables were presented as frequency and percentage values. Student's t-test and the Mann-Whitney U-test were employed to compare differences in continuous variables, and the chi-squared test was used for the comparison of categorical variables. Pearson correlation analyses were used to examine the correlations between illness uncertainty, resourcefulness, and PSD. Hierarchical regression models were used to examine the effects of illness uncertainty and resourcefulness on PSD. The general characteristics with significant differences between the PSD group and the non-PSD group were added to Model 1 to control for their effects on PSD. Model 2 was built on the basis of Model 1, with illness uncertainty entered. Model 3 was established by adding resourcefulness to Model 2. In addition, the PROCESS macro was used to analyze the mediation effect (Hayes, 2013). In this study, we took illness uncertainty as the independent variable, PSD as the dependent variable, and resourcefulness as the mediator to construct the mediation model. To control for other factors that could interfere with the results and to improve the test's efficiency, the general characteristics with significant differences between the PSD group and the non-PSD group were entered as covariates. The bias-corrected 95% CI was tested with 5,000 bootstrapping resamples. If the 95% CI of the indirect effect excluded zero, the mediating role of resourcefulness was deemed significant. All significance levels were set at p < 0.05 (two-tailed). General Characteristics of the Participants As shown in Table 1, more than half (65.60%) of the participants were male. The majority (93.24%) of the participants had spouses. Nearly 60% of the participants had little education (primary school and below). Approximately two-thirds (66.20%) of the participants lived in rural areas. Over 60% of participants had moderate or high levels of self-evaluated economic pressure. The most common type of stroke among participants was cerebral ischemia, and nearly half of the strokes were in the right hemisphere. The average age of the participants was 61.12 years (SD = 10.88), the average ADL score was 73.23 (SD = 19.49), and the median (IQR) duration of stroke was 8 (3) days. The Differences Between the PSD and Non-PSD Groups As shown in Table 2, there were significant differences in age, level of education, self-evaluated economic pressure, type of stroke, ADL scores, illness uncertainty, and resourcefulness between the PSD and non-PSD groups.
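As a rough illustration of the bootstrap test of the indirect effect described under "Statistical Analysis" above, the sketch below uses plain NumPy least squares in place of the PROCESS macro, a percentile rather than a bias-corrected CI for brevity, and omits the covariates; all numbers are simulated, not study data:

import numpy as np

rng = np.random.default_rng(0)
n = 355
x = rng.normal(69, 11, n)                       # illness uncertainty
m = 100 - 0.3 * x + rng.normal(0, 6, n)         # resourcefulness (mediator)
y = 0.05 * x - 0.08 * m + rng.normal(0, 2, n)   # PSD score

def ols_slope(pred, outcome, *controls):
    # Slope of `pred` in an OLS regression with intercept and controls
    X = np.column_stack([np.ones_like(pred), pred, *controls])
    return np.linalg.lstsq(X, outcome, rcond=None)[0][1]

def indirect_effect(x, m, y):
    a = ols_slope(x, m)       # path a: X -> M
    b = ols_slope(m, y, x)    # path b: M -> Y, controlling for X
    return a * b

boot = np.array([indirect_effect(*(arr[idx] for arr in (x, m, y)))
                 for idx in (rng.integers(0, n, n) for _ in range(5000))])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo:.4f}, {hi:.4f}]  (excludes 0 -> mediation)")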
Correlations Among Illness Uncertainty, Resourcefulness, and Poststroke Depression
Illness uncertainty was significantly positively correlated with PSD and significantly negatively correlated with resourcefulness. In addition, resourcefulness was significantly negatively correlated with PSD (Table 3). The mean (SD) scores of illness uncertainty, resourcefulness, and PSD were 69.08 (11.31), 80.01 (7.76), and 4.05 (3.29), respectively. Among the 355 participants, 93 (26.20%) suffered from PSD as evaluated by the PHQ-9.

Table 4 presents the hierarchical multiple regression results for PSD (in the models, level of education was coded as 1 = primary school and below, 2 = junior high school, and 3 = high school and above; self-evaluated economic pressure as 1 = low, 2 = moderate, 3 = high; and type of stroke as 1 = hemorrhagic, 2 = ischemic). Age, level of education, self-evaluated economic pressure, and ADL scores significantly affected PSD in Model 1. Self-evaluated economic pressure, ADL scores, and illness uncertainty significantly affected PSD in Model 2, whereas age and level of education no longer had a significant effect on PSD. The effect of illness uncertainty on PSD was reduced but still significant when resourcefulness was entered into the regression in Model 3; self-evaluated economic pressure and ADL score still had a significant effect on PSD. The mediation model is shown in Figure 1.

DISCUSSION
This study examined the relationships between illness uncertainty, resourcefulness, and PSD in stroke patients. The results reveal that illness uncertainty, resourcefulness, and PSD were significantly related to each other. We also found that resourcefulness mediated the relationship between illness uncertainty and PSD.

Current Status of Illness Uncertainty, Resourcefulness, and PSD
The illness uncertainty of stroke patients in this study was at an upper-middle level. This result was supported by the study of Yao (2021). The resourcefulness score of stroke patients in this study was at an intermediate level, lower than that reported by Guo et al. (2019). The difference might be explained by the level of education: in the study by Guo et al. (2019), nearly 80% of the patients had received junior high school education or above, whereas only 40% of the patients in this study had junior high school education or above. Previous research has pointed out that individuals with higher education levels have higher levels of resourcefulness (Guo et al., 2019). The PSD score in this study was 4.05 ± 3.29, which was lower than that reported by Dajpratham et al. (2020). The difference might be related to the physical functional status of the patients. The patients in this study were generally only mildly disabled according to their ADL scores, whereas more than half of the patients in the study by Dajpratham et al. (2020) had severe disability. Although the average PSD score in this study was low, 26.2% of patients still suffered from PSD, which was similar to the results of a previous study (Towfighi et al., 2017), so PSD should still be a major concern for medical staff.

Hierarchical Multiple Regression Analyses
Multiple regression analyses showed that age, level of education, self-evaluated economic pressure, and ADL score significantly predicted PSD in Model 1, which was consistent with previous findings (Guo et al., 2021). However, the effects of age and level of education on PSD were no longer significant in Model 2. The reason may be that age and level of education affected PSD through stroke patients' illness uncertainty. Self-evaluated economic pressure, ADL score, illness uncertainty, and resourcefulness significantly predicted PSD in Model 3.
Notably, self-evaluated economic pressure and ADL scores were significant influencing factors of PSD in all three models, suggesting that healthcare workers should pay more attention to stroke patients with poor economic conditions or severe physical disability. In addition, the effect of illness uncertainty on PSD decreased significantly after resourcefulness was added, which suggests that illness uncertainty may exert an indirect effect on PSD through patients' resourcefulness. Furthermore, the mediation model was used to further verify the relationship between illness uncertainty, resourcefulness, and PSD. The results implied that illness uncertainty had a direct positive impact on PSD, and that resourcefulness partially mediated the relationship between illness uncertainty and PSD.

The Relationship Between Illness Uncertainty, Resourcefulness, and PSD in Stroke Patients
The Association Between Illness Uncertainty and PSD
As predicted, after controlling for general characteristics, illness uncertainty was significantly positively associated with PSD, consistent with previous studies (Peng et al., 2016; Wei et al., 2018; Chen et al., 2020). Patients with higher illness uncertainty tended to experience more severe PSD. Illness uncertainty mainly stems from a patient's ambiguity concerning disease progression and prognosis and from the complexity of treatment and care-related information (Xu and Huang, 1996), which may increase a patient's psychological stress and lead to depression (Verduzco-Aguirre et al., 2021). Furthermore, Mishel's theory of illness uncertainty (Johnson Wright et al., 2009) notes that the impact of illness uncertainty on patients depends on how a patient evaluates it: when a patient regards illness uncertainty as a threat, it will lead to negative results; when it is evaluated as an opportunity, it will have a positive effect. Stroke is a shocking event for survivors. Physical disability and the risk of death make stroke survivors more likely to view illness uncertainty as a threat rather than an opportunity, which may have a negative impact on their psychological state, for example in the form of depression. In view of this result, assessing patients' illness uncertainty levels and their causes in a timely manner and carrying out corresponding interventions may help reduce their illness uncertainty, which may ultimately improve the depressive symptoms of stroke patients. Specifically, to reduce stroke patients' illness uncertainty, the following recommendations are made for health care professionals: on the one hand, medical staff should strengthen the health education of patients to improve patients' illness perception; on the other hand, medical staff should communicate with patients more and answer their questions in time to improve patients' understanding of the treatment plan, progress, and prognosis of their disease.

The Association Between Illness Uncertainty and Resourcefulness
After controlling for general characteristics, the illness uncertainty of stroke patients was negatively correlated with their resourcefulness in the present study.
This finding is supported by a previous study showing that illness uncertainty has a negative impact on patients' psychological adjustment ability (Li et al., 2020). For stroke patients, illness uncertainty may increase psychological pressure, and excessive pressure can hinder them from adopting resourcefulness skills, such as positive thinking and seeking outside help. Furthermore, individuals with a high level of illness uncertainty generally have poorer disease conditions and illness perception, which are internal factors of resourcefulness (Zauszniewski, 2016). Thus, higher illness uncertainty leads to a lower level of resourcefulness. This reminds health professionals that reducing the illness uncertainty of stroke patients may be an important means of improving their resourcefulness.

The Association Between Resourcefulness and PSD
After controlling for general characteristics and levels of illness uncertainty, resourcefulness was negatively correlated with PSD among stroke patients. This result is aligned with previous studies that show resourcefulness to be a protective factor against depression (Wang et al., 2015; Lin et al., 2017). Resourcefulness is divided into personal and social resourcefulness, both of which are protective factors for individual mental health (Zauszniewski, 2016). Specifically, individuals high in personal resourcefulness are good at using internal resources, such as self-control, positive self-talk, and self-evaluation, to cope effectively with stressful events. Individuals high in social resourcefulness have a strong ability to seek help from external resources, that is, social support. Social support not only provides material assistance to patients but also gives them positive experiences, such as feeling cared for and respected, thereby preventing negative emotions, including depression (Volz et al., 2016). Therefore, developing resourcefulness skills may be an effective means of managing PSD. Previous studies have confirmed that resourcefulness training, as a cognitive-behavioral intervention, is an effective means of promoting the development of resourcefulness skills (Toly et al., 2014; Bekhet and Zauszniewski, 2016). In the future, medical staff can consider carrying out resourcefulness training to prevent and manage PSD.

Resourcefulness Mediated the Relationship Between Illness Uncertainty and PSD
Notably, resourcefulness played a partially mediating role in the relationship between illness uncertainty and PSD, which indicates that high illness uncertainty among stroke patients may worsen PSD; however, this association may be mitigated by enhancing resourcefulness. Previous studies have shown that the higher a patient's illness uncertainty is, the greater the perceived stress level becomes (Moreland and Santacroce, 2018), and the higher a patient's resourcefulness is, the stronger his or her ability to adapt becomes (Bekhet and Zauszniewski, 2016). Although no prior studies have examined the mediating effect of resourcefulness on the relationship between illness uncertainty and PSD, the findings of this study are similar to the results of previous studies showing the mediating effect of resourcefulness on the relationship between perceived stress and depression in stroke patients (Wang et al., 2015) and the mediating effect of adaptive ability on the relationship between illness uncertainty and psychological stress response among patients in emergency observation room settings (Wu et al., 2020).
This finding is also in line with the view of resourcefulness theory that resourcefulness mediates the impact of situational factors on outcomes (Zauszniewski, 2016). In addition, this finding is supported by Lazarus's coping model, which indicates that when faced with stressors (such as illness uncertainty), individuals with strong coping capabilities (such as resourcefulness) can effectively control the development of an event, thereby maintaining a good mental state (Lazarus, 1992). This finding is valuable in that it strengthens our understanding of the pathway by which PSD develops, by examining the relationship between patient uncertainty and resourcefulness. In addition, this finding further demonstrates that improving patients' resourcefulness is an important means of helping patients cope with psychological problems.

FIGURE 1 | Mediating effect of resourcefulness between illness uncertainty and poststroke depression (PSD). All path coefficients are unstandardized. Path c is the total effect of illness uncertainty on PSD. The indirect effect (path a × b) and the direct effect (path c′) accounted for 55.56% and 44.44% of the total effect (path c), respectively.

Limitations and Avenues for Future Research
Several limitations of this study should be noted. First, the study participants were recruited using convenience sampling, which may lead to selection bias; a study using random sampling should be conducted in the future. Second, our participants were recruited from one hospital, which might limit the generalizability of the findings; a multicenter study is planned to increase the generalizability of the results. Finally, other variables that may be related to PSD, such as social support and self-efficacy, were not included in this study. Further research is underway to include more variables and gain a more comprehensive understanding of the pathogenesis of PSD.

CONCLUSION
This study demonstrated that illness uncertainty and resourcefulness were significant influencing factors of PSD, and that resourcefulness mediates the relationship between illness uncertainty and PSD. Interventions designed to reduce illness uncertainty and enhance resourcefulness may contribute to the prevention and improvement of PSD.

DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Ethics Committee of the First Affiliated Hospital of Wenzhou Medical University. The patients/participants provided their written informed consent to participate in this study.

AUTHOR CONTRIBUTIONS
JiL: conceptualization, methodology, and manuscript preparation and revision. JuL: conceptualization, methodology, and manuscript revision. HW: acquisition of data and revising it critically for important intellectual content. BL: acquisition of data and writing-review and editing. LN and DL: acquisition of data and writing-review. All authors contributed to the article and approved the submitted version.
2022-05-12T13:54:00.259Z
2022-05-12T00:00:00.000
{ "year": 2022, "sha1": "6c721af72714468ddeb184b881ca034486eb48fc", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "6c721af72714468ddeb184b881ca034486eb48fc", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
14635640
pes2o/s2orc
v3-fos-license
Propagation through trapped sets and semiclassical resolvent estimates

Motivated by the study of resolvent estimates in the presence of trapping, we prove a semiclassical propagation theorem in a neighborhood of a compact invariant subset of the bicharacteristic flow which is isolated in a suitable sense. Examples include a global trapped set and a single isolated periodic trajectory. This is applied to obtain microlocal resolvent estimates with no loss compared to the nontrapping setting.

Introduction
In this paper we study the following phenomenon: losses in high-energy, i.e. semiclassical, resolvent estimates caused by trapping are removed if one truncates the resolvent (microlocally) away from the trapped set. Such results go back to work of Burq [Bur02] and Cardoso and Vodev [CaVo02]. Our result is based on a microlocal propagation estimate and is able to distinguish between different components of the trapped set.

As an illustration, consider the following example. Let (X, g) be the catenoid or the hyperbolic cylinder, i.e. the quotient of the hyperbolic upper half plane by z → 2z. Let P = h^2 Δ_g − 1 and R_h(λ) = (P − λ)^{−1}. We are interested in the behavior of this resolvent family when Re λ = 0 and Im λ → 0^+ (this corresponds to energy 1/h^2 for the non-semiclassical Δ_g). It is well known that the limiting behavior of the resolvent is closely connected to the dynamics of the geodesic flow on the energy surface, i.e. on the unit cosphere bundle. In this case the trapped, or nonwandering, set consists of two periodic orbits whose projections to X are the same; see Figure 1. Denote these two orbits by Γ^1 and Γ^2, denote by Γ^1_± the set of ρ ∈ S^*X such that the lifted geodesic through ρ tends to Γ^1 as t → ∓∞, and define Γ^2_± similarly. Let u = R_h(λ)f with λ as above. If f is O(1), then u is O(|log h| h^{−1}) by a result of Christianson [Chr07, Chr08]. A consequence of our main result is that if in addition f vanishes microlocally near Γ^1 but not near Γ^2, then u is actually O(h^{−1}) on T^*X \ (Γ^1 ∪ Γ^2_+). If we assume that f vanishes microlocally near Γ^2 as well, then a result of Cardoso and Vodev [CaVo02] (following earlier work of Burq [Bur02]) implies that u is O(h^{−1}) on T^*X \ (Γ^1 ∪ Γ^2). The novelty in this example is that we keep this improvement on Γ^1_+ even when f is nontrivial on Γ^2.

Figure 1. The two closed orbits Γ^1 and Γ^2 are obtained by lifting the geodesic at the neck of the catenoid or hyperbolic cylinder to S^*X. The sets Γ^j_±, which by definition contain the Γ^j, each consist of the infinitely many trajectories spiraling towards Γ^j as t → ∓∞. If u = R_h(λ)f and f is O(1), then u is O(|log h| h^{−1}) globally by [Chr07, Chr08]. If f vanishes microlocally near Γ^1 ∪ Γ^2, then u is actually O(h^{−1}) off of Γ^1 ∪ Γ^2 by [CaVo02]. If f vanishes microlocally only near Γ^1, we find that u is actually O(h^{−1}) off of Γ^1 ∪ Γ^2_+.

More generally, let (X, g) be a complete Riemannian manifold, P = h^2 Δ_g + V − 1 a semiclassical Schrödinger operator, V ∈ C^∞(X; R) bounded, h ∈ (0, 1). We say that a bicharacteristic (by which we always mean a bicharacteristic in Σ = p^{−1}(I) for some compact I ⊂ R) is backward nontrapped if the flowout of any point on it is disjoint from any compact set for sufficiently negative time (this definition is generalized in §2). Suppose the resolvent family R_h(λ), for λ ∈ D ⊂ {Re λ ∈ I, Im λ ≥ −O(h^∞)}, where D is any subset, is polynomially bounded in h over compact subsets of T^*X.
This means that for any a, b ∈ C ∞ 0 (T * X) there is k ∈ N such that Op(a)R h (λ) Op(b) L 2 →L 2 ≤ h −k . Suppose further that R h (λ) is semiclassically outgoing with a loss of h −1 at backward nontrapped points in the following sense: if u = R h (λ)f and ρ lies on a backward nontrapped bicharacteristic, and if f is O(1) on the backward flowout of ρ, then u is O(h −1 ) at ρ. Suppose also thatΓ, the trapped set (the set of precompact bicharacteristics), is compact. The following theorem generalizes the example at the beginning of the introduction: Theorem 1.1. Let (X, g), P and λ be as in the above paragraph. Let a ∈ C ∞ 0 (T * X) have support disjoint fromΓ, the trapped set. Let b ∈ C ∞ 0 (T * X) have support disjoint from all connected components ofΓ intersecting the closure of the backward bicharacteristic flowout of supp a. Then nontrapping estimates hold: Op(a)R h (λ) Op(b) L 2 (X)→L 2 (X) ≤ Ch −1 , (1.1) Here Op denotes the semiclassical quantization: see §2. Since the projection of the cotangent bundle to the base π : T * X → X is a proper map when restricted to Σ, the condition that a, b ∈ C ∞ 0 (T * X) can be weakened using microlocal elliptic regularity. Indeed, we may replace that condition with the condition that a, b ∈ C ∞ (T * X) are bounded together with all derivatives, and that π supp a and π supp b are compact. Note that if X has suitable ends at infinity (for instance, asymptotically conic or hyperbolic), then the semiclassically outgoing assumption is satisfied (see §6 below), we can use resolvent gluing to weaken the condition that π supp a and π supp b are compact to a decay condition, leading to the following theorem. Theorem 1.2. Let (X, g) be a complete Riemannian manifold which is either asymptotically conic or asymptotically hyperbolic and even in the sense of §2, let ∆ g be the nonnegative Laplace-Beltrami operator on X, let V ∈ C ∞ 0 (X), and fix E > 0. Suppose that for any Let K E ⊂ T * X be the set of trapped bicharacteristics at energy E, and suppose that a ∈ C ∞ 0 (T * X) is identically 1 near K E . Then there exist C 1 , h 1 > 0 such that for any ε > 0, h ∈ (0, h 1 ] we have the following nontrapping estimate: Here by bicharacteristics at energy E we mean integral curves in p −1 (E) of the Hamiltonian vector field H p of the Hamiltonian p = |ξ| 2 + V (x), and the trapped ones are those which remain in a compact set for all time. We use the notation r = r(z) = d g (z, z 0 ), where d g is the distance function on X induced by g and z 0 ∈ X is fixed but arbitrary. Such results were first obtained by Burq [Bur02], and were later refined by Cardoso and Vodev [CaVo02]. The improvement here is that to obtain the nontrapping bound the only condition on that cutoffs is that they vanish microlocally near K E (while in those papers the cutoffs are functions on the base manifold, and are required to vanish on a large compact set whose size is not effectively controlled), but the assumption (1.2) is not needed in [Bur02,CaVo02]. The assumption (1.2) is not true in general. Indeed, when there is elliptic (stable) trapping we have instead lim sup h→0 χ 0 (h 2 ∆ g +V −E −iε) −1 χ 0 L 2 (X)→L 2 (X) ≥ e 1/(Ch) (this has been well known for a long time -see e.g. [Ral71] for an example and [BBR10] for a recent introduction to the subject of semiclassical resolvent estimates). Nonetheless, (1.2) is satisfied for many hyperbolic trapped geometries, including those studied in [NoZw09,WuZw10]. 
See [DaVa10, Theorem 6.1] for (1.2) in the asymptotically hyperbolic case, and see [Dat09] and [WuZw10, Corollary 1] for the asymptotically conic case. Bony and Petkov [BoPe06] prove (1.2) for a general "black box" perturbation of the Laplacian in R n assuming only that there is a resonance-free strip, and it is likely that this condition suffices for asymptotically conic or hyperbolic manifolds as well. It is an open problem to find the optimal general bound implied by a resonance free strip, or to find assumptions under which one has a polynomial bound (1.2) but no resonance free strip. We remark that, in the setting of [NoZw09,WuZw10], (1.2) holds with C 0 h −k replaced by C 0 (log h −1 )h −1 , and so the improvement in our result is only of a factor of log(1/h). On the other hand, in [BBR10], Bony, Burq and Ramond prove that for P a semiclassical Schrödinger operator on R n , the presence of a single trapped trajectory implies that provided χ ∈ C ∞ 0 (X) is 1 on the projection of the trapped set, so in this case (and probably in general) the improvement in Theorem 1.1 is of no less than a factor of log(1/h). In [ChWu11], Christianson and Wunsch give some examples of surfaces of revolution on which a resolvent estimate holds with a bound h −k (but not C 0 (log h −1 )h −1 ). We actually prove our main theorem in the following still more general setting. Suppose X is a manifold, P ∈ Ψ m,0 (X) a self adjoint, order m > 0, semiclassical pseudodifferential operator on X, with principal symbol p. For I ⊂ R compact and fixed, denote the characteristic set by Σ = p −1 (I), and suppose that the projection to the base, π : Σ → X, is proper (it is sufficient, for example, to have p classically elliptic). Suppose that Γ T * X is invariant under the bicharacteristic flow in Σ. Define the forward, resp. backward flowout Γ + , resp. Γ − , of Γ as the set of points ρ in the characteristic set, Σ, from which the backward, resp. forward bicharacteristic segments tend to Γ, i.e. for any neighborhood O of Γ there exists T > 0 such that −t ≥ T , resp. t ≥ T , implies γ(t) ∈ O, where γ is the bicharacteristic with γ(0) = ρ. Here we think of Γ as the trapped set or as part of the trapped set, hence points in Γ − , resp. Γ + are backward, resp. forward, trapped, explaining the notation. Suppose V , W are neighborhoods of Γ with V ⊂ W , W compact. Suppose also that If ρ ∈ W \ Γ + , resp. ρ ∈ W \ Γ − , then the backward, resp. forward bicharacteristic from ρ intersects W \ V . (1.4) The main result of the paper, from which the other results follow, is the following: Note that there is no conclusion on u at Γ; typically it will be merely polynomially bounded. However, to obtain O(h −1 ) bounds for u on Γ + we only needed to assume O(h −1 ) bounds for u on Γ − and nowhere else. Note also that by the propagation of singularities, if u is O(h −1 ) at one point on any bicharacteristic, then it is such on the whole forward bicharacteristic. If | Im λ| = O(h ∞ ) then the same is true for backward bicharacteristics. In certain more complicated geometries it is possible to apply Theorem 1.3 with Γ a proper subset ofΓ which is not a connected component, allowing both supp a and supp b to intersect Γ. More specifically, when applying Theorem 1.3, W ∩Γ does not have to be a subset of Γ. This is because of the possibility of interesting dynamics withinΓ, for example a trajectory which tends to different closed orbits as t → ±∞, and thus is trapped. In this case Γ could be one of the closed orbits. 
In §5.3 we give an (admittedly contrived) example of this. An interesting open question concerns the optimality of the condition Im λ ≥ −O(h ∞ ) in Theorem 1.1. That some such condition is needed is suggested by the following result of Petkov and Stoyanov [PeSt09,§4] for obstacle scattering on R n with n odd. They show that if the cutoff resolvent continues analytically to {| Re λ| ≤ E, Im λ ≥ −Ch log(1/h)}, then a polynomial bound for χ(h 2 ∆ g − λ) −1 χ in this range of λ, even for χ ∈ C ∞ 0 (X) supported very far from the trapped set, implies the same bound for a general χ ∈ C ∞ 0 (X), with possibly worse constant C. In other words, no improvement is possible for such a large range of λ. In fact, we have been informed by Vesselin Petkov that the assumption that the cutoff resolvent continues analytically to a logarithmic region can be replaced by the same assumption on a strip, using the same method. The general idea of proving propagation estimates through trapped sets via commutator estimates is that near the trapped set Γ, where we cannot expect any improvement over a priori bounds, the commutator should vanish, which is in particular the case if the commutant is microlocally near Γ a (possibly h-dependent) multiple of the identity operator. Such a commutant, which is in addition decreasing along the Hamilton flow elsewhere on the characteristic set, at least apart from backward non-trapped bicharacteristics (where one has O(h −1 ) a priori bounds), can indeed be constructed, see §4. In fact, under additional geometric assumptions, namely a certain convexity (which also plays a role in [Bur02,CaVo02]), one can use as commutants cutoff functions which are constant on the projection of the trapped set to the base manifold X; this is the special case we consider in §3. This scheme has much in common with an aspect of N -particle scattering. In order to prove asymptotic completeness for the short range N -particle problem, it suffices to obtain improved weighted estimates in z 1/2 L 2 , where z is the variable on R N d (or R (N −1)d ), away from the radial set of the Hamilton vector field of the various subsystems, also called the propagation set of Sigal and Soffer [SiSo87] (the corresponding global weighted estimate is in z 1/2+ε L 2 , and the improvement though small is crucial in the argument). Since there cannot be an improvement at the radial set, the commutant used in the proof must commute microlocally with the Hamiltonian there. Similarly, in our case, there cannot be an improvement at the trapped set, and so our commutant must commute microlocally with P there. In the N -particle setting, the weights z s do not commute with the Hamiltonians, unlike the weights h −s in the semiclassical setting, so, to obtain a microlocally commuting commutant, one needs to work with s = 0, which in turn gives rise to weighted estimates only in the particular weighted space z 1/2 L 2 microlocally away from the radial set. See [SiSo87] and [DeGé99] for a discussion of asymptotic completeness, and [Vas03] for a discussion of the proof of this estimate from a microlocal point of view. More standard escape function methods can prove related but weaker results. For example in [BGH10, Lemma 2.2], Burq, Guillarmou and Hassell use a positive commutator argument with a global escape function (see also [GéSj87,Appendix] for a more general version of the same escape function) to prove local smoothing away from a trapped set. This corresponds in our setting to a resolvent estimate for Im λ ≥ Ch (i.e. 
not too close to the spectrum), and in this range of λ one has more flexibility in the behavior of the escape function near infinity, because the resolvent has good mapping properties for a wider range of pairs of weighted spaces. This difference is most significant in the case of an asymptotically hyperbolic space, such as the hyperbolic cylinder of the example at the beginning of the introduction, because here it does not seem to be possible to modify the global escape function so as to give uniform estimates up to the spectrum. In Theorem 1.3 the global construction is replaced by the assumption that u is O(h −1 ) on Γ − away from Γ. In the setting of resolvent estimates, this can be proved by commutator estimates on an asymptotically conic space (see [VaZw00], [Dat09]), but on more general spaces other methods may be more convenient, or even necessary. For instance, in [MSV11], Melrose, Sá Barreto and the second author construct a parametrix for manifolds which are strongly asymptotically hyperbolic in a certain sense (see §6.2), and the Lagrangian structure of this parametrix implies the semiclassically outgoing property. In [Vas10,Vas11], the second author proves the same result on more general even asymptotically hyperbolic spaces (in the sense of §2) using commutator methods, but in order to do this he considers a conjugated operator on a modified space. The other advantage over global escape function methods is that, because our assumptions and constructions are completely microlocalized to a neighborhood of Γ (which may be a proper subset of the full trapped set), our method can give more precise information about a solution u to P u = f in the case where different estimates on f are available on different parts of T * X. The key point is that in the Theorem 1.1 and in the example at the beginning of the introduction we apply Theorem 1.3 with Γ a proper subset of the trapped set. The structure of this paper is the following. In §2 we give definitions and notation. In §3, we prove a special case of Theorem 1.2 in which the ideas of the proof are more transparent. In §4 we prove Theorem 1.3. In §5 we prove Theorem 1.1 and give an example in which Theorem 1.3 can be applied to a subset of the trapped set which is not a connected component. In §6 we discuss the semiclassically outgoing assumption and give examples of situations where it is satisfied, and we deduce Theorem 1.2 from Theorem 1.1. We are grateful to Maciej Zworski for his interest in this project and for several stimulating discussions about polynomially bounded resolvents, and also to Vesselin Petkov for several interesting discussions about related results and problems in obstacle scattering. Thanks also to the anonymous referee for the suggestion to include a discussion of noncompactly supported weights. Definitions and notation • Let X be the interior of X, a compact manifold with boundary and let x be a boundary defining function on X, that is a function x ∈ C ∞ (X; [0, ∞)) with x −1 (0) = ∂X and dx| ∂X = 0. Let g be a Riemannian metric on X. We say that (X, g) is asymptotically conic (in the sense of the large end of a cone) if we have a product decomposition of X near ∂X of the form [0, ε) x × ∂X where the metric g takes the form g = dx 2 x 4 +g x 2 , whereg is a symmetric cotensor smooth up to ∂X withg| ∂X a metric. Such metrics are also sometimes called scattering metrics. 
If on the other hand whereg is a symmetric cotensor smooth up to ∂X withg| ∂X a metric, and withg even in x, we say (X, g) is asymptotically hyperbolic. See [Gui05, Definition 1.2] for a more invariant way to phrase this definition. • We denote by π the projection T * X → X. • If u is a function, u denotes the L 2 (X) norm. If A is an operator, A denotes the L 2 (X) → L 2 (X) norm. Angle brackets ·, · denote the inner product on L 2 (X). , in any coordinate patch, where the z are coordinates in the base and ζ are coordinates in the fiber, and α, β are multiindices. Acting on u ∈ C ∞ 0 (X) compactly supported in a patch, Op(a) is a semiclassical quantization given in local coordinates by Op(a)u(z) = 1 (2πh) n e izζ/h a(z, ζ) u(ζ)dζ. The operator Op(a) can be extended to general u ∈ C ∞ 0 (X) by using a partition of unity subordinate to an atlas of charts, and we say Op(a) ∈ Ψ m,k (X). The quantization depends on the choice of atlas and on the partition of unity, but the classes S m,k and Ψ m,k do not. Moreover, for given A = Op(a) ∈ Ψ m,k , the principal symbol, defined to be the equivalence class of a in S m,k /S m−1,k−1 , is also invariantly defined. If A ∈ Ψ m,k and B ∈ Ψ m ,k , then [A, B] ∈ Ψ m+m −1,k+k −1 and has principal symbol h i H a b. See, for example, [DiSj99,EvZw10] for more information on these and other results from semiclassical analysis discussed in this section. • By bicharacteristic we always mean a bicharacteristic of P , that is an integral curve of the Hamiltonian vector field of p (the principal symbol of P ), contained in p −1 (I). • For Γ T * X invariant under the bicharacteristic flow, we define the forward, resp. backward flowout Γ + , resp. Γ − , of Γ as the set of points ρ ∈ T * X from which the backward, resp. forward bicharacteristic segments tend to Γ, i.e. for any neighborhood Here we think of Γ as the trapped set or as part of the trapped, hence points in Γ − , resp. Γ + are backward, resp. forward, trapped, explaining the notation. One can also extend the definition to ρ ∈ S * X (thought of as the cosphere bundle at fiber-infinity in operator on X, with principal symbol p. For I ⊂ R compact and fixed, denote the characteristic set by Σ = p −1 (I), and suppose that the projection to the base, π : Σ → X, is proper (it is sufficient, for example, to have p classically elliptic). For w ∈ C ∞ (T * X; [0, ∞)). We say that a point ρ ∈ Σ is backward nontrapped with respect to p − iw, if either w(γ ρ (t)) > 0 for some t < 0 or if for any K T * X, there exists T K < 0 such that γ ρ (t) ∈ K whenever t ≤ T K . • We say that a polynomially bounded resolvent family R h (λ) is semiclassically outgoing with loss of h −1 at backward nontrapped points if the following holds. If u = R h (λ)f with f compactly supported and ρ lies on a backward nontrapped bicharacteristic, and if f is O(1) on the backward flowout of ρ, then u is O(h −1 ) at ρ. In the rest of the paper we will often write simply 'semiclassically outgoing' for brevity, but note that this condition is stronger than the one in [DaVa10] because the loss is specified to be h −1 . This condition is discussed in §6. • In this setting propagation of singularities A microlocal proof in a non-microlocal setting In the next section we prove our general result. In this section we prove a special case of Theorem 1.2, indeed essentially a special case of [Bur02, (2.28)] and [CaVo02, (1.5)], in which the ideas are more transparent. 
We assume the resolvent is polynomially bounded and semiclassically outgoing at backward nontrapped points. However, we do not assume a specific structure at infinity: this is replaced by the semiclassically outgoing assumption, which is currently known for certain asymptotically conic and hyperbolic infinities (see §6), but should hold in other cases as well. In this section we make a convexity assumption in an annular neighborhood of the trapped set, but this assumption is removed in the next section. Let X be a manifold without boundary, g a complete metric on X, and P a self-adjoint semiclassical Schrödinger operator on X. Assume that there exists a small family of convex compact hypersurfaces which enclose the trapped set in the following sense. Fix I ⊂ R compact and x ∈ C ∞ (X) such that {x ≥ 1} is compact and such that the trapped set Γ (i.e. the set of precompact bicharacteristics in p −1 (I)) sits inside {x > 5}. Suppose that the bicharacteristics γ of P in p −1 (I) satisfy the convexity assumption 1 < x(γ(t)) < 5,ẋ(γ(t)) = 0 ⇒ẍ(γ(t)) < 0. Here we note that if f is a C ∞ function on [0, ∞) with f > 0, and x satisfies (3.1) then so does f • x. In particular the specific constants above and below (such as x < 5) are chosen only for convenience, and can be replaced by arbitrary constants that preserve the ordering. In examples x might be the reciprocal of a function which measures distance to a given point, or more generally x might be a boundary defining function. Proposition 3.1. Let (X, g), P , I, and x be as in the above paragraph. Assume that there exists N > 0, χ 0 ∈ C ∞ 0 (X) with χ 0 = 1 on {x ≥ 1}, and C > 0 such that the resolvent satisfies Assume that the resolvent is semiclassically outgoing at backward nontrapped points. Then if To do so, we proceed inductively, assuming that for some k ≤ −3/2, u is O(h k ) in a compact subset of {3 < x < 4}, and show that it is in fact O(h k+1/2 ) on a slightly smaller subset. Note that the last assumption automatically holds with k ≤ −N by the a priori polynomial bound assumption, and thus the proof of the proposition is complete once the inductive step is shown. Take χ = χ(x) ≥ 0 to be a function such that χ ≡ 1 in x ≥ 4, χ ≡ 0 in x ≤ 3, and χ is a increasing function of x, and χ = ψ 2 with ψ smooth. By microlocal elliptic regularity, WF h (u) ∩ supp χ is a subset of the characteristic set of P h − λ. Then consider χu, (P − λ)u − χ(P − λ)u, u = [P, χ]u, u + 2i Im λχu, u . The semiclassical principal symbol of [P, χ] is Letting c > 0 to be determined later on, we now use a partition of unity for T * X corresponding to an open cover which in a neighborhood of the characteristic set over {3 ≤ x ≤ 4} is essentially given in terms of the sign of H p x. So consider a neighborhood of the characteristic set over {3 ≤ x ≤ 4} with compact closure K, and let O be a neighborhood of K with compact closure, and consider the open cover of T * X by and take φ ± ∈ C ∞ (T * X) with φ 2 + + φ 2 − = 1 and supp φ ± ⊂ supp U ± . Then (−H p x) 1/2 is C ∞ on supp φ − , and resp. e, and microsupport supp b, resp. supp e. Note that h 2 F u, u is O(h 2+2k ) by our a priori assumptions. Thus, if u is O(h k+1/2 ) on WF h (E) (half an order better than a priori expected), the same is true for u on the elliptic set of B, i.e. we have half an order improvement on the elliptic set of B. So far we worked with arbitrary c; however, if c is not suitably chosen, the assumption on u on WF h (E) is not necessarily satisfied. 
Namely, we need to choose c so that WF h (E) is in the union of the elliptic set with the backward non-trapped set, where we already have O(h −1 ) bounds on u. To do so we choose c > 0 sufficiently small so that all bicharacteristics from points ρ in {3 ≤ x ≤ 4} with (H p x)(ρ) ≥ −2c escape to x < 3 in the backward direction without entering the region x ≥ 5. This is possible due to convexity and compactness: by convexity, if H p x(ρ) ≥ 0 implies that on the backward bicharacteristic through ρ, x is decreasing as time decreases, so by compactness there exists T > 0 such that if ρ is as above, then at time −T the bicharacteristics are in x ≤ 2. Then by compactness again, there is c > 0 such that for all ρ with (H p x)(ρ) ≥ −2c, at time −T the bicharacteristics are in x ≤ 2.5. With this choice of c, every point in WF h (E) is backward non-trapped or elliptic. Thus, for k + 1/2 ≤ −1, one deduces that u is O(h k+1/2 ) on the elliptic set of B. In particular, we conclude that where χ > 0, u is O(h k+1/2 ) since such points are either in the elliptic set of B or of P − λ, or (H p x)(ρ) ≥ −2c there, and in either case u is O(h −1 ) (here we use k + 1/2 ≤ −1). One can iterate this by shrinking the support of dχ, hence those of B and E and deduce that u is actually O(h −1 ) in any compact subset of {3 < x < 4} (one has to choose the initial χ appropriately if this subset is large). This proves that u is O(h −1 ) on supp χ 1 , i.e. χ 1 R h (λ)χ 1 v ≤ Ch −1 . An application of Banach-Steinhaus finishes the proof, giving a constant C uniform in v. We remark that a key point in this argument is that because (P − λ)u = 0 in the trapping region, one needs to know nothing about u itself when one considers χ(P −λ)u, u − χu, (P − λ)u in (3.2), at least if Im λ ≥ −O(h ∞ ). If instead (P − λ)u is O(1) there, then all one can say is that u is O(h −N ) which completely destroys the bounds above, i.e. gives a loss. It is worth noting that although we needed Im λ ≥ −O(h ∞ ), in any region Im λ ≥ −Ch s , s > 1, we can do a finite amount of iteration and improve on the assumption that u is O(h −N ). However, it is not clear whether this can give any useful bounds in practice. The general case In this section we prove Theorem 1. 3. First observe that if u is O(h −1 ) at a point ρ ∈ Γ + , then it is O(h −1 ) on γ + ρ , the forward bicharacteristic from ρ. Hence it suffices to construct a microlocal commutant whose commutator is positive on points ρ such that γ − ρ is contained in a small neighborhood of Γ, and merely nonnegative on the rest of Γ + . The main constraint on the neighborhood in which we work is that it must be contained in the U of Lemma 4.1 and Remark 4.2. The proof uses an inductive iteration as in §3, so in Lemma 4.3 we introduce open neighborhoods Γ ⊂ U 1 U 0 U but no other properties of these neighborhoods will be used, and they may be arbitrarily close to Γ and to ∂U respectively. There is a neighborhood U ⊂ V of Γ such that if α ∈ U \ Γ + then the backward bicharacteristic from α enters U − . Remark 4.2. Note that from this and from the assumption that u is Proof. Suppose no such U exists. Then there is a sequence α j ∈ V \ Γ + such that α j → Γ but the backward bicharacteristics γ − α j through α j are disjoint from U − ; by passing to a subsequence, using the compactness of Γ, we may assume that α j → α ∈ Γ. By (1.4), the bicharacteristics γ − α j enter W \ V ⊂ W \ V , and the latter is compact. 
Let t j = sup{t < 0 : γ α j (t) ∈ W \ V }, and let β j = γ α j (t j ), so β j ∈ W \ V as the latter set is closed. Moreover, β j ∈ V : indeed γ α j ([t j , 0]) is connected and contained in V ∪ (T * X \ W ), a union of disjoint closed sets, and γ α j (0) ∈ V ⊂ V . By the compactness of V , the β j have a convergent subsequence, say β j k , converging to some β ∈ (W \ V ) ∩ V = ∂V . We claim that β ∈ Γ − , which is a contradiction with β j k / ∈ U − . Indeed, otherwise, by (1.4), the forward bicharacteristic γ + β from β intersects W \ V . Moreover, since γ β (0) = β ∈ V , there is T > 0 such that γ β (T ) ∈ W \V . Then, for sufficiently large k, the same is true for the forward bicharacteristic at time T from β j k as W \V is open, i.e. γ α j k (t j k +T ) ∈ W \V . By the definition of t j k , t j k + T > 0, so t j k > −T for all k. But, if γ α is the bicharacteristic through α, then γ α j k (t) → γ α (t) uniformly in [−T, 0]. By passing to a convergent subsequence of t j k , say t j k , γ α j k (t j k ) → γ α (lim t j k ) ∈ Γ by the flow-invariance of Γ, so β ∈ Γ which contradicts β ∈ V . Thus, β ∈ Γ − , as claimed. In the following lemma we construct an escape function q ∈ C ∞ 0 (T * X) which is constant near Γ, nonincreasing along Γ + , and has H p q < 0 on a sufficiently large subset of Γ + . This construction is based in part on the construction of a nontrapping escape function in [VaZw00,§4] and on the construction of an escape function away from a trapped set in [GéSj87,Appendix]. We will use a quantization of q as a microlocal commutant in this section, replacing the cutoff function χ of §3. U 0 U . Then there exists a nonnegative function q ∈ C ∞ 0 (U ) such that Moreover, we can take q such that both √ q and −H p q are smooth near Γ + . Recall that Γ E + is the set of points ρ ∈ Γ + whose backward bicharacteristic γ − ρ is contained in E. The condition that √ q and −H p q are smooth near Γ + is used only to avoid invoking the sharp Gårding inequality. Figure 3. We construct q so that it is identically 1 near Γ, and then nonincreasing along Γ + . We make q strictly decreasing along Γ U 0 + \ U 1 , and then identically 0 outside of U (because in this last region Remark 4.2 provides no information about u so we must not produce any error terms here). Since q must be nonincreasing along Γ + and compactly supported, it must remain 0 after this point, and in particular we cannot make H p q < 0 on any of Γ + \ Γ U + . To motivate the statement, we outline how Lemma 4.3 will be used to prove Theorem 1.3. We will see that a positive commutator estimate as in §3 directly gives us good control of u on Γ U 0 + \ U 1 , where the commutator is elliptic, up to errors which are of two types. By propagation of singularities we can extend these good estimates to the forward flowout of Γ U 0 + \ U 1 , namely to Γ + \ U 1 . The first type of error is in the region away from Γ + , where we do not have H p q ≤ 0, but here we know that u is O(h −1 ) thanks to Remark 4.2. The second type of error is in the region where H p q ≤ 0 but not uniformly bounded away from 0. We control this error using an iteration as in §3. We will need a finite sequence of q j (the number of iterations is determined by the polynomial bound on u) such that H p q j+1 < 0 on supp dq j ∩ Γ + . To obtain q 1 we apply Lemma 4.3 with any U 1 , U 0 satisfying the hypotheses of the lemma. To obtain q j+1 from q j we observe that and apply Lemma 4.3 with a new U 1 , U 0 such that U 1 T * X \supp(1−q j ) and supp q j ⊂ U 0 . 
To simplify notation we will not discuss the iteration in more detail, and will simply use q rather than q j . If f and χ q are chosen such that √ f and √ χ q are smooth, then √ q is smooth. Meanwhile, near Γ + , −H p q = −(f •q)H pq , and hence it suffices to make √ f and −H pq smooth. In the case of f it suffices to make f a translation of e −1/t | t>0 near the boundary of its support. We will indicate below how to achieve this forq. We takeq of the formq where each q ρ k is supported near a portion of the bicharacteristic through ρ k , a suitably chosen point in Γ U 0 + \ U 1 . To determine the ρ k we first fix open sets V 1 and V 0 with Γ ⊂ V 1 U 1 and U 0 V 0 U . We then associate to each ρ ∈ Γ U 0 + \ U 1 the following escape times: Note that these are finite because of the definition of Γ + and (1.4). Next let S ρ be a hypersurface through ρ which is transversal to H p near ρ. Then if U ρ is a sufficiently small neighborhood of ρ, the set . We use this diffeomorphism to define product coordinates on V ρ . If necessary, shrink U ρ so that Take ϕ ρ ∈ C ∞ 0 (S ρ ∩ U ρ ; [0, 1]) identically 1 near ρ, also considered as a function on V ρ via the product coordinates, and let V ρ ⊂ V ρ be an open set containing γ ρ ([T V 1 ρ −1/2, T U ρ +1/2]) such that ϕ ρ = 1 on V ρ . Observe that the V ρ with ρ ∈ Γ U 0 + \ U 1 are an open cover of Γ + ∩ U \ U 1 , because any backward bicharacteristic from a point in Γ + ∩ U \ U 1 enters Γ U 0 + \ U 1 eventually. Now take ρ 1 , . . . , ρ N such that For each ρ ∈ {ρ 1 , . . . , ρ N } put We further impose that χ ρ has χ ρ (t) ≤ 0 for all t and also satisfies Here ε ρ is a positive number less than 1/2 and small enough that γ α (t) ∈ U for α ∈ U ρ ∩ S ρ and t ≤ T V ρ + ε ρ . Such an ε ρ exists because V ρ ∩ {t ≤ T V 0 ρ } ⊂ U . Note that in condition (2) we use the same N as in (4.2). Observe that extending q ρ by 0 outside of V ρ gives a function which is C ∞ near U . We now check thatq has the desired properties. Thatq = 0 near Γ follows from the fact that suppq ⊂ V ρ k and each V ρ k is disjoint from Γ. That H pq ≤ 0 follows from χ ρ ≤ 0. That H pq < 0 andq ≥ −1/2 on Γ U 0 \U 1 follows from condition (2) on the χ ρ and from the covering property (4.2), as well as from the fact that we took care to make V ρ ∩ {t ≥ T V 0 ρ } ∩ Γ U 0 + = ∅ so none of the summands in (4.1) are too negative here. Thatq ≤ −2 near Γ + \ Γ U + follows from condition (3) on the χ ρ together with (4.2). To make −H pq smooth, let ψ(s) = 0 for s ≤ 0, ψ(s) = e −1/s for s > 0, and assume as we may that U ρ ∩ S ρ is a ball with respect to a Euclidean metric (in local coordinates near ρ) of radius r ρ > 0 around ρ. We then choose ϕ ρ to behave like ψ(r ρ 2 − |.| 2 ) with r ρ < r ρ for |.| close to r ρ , bounded away from 0 for smaller values of |.|, and choose −χ ρ to vanish like ψ at the boundary of its support. That sums of products of such functions have smooth square roots follows from [Hö94, Lemma 24.4.8]. We conclude this section and the proof of Theorem 1.3 by proving the inductive step in the iteration: if u is O(h k ) on a sufficiently large compact subset of U ∩ Γ + \ Γ, then u is First let U − be an open neighborhood of Γ + ∩ supp q which is sufficiently small that H p q ≤ 0 on U − and that −H p q is smooth on U − . Let U + be an open neighborhood of supp q \ U − whose closure is disjoint from Γ + and from where we used Im λ ≥ −O(h ∞ ) and supp q ∩ WF h (P − λ)u = ∅. 
So (1) If ρ ∈ Σ = p −1 (I), then u is O(1) at ρ by elliptic regularity and the polynomial boundedness of the resolvent. (2) If ρ is backward nontrapped, then u is O(h −1 ) at ρ by the semiclassically outgoing assumption. (3) If ρ is backward trapped, then ρ ∈ Γ + by the definition of Γ + and by the support property of a. Hence u is O(h −1 ) at ρ by Theorem 1.3. The assumption in Theorem 1.3 that u is O(h −1 ) on Γ − follows from case (2) above. This proves that The uniformity in v follows from Banach-Steinhaus. where χ ∈ C ∞ 0 (X) is identically 1 on some (large) compact set. Meanwhile, from Theorem 1.1 (and using microlocal elliptic regularity) we have for anyχ ∈ C ∞ 0 (X) (see §6.2 for a discussion of the semiclassically outgoing condition in this setting). If we takeχ to be identically 1 on a sufficiently large (compact) set, then we can apply the gluing method of [DaVa10] to deduce (1.3) from (5.1) and (5.2). Since the proof below follows the proof of [DaVa10, Theorem 2.1] closely we provide only an outline. Now [DaVa10, Lemma 3.1] implies that Note that the remainder is trivial in the sense that A 0 A 1 + A 0 A 1 A 0 r −1/2−δ ≤ O(h ∞ ). Since (5.1) and (5.2) imply that this completes the proof. 5.3. Nontrapping estimates on part of the trapped set. We now give an example, although a somewhat unphysical one, in which Theorem 1.3 can be applied with Γ a proper subset of the trapped set but not a connected component. In this example we obtain the nontrapping estimate (1.1) for a and b with supports overlapping a certain part of the trapped set. More specifically, we will apply Theorem 1.3 with Γ a union of closed orbits and with part of Γ + or Γ − contained in the trapped set. Let y = y(z) ∈ C ∞ (R) be even, positive-valued, with a nondegenerate local maximum at 0, and with y > 0 outside of a neighborhood of 0, such that y changes sign only twice. Let (X, g) be the surface of revolution obtained by revolving the graph of y around the z axis (see Figure 5). Suppose this surface is an asymptotically conic or hyperbolic manifold as in §2 (for example, it may be a catenoid outside of a compact set). We will use coordinates (s, θ) on X, where s = s(z) is an arclength parametrization of the graph of y with s(0) = 0, and θ measures the angle of revolution. Let a(s) = y(z) and (σ, µ) be dual to (s, θ). In these coordinates the manifold (X, g) and the geodesic Hamiltonian p 0 are given by Let s 0 be the point in {s > 0} at which the global minimum of a is attained. The unit speed geodesic flow has six closed orbits along latitude circles: two elliptic orbits at s = 0 and two hyperbolic orbits at each of s = ±s 0 . See Figure 5 for a sketch of the projection of the bicharacteristic flowlines to the (s, σ) plane. We would like to apply Theorem 1.3 with Γ taken to be one or several of the hyperbolic closed orbits at s = ±s 0 . However, the resolvent of the Laplacian on this surface will not be polynomially bounded because of the elliptic trapping, and Γ − in this case will include trapped trajectories on which O(h −1 ) resolvent bounds do not hold, so we introduce a complex absorbing barrier as in §6.1 to suppress some of the trapping. Let w ∈ C ∞ 0 (T * X; [0, 1]) be supported as in Figure 5 and satisfy w = 1 on S * X ∩ {s = 0, σ ≤ 0}. More specifically, we require that supp w ⊂ {−s 0 /2 < s < s 0 /2}, and that supp w be disjoint from bicharacteristics γ(t) with lim t→±∞ s(γ(t)) = ±s 0 . Let where W ∈ Ψ −∞,0 (X) has principal symbol w. 
In Lemma 5.1 we show that the resolvent of this operator is polynomially bounded. The proof uses microlocal estimates near the hyperbolic orbits originally due to Christianson [Chr07,Chr08] together with the gluing Figure 5. The surface of revolution (X, g) with its three geodesic latitude circles, and the unit speed geodesic flow on S * X projected onto the (s, σ) plane. The complex absorbing barrier w is supported inside the dashed outline. In Proposition 5.2 we apply Theorem 1.3 first with Γ taken to be the two hyperbolic closed orbits at (s, σ) = (−s 0 , 0), and then with Γ taken to be the orbits at (s, σ) = (s 0 , 0). The darkened arrow is the portion of the trapped set on which we prove a nontrapping resolvent estimate. method of [DaVa10]. To apply the gluing method, we use the following convexity properties of the bicharacteristic flow: If γ(t) is a bicharacteristic in S * X, theṅ s(γ(t)) = 0, ±s(γ(t)) > s 0 ⇒ ±s(γ(t)) > 0, s(γ(t)) = 0, 0 < ±s(γ(t)) < s 0 ⇒ ±s(γ(t)) < 0. (5.4) Lemma 5.1. For all χ 0 ∈ C ∞ 0 (X) there exist C, h 0 such that for 0 < h ≤ h 0 and Re λ = 0, Im λ ≥ 0. Although we have A 2 0 = A 2 1 = 0 as before, A 0 A 1 = O(h ∞ ) because there are bicharacteristics which pass from suppχ 1 to supp dχ 1 (s + s 0 /7) to suppχ 0 (s − s 0 /7). We accordingly iterate the parametrix three more times, writing The remainder is trivial in the sense that and our parametric obeys the estimate completing the proof of (5.5). Now by the discussion in §6 the resolvent (P −λ) −1 is semiclassically outgoing, and recall that trajectories which intersect {w = 1} at some negative time are considered backward nontrapped, allowing us to apply Theorem 1 with Γ taken to be one or several of the hyperbolic closed orbits at s = ±s 0 . For example, we have the following statement: Proof. We closely follow §5.1. Let f = χ 1 v, v = O(1), u = (P − λ) −1 f . We must show that for any ρ ∈ T * supp χ 1 , u is O(h −1 ) at ρ. There are four cases. Semiclassically outgoing resolvents In this section we discuss the assumption that the resolvent family is semiclassically outgoing. As mentioned above, this condition replaces any explicit assumptions about the structure of the manifold near infinity and allows us to work in an arbitrarily small neighborhood of the trapped set. In §6.1 we explain this condition in the case of a polynomially bounded resolvent with a complex absorbing barrier added, a convenient simple model of infinity used to study resolvents in trapping geometries. In §6.2 we consider manifolds which are asymptotically conic or asymptotically hyperbolic in the sense of §2. Finally, in §6.3 we give an example from 3-body scattering, illustrating that this assumption is flexible in the sense that it can hold on a manifold whose natural compactification is a manifold with corners rather than a manifold with boundary, and which is not covered by the analysis of [CaVo02,CPV04]. Introducing a suitable short-range three-particle interaction in this setting can produce a hyperbolic trapped set to which Theorem 1.3 can be applied. In all the examples discussed in this section, the semiclassically outgoing condition with a quantified h −1 loss follows from the proof of the same condition without the quantified loss. This weaker condition is discussed in [DaVa10] for several of the examples below, and since no significant changes are needed we omit many details. 6.1. Complex absorbing barriers. 
The simplest setting in which the resolvent is semiclassically outgoing is when "infinity is suppressed" by a complex absorbing barrier, which we denote by adding a term of the form −iW to a Schrödinger operator. See [NoZw09] and [WuZw10] for examples of theorems about resolvent estimates in the presence of trapping which are simplified in this setting, and see [DaVa10] for a general method for gluing in another (more interesting) semiclassically outgoing infinity once such an estimate is proved. This method is used in the present paper to construct the example in §5. The following lemma is standard, and the proof is essentially the same as that of [DaVa10, Lemma 5.1]. In applications, w is often chosen to be identically 0 near Γ, and the assumption on w is often replaced by the assumption that w ∈ C ∞ (X; [0, 1]) with w identically 1 off a compact subset of X. 6.2. Asymptotically conic and hyperbolic manifolds. On an asymptotically conic manifold (see §2 for a definition), the semiclassically outgoing assumption follows from the construction and estimates of [VaZw00]: see [Dat09, Lemma 2] for a very similar statement. Another approach is possible in the case when (X, g) is asymptotically hyperbolic and satisfies the additional assumptions that each connected component of ∂X is a sphere and that g = g H +g, where g H is a symmetric cotensor which agrees with the hyperbolic metric on H n in a neighborhood of each connected component ∂X, andg is a symmetric cotensor smooth up to ∂X. Namely, one can use an argument similar to that in [DaVa10, §4.2] and derive the semiclassically outgoing property from a description given in [MSV11] of the Schwartz kernel of the resolvent as a paired Lagrangian distribution, to which a semiclassical version of [GrUh90, Theorem 3.3] can be applied. 6.3. An example from 3-body scattering. Consider the following 3-body Hamiltonian on R 3 : The particles here are constrained to move on a line, and V is the interaction potential between each pair of them. Passing to center of mass coordinates, we obtain the following reduced Hamiltonian on the plane X = {x 1 + x 2 + x 3 = 0}: P = −h 2 ∆ + π * 1 V + π * 2 V + π * 3 V − 1, where π 1 is the projection (x 1 , x 2 , x 3 ) → x 1 − x 2 , and similarly for π 2 and π 3 . Note that even when V is small, the perturbation is very long range (and consequently not covered by [CaVo02,CPV04]), and it cannot be extended smoothly to a compactification X of X unless X is a manifold with corners. In [Gér90], Gérard shows that if V is classically nontrapping (for example it suffices to take V small) then the resolvent obeys the standard nontrapping bound: χR h (λ)χ ≤ Ch −1 , for 0 < h ≤ h 0 , | Re λ| ≤ ε 0 < 1, Im λ ≥ 0. Moreover the methods of the paper, more explicitly elaborated by Wang [Wan91] in the general N -body setting, imply that the resolvent is semiclassically outgoing [Wan91, (1.8)]. More specifically, Wang shows that for f compactly supported, R h (λ)f is O(h ∞ ) near spatial infinity, where the radial momentum is negative. If V is nontrapping, any backward bicharacteristic eventually enters this region, so the semiclassically outgoing condition follows from propagation of singularities.
2012-06-04T20:58:31.000Z
2010-10-11T00:00:00.000
{ "year": 2010, "sha1": "2a46e8e002a2dcf3d4f46d159b20b13d8b20c540", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1010.2190", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "54fed23104e8f61d05919599dad989f51371ac10", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics", "Physics" ] }
54964938
pes2o/s2orc
v3-fos-license
Biometric features and content of phenolic compounds of roseroot ( Rhodiola rosea L . ) Roseroot (Rhodiola rosea L.) belongs to important herbs in folk medicine of Scandinavia, Russia, Mongolia, and China. Its therapeutic usage is mainly associated with the adaptogenic properties of this species. Roseroot is characterized by high morphological, phytochemical, and genetic differentiation. The aim of the present work was to determine the biometric and phytochemical co-variability of this taxon. Samples of Rh. rosea were collected from 4-year-old experimental field cultivation established by rhizome division in western Poland. For each plant, the biometric measurements of the clumps, shoots, leaves, and rhizomes with roots were carried out. In the underground plant parts (raw material), the contents of the main active compounds (phenylpropanoids, phenylethanoids, phenolic acids, and catechins) were determined by the HPLC-DAD method. K-means clustering analysis showed three well-separated plant groups of Rh. rosea that differed significantly in the level of most of the investigated components. It was interesting that in the raw material with a high content of phenylethanoids, a low level of phenylpropanoids was found, and vice versa. These chemical groups clearly differed in luxuriance of plants, too. The important diagnostic feature was also the degree of leaf serration. The morphological and phytochemical co-variability of roseroot was confirmed by the correlations detected between some active compounds (especially catechins and rosavin) and biometric traits describing the size and serration of leaves, the size of clumps and shoots as well as the weight of the raw material. Introduction Rhodiola rosea L. (Crassulaceae) is a herbaceous perennial plant with fleshy leaves and thick rhizomes.This arctic-alpine species has a wide range of distribution, from the mountains of Western and Central Europe, Siberia, Mongolia to Far East and North America [1].Its largest resources are in the Altai and Sayan Mountains where Rh. rosea occurs in subalpine meadows along the rivers and streams as well as in low thickets [2,3].In Poland, roseroot grows in the Giant Mountains, Babia Góra, Tatra and Bieszczady Mountains [4][5][6].For centuries, rhizomes of this species, also known as golden root, arctic root, and Hongjingtian in Chinese, have been an important raw material in folk medicine of Scandinavia, Russia, Mongolia, and China [7][8][9][10].Traditional use was associated with the adaptogenic properties of this taxon.In the last decades, many research studies have found that roseroot increases mental and physical strength as well as it shows anti-stress, cardioprotective, antioxidative, immunomodulatory, and anticancer activities [7,[11][12][13][14].The above-described features of Rh. rosea are associated with the presence of phenolics, especially phenylpropanoids, so-called rosavins and phenylethanoids -salidroside and p-tyrosol [15]. 
Roseroot is characterized by high morphological, phytochemical, and genetic differentiation [16][17][18][19][20][21].Our previous investigations showed biometric and phytochemical co-variability of this species.We found correlations between flavonoid and total phenolic contents in the underground plant parts and biometric features, such as: water content in rhizomes with roots, leaf number per shoot as well as shoot and clump size [22].These relationships were statistically significant, but usually not strong, and they require further, more detailed research.Therefore, in this present study we took into account new phytochemical and morphological characteristics.In particular, the analyses included the individual chemical compounds from the phenyl propanoids, phenylethanoids, phenolic acids, and catechins.Our current hypothesis was that it is possible to separate homogeneous groups of roseroot samples differing in the content of the main active compounds and some biometric features.We made an assumption that the parameters characterizing the size and habit of plants as well as the leaf shape might correlate with the phytochemical traits of Rh. rosea.The above-described methodological approach and more precise data allow find new interesting relationships in the case of this species. Plant material and biometric analysis Roseroot samples were collected from 4-year-old field cultivation established by rhizome division in Plewiska near Poznań (Institute of Natural Fibres and Medicinal Plants, Poland).For the study, 25 morphologically diverse and well-developed plants were selected.The basic biometric measurements of plants concerning the size (luxuriance) of the clumps, shoots, leaves, and rhizomes with roots were carried out according to the methods described in the previous work [22].From each plant, three fertile shoots of the first generation [21] were collected, and then from their upper part with the largest leaves three successive leaves were taken (nine leaves per specimen).After scanning by the digiShape software [23], the roseroot leaves were used to determine leaf size and shape.In comparison with earlier research [22], leaf width in 1/4, 1/2, and 3/4 of the leaf length, leaf area, leaf skeleton length, skeleton end length, and skeleton end node number were additionally measured.Shape of leaves was described using the proportion between leaf length and width as well as the comparison of three leaf width values.Leaf skeleton was obtained in digiShape according to the method of symmetry axis transform of the shape [23,24].In this algorithm, the number of skeleton branches depends on the adopted threshold value, and for the level N = 15 the analyzed characteristics (especially end node number) were a good indicator of the degree of serration of roseroot leaves. After the harvest of underground plant parts, the fresh, air-dry, and dry weight of them as well as the share of rhizomes and roots in fresh weight of raw material were determined.The obtained plant material was cut into small pieces and dried at 40°C and relative humidity of 20% in a GoBest UZ-108 heating chamber (Poznań, Poland).Water content (%) and dry weight of samples were measured after drying at 105°C in a HR73 Halogen Moisture Analyzer, Mettler Toledo (Switzerland) [22].Airdried and powdered rhizomes with roots of Rh. rosea were used for the phytochemical analysis. 
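The leaf skeleton parameters were obtained with the proprietary digiShape software, so the exact algorithm (including the pruning level N = 15) is not reproduced here. Purely as an illustration of how a skeleton length and end-node count can be derived from a binary leaf silhouette, a sketch using an open-source skeletonization routine is given below; the file name, pixel scale, and threshold are hypothetical and this is not the digiShape procedure itself.

import numpy as np
from scipy.ndimage import convolve
from skimage.io import imread
from skimage.morphology import skeletonize

def leaf_skeleton_features(mask, mm_per_px):
    # mask: boolean array, True inside the leaf silhouette
    skel = skeletonize(mask)                                     # one-pixel-wide skeleton of the blade
    neighbours = convolve(skel.astype(int), np.ones((3, 3), int), mode="constant") * skel
    end_nodes = int(np.sum(neighbours == 2))                     # skeleton pixels with exactly one neighbour
    skeleton_length_mm = float(skel.sum()) * mm_per_px           # crude length estimate (pixel count x scale)
    return skeleton_length_mm, end_nodes

# Example use on a scanned, thresholded leaf image (hypothetical file)
leaf_mask = imread("leaf_scan_binary.png", as_gray=True) > 0.5
length_mm, ends = leaf_skeleton_features(leaf_mask, mm_per_px=0.1)

In practice, a higher end-node count and a longer skeleton correspond to a more strongly serrated leaf margin, which is how these two quantities are used as serration indices in the study.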
Phytochemical analysis The analytical procedures have been described in the earlier work [25].Briefly, 1.0 g of a sample with 20 mL of 70% (v/v) methanol was refluxed for 15 min.The procedure was repeated three times using 15 mL of methanol.The combined extracts were evaporated to dryness, and then dissolved in 5.0 mL of 70% (v/v) methanol.The obtained samples were filtered through a 0.45 μm membrane filter and injected into the HPLC-DAD system (Agilent 1100, USA).The separation of analytes was performed on a LiChrospher C18 column, 250 × 4.0 mm, 5 μm (Merck, Germany) at 24°C.Phase A: 0.2% (v/v) phosphoric acid solution in water, phase B: acetonitrile, and mobile phase flow rate: 1.0 mL min −1 were used.The assay was carried out in the following gradient elution: 0-30 min -95% of phase A, 35 min -80% A, 40 min -20% A, 56 min -20% A, 60 min -95% A. Rosavins, salidroside, and p-tyrosol were detected at a wavelength of 205 nm, gallic acid -at 220 nm, while chlorogenic and caffeic acids -at 330 nm.Peaks were identified by comparison of their retention times and UV-spectra with parameters of chemical standards (ChromaDex, USA). Statistical analysis The plant samples were characterized by the basic statistics (mean, standard deviation, minimum, maximum, and variability coefficient) describing the biometric and phytochemical features of the raw material: rhizomes with roots.The chemotypes of roseroot were distinguished on the basis of k-means clustering analysis for the standardized contents of the active compounds.To determine the statistical significance of differences between these plant groups, Pillai's trace test for MANOVA and F-test were used.The chemical differentiation of the above-mentioned chemotypes of Rh. rosea were confirmed by the Kruskal-Wallis and post-hoc tests.The roseroot groups were also described by the morphological traits with the statistical analysis of the differences between them in respect of the size and habit of plants as well as their leaf shape.The normality of variable distribution was checked using the Shapiro-Wilk test.For the skewed distribution of variables, square root, logarithmic and inverse proportion transformations of data were performed.Pearson's and Spearman's rank correlations were applied to evaluate the relationships between variables. Raw material characteristics The investigated specimens of Rh. rosea from 4-year-old field cultivation were characterized by high variability of raw material yield, from 113.4 to 961.7 g fresh weight (FW) per plant.Rhizomes constituted the main part of the underground organs, with an average share of 82.2% FW.In the fresh weight of raw material, the water content ranged from 67.1 to 73.6%, and after drying at 40°C it was reduced to 6.4-8.8%(Tab.1). Tab. 1 Characteristic of the raw material (rhizomes with roots) of Rhodiola rosea from 4-yearold field cultivation. 
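The statistical workflow described above (standardization of compound contents, k-means clustering of the plants, and Kruskal-Wallis tests across the resulting groups) can be sketched as follows. The input file and column names are hypothetical and this is an illustration of the general procedure rather than the authors' actual script.

import pandas as pd
from scipy.stats import kruskal
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

compounds = ["rosavin", "rosarin", "salidroside", "p_tyrosol", "egcg", "egc",
             "gallic_acid", "caffeic_acid", "chlorogenic_acid"]   # hypothetical column names

df = pd.read_csv("rhodiola_samples.csv")                          # one row per plant, n = 25 (hypothetical file)
X = StandardScaler().fit_transform(df[compounds])                 # standardize contents before clustering
df["cluster"] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Kruskal-Wallis test of each compound across the three clusters
for c in compounds:
    groups = [g[c].to_numpy() for _, g in df.groupby("cluster")]
    h_stat, p_val = kruskal(*groups)
    print(f"{c}: H = {h_stat:.2f}, p = {p_val:.4f}")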
Content of phenolic compounds The obtained results showed a wide range in the level of the main active compounds (phenyl propanoids and phenylethanoids) in the roseroot raw material.Depending on the individual plant, the phenylpropanoid (sum of rosavin and rosarin) content varied from 0.86 to 4.11 mg g −1 dry matter (DM), and for phenyletanoids (salidroside and p-tyrosol)from 0.62 to 5.68 mg g −1 DM.The mean content of these two groups of compounds was similar: 2.33 and 2.32 mg g −1 DM, respectively.Catechins (epigallocatechin gallate and epigallocatechin) and phenolic acids (especially gallic acid) were also an important chemical component.The variability coefficient (V%) for these groups of compounds was significantly lower than for phenylpropanoids and phenylethanoids.The amount of catechins ranged from 1.07 to 2.06 mg g −1 DM, and for phenolic acids -from 0.41 to 1.09 mg g −1 DM.The minimum total content of the investigated active compounds was 5.00, and the maximum content reached 10.37 mg g −1 DM of Rh. rosea rhizomes with roots (Tab.2). Chemical groups of roseroot High differentiation in the content of some chemical compounds and distribution type of some variables suggested the possibility of the presence of chemical groups of Rh. rosea.K-means clustering analysis showed three well-separated groups of plant specimens that differed significantly in most of the investigated compounds (Fig. 1, Tab. 3).The first cluster was distinguished by a high level of phenylethanoids, epigallocatechin gallate (EGCG), and phenolic acids as well as by a low mean content of phenylpropanoids, especially rosavin.In the raw material of the second plant group, only a high amount of epigallocatechin (EGC) was detected.In the third cluster, primarily a high level of phenylpropanoids drew attention.It was interesting that in the raw material with the high phenylethanoid content (Cluster 1), a low level of phenylpropanoids was recorded, and vice versa (Cluster 3).This relationship was confirmed by the negative correlation found between phenylethanoids and the main component of phenylpropanoids -rosavin (r = −0.52,p = 0.008).The above-described chemical groups of roseroot plants also distinguished in the total content of the active compounds.The average sum of all investigated components in the clusters varied from 5.95 to 9.24 mg g −1 DM (Tab.3).Compound content -in mg g −1 dry matter, n = 25.Phenylpropanoidssum of rosavin and rosarin (rosin -not detected); phenylethanoids -sum of salidroside and p-tyrosol; catechins -sum of epigallocatechin gallate and epigallocatechin; phenolic acids -sum of gallic, caffeic, and chlorogenic acids; total -sum of all investigated active compounds; SD -standard deviation, V -variability coefficient. 
Biometric parameters of chemical groups

Our research indicated some morphological features differentiating the separated chemical groups (Tab. 4, Tab. 5). Roseroot plants from the first cluster developed the largest above- and underground organs, while individuals in the third group were the smallest. In the case of characteristics such as clump height, shoot diameter, and number of leaves per shoot, the differences between the described clusters were statistically significant (Tab. 4). Plant habit was not a clearly differentiating factor of the groups in question, but leaf shape served this role (Tab. 5). Individuals from the first cluster were characterized by the most strongly serrated leaves (high values for leaf skeleton length and number of skeleton end nodes). Plants collected in the other two groups had the margin of the leaf blade visibly more entire. In turn, other parameters describing leaf shape (the ratio of leaf length to width and the proportions of leaf width in 1/4, 1/2, and 3/4 of its length) were not statistically significant.

Morphological and phytochemical co-variability

Biometric differentiation of plants from the individual clusters (Tab. 4, Tab. 5) found confirmation in the correlations between morphological and chemical features (Tab. 6). The leaf skeleton length (index of serration) was significantly related to the total content of the investigated active compounds as well as the amount of phenolic acids and phenylethanoids. On the other hand, negative correlations were observed between the main component of phenylpropanoids (rosavin) and the parameters describing the size of the aboveground plant parts. Most of the relationships were detected in the case of catechins. The level of the sum of epigallocatechin and epigallocatechin gallate was strongly correlated not only with the roseroot clump and shoot size, but with the mean leaf area and the number of leaves per shoot as well as fresh and dry weight of raw material.

Table legends: Compound content - in mg g⁻¹ dry matter. Phenylethanoids - sum of salidroside and p-tyrosol; phenolic acids - sum of gallic, caffeic, and chlorogenic acids; phenylpropanoids - sum of rosavin and rosarin; total - sum of all investigated active compounds; SD - standard deviation. n.s. - not significant; n = 25. 1) Without one outlier observation. Foliage density - number of leaves per 1 cm of shoot. The highest values are shown in bold. Values with the same letter are not significantly different (post-hoc test, p > 0.05). Groups of roseroot plants - according to k-means clustering (Fig. 1).

Discussion

Rhodiola rosea is a slow-growing and long-lived plant occurring in the harsh climate of the polar tundra and high mountains. According to calculations of Nukhimovskiĭ [2], the total age of roseroot policormons in some cases can be up to about 300 years. In the Altai Mountains, the first symptoms of plant ageing are usually observed in the 15th-20th or even just in the 40th year of vegetation. The fresh weight of the largest specimens was 7.8 kg, and the weight of living rhizomes - 3.5 kg [2]. However, plant
growth in natural stands is very slow.For example, the yield of underground organs of 30-50-year-old individuals found in the Altai Mountains was similar to that of 5-year-old roseroot from field cultivation in the Moscow region.On the other hand, plants growing under these conditions showed some signs of ageing already after 6-8 years of cultivation.The following were noted: a decrease in the number of flowering shoots and in shoot height, necrosis foci in the rhizomes, weaker growth of rhizomes, and others [26].In the plantations established from seeds in the milder climate of Poland, the yield of raw material decreased already in the sixth year of cultivation [15,21].In the fifth year of vegetation, the mean weight of air-dry rhizomes with roots of Rh. rosea was 120.4 g plant −1 in the case of a field experiment located in Warsaw [15] and 209.9 g plant −1 for Lublin [21].In our investigations, the air-dry weight of underground organs of 4-year-old roseroot growing in a plantation established by rhizome division ranged from 36.3 to 295.2 g plant −1 , with an average value of 117.0 g plant −1 (Tab.1).The mean share of rhizomes in the total weight of the underground part amounted to 82.2%, and it was similar to the results obtained by Przybył et al. [15]: 81 and 83% in the case of the 4th and 5th year of cultivation, respectively.In the next year, a decrease in the percentage share of rhizomes to 69% of the total weight of raw material was detected.This was due to the decay of the oldest, central parts of rhizomes and the division of them into many smaller pieces [18]. Observations conducted by Kim [27] in natural stands in the Altai Mountains showed that, depending on the habitat type, the mean height of plants varies from 19.4 to 26.3 cm, leaf number per shoot: 39.4-58.6,leaf length: 1.2-3.1 cm, and the mean rhizome weight: from 13.4 to 54.8 g plant −1 .According to other investigations from this region, the weight of rhizomes ranged from 50 to 840 g per plant, and the weight of aboveground parts: from 20 to 300 g plant −1 [28].Our previous research [22] also indicated large variation of the biometric features describing the size of roseroot specimens: clumps, shoots, and leaves.For example, the height of 4-year-old plants varied from 12 to 40 cm, leaf number per shoot: 30-81, mean leaf length: 1.9-4.6 cm, and the fresh weight of raw material: from 113 to 1156 g plant −1 .In addition, comparative studies showed statistically significant differences between plant material collected in two investigated years.In the present work, the shape of leaves was also described and high variation in the degree of leaf serration was noted (Tab.5).On the basis of the serration, size, and color of leaves, Kurkin et al. [3] distinguished six morphotypes of Rh. rosea with varying yield and rosavin content.These plants had green or less silvery-green leaves, with serrate or entire margin.The length of the leaf lamina ranged from less than 1.5 to 5.0 cm, and the width from 0.5 to more than 1.5 cm. 1) Without two outlier observations.Total -sum of all investigated active compounds; phenolic acids -sum of gallic, caffeic, and chlorogenic acids; phenylethanoids -sum of salidroside and p-tyrosol; catechins -sum of epigallocatechin gallate and epigallocatechin.Index of clump size -Diameter × Height of clump; index of shoot size -Length × Diameter of shoot.FW -fresh weight; DW -dry weight. 
The raw material of roseroot originating from separate populations is characterized by high phytochemical variation [10,16,18,29].The results obtained by Wiedenfeld et al. [17] indicate that, besides the content of substances, also their composition can change in a broad range.Additionally, large intrapopulation variability of the level of the main active compounds and the yield of underground parts of Rh. rosea was also noted.However, no correlation was found between the raw material weight and the amount of analyzed components (phenylethanoids, trans-cinnamic alcohol, rosavins, and caffeic acid) [30].In our previous work, no significant relationships were detected between the level of total polyphenols, tannins, flavonoids and the weight of rhizomes with roots, either.On the other hand, a relatively strong correlation (r S = −0.68,p < 0.001) between flavonoid content in dry matter of roseroot underground organs and water content in fresh weight of this plant material drew attention [22].In the present investigations, we described relationships between the weight of raw material and catechin content (Tab.6).In addition, the water content in fresh raw material correlated with the level of phenylethanoids (r = −0.43,p = 0.031), caffeic acid (r = −0.45,p = 0.024), and chlorogenic acid (r = −0.44,p = 0.029). Interesting data about the morphological and phytochemical co-variability of Rh. rosea were provided by the studies of Kurkin et al. [3].They showed a clearly higher content of rosavin in the morphotypes with entire or slightly serrated leaf margin compared with plants with strongly serrated leaves.It was consistent with our results where individuals with the highest amount of rosavin in the underground organs (Cluster 3) were distinguished by a low level of the leaf skeleton parameters describing the degree of leaf serration (Tab.3, Tab.5).Additionally, plants belonging to the above-mentioned group were characterized by the lowest mean leaf area (Tab.4), which also confirms the previous observations of Kurkin et al. [3].According to these authors, the small-leaved morphotype had the highest amount of rosavin, but it gave a low yield of raw material.In our investigations, plants from the third cluster reached the smallest size of above-and underground parts: clumps, shoots, and rhizomes with roots (Tab.4).Some relationships between phytochemical and biometric features of roseroot can be found in the field experiments conducted in southern Finland [31].They showed the effect of organic fertilization on the growth of vegetative shoots, the fresh weight of raw material and water content, and at the same time on the content of salidroside, rosavin, and flavonoids.A similar conclusion arose from a field experiment which was carried out in Poland [25].In this case, organic fertilization significantly influenced the yield of fresh and air-dry matter of Rh. rosea rhizomes with roots and the level of phenylpropanoids, too. In summary, roseroot exhibits high morphological and phytochemical differentiation.Literature data concerning the co-variability of these two groups of features are limited.However, they largely confirm our observations of the occurrence of Rh. 
rosea chemotypes well-characterized in terms of morphology. In the present study, the groups of individuals which were distinguished on the basis of quantitative analysis of the chemical composition of the raw material clearly differed in luxuriance of plants. The important diagnostic feature was also the degree of leaf serration. These relationships were consistent with the correlations found between the individual compounds or groups of active compounds and the analyzed biometric traits.

Tab. 2. Content of the active compounds in the rhizomes with roots of Rhodiola rosea.

Tab. 3. Chemical differentiation of Rhodiola rosea plant groups (mean ±SD). Kruskal-Wallis test: p < 0.001; p < 0.01; p ≤ 0.05; n.s. - not significant; n = 25. The highest values are shown in bold. Values with the same letter are not significantly different (post-hoc test, p > 0.05). Groups of roseroot plants - according to k-means clustering (Fig. 1).

Tab. 4. Differentiation of plant size in Rhodiola rosea chemical groups (mean ±SD). SD - standard deviation. The highest values are shown in bold. Values with the same letter are not significantly different (post-hoc test, p > 0.05). Groups of roseroot plants - according to k-means clustering (Fig. 1).

Tab. 5. Differentiation of plant habit and leaf shape in Rhodiola rosea chemical groups (mean ±SD).

Tab. 6. Main correlations between morphological and chemical features of Rhodiola rosea.
2018-12-07T20:21:14.277Z
2016-09-26T00:00:00.000
{ "year": 2016, "sha1": "f239632df518992834b13ffc6f860638fea73986", "oa_license": "CCBY", "oa_url": "https://pbsociety.org.pl/journals/index.php/asbp/article/download/asbp.3500/6006", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "f239632df518992834b13ffc6f860638fea73986", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Biology" ] }
131505683
pes2o/s2orc
v3-fos-license
Characteristics of the 2011 Chao Phraya River flood in Central Thailand A massive flood, the maximum ever recorded in Thailand, struck the Chao Phraya River in 2011. The total rainfall during the 2011 rainy season was 1,439 mm, which was 143% of the average rainy season rainfall during the period 1982–2002. Although the gigantic Bhumipol and Sirikit dams stored approximately 10 billion m by early October, the total flood volume was estimated to be 15 billion m. This flood caused tremendous damage, including 813 dead nationwide, seven industrial estates, and 804 companies with inundation damage, and total losses estimated at 1.36 trillion baht (approximately 3.5 trillion yen). The Chao Phraya River watershed has experienced many floods in the past, and floods on the same scale as the 2011 flood are expected to occur in the future. Therefore, to prepare of the next flood disaster, it is essential to understand the characteristics of the 2011 Chao Phraya River Flood. This paper proposes countermeasures for preventing major flood damage in the future. INTRODUCTION A massive flood, the maximum ever recorded in Thailand, struck the Chao Phraya River during August through December 2011.This flood caused tremendous damage, including 813 dead and 3 missing nationwide (as of Jan. 8, 2012;Thai Ministry of Interior, 2012).The area of damaged agricultural land throughout Thailand peaked on Nov. 14 at 18,291 km 2 (Thai Ministry of Interior, 2012), and the total flood volume was estimated to be 15 billion m 3 .In the industrial sector, 7 industrial estates and 804 companies suffered inundation damage, and of those, 449 companies were Japanese (Japan External Trade Organization, 2011).The World Bank (as of Dec., 2011) estimates 660 billion baht in damage to property such as real estate, and 700 billion baht in opportunity loses, for a total loss of 1.36 trillion baht (approximately 3.5 trillion yen) due to this flood.The real economic growth rate in 2011 is expected to decelerate from 3.7% to 0.1% (National Economic and Social Development Board, 2012).Many floods have been experienced in the past in the Chao Phraya River watershed, and floods on the same scale as the 2011 flood are expected to occur in the future.In developing proper assessments and flood control measures to prepare for the next flood disaster, it is important to develop a solid understanding of the actual situation surrounding this flood, and get to the root causes of the flood damage. OVERVIEW OF THE CHAO PHRAYA RIVER Figure 1 shows a diagram of the Chao Phraya River watershed, and the inundation situation on Oct. 18, 2011.The area of the Chao Phraya River watershed is approximately 160,000 km 2 , which is 30% of the total area of Thailand.The Chao Phraya River watershed is divided into an upper watershed and lower watershed by the narrowed section at Nakhon Sawan. 
In the upper watershed, the Ping River (watershed area 33,900 km 2 ), Wang River (watershed area 10,800 km 2 ), Yom River (watershed area 23,600 km 2 ), and Nan River (watershed area 34,300 km 2 ) flow down from the northern mountain system and join together at Nakhon Sawan.The total area of the upper watershed is approximately 110,000 km 2 .For the purposes of irrigation and power generation, the Bhumibol Dam (reservoir capacity 13.5 billion m 3 , catchment area 26,000 km 2 , built in 1964) was constructed on the Ping River, and the Sirikit Dam (reservoir capacity 9.5 billion m 3 , catchment area 13,000 km 2 , built in 1974) was constructed on the Nan River.Another 5 dams have been constructed for the Ping, Wang, and Nan River watersheds, bringing the total reservoir capacity including the Bhumibol and Sirikit Dam reservoirs to 24.7 billion m 3 .In the Yom River watershed, plans have been made to build the Kaeng Sua Ten River Dam (1.15 billion m 3 ) and a conduit to the Sirikit Dam reservoir, but these are yet to be constructed. In the lower watershed, the Chao Phraya River joins with the Sakae Krang River (watershed area 5,000 km 2 ) from the right bank between Nakhon Sawan and the Chao Phraya Dam (built in 1957), which was constructed 96 km downstream from Nakhon Sawan.This dam controls the discharge of the Chao Phraya River, and irrigation water is diverted to the left and right banks of the river.The Tha Chin River and the Noi River branch off from the right bank upstream of the dam.The Tha Chin River flows down to the sea, but the Noi River joins the Chao Phraya River south of Ayutthaya.Downstream of Ayutthaya, the Chao Phraya River joins with the Pa Sak River (watershed area 14,300 km 2 ).The Pa Sak River Dam (960 million m 3 ) was constructed on the Pa Sak River in 1999, and another 2 dams (total 409 million m 3 ) have been built on the right bank of the Tha Chin River. Rivers in Thailand are generally gently sloped rivers, with gradients in the aforementioned lower watershed of the Chao Phraya River and the downstream parts of the Nan and Yom Rivers, particularly gentle.For example, the elevations in the lower watershed of Chao Phraya River are 15 m in the area around the Chao Phraya Dam located 186 km from the river's mouth, 7 m in the area around Ayutthaya located 90 km from the river's mouth, and 5 m around Bangkok, giving river gradients of around 1/10,000 to 1/15,000.Generally, discharge capacity increases on the downstream side where rivers come together, but the Chao Phraya River lacks downstream discharge capacity (Figure S1).For this reason, the flooding from upstream makes water levels rise downstream, dispersing flooding onto the floodplain.By the same token, in many tributaries which flow into the Chao Phraya River, floodwater from their own watersheds cannot flow into the Chao Phraya river due to elevated water levels in the Chao Phraya River itself, and the flooding is dispersed onto floodplains around the tributaries.That is, in the lower watershed, flooded areas naturally expand along the river, and mitigate the severity of flood disasters in the downstream sections of the Chao Phraya River. 
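The quoted gradients of roughly 1/10,000 to 1/15,000 follow directly from the elevations and river distances listed above; for example, for the reach between the Chao Phraya Dam and Ayutthaya the arithmetic is as follows (all input values are taken from the text).

# Elevations (m) and distances from the river mouth (km) quoted in the text
elev_dam, km_dam = 15.0, 186.0            # around the Chao Phraya Dam
elev_ayu, km_ayu = 7.0, 90.0              # around Ayutthaya
gradient = (elev_dam - elev_ayu) / ((km_dam - km_ayu) * 1000.0)
print(f"1/{1.0 / gradient:,.0f}")         # about 1/12,000, within the quoted 1/10,000-1/15,000 range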
Historically, Thailand has taken advantage of these river characteristics to control flooding of the Chao Phraya River.Flooding is controlled by storing water in the dam reservoirs in the upper watershed of the Chao Phraya River, and by expanding the flood area to decrease the floodwater level in the lower watershed.Since the flood flow is slow, due to the gentle gradient of the Chao Phraya River watershed, flooding seldom causes real damage to human life if the inundation level is below the knee.In addition, floodwaters can also be effectively evaporated by widely expanding the flood area.According to a flood survey report by the Japan International Cooperation Agency (JICA) released in 1999, the discharge capacity of the Chao Phraya River at Bangkok is only about a 3-year probability river discharge if there is no flooding from the Chao Phraya Dam to Bangkok (JICA, 1999).However, floods have not occurred frequently in Bangkok because most of the excess water is stored upstream in floodplains of the Chao Phraya River lower watershed. HYDROMETEOROLOGICAL SETTING OF THE CHAO PHRAYA RIVER Thailand has a tropical savanna climate and basically two seasons: the rainy season (May-October) and the dry season (November-April).On the other hand, it can be assumed that there was no major difference in rainy season evaporation and infiltration rates between the flood year and other years, because rice paddies, namely wet surface, are consistently the major type of land use in Thailand.Taking, for example, observation data by the Thai Meteorological Department (Phitsanulok observatory; 16°47'N, 100°16'E; 45 m above mean sea level) in the Yom River watershed, which has many rain-fed paddies, the normal values of cumulative rainfall and pan evaporation in the rainy season were, respectively, 1,192 mm and 842 mm.Considering the water budget, the 350 mm difference is regarded simply as runoff, which flows into rivers.Assuming there is almost no change in evaporation rates, it is estimated that runoff was approximately 860 mm during 2011, which is 246% of normal values. Figure 3 shows the total discharge of the Chao Phraya River at Nakhon Sawan from June to October in 1956-1999 and 2011.The total discharge in 2011 was 32.6 billion m 3 , which was 232% of the average value for 1956-1999.This is a similar value to previous estimates of runoff at the Phitsanulok Observatory.Total discharge recorded in the flood year of 1995 was 23.5 billion m 3 , which is 167% of the average during 1956-1999.Applying runoff estimation from the Phitsanulok Observatory, runoff is estimated to have been 151% in 1995.Again, this agrees with estimates above.On the other hand, although there was high rainfall in the rainy season of 1983 (Figure 2), the total discharge was 11.0 billion m 3 , which is only 79% of 1956-1999 average.Similarly, total discharge in 1988 was 69% of the 1956-1999 average even though total rainfall was high in the rainy season (117%).According to the survey report by JICA released in 1989, peak discharge at the Chao Phraya Dam during the 1983 flood was 4,100 m 3 s −1 (JICA, 1989).Such discrepancies will need to be reviewed in the future, including consideration of the precision of data, to determine whether major flooding occurred between Nakhon Sawan and the Chao Phraya Dam, and how flooding occurred in Bangkok. 
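The runoff figures quoted above can be reproduced with a simple water-budget calculation. The only assumption added here is that the 2011 rainfall at Phitsanulok scaled by the same 143% factor as the basin-wide total, with pan evaporation unchanged; the text does not spell out this step, so the sketch below is an interpretation rather than the authors' exact computation.

# Rainy-season water budget at the Phitsanulok observatory (values quoted in the text)
rain_normal = 1192.0                                # mm, normal rainy-season rainfall
evap_normal = 842.0                                 # mm, normal rainy-season pan evaporation
runoff_normal = rain_normal - evap_normal           # = 350 mm, treated as runoff to the rivers

rain_2011 = rain_normal * 1.43                      # assumption: local rainfall scaled like the basin total (143 %)
runoff_2011 = rain_2011 - evap_normal               # evaporation assumed unchanged
print(runoff_2011, runoff_2011 / runoff_normal)     # about 862 mm and 2.46, matching the 860 mm / 246 % quoted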
The top 5 events in terms of total discharge during 1956-1999 and 2011 at Nakhon Sawan occurred in 2011, 1970 (28.4 billion m 3 ), 1961 (24.8 billion m 3 ), 1975 (24.1 billion m 3 ), and 1995.According to the Royal Irrigation Department, which is responsible for the operation of the Chao Phraya Dam, the threshold discharge capacity of the lower watershed of the Chao Phraya River (Figure S1) above which flooding occurs is 2,000 m 3 s −1 .Cumulative discharges exceeding the threshold at Nakhon Sawan are shown in Table І for the 5 largest events.The discharge in 2011 exceeded the threshold in the middle of August, as well as in the middle of September (See Figure S2).In Nakhon Sawan, flooding of the city center was prevented through flood prevention actions such as sandbagging, but on October 21 a small boat moored on the river smashed through the sandbagging and the entire city center was inundated with about 150 cm of water.A peak discharge of 4,698 m 3 s −1 was recorded on October 13.Later, by the end of October, the discharge dropped below the discharge capacity of Nakhon Sawan, and was below the threshold in late November.Just as in 2011, the discharge in 1970 exceeded the threshold in the middle of August, and the discharge capacity of Nakhon Sawan in the middle of September, but the discharge dropped below the discharge capacity of Nakhon Sawan in the middle of October, and below the threshold by the beginning of November.In other years, the discharge exceeded the threshold at the beginning of September, and the discharge capacity of Nakhon Sawan at the end of September, but the discharge dropped below the discharge capacity of Nakhon Sawan from the middle of October to the middle of November, and below the threshold in the middle of November.These results show that flooding in 2011 continued about 1 month longer than in other years, and that the cumulative excess discharge estimated to have flooded downstream was an extremely large 12 billion m 3 . 
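One natural reading of the cumulative excess discharge quoted above is the integral of the part of the Nakhon Sawan hydrograph that exceeds the 2,000 m³/s conveyance threshold. A sketch of that calculation from a daily-mean discharge series is given below; the data file is hypothetical and the exact definition used for Table I may differ.

import numpy as np

THRESHOLD = 2000.0                                      # m3/s, discharge capacity of the lower Chao Phraya
daily_q = np.loadtxt("nakhon_sawan_daily_q_2011.txt")   # daily mean discharge in m3/s (hypothetical file)

excess = np.clip(daily_q - THRESHOLD, 0.0, None)        # flow that cannot be conveyed within the channel
excess_volume_m3 = excess.sum() * 86400.0               # daily means multiplied by seconds per day
print(f"{excess_volume_m3 / 1e9:.1f} billion m3")       # the text reports about 12 billion m3 for 2011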
DAM RESERVOIR AND FLOODING SITUATION IN THE CHAO PHRAYA RIVER The following describes the weather and dam reservoir storage situation in 2011 in Thailand.MAR: Precipitation began at the end of March.It was 2 months earlier than a typical year.APR: Low rainfall rate continued, in line with a normal year.MAY: Monthly rainfall was recorded at a very high level relative to the past 30 years (Figure 2).Water storage in the reservoirs of the two large dams (Bhumibol and Sirikit) was at a level far below the lower dam operation curve (See Figure S3).JUN: In late June, heavy rain fell due to the effects of Typhoon "HAIMA," and water storage in both reservoirs began to recover to a large extent (See Figure S3).JUL: At the end of July, there was intense rainfall due to the effects of Typhoon "NOCK-TEN."Monthly rainfall was the highest in the past 30 years (Figure 2).Flooding occurred at the confluence of the Yom River lower watershed and the Nan River downstream from the Sirikit Dam.Water storage in both reservoirs recovered at a steady rate (See Figure S3).AUG: There was a lot of rain in August, and water storage in reservoirs began to exceed the higher dam operation curve (See Figure S3).However, flooding had begun in the area near Nakhon Sawan at this time, and it was no longer possible to increase preliminary release to prevent flooding downstream from both reservoirs.SEP: The highest monthly rainfall in the past 30 years (Figure 2).The Sirikit Dam reservoir almost became full (See Figure S3).Discharge of the Chao Phraya River exceeded its discharge capacity from Nakhon Sawan to Ayutthaya (See Figure S1), and began to overflow.In the middle of the month, water gates on the right bank were destroyed by the flood, and massive flooding occurred.At the end of the month, levees on the left bank broke one after another, and there was flooding of around 5 billion m 3 which was estimated from the difference in the hydrograph between the upstream and downstream parts at the levee breakage location. OCT: Rainfall was in line with an average year.The Bhumibol Dam reservoir almost became full (See Figure S3).The flooding of the left bank in late September moved to the South, inundating a series of industrial estates on the left bank.By early October the two dam reservoirs stored approximately 10 billion m 3 , which is an amount equivalent to two-thirds of the total flood volume, and this effectively mitigated the flooding.If flooding due to rain from the typhoon at the end of June was not stored and released, it may have been possible to store about 1 billion m 3 extra at the Sirikit Dam reservoir.Similarly, if flooding due to rain from the typhoon at the end of July was not stored and released, it may have been possible to store about 1 billion m 3 extra at the Bhumibol Dam reservoir in September (See Figure S3).However, at that time, water storage was within the scope of both upper and lower dam operation curves (See Figure S3), and thus it may have been impossible to make the judgment to release water at the beginning of the rainy season in order to save water for the dry season.Seasonal weather forecasting is useful for such dam operation; however, such forecasting is still within a research phase and is difficult to incorporate into operational use. CONCLUSION The following facts regarding the 2011 Chao Phraya River Flood can be gleaned from Figure 1. 
1) Flooding occurred at the downstream parts of the Nan and Yom Rivers in the upper watershed of the Chao Phraya River.2) All floodwater at the upper watershed flowed into the lower watershed from the narrow section at Nakhon Sawan.3) Flooding occurred over a wide area in the downstream part of the Chao Phraya River. 1) is strongly related to the fact that the downstream parts of the Nan and Yom Rivers have a particularly gentle slope.In terms of 2), due to the gentle slope of the downstream parts of the Nan and Yom Rivers, the flooded area became large, and a high flood discharge was supplied to the lower Considering the water budget of the upper watershed, total dam reservoirs water storage, evaporation and total discharge at Nakhon Sawan are subtracted from the total rainfall during from June to October, resulting in an estimate of approximately 17 billion m 3 .It is important to reduce flood discharge into the lower watershed of the Chao Phraya River by increasing the flood control capacity of dam reservoirs and other facilities. In the case of the lower watershed, if it is assumed that the total flood discharge from Nakhon Sawan (Table І) floods the entire lower watershed, then it is estimated that the water level of the flooded area will be 0.29 m.If it were possible to control flooding by artificially expanding the flood area and lowering the floodwater, this would have been more effective at mitigating flood damage.However, uncontrollable flooding occurred in 2011 due to water gate destruction and levee failure, especially in the upstream sections of the Chao Phraya Dam.The inundated water on the left bank of the Chao Phraya River was returned to the Pa Sak River by the emergency embankment on the left bank, which has a height of 1.8 m, and then the floodwater made the river water level in the Chao Phraya River increase at the confluence of both rivers (see Figure S4).The high water level in the Chao Phraya River caused back flow into the irrigation cannels at the left bank of the Chao Phraya River, and subsequent overflow (see Figure S4).On the left bank of Chao Phraya River, the railroad and National Route 1 play the role of secondary levees for both rivers (see Figure S4).Other National Roads also play the role of dikes for both rivers.On the other hand, when floodwater inundates the left bank, the railroad and national route stop the floodwater, and this prevents expansion of the flood area to the east (see Figure S4).Furthermore, there is a danger that the detained floodwater will flow intensely between the national route and railroad where many industrial estates have been constructed (see Figure S4).Therefore if a large flood occurs on the left bank side, it will be necessary to take a response which quickly broadens the flood area to the east side by, for example, artificial breaking the levees and dikes. 
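The 0.29 m inundation depth quoted earlier in this section follows from dividing a flood volume by the area it is assumed to cover. Using the watershed areas given in the text (about 160,000 km² for the whole basin and 110,000 km² for the upper watershed), the implied volume can be checked as follows; treating the whole lower watershed as the flooded area is the assumption stated by the authors, and the 50,000 km² lower-watershed area is inferred here by subtraction.

area_lower_m2 = (160_000 - 110_000) * 1e6           # about 50,000 km2 for the lower watershed
depth_m = 0.29                                      # inundation depth quoted in the text
implied_volume_m3 = depth_m * area_lower_m2
print(f"{implied_volume_m3 / 1e9:.1f} billion m3")  # about 14.5 billion m3, the same order as the ~15 billion m3 total flood volume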
In the area around Bangkok, the Royal Irrigation Department and the Bangkok Metropolitan Administration have installed drainage pump stations to pump floodwater into the Chao Phraya River.The stations on the left bank have a capacity of approximately 710 m 3 s −1 , and the stations on the right bank have a capacity of approximately 220 m 3 s −1 .On the other hand, pumping stations with a capacity of only approximately 100 m 3 s −1 have been installed on the east side of Bangkok to pump floodwater into the Bang Pakong River, and stations with a capacity of only approximately 150 m 3 s −1 have been installed on the west side to pump water into The Tha Chin River.During the approximately 3 weeks from Oct. 14 to Oct. 31 during recent flooding, the water level of the Chao Phraya River exceeded the parapet height, and it was difficult to pump water back into the Chao Phraya River.Since it is impossible to construct large flood control basins near Bangkok, pumping is one of the most important solutions to deal with flooding.It is inevitable that the water level of the main river will rise during flooding, and thus there is a need to consider measures for pumping floodwater which do not rely on only the main river. The 2011 Chao Phraya River flood was caused by high seasonal rainfall.Increased rainfall by 143% over doubled runoff.The resulting flood destroyed water gates and broke levees, especially at the left bank of the upper Chao Phraya Dam, and led to uncontrollable flooding.This resulted in significant damage to industrial estates on the left bank of the Chao Phraya River.The recent major flood damage was not just a domestic problem for Thailand, but also a problem for the world due to its impact on industrial supply chains.Improving the Master Plan for future major floods in the Chao Phraya River watershed is an extremely important aspect of the national infrastructure of Thailand, and must be considered a priority. Figure 2 shows monthly and total rainfall for the watersheds.Due to the limited availability of data, monthly and total rainfalls were calibrated from 15 weather stations of the Thai Meteorological Agency from May to October in 1982-2002 and in 2011 using the Thiessen method.In flood years (1983 and 1995) when Bangkok was inundated (Somkiat, 2009), monthly rainfall in July and August exceeded the monthly average for the period 1982-2002.In 1983, rainfall was also higher than average in October, being the highest recorded during 1982-2002.The highest August rainfall during 1982-2002 was recorded in 1995.Total rainfall in the rainy season exceeded the average total rainfall (1,003 mm) in both flood years, being 1,147 mm and 1,153 mm in 1993 and 1995, respectively.In 2011, monthly rainfall exceeded the average monthly rainfall for the entire rainy season, with the higher July and September rainfall than recorded during 1982-2002.The total rainfall during the 2011 rainy season was 1,439 mm, which is 143% of the average rainy season rainfall during 1982-2002.In addition, 5 typhoons made landfall in Thailand in 2011.The average number of typhoons per year during 1951-2011 was 1.5, with 5 or more typhoons making landfall in Thailand in a year only three times: 1964, 1971 and 1972.The prevalence of typhoons strongly influenced the rainfall in 2011. Figure 1 . Figure 1.Diagram of the Chao Phraya River watershed (right), and the inundation situation as of Oct. 18, 2011 (left). Figure 2 . Figure 2. 
Monthly and rainy season rainfall for Chao Phraya River watersheds from May to October in 1982-2002 and 2011. Dashed line indicates the average for the period 1982-2002; bar frame and amount of rainfall indicate the highest rainfall in 1982-2002 and 2011. Monthly and total rainfalls were calibrated from 15 weather stations of the Thai Meteorological Agency using the Thiessen method. (Data source: GaME-T2 Data Center)

Figure 3. Total discharge of the Chao Phraya River at Nakhon Sawan from June to October in 1956-1999 and 2011. Dashed line indicates the average for the period 1956-1999, and dotted line indicates total discharge in 2011. Bar frame indicates the top 5 total discharge events in 1982-2002 and 2011. (Data source: GaME-T2 Data Center and the Royal Irrigation Department)

Figure S4. Diagram of the canal network in the area of the left bank of the Pa Sak and Chao Phraya rivers. Bold line indicates National Route 1, and thin lines indicate other National Routes. Numbers indicate the National Route numbers, dashed line indicates the railway, and squares indicate the industrial estates. Arrows indicate the inundated water flow in 2011. The map was provided by the Royal Irrigation Department.

Table I. Cumulative discharge and the period it exceeded 2,000 m³/s at Nakhon Sawan. Data provided by the Royal Irrigation Department.

As a result, 3) occurred: water gates and levees broke due to the high water level of the Chao Phraya River, and flooding inundated a broad area of the lower watershed. Further investigation of other flood cases is required to fully understand the characteristics of the Chao Phraya River flood.
2019-01-02T03:26:24.996Z
2012-01-01T00:00:00.000
{ "year": 2012, "sha1": "abc18c90b7f3ab54aba63abf9de317c0f3cb31f8", "oa_license": "CCBY", "oa_url": "https://www.jstage.jst.go.jp/article/hrl/6/0/6_0_41/_pdf", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "abc18c90b7f3ab54aba63abf9de317c0f3cb31f8", "s2fieldsofstudy": [ "Environmental Science", "Geography" ], "extfieldsofstudy": [ "Environmental Science" ] }
229377301
pes2o/s2orc
v3-fos-license
STEAM to R-SLAMET Modification: An Integrative Thematic Play Based Learning with R-SLAMETS Content in Early Child- hood Education STEAM-based learning is a global issue in early-childhood education practice. STEAM content becomes an integrative thematic approach as the main pillar of learning in kindergarten. This study aims to develop a conceptual and practical approach in the implementation of children's education by applying a modification from STEAM Learning to R-SLAMET. The research used a qualitative case study method with data collection through focus group discussions (FGD), involving earlychildhood educator's research participants (n = 35), interviews, observation, document analysis such as videos, photos and portfolios. The study found several ideal categories through the use of narrative data analysis techniques. The findings show that educators gain an understanding of the change in learning orientation from competency indicators to play-based learning. Developing thematic play activities into continuum playing scenarios. STEAM learning content modification (Science, Technology, Engineering, Art and Math) to R-SLAMETS content (Religion, Science, Literacy, Art, Math, Engineering, Technology and Social study) in daily class activity. Children activities with RSLAMETS content can be developed based on an integrative learning flow that empowers loose part media with local materials learning resources. Keyword: STEAM to R-SLAMETS, Early Childhood Education, Integrative Thematic Learning © 2020 Early Childhood Education Post Graduate Program UNJ, Jakarta e-ISSN (Online Media): 2503-0566 P-ISSN (Print Media): 1693-1602 Jurnal Pendidikan Usia Dini http://journal.unj.ac.id/unj/index.php/jpud Volume 14. Number 2. November 2020 INTRODUCTION The STEAM learning approach in the context of Early-Childhood Education is still a new discourse. The STEAM approach is one of the efforts to implement integrative thematic learning that involves a number of learning content (Broadhead, 2003;Mengmeng et al., 2019;Science, n.d.;Taylor et al., 2018). An approach with STEAM is carried out through a variety of play activities according to the needs, characteristics and stages of children's development (Colucci et al., 2017;Lillard et al., 2013). In the context of early-childhood education, the use of the STEAM education is also seen as a form of academic-oriented learning (Dell'Erba, 2019;Sawangmek, 2019) This learning is considered to prioritize academic education rather than child development. Such learning practices are also a concern among academics, policy makers and practitioners in Indonesia (Wang et al., 2018). STEAM is important because it allows teachers to combine several disciplines at the same time and facilitates learning opportunities that empower children to explore, challenge, study, discover and practice creative building skills (DeJarnette, 2018). It is a perfect fit to include the arts in the STEM disciplines because of STEAM's focus on innovation and design. The STEAM concept is natural for children, because they like to explore and experiment. Within their natural environment, and adding art to STEM, is to offer educators an additional option to present art-packed STEM concepts to children, especially at the elementary and early-childhood levels. Regarding to Quigley et al., (2017) opinion, in education, the STEAM conceptual model offer's educators the ability to teach effectively using trans disciplinary science. 
Science, Technology, Engineering, Arts and Mathematics (STEAM) has become an increasingly popular acronym in the educational because of the recent trend to incorporate art and design in science, technology, engineering, and mathematics (STEM education). In Costantino (2018) study, the STEAM category can describe transdisciplinary curriculum model to be implemented across disciplines, such as investigating the role of art and design in STEM. This is a reference for researchers to modify STEAM to be broader, so that they can use the STEAM approach to be more comprehensive, by adding literacy, social studies, and religion. Religion is combined in STEAM because of national culture with various religions, but each religion has the same teaching, which is to unite all important life elements introduced to children through daily learning, such as introducing the existence of God to all of his creations in learning science, math, technology, engineering, and art. Responding to the challenges of early-childhood learning, for a more substantive integration of art discipline material with the humanities and other subject areas such as science, technology, engineering, mathematics, social studies, and entrepreneurship, a pluralistic STEAM-driven model, easily adaptable in education refers to on any of these innovative paradigms impartially (Rolling, 2016). Over the last few years, there has been an increasing interest in incorporating STEAM-promoting practices into the formal education context (Cook & Bush, 2018). Other subjects such as history, music and geography could benefit from the STEAM mentality, apart from science and arts (Henriksen, 2017). The STEAM method is implemented through a modified project-focused learning model, showing that children are learning to develop data, literacy, and selfdirection skills (Ridwan et al., 2017). To make it easier for teachers to organize learning, it is necessary to add literacy elements to STEAM. The thematic approach is a way of teaching and learning, linking and incorporating multiple aspects of the curriculum within a theme (Varun A, 2014). There will be plenty of opportunities to connect with peers, teachers, parents, and good community engagement as all topics are incorporated. Thematic approach in ECE curriculum, that incorporates various areas of knowledge is deemed appropriate for improving the knowledge and skills of young children in early childhood education (Björklund & Ahlskog-Björkman, 2017). The academic approach in ECE practice is considered to shift the position and essence of play in children. There are concerns that this approach will lead to early schooling in children. The condition is considered normal because many practitioners commit malpractice in early-childhood education services. Learning uses in kindergarten classes, mostly paper and writing materials focused on reading, writing and arithmetic exercises (Dell'Erba, 2019;Krogh & Morehouse, 2014;Wang et al., 2018). Such educational practices have resulted in children losing their playtime. Conceptualizations of successful play-based learning that teachers experience as part of a collaborative school community, the enactment of play pedagogy by teachers, and collaboration with home schools. The studies recommend implementing a holistic integrative model of support in ECE curriculum and actively trying to incorporate parents, teachers and kindergartens in creating an optimal experience of play learning for young children (Keung & Cheung, 2019). 
STEM, STEAM, and R-SLAMETS STEAM is a broad term that seeks to integrative tie together education in science, technology, engineering, arts and mathematics, bringing together the methodologies of technical design typical of engineering and technology fields with the enquiry learning approach used in mathematics and science and the divergent style of thought coming from the arts. With regard to conventional ideas from multiple disciplines, STEAM must construct an indefinable transdisciplinary space. The theme of the universe, for example, if students do not identify what they are learning as science, technology, or art (Liao, 2016). Children's reflections on other things in learning show that transdisciplinary space has been achieved. In addition, they consider their learning to involve the creation of collaborative and critical-thinking skills through the application of STEAM skills. Arts-enriched STEM approach (STEAM) are believed to improve science lessons and make them more appealing. The intervention generated long-term knowledge and developed stable intrinsic motivation scores, but with a single STEAM intervention, self-reported aspects of creativity were not affected (Conradty & Bogner, 2019). Experts recommend that educators use integrative methods to present STEM content through subjects using design methods to facilitate literacy for all students (Gess, 2019). Integrative STEM education refers to a learning method that focuses on technology / engineering design that consciously combines the concepts and practices of science and / or mathematics education with the concepts and practices of technology and engineering education. Through potential integration with other school subjects, such as language arts, social sciences, architecture, etc., integrative STEM education can be strengthened. Educators are expected to focus on a deliberately designed pedagogical approach by placing the teaching and learning process of STEM concepts and practices in a pedagogy based on technology / engineering design, art design, literacy, social studies, and religion. Research has shown that providing early childhood and elementary-age children with meaningful hands-on STEM activities positively influences their attitudes and dispositions towards STEAM (DeJarnette, 2018). For preschoolers who are diligent and determined when designing, STEAM concepts are not too difficult; they naturally try to fix them when things don't turn out exactly the way they expected. Early exposure to the STEAM technique has many advantages for young children. Integrated and exciting learning interactions strengthen the interests and learning of students in STEM and help prepare them for the 21st century (Moomaw, 2012). Some have begun to pursue a transdisciplinary approach, in which the incorporation of the arts into the STEM disciplines provides a radically new way for learners to discuss and solve real-world issues, as educators and researchers strive to identify and describe STEAM. When limited to only five letters between the many others that can help students explore and clarify solutions to the puzzles and challenges they will face today and tomorrow, this transdisciplinary space can feel confined (Clapp et al., 2019). The same thing is felt by ECE educators in Indonesia, who already have a standard curriculum for ECE, which stimulates children based on basic competencies. 
Therefore, three other aspects, namely religion, literacy, and social studies, are added to make it easier for teachers to implement the STEAM approach, because the scope has been expanded to eight letters, R-SLAMETS. This alphabetical arrangement was chosen because the term R-SLAMET feels more familiar and is more distinctly Indonesian.
Integrative Thematic and Play-Based Learning in ECE
Each curriculum will undoubtedly describe the programs and learning experiences that early childhood will go through. The learning experience that occurs naturally in children is the play experience (Gronlund, 2015). Playing is a natural activity that young children engage in throughout the day and throughout their age range. Through the experience of playing, children interact with objects, people, tools, situations and environmental conditions that can help them acquire knowledge, skills and various values (Inglese et al., 2014; Jacman, 2012; Sancar-Tokmak, 2015). Playing is an activity that makes children happy. Playing is believed and proven to be the most natural, meaningful way of learning and has a great impact on the development of children's potential. Playing activities always make children happy to do various things, both alone and in groups (Jay & Knaus, 2018; Peng, 2017). In an atmosphere of playing together with friends (a peer group), many children become immersed in an atmosphere that they build or create themselves; a playing atmosphere like this can be categorized as immersive play. One key and fundamental idea is the understanding that a good and correct curriculum in early childhood is one that accords with the characteristics and stages of child development (Danniels & Pyle, 2018). In general, the characteristics and stages of early-childhood development are expressed in play, so the right curriculum is a play-based curriculum (Whitebread, 2012; Wong et al., 2011). A play-based curriculum is focused on developing learning activities in order to achieve developmental content and learning program content through play activities. The development of play activities is at the core of professional teachers' abilities in curriculum development (Zosh et al., 2017). In some references, the educator's ability is usually discussed in the study of the emergent curriculum, in which various forms of play activities are designed after a study of developmental aspects and the program is planned in learning tools. Playing is synonymous with the world of children; some experts even say the early years are play (Jacman, 2012; Jay & Knaus, 2018). Playing is the main activity and need of children throughout the early-childhood development range. Through play, a child engages in a variety of activities filled with cheerfulness, fun and sincerity (Inglese et al., 2014; Sancar-Tokmak, 2015). For early childhood, playing is an activity carried out voluntarily, without coercion, perhaps even without a fixed end or goal. Playing allows children to build various aspects of their personality, such as knowledge, values, skills, attitudes and life skills, that they can use to adapt and socialize with a wider environment. A play-based curriculum (PBC) has key components that become the basis and reference for providing educational services for early childhood.
Key components of a PBC include developmental references, program content, learning processes, management of learning areas and assessments (Gronlund, 2015; Hennessey, 2016). The development component becomes the center of the curriculum and learning tools for childhood education (Sancar-Tokmak, 2015). Development needs to be placed as a basis for consideration in improving the other components, especially the content of learning programs (religion and morals, science, literacy, art, mathematics, social studies and technology) and the learning process (play activities). The selection and exploration of content or learning material in early childhood must consider the characteristics and stages of development. The learning process component in the play-based curriculum provides a reference overview for choosing the pedagogical process when presenting an interactional learning model designed in the form of a play scenario. This reflects the understanding that playing is the pedagogic process presented in the early years. A pedagogical approach to playing activities means that the learning process built through play must be seriously presented as an educational play process (pedagogical instruction or pedagogical play) (Fesseha & Pyle, 2016; Pyle & Danniels, 2017). The learning process presented in the play-based curriculum must provide a play scheme that is continuous, meaningful and fun for children. The play scheme is designed in the form of a play activity scenario that has educational goals, contains playing content in accordance with the characteristics and stages of development, and uses media and teaching materials designed in an interesting way (Edwards, 2017; Finch et al., 1997; Gronlund, 2015). In order to complement their practice, which includes building successful play-based learning, teachers must be able to develop pedagogical skills. Play-based learning development is contextual (Keung & Fung, 2020). A play-based curriculum is based on a number of assumptions about play that form the basis of curriculum design and development. In simple terms, the curriculum is often interpreted as the learning experiences that children will go through in an educational activity. Learning experiences that will be compiled and developed in the curriculum must be based on the main characteristic of early-childhood learning, namely play experiences (Kennedy & Barblett, 2010; Peng, 2017). The play-based curriculum has at least four main components that will guide educators in providing ECE services; the four components have been described in the previous section. One of the curriculum components at the core of learning is learning content. Children's activity content can be classified into science, literacy, mathematics, art, social studies and technology (Faas et al., 2017; Jacman, 2012). The content contains a number of concepts covering knowledge, values and skills in accordance with their respective scopes. The selection of learning content is adjusted to the needs, characteristics and stages of early-childhood development. Therefore, it is necessary to understand the content and how to develop it so that it becomes meaningful and enjoyable early learning material for children (McLaughlin & Cherrington, 2018; Pyle & Bigelow, 2015). Early-childhood learning content and teaching materials can be packaged in a variety of interesting ways and media, for example big books, comics and various forms of posters.
Many kinds of packaged ECE teaching materials can be used in the implementation of play-based learning.
METHOD
This research uses a case study method set against the background of a training activity and a workshop on implementing R-SLAMET content play based on loose-part media. The study participants consisted of 37 active participants with backgrounds as early-childhood educators in various ECE units (17% formal and 83% non-formal). Of the ECE educators, 90% had never attended R-SLAMET training or the like, and 10% of participants had attended R-SLAMET training for ECE units. The training was held in three meetings: the first meeting covered understanding the learning concept; in the second meeting, participants attended a comprehensive workshop; and in the third meeting, participants carried out micro-teaching to evaluate the training results. Data collection was carried out through participatory observation, interviews, focused discussions and analysis of the participants' reflective documents on the results of the activities. Data analysis was performed using narrative analysis, which describes the process of assisting participants in understanding the concept and practice of designing and implementing loose-part media-based R-SLAMET content play.
Orientation Change: from Competency-Based Practice to Integrative Thematic Play-Based Practice
Along with the policy of implementing the competency-based curriculum in the 2013 curriculum, educators in ECE units have acquired a fairly fixed understanding that learning (play) design must start from and be based on predetermined competency indicators. This is recorded in the statements and learning design documents prepared by ECE educators. A summary of the educators' statements reveals that they had always designed games based on the indicators of basic and core competencies, which previously were basic abilities, and that they often had difficulty connecting themes and competency indicators into play activities. Participants realized that the design pattern was too simple and the activity partial, because one game was intended for only one indicator. After being given an understanding of the concept, with an illustration of an integrative thematic play plan containing STEAM content, participants felt and gained new understandings that point to more integrated learning, the use of flexible themes, and the inclusion of a number of STEAM contents in continuous play. Science is used as the main example since, while the lexical requirements vary, the literacy standards of this discipline somewhat align with those of the others. In particular content areas, the vocabulary of science illustrates how language works. However, Doyle (2019) suggests related requirements from the Australian Curriculum (ACARA, 2018) prior to the discussion of languages and literacies and notes Australian government positions on STEM (Science, Technology, Engineering and Mathematics) education (Australian Government, 2015). The Arts are then highlighted as a pivotal resource for the STEM content areas as a mode of artistic study, representation and expression. Finally, as springboards for realistic classroom events, some ideas are given for the creation of school STEAM language and literacy.
Repositioning between Developmental Content and Play Activities
So far, participants have treated aspects and indicators of development as learning content that must be developed into learning activities (play).
The scheme follows the theoretical and practical flow of instructional design-based curriculum development. When given an understanding of the play-based curriculum, participants began to realize that early childhood naturally plays various games that can reach many aspects of development at once. Participants responded to a video showing natural play in an early-childhood activity, saying, "Oh... that was my childhood play activity, and I learned a lot from the game." Through the video illustration, participants are invited to reminisce about designing a game on a theme and a playing scenario. After that, participants are invited to analyze the play activities and determine which aspects and developmental indicators each compiled play scenario would affect. Almost all participants gave correct answers, almost simultaneously. Finally, the participants began to reflect and grasp the understanding that activities comprising regular and programmed play scenarios can actually build many aspects and indicators of development. This process is what builds participants' awareness that developmental aspects and indicators are the impact or consequence of the games given to children. In this position, participants began to change their way of thinking from content-based learning to play-based learning. A related study examined how educators conceptualize play-based learning and how their perspectives affect its implementation in ECE classrooms. The findings demonstrated differences in the concepts and implementations of play-based learning by participants in kindergarten classrooms. Several participants described the enactment of play as entirely separate from learning, while also showing some confidence in children's ability to learn through play. Although positive perspectives on play-based learning were described by all participants, more than half described implementations of kindergarten programs that did not completely incorporate play-based learning. In their introduction of play-based pedagogy, participants were also asked to describe obstacles they faced, and participants in both enactment classes reported difficulties in executing play. These findings support the need for a clear and consistent concept of play-based learning to help decide how play is best incorporated and how academic skills are learned (Fesseha & Pyle, 2016).
Play Continuum and STEAM (R-SLAMET) Content
Training and workshop participants are also invited to compile play scenarios programmed with STEAM content (Science, Technology, Engineering, Art and Math). Researchers accompany and guide participants to find as many types of play (density of play) as possible on a theme and sub-theme without first thinking about aspects and indicators of development. After generating more than three play activities, participants are invited to choose one, for which a continuous play scenario with STEAM content is compiled. Participants are guided to formulate a continuum of playing scenarios from the beginning of the game through to open-ended activities. In each scenario, participants are invited to insert the appropriate learning content. For example, in the initial scenario, children are invited to pray before playing, and the participants were asked what learning content was included; they answered simultaneously that it was religious and moral content. The scenario continued with the children observing pictures and videos of the growth process of grass jelly plants.
Participants were again asked what learning content fits this scenario, and they answered science content. This activity is carried out until it becomes a continuum of playing activities filled with STEAM learning content. Based on this play continuum, participants can examine the aspects and indicators that are expected to emerge as a result of the assigned play scenario. In line with research conducted by DeJarnette (2018), over the past decade STEAM (Science, Technology, Engineering, Art, and Math) education has gained rising attention, particularly at the middle and high school levels; that article focuses on the need for STEAM education at the early childhood level. With their sense of curiosity and imagination, preschool children have a natural inclination for science. The researcher investigated how hands-on professional development, consistent encouragement, and rich resources for STEAM lesson implementation in the early childhood curriculum would influence teachers' dispositions, self-efficacy, and rate of implementation. The research also included observation of preschool children receiving STEAM instruction. The results showed an improvement in the positive dispositions and self-efficacy of preschool teachers, but the teachers' rate of adoption of STEAM lessons was initially low.
Modification of STEAM Content to R-SLAMETS Content
Apart from STEAM, the curriculum content at ECE also provides learning content related to literacy and social studies. Specifically, the ECE unit in Indonesia has special content on religion and morals. Participants were then introduced to a complete analysis of curriculum content, namely R-SLAMETS (Religion, Science, Literacy, Art, Math, Engineering, Technology and Social Study). The implementation of content development is in principle the same as for STEAM content-based playing activities, only extended to playing the R-SLAMETS-based continuum. Participants are presented with illustrations of continuum playing activities containing R-SLAMETS content, either in whole or in part. To achieve these skills, participants are guided through the stages of developing continuous play with R-SLAMETS content. The addition of art to STEM, making STEAM, is important even though most children do not go on to become professional artists. As an acronym for educational creativity, STEAM promises to enhance the distinct findings emerging from art + design studios by immersing students in multiple knowledge bases covering the contributing domains of science, technology, engineering, art, and mathematics (Rolling, 2016). After art is entered into STEM content, it changes to STEAM. The R is for Religion in R-SLAMET because religion, in the Indonesian ECE curriculum, is one of the components that can support children's morals and behavior. Therefore, so as not to confuse teachers, we try to integrate religion into the STEAM implementation content format. This has been addressed by recent research that explains the application of a STREAM-based approach (Science, Technology, Religion, Engineering, Art, and Mathematics) with project-based learning models for student learning activities and students' creative work. The results showed that the introduction of the STREAM approach based on the project-based learning model had an impact on increasing student learning outcomes and their activities in learning, making them more innovative in developing attractive products.
There are four innovative products developed by students in this study, namely drip irrigation, water cycles, water quality testing, and story booklets with religious images. There are great opportunities for primary school teachers to use the STREAM-based project-based learning model to increase students' curiosity, imagination, and social attitudes in learning (Azizah et al., 2020). The dialogue in the photo is an illustration of the development flow of the playing continuum with R-SLAMETS content, which can be described in Figure 1: the flow chart shows the development of the playing continuum with R-SLAMETS content starting from analysis of themes and sub-themes, identification of play types or density of children's play, drafting play scenarios with R-SLAMETS content, analyzing the aspects and indicators of development that are affected, and choosing loose-part media based on local materials.
Strength of Loose-Part Media
Loose-part components provide children with opportunities to develop their imagination, collaborative actions, and cognitive functioning (Maxwell et al., 2008). While the idea of loose parts has existed for many years, a significant consideration with loose-part media is that the materials are open-ended, encouraging unstructured child-led play and allowing children to use the materials any way they want. Loose parts provide children with opportunities for unstructured play that adults do not dominate (Ridgers et al., 2012). Unstructured environments have minimal adult-set guidelines and rules, which allows children to develop their own play activities and enables them to do so. Outdoor play environments with loose parts are frequently altered to provide children with challenges and a sense of wonder, as new play opportunities continually evolve (Canning, 2010). In general, early-childhood teachers and programs that accept the use of loose parts have more flexible schedules, enabling children to practice their freedom to play and develop individual control and self-regulation skills. Among preschool children, loose parts encourage varied play activities. Integrative thematic play activities that contain STEAM or R-SLAMETS content will be more effective and meaningful when ECE educators can find and use loose-part materials and media from the surrounding environment. ECE educators who prepared loose media from the environment were able to develop continuous play activities from STEAM to R-SLAMETS. Figures 2 and 3 illustrate that the use of loose-part media in playing activities with STEAM or R-SLAMETS content has provided opportunities for children to be more active, creative and innovative. Children are involved in play activities that are continuous, meaningful and fun (immersion play). More precisely, identifying loose parts enables greater adoption, deeper discussions, and professional development on the subject. Loose parts in the sample were classified as natural or manufactured, and terms defining loose parts were analyzed (Gull et al., 2019). Spencer et al. (2019) found that educators considered outdoor loose-parts play to have several social and cognitive advantages for preschool children that are important for optimal growth and development and overall health and wellbeing.
CONCLUSION
Advocacy for early-childhood educators through training and workshops has helped improve professional quality on an ongoing basis.
There has been a change in the mindset of implementing the operational curriculum, from a partial competency-based approach to an integrative thematic play-based approach that integrates STEAM content across learning. Educators demonstrated the application of integrative thematic play-based learning in a continuum play scenario containing STEAM and R-SLAMETS content. This process becomes more effective, efficient and meaningful when ECE educators are able to elaborate and empower teaching materials and loose-part media based on local material sources. The implication of this research is to invite other
2020-12-03T09:03:57.423Z
2020-11-30T00:00:00.000
{ "year": 2020, "sha1": "2a29b24197a95330671318796afd600054b93c01", "oa_license": "CCBY", "oa_url": "http://journal.unj.ac.id/unj/index.php/jpud/article/download/17092/9588", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "87dc064e9d35129440ebe2aa8ed0938021f8e0ea", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "Psychology" ] }
55414902
pes2o/s2orc
v3-fos-license
Technical efficiency and profitability of potato production by smallholder farmers: The case of Dinsho District, Bale Zone of Ethiopia
The study aimed to analyze the technical efficiency and profitability of potato production by smallholder farmers in Dinsho District of Bale Zone of Ethiopia. Cross-sectional data collected in the 2015/16 production year from 147 surveyed households were utilized in achieving these objectives. Non-parametric net crop revenue analysis and the Cobb-Douglas stochastic frontier approach were used to analyze enterprise profitability and to estimate the technical efficiency levels in potato production, respectively. The result of the net crop revenue analysis indicated that potato production was profitable, wherein the producers earned a net return of about 11,740.9 ETB (Ethiopian Birr). Further analysis of the gross and net income data showed wide variation of the results between harvesting seasons and the off-peak season. The test result of the Cobb-Douglas stochastic frontier indicated that the relative deviation from the frontier due to inefficiency was 94%. The mean technical efficiency of farmers in the production of potato was 0.89. The estimated stochastic production frontier model indicated that area of the plots, amount of NPS fertilizer, amount of seed and labor in man-days were positive and significant determinants of production level. The estimated SPF model together with the inefficiency parameters showed that age, age squared, education, land ownership status, extension contact, number of plots (fragmentation), household size and livestock significantly determined the efficiency level of farmers in potato production in the study area. To this end, the attention of policy makers to improve agricultural production should not revolve solely around the introduction and dissemination of new technology to increase yield; more attention should also be given to improving the existing level of efficiency.
INTRODUCTION
Agriculture is the most significant contributor to Ethiopia's national economy (World Bank, 2006). It employs about 85% of the total labor force (MoFED, 2013). Moreover, the share of agriculture in total export proceeds increased consistently from about 63% in 2002/2003 to 82% in 2008/2009, though it slightly declined to 71% in 2010/2011. In contrast to this, the share of non-agricultural goods (merchandise goods and gold) was, by and large, constant during the same period with a slight increase since 2008/9 (EEA, 2013). Agriculture accounted for 43% of GDP in the 2012/13 fiscal year (MoFED, 2013). The World Bank (2006) noted that "The dominant agricultural system in Ethiopia is smallholder production under rain-fed conditions." The same report shows that there is a strong positive correlation between growth in GDP as well as per capita GDP and agriculture and crop production, which further demonstrates the importance of agriculture to the Ethiopian national economy. All these factors direct the country's development policies, strategies and objectives towards improving the agricultural sector and the livelihood of the rural population. In this context, various efforts were made by the preceding regimes. However, the sector could not produce enough food to support the rapidly increasing population. Consequently, both chronic and transitory food insecurity problems continue at the household level in Ethiopia (FAO/WFP, 2012).
According to the Global Hunger Index (2013), levels of hunger are still "alarming" or "extremely alarming" in 19 countries, including Ethiopia, meaning food security is an urgent issue. Potato (Solanum tuberosum L.) has great potential when it comes to food security (UNDP, 2014). Thus, among the crops that have increasingly gained importance in overcoming food insecurity problems in Ethiopia is potato. The potential of potato for food security is increasingly being noticed, as witnessed by the growing interest of private investors and policy makers in this crop. In recent years, potato production has expanded because of the availability of improved technologies, expansion of irrigation structures and increasing market value (EIAR and ARARI, 2013). However, the average yield in Ethiopia reaches only 7 tons/ha when the potential for smallholders is around 25 tons/ha (EIAR and ARARI, 2013). Furthermore, as cited in EIAR and ARARI (2013), for Sub-Saharan Africa (SSA), Scott et al. (2000) projected a 250% increase in demand for potato between 1993 and 2020, with an annual growth of 3.1%. The growth in area under production is estimated at 1.25% a year, the rest of the increase being achieved through predicted growth in productivity. Increased potato productivity will play a buffer role against increasing food prices and thus enhance household income in the project countries, with a spillover to other countries in SSA. In the study area also, there is a problem of food insecurity. According to the Dinsho District Agricultural Office data (2015), more than 8,000 people received relief food assistance in the second half of the 2015 fiscal year alone. In this regard, production of potato has great food security potential in the District. Farmers chose to increase the production and marketing of these enterprises based, among other things, on the potential that the crops had in the study area (Dinsho District Agricultural Office (DDAO), 2014). However, given the mounting pressure on land, sustaining higher rates of growth in agricultural production requires substantial improvements in factor productivity. Consequently, transformation in the structure of production (mostly subsistence-based) to more commercially oriented production will be key to sustaining growth. In an economy where resources are scarce and opportunities for new technologies are limited, efficiency studies are able to show that it is possible to raise productivity by improving efficiency without raising the resource base or developing new technology (Tijani, 2006).
Sample size determination
Sample size was calculated according to Yamane (1967):
n = N / (1 + N·e²) (1)
where n is the sample size, N is the population size, and e is the level of precision. In order to determine the required sample size (total number of households) for this study following Yamane (1967), at 95% confidence level, 0.
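As a minimal illustration of equation (1), the helper below computes the Yamane sample size; the population and precision values in the example are hypothetical and are not taken from the study.

```python
import math

def yamane_sample_size(population: int, precision: float) -> int:
    """Yamane (1967) formula: n = N / (1 + N * e^2), rounded up."""
    return math.ceil(population / (1 + population * precision ** 2))

# Hypothetical numbers for illustration only: a producer population of
# 1,000 households at a precision level of 0.05 gives a sample of 286.
print(yamane_sample_size(1000, 0.05))  # -> 286
```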
Sampling techniques
Since farm household heads were responsible for day-to-day farming activities, they were taken as the basic sampling unit in this study. Potato was produced by almost all households in the study area. However, to draw the required sample for this study, first a complete list of household data, including the socioeconomic characteristics of the households, was obtained from the district's agricultural office, after which producers and non-producers were differentiated. After that, only those households producing potato during the survey period (2015/2016) were included in the sample selection. The distribution of the sampled kebeles and households, drawn using random sampling techniques and probability proportional to the size of each kebele's population, is shown in Table 1.
Sources and method of data collection
This study mainly relied upon primary data sources collected through a semi-structured questionnaire administered to sampled respondents by trained enumerators. Key informant interviews were used to support the information collected through the questionnaire. Relevant secondary data sources were also assessed to supplement the primary data. (Footnote 1: commonly used Ethiopian term for areas of altitude above 2,400 meters. Footnote 2: commonly used Ethiopian term for areas of altitude between 1,800 and 2,400 meters.)
Non-parametric analysis
Net crop revenue analysis was used to provide descriptive evidence of enterprise profitability through the following steps:
GFB = OPH × AVP (2)
where GFB is gross field benefits, OPH is output harvested, and AVP is the average selling price. Based on the GFB value calculated in equation (2), net crop revenue was calculated as:
NR = GFB − TVC (3)
where NR is net returns and TVC is total variable cost. Finally, from NR, the return to factors used in the production of potatoes was calculated using the return to variable cost (RVC) as follows:
RVC = NR / TVC (4)
Parametric method
Crop production in general in the study area, and potato production in particular, is likely to be affected by random weather events and pest infestation. Additionally, measurement errors are likely to be high. Thus, given the inherent stochastic nature of crop production (Coelli et al., 2005), the stochastic frontier production function approach appears to be an appropriate method for estimating technical efficiency in potato production in Dinsho District. However, the difficulty of specifying in advance an appropriate functional form for the data at hand is one shortcoming of the stochastic frontier model. Among stochastic frontier models, the two most widely utilized functional forms are the Cobb-Douglas and translog production functions. Both functional forms have their own strengths and shortcomings (Haileselassie, 2005). Therefore, a generalized likelihood ratio test was used to determine the appropriate functional form to fit the data used in the present study. The generalized log-likelihood ratio (LR) was calculated based on the hypothesis that all interaction terms, including the square specifications (in the translog functional form), were zero:
LR = −2[ln L(H0) − ln L(H1)] (5)
where LR is the generalized log-likelihood ratio and ln L(H0) and ln L(H1) are the log-likelihood values of the restricted (Cobb-Douglas) and unrestricted (translog) specifications, respectively.
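The profitability calculations in equations (2)-(4) are simple enough to sketch directly. The figures below are hypothetical placeholders, not the study's survey data; only the formulas follow the text.

```python
def profitability(output_qt: float, price_per_qt: float, tvc: float):
    """Net crop revenue analysis, equations (2)-(4):
    gross field benefits, net returns, and return to variable cost."""
    gfb = output_qt * price_per_qt   # (2) GFB = OPH x AVP
    nr = gfb - tvc                   # (3) NR  = GFB - TVC
    rvc = nr / tvc                   # (4) RVC = NR / TVC
    return gfb, nr, rvc

# Hypothetical household: 66 quintals sold at ETB 294.28/quintal with a
# total variable cost of ETB 7,700 yields NR of about ETB 11,722 and
# RVC of about 1.52, close in spirit to the magnitudes the paper reports.
print(profitability(66, 294.28, 7700.0))
```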
Following Coelli et al. (2005), the farm's technology is represented by a stochastic production frontier as follows:
Yi = f(Xi; β) exp(εi) (6)
where Yi represents the output of potato for the i-th farmer in quintals/ha, f(X; β) is a suitable production function, Xi are the inputs used in the production of potato in units/ha, and β are the coefficients to be estimated. εi is a composite error term defined as:
εi = vi − ui (7)
where vi represents random errors assumed to be distributed i.i.d. N(0, σv²), capturing events beyond the control of farmers, and ui (ui ≥ 0, distributed N(μ, σu²) truncated at zero) captures technical inefficiency effects in the production of potato. According to Battese and Coelli (1995), the influence of the inefficiency component can be measured by:
γ = σu² / σs² = σu² / (σu² + σv²) (8)
where γ is the parameter that measures the discrepancy between frontier and observed levels of output and is interpreted as the share of total variation in output from the frontier attributable to technical inefficiency; it takes a value between zero and one. σu² is the variance parameter that denotes deviation from the frontier due to inefficiency; σv² is the variance parameter that denotes deviation from the frontier due to noise; and σs² = σu² + σv² is the variance parameter that denotes the total deviation from the frontier. The empirical model of the Cobb-Douglas production function for potato production in its logarithmic form is specified as follows:
ln yi = β0 + β1 ln X1 + β2 ln X2 + β3 ln X3 + β4 ln X4 + β5 ln X5 + β6 ln X6 + vi − ui (9)
where y is the total output of potato obtained during the survey period in quintals, ln is the natural logarithm, X1 (Area) is the total area of land in hectares allocated to the potato crop by the i-th farmer, X2 (Oxen power) is the total number of oxen-days used by the i-th farmer (one oxen-day is equivalent to plowing with a pair of oxen for 8 hours), X3 (Amount of seed) is the amount of seed used in kg, X4 (Amount of NPS fertilizer used) is the amount of NPS chemical fertilizer used in kg (NPS is a new fertilizer released to the area and used instead of DAP), X5 (Amount of urea used) is the amount of urea chemical fertilizer used in kg, X6 (Labour) is the total amount of labour in man-day equivalents, and β1, ..., β6 are parameters to be estimated. The inefficiency model based on Battese and Coelli (1995) was specified as follows:
ui = g(Zi; δ) (10)
where ui is the technical inefficiency error term, δ is a vector of coefficients to be estimated, and Zi is a vector of explanatory variables defined in the next section. Given the specification of the stochastic frontier production function defined above, the technical efficiency of the i-th farmer is:
TEi = exp(−ui) (11)
The ML estimates of the technical efficiency effects of the model were obtained using the software package FRONTIER VERSION 4.1 (Coelli, 1996), specifically designed for the estimation of efficiency.
Definition of efficiency variables and hypotheses
Based on previous studies and the socio-economic conditions of the study area, the following factors were expected to determine technical efficiency differences among farmers.
Age: the age of the household head in years, which is hypothesized to reflect the experience of the farmer in farming. The finding of Jwanya et al. (2014) showed that the farmer's experience in farming is a significant factor differentiating the technical efficiency of farmers. However, as the farmer gets older, his managerial ability is expected to decrease. To capture the diminishing effect of age on efficiency, a quadratic functional form is specified in the inefficiency effects model. Hence, age and age squared were hypothesized to have positive and negative effects on the technical efficiency of potato production, respectively.
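The authors estimate the frontier with FRONTIER 4.1, which is not shown here. As a loose illustration of the same family of models, the sketch below fits a Cobb-Douglas frontier by maximum likelihood under the simpler half-normal inefficiency assumption (Aigner, Lovell and Schmidt, 1977) and recovers TEi = exp(−E[ui | εi]) via the Jondrow et al. (1982) estimator. The data are simulated, and the one-stage Battese-Coelli inefficiency-effects regression of equation (10) is deliberately omitted.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_loglik(params, lny, lnX):
    """Negative log-likelihood of a Cobb-Douglas frontier with noise
    v ~ N(0, sigma_v^2) and half-normal inefficiency u >= 0."""
    k = lnX.shape[1]
    beta = params[:k]
    sigma = np.exp(params[k])          # sigma^2 = sigma_u^2 + sigma_v^2 > 0
    lam = np.exp(params[k + 1])        # lambda = sigma_u / sigma_v > 0
    eps = lny - lnX @ beta             # composite residual eps = v - u
    ll = (np.log(2.0) - np.log(sigma)
          + norm.logpdf(eps / sigma)
          + norm.logcdf(-eps * lam / sigma))
    return -np.sum(ll)

def technical_efficiency(eps, sigma, lam):
    """TE_i = exp(-E[u_i | eps_i]) following Jondrow et al. (1982)."""
    sigma_u2 = sigma**2 * lam**2 / (1.0 + lam**2)
    sigma_v2 = sigma**2 / (1.0 + lam**2)
    sig_star = np.sqrt(sigma_u2 * sigma_v2) / sigma
    mu_star = -eps * sigma_u2 / sigma**2
    z = mu_star / sig_star
    return np.exp(-(mu_star + sig_star * norm.pdf(z) / norm.cdf(z)))

# Simulated illustration (not the survey data): 147 farms, 4 log-inputs.
rng = np.random.default_rng(42)
n = 147
lnX = np.column_stack([np.ones(n)] + [np.log(rng.uniform(1, 10, n)) for _ in range(4)])
lny = (lnX @ np.array([1.0, 0.4, 0.2, 0.2, 0.1])
       + rng.normal(0, 0.1, n) - np.abs(rng.normal(0, 0.3, n)))

res = minimize(neg_loglik, np.zeros(lnX.shape[1] + 2), args=(lny, lnX), method="BFGS")
beta_hat = res.x[:lnX.shape[1]]
sigma_hat, lam_hat = np.exp(res.x[-2]), np.exp(res.x[-1])
gamma = lam_hat**2 / (1.0 + lam_hat**2)          # discrepancy ratio, eq. (8)
te = technical_efficiency(lny - lnX @ beta_hat, sigma_hat, lam_hat)
print(f"gamma = {gamma:.2f}, mean TE = {te.mean():.2f}")
```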
Education: Formal education, commonly measured in years of schooling of the farmer, has received most of the attention in the frontier efficiency literature. From the empirical studies reviewed, education is one of the most recognized factors determining the efficiency level of farmers in many areas of the world. In this study, education measured in years of schooling was hypothesized to determine TE positively. The results of different researchers in different areas confirm this hypothesis (Dolisca and Jolly, 2008; Bonabana-Wabbi et al., 2012; Jwanya et al., 2014).
Land ownership: This is a dummy variable taking a value of 1 if the household head was cultivating owned and/or hired land and 0 if it was sharecropped land. Land ownership is one of the variables considered in performance evaluation. Farmers may tend to be more efficient in managing lands that are owned or hired than sharecropped lands, because they tend to give priority to their own land in all aspects. They may do so because outputs obtained from sharecropped lands are eventually shared between the owner and the operating farmer. Therefore, farmers who were managing either their own land or hired land were expected to be more efficient than those managing sharecropped land.
Farm size: Measured in terms of landholding size in hectares, farm size was expected to determine the efficiency differential of farmers in the study area. As farmers holding large farm sizes have the capacity to use compatible technologies that could increase their efficiency, farmers holding relatively large farm sizes in the study area were expected to be more efficient.
Extension contact: This is the frequency of contact between extension workers and the potato producer. It influences the growth of agriculture by assisting the dissemination of new technologies to farmers as a way of increasing agricultural productivity. Therefore, farmers who had more extension contact were expected to be more efficient than others. Abdullah et al. (2006) found that extension contact was a significant variable influencing the efficiency level of producers in their study area.
Household size: This measures the size of households in terms of adult equivalents. In rural areas, household members are an important source of the labour supply used in crop production. In addition, a farmer who has a large household would manage crop plots on time. Thus, household size was hypothesized to determine efficiency level positively.
Sex: This is a dummy variable taking a value of 1 if the household head is male and 0 otherwise. Bonabana-Wabbi et al. (2012) concluded that the sex of the household head is an important determinant of efficiency, with females being more efficient than males. However, according to Abebaw (2003) and Abonesh (2006), male-headed households are in a better position to mobilize a labor force than female-headed ones, indicating higher male efficiency. Thus, in this study, the sign of the effect of the sex of the household head on efficiency was indeterminate a priori.
Fragmentation: Fragmented lands are difficult for effective crop management. A farmer having more plots is expected to lose time moving between plots. Farmers who have a large number of plots in the same place would be expected to be more efficient than those owning fragmented plots, because fragmentation would make it difficult to perform farming activities on time and effectively. Therefore, fragmentation, measured in number of plots, was hypothesized to determine efficiency negatively. Fekadu (2004) obtained the same result.
Livestock: This refers to the total number of livestock owned by the farm household, measured in tropical livestock units (TLU). Livestock supplements the production of crops in various ways. The income obtained from livestock serves to invest in crop production, especially to purchase inputs. Livestock manure can also be used to improve soil fertility, and livestock are the main source of animal labour in crop production. Thus, livestock was hypothesized to determine efficiency positively. In line with this hypothesis, Temesgen and Ayalneh (2005) obtained a similar result.
Irrigation: This is a dummy variable referring to farmers' access to an irrigation scheme used to increase the production of potato in the study area. Farmers using irrigation are expected to be more efficient than those producing without irrigation. Thus, it was hypothesized to affect the efficiency level of farmers positively. Huynh and Yabe (2011) confirmed this hypothesis.
Credit use: This refers to the amount of money borrowed from different credit sources. Credit used for the purchase of agricultural inputs like improved seed, chemical fertilizers, etc. is expected to improve the efficiency level of farmers. Consequently, households getting the amount of credit they required were expected to be more efficient than others. Dolisca and Jolly (2008) reported that the amount of credit received is positively related to efficiency. Thus, following this finding, the amount of credit received was hypothesized to be positively related to efficiency.
Income from off/non-farm activities: This refers to the sum total of earnings generated in the survey year from activities outside farming, like retail trading business, casual work on a wage basis, etc. When income earned from crop production and sales of livestock and livestock products is inadequate, households often look for income sources other than agriculture to finance their farming activities. Consequently, income earned from such activities enables households to increase their efficiency level. Jwanya et al. (2014) reported that households earning higher off/non-farm income were more efficient. Therefore, in this study, in line with this finding, households earning higher off/non-farm income were expected to be more efficient.
Enterprise cost analysis
The summary of the total variable cost of potato production, consisting of the cost of labor (both hired and family labor), fertilizer, chemicals, seeds and oxen labor, is presented in Table 2. Opportunity costs were used to value the out-of-pocket expenses of some inputs. According to the results, the costs of seed, oxen, labor and fertilizers were the most important inputs contributing to the total variable cost of potato production. In contrast, the share of chemicals in the total cost of production was low. This was attributed to the fact that the major activities in the production of potato, including land preparation, weeding and harvesting, were undertaken by utilizing either more labor force or oxen labor, or both. Application of herbicide and pesticide was low, and when weeding was necessary, it was mostly done by hand.
Profitability assessment
Results presented in Table 3 show that the net return farmers obtained from the production of potato was about ETB 11,740.9 per year, which implies that potato producers were making a profit at the average price. The return to variable cost was about ETB 1.51 per year, which implies that for each Birr invested in variable inputs used in the production of potato, the return would be about ETB 1.5 per year.
Seasonal effect
On average, the potato price was ETB 294.28/quintal. The peak potato-harvesting season in the district occurs in October and December. Price analysis revealed a wide seasonal variation in potato prices between harvest and off-peak periods; a price margin of about ETB 500/quintal was observed. As expected, prices were highest during the off-peak periods and dropped during the peak harvesting periods. Potato prices varied from a low of ETB 100/quintal to ETB 650/quintal, corresponding to the peak harvest period and the off-peak season, respectively (Figure 1). In addition, there was also wide variation in the gross income and net income earned by surveyed households across seasons. According to results presented in Table 4, gross incomes and net returns were highest during the off-peak seasons and lowest at harvesting. These results highlight the importance of delaying harvesting. In this regard, some farmers in the study area can delay the potato harvesting season by leaving potato products underground and planting other short-duration crops on top for a given period.
Tests of hypotheses
In the first case, the functional form that better fit the data at hand was tested using the likelihood ratio (LR). Results presented in Table 5 show that the computed LR value was 20.74, which is lower than the upper 5% critical value of the χ² at 15 d.f. (the number of interaction terms and square specifications in the translog restricted to be zero in estimating the Cobb-Douglas functional form). This shows that the coefficients of the interaction terms and the square specifications of the input variables under the translog specification are not different from zero. As a result, the Cobb-Douglas functional form specified in the methodology was taken as the best fit for the data. In the second case, the existence of the inefficiency component of the total error term of the stochastic frontier specification (γ = 0 or γ > 0) was tested using the LR statistic. The high LR value revealed the existence of an inefficiency, or one-sided, error component in the model. According to the results presented in Table 5, the null hypothesis stating that all coefficients of the inefficiency effects model are simultaneously equal to zero was rejected in favor of the alternative hypothesis that all explanatory variables associated with the inefficiency effects model are simultaneously different from zero. The discrepancy ratio (γ) calculated from the maximum likelihood estimation of the full frontier model was 0.940. The results indicate that 94 percent of the variability in potato output in the study area in the survey year was due to the technical inefficiency effect, while the remaining 6 percent of the variation in output was due to random noise.
Parameter estimates of the SPF model
In the estimation of the Cobb-Douglas production frontier, a one-stage estimation procedure was utilized in which both the determinants of the production frontier and the inefficiency effects were included in the model. In this estimation process, two variables, urea and irrigation, had been hypothesized as determinants of the production frontier and the inefficiency effects, respectively. However, these variables were dropped from the model because they were not used in the potato production under analysis: farmers in the study area did not use urea in their potato production, and irrigation was used for crops other than potato. Results presented in Table 6 show that area of the plot, seed, NPS fertilizer and labor were positive and significant input variables affecting potato production in the area.
Estimation of farm-level technical efficiency
Given the functional form used, the results presented in Table 7 show that the mean efficiency level of the sampled farmers was 89%. This value shows that, on average, farmers can increase their current output level by 11% without increasing the existing levels of inputs. Conversely, farmers on average could decrease inputs (area, NPS fertilizer, and seed) by 11% and still obtain the output they are currently getting if they used inputs efficiently. Moreover, according to results presented in Table 8, there was significant variation in efficiency level among the sampled farmers in the study area. However, given this variation, most of the surveyed households achieved an efficiency level greater than the mean. This indicates that, in the long run, there is a need to introduce new technology besides improving the current efficiency levels of the farmers in order to increase the output level of potato in the study area.
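The two LR tests above both follow equation (5) and a χ² comparison. The snippet below reproduces the functional-form test mechanically; the two log-likelihood inputs are placeholders chosen only so that the statistic matches the reported 20.74, since the paper's actual log-likelihood values are not given in the text.

```python
from scipy.stats import chi2

def lr_statistic(loglik_h0: float, loglik_h1: float) -> float:
    """Generalized likelihood-ratio statistic, eq. (5): -2[lnL(H0) - lnL(H1)]."""
    return -2.0 * (loglik_h0 - loglik_h1)

# Placeholder (hypothetical) log-likelihoods that reproduce LR = 20.74.
lr = lr_statistic(-95.20, -84.83)
critical = chi2.ppf(0.95, df=15)   # upper 5% critical value at 15 d.f. (~25.0)
print(f"LR = {lr:.2f}, chi2(0.95, 15) = {critical:.2f}, reject H0: {lr > critical}")
# LR = 20.74 < 25.00, so the Cobb-Douglas restriction is not rejected.
```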
Determinants of technical efficiency
A one-stage estimation technique was used in this study. The results of the estimation are presented in Table 9. In the next section, the effects of the significant inefficiency variables on the technical efficiency of the farmers in the study area are discussed by decomposing them into three major groups. (The distribution of the sampled farmers by technical efficiency level, from Table 8, was: 0.5-0.6, 1.36%; 0.6-0.7, 6.12%; 0.7-0.8, 7.48%; 0.8-0.9, 22.45%; 0.9-1.0, 62.59%.)
Demographic factors
Age of the household head: This variable was found to be significant in explaining the variation in technical efficiency among the farmers considered. This indicates that older age positively affects technical efficiency in potato production, likely because older farmers tend to be more experienced in various timing-related aspects of farm management, until they reach a certain age (Fekadu, 2004; Kinde, 2005; Getachew and Bamlak, 2011); farm management practices improve over the years as farmers become more experienced. Moreover, farmers may accumulate good command of resources such as labor, oxen and farm tools, thus enhancing production efficiency: more farm resources, faster input application in crop production and improved farm efficiency (Getachew and Bamlak, 2011).
Education: Statistically, the educational level of the household head significantly affects the farmer's efficiency level. That is, farmers with more years of schooling were found to be more technically efficient than their counterparts, the reason being that educated farmers may have relatively adequate knowledge to apply improved methods to agricultural activities and, consequently, be more technically efficient. This result agrees with the empirical findings of different studies (Getachew and Bamlak, 2011; Huynh and Yabe, 2011).
Household size: Contrary to our expectation, the results showed that a larger household size negatively affects efficiency in potato production (coefficient = 0.270, p ≤ 0.05). This result is consistent with the findings of Ani et al. (2013) and Fekadu (2004).
Land ownership: The result shows that ownership is positively significant in determining the efficiency level of farmers in producing potato (coefficient = -3.833, p ≤ 0.05). That is, farmers are more efficient in managing their own land or hired land than farmers who manage sharecropped land, because farmers tend to prioritize their own land in all aspects. Fekadu (2004) found similar results in his empirical study.
Fragmentation: Contrary to expectation, the number of plots positively affected the technical efficiency level of the farmers in the study area. Farmers who have a large number of plots in different areas were more efficient than farmers who had a large number of plots in the same area. This is because farmers who were cultivating their crops on different plots are not equally exposed to natural hazards such as frosts, which are the most common threats to crops in the area. In other words, fragmentation is one strategy that farmers have to avert hazards to crops. This has an important policy implication, in that increasing the number of plots would improve the efficiency levels of farmers. The result of this study agrees with those of Kinde (2005) and Getachew and Bamlak (2011). The authors emphasized that farmers may benefit from fragmented plots, since different plots, when strategically distributed, may reduce the risks that weather variation poses to crops.
Livestock: Livestock supplements the production of crops in various ways. For example, the income obtained from selling livestock can be invested in crop production, especially to purchase fertilizer. Livestock manure can also be used to improve soil fertility, and livestock are the main source of animal labor in crop production. Consistent with this, the results showed that farmers who held more livestock in TLU than their counterparts were more efficient (coefficient = 0.205, p ≤ 0.1). Our result contradicts Fekadu (2004), who reasoned that farmers holding more livestock may give their attention to livestock production and hence may not be as efficient in crop production. However, in the study area, where off/non-farm activities are meager and the use of credit was limited, livestock are an important additional source of income for farmers and help in accessing production inputs.
Institutional factors
Extension contact: Farmers with a greater number of extension contacts were found to be more efficient than others. This implies that policies should include greater intervention by extension workers as an important tool to promote more efficient technical support to farmers in the study area. Fekadu (2004), Haileselassie (2005) and Getachew and Bamlak (2011) found similar results, emphasizing the paramount importance of increasing the frequency of development agent visits to improve technical efficiency levels among farmers.
CONCLUSION AND RECOMMENDATIONS
Apart from difficulties in accurately measuring efficiency levels based on farmers' responses, the findings of this study revealed considerable variability in the technical efficiency of farmers in the production of potato in the study area. Therefore, to improve the technical efficiency levels of farmers in the study area, some measures should be considered. First, sharing the experience of older farmers with those of different age groups could improve the level of efficiency at all levels, especially among the young; extension programs can intervene by arranging ways for this experience sharing. Simultaneously, there should be intervention by governmental and non-governmental organizations to help older farmers by designing farm implements that are labor saving and can easily be handled. Financial constraints could be overcome by micro-finance institutions and agricultural cooperatives establishing and strengthening the saving and credit practices of households. The creation of off/non-farm job opportunities should also be emphasized, because these could substitute for credit as a source of funds for the farmer and consequently would improve the efficiency of farmers. More training should be provided to extension agents to improve their technical capacity to help farmers, especially training tailored to potato producers' conditions. In addition to strengthening the existing extension service provided to farmers, efforts should be made to provide long-term training to farmers. Livestock provide plough power and additional income to households, which can be converted into inputs to increase farm production; consequently, livestock development packages must be introduced and promoted to increase their production and productivity. Fertilizer was an important determinant of potato production, as revealed by the SPF, so there should be a timely supply of fertilizer at a reasonable price to improve the efficiency of farmers in the production of potato and other crops. Therefore, the attention of policy makers to improve agricultural production should not revolve
solely around the introduction and dissemination of new technology to increase yield; more attention should also be given to improving the existing level of efficiency.
Figure 11. Seasonal price variation of potato in the study area.
Estimates of the extent of efficiency also help in deciding whether to improve efficiency or to develop new technology to raise farm productivity. Consequently, this study was undertaken in Dinsho District of Bale Zone of Ethiopia to assess the profitability and technical efficiency of potato production by: 1. measuring the existing level of technical efficiency in the production of potato in Dinsho District; 2. identifying the determinants of technical efficiency of potato production in the study area; and 3. determining the profitability of potato production in the study area.
Table 1. Distribution of sampled kebeles and households. Table 2. Enterprise cost analysis. Table 3. Gross margin analysis of potato production. Table 4. Gross income analysis across seasons. Table 5. Generalized likelihood-ratio test of hypotheses for parameters of the SPF. Table 6. Maximum-likelihood estimates of the SPF model for potato. Table 7. Estimated technical efficiencies of the sampled farmers. Table 8. Distribution of the sampled farmers by technical efficiency levels. Table 9. Maximum-likelihood estimates of the inefficiency variables.
2018-12-05T23:09:20.195Z
2018-07-31T00:00:00.000
{ "year": 2018, "sha1": "c805fbc8ceb7d77cdf9e8f2c2824dc9278e922b6", "oa_license": null, "oa_url": "https://academicjournals.org/journal/JDAE/article-full-text-pdf/B5C5F9457751.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "c805fbc8ceb7d77cdf9e8f2c2824dc9278e922b6", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Economics" ] }
233852601
pes2o/s2orc
v3-fos-license
Design method of measurement errors of MEMS-IMU in attitude capture clothing
Attitude capture clothing is widely used in virtual anchors, virtual fitting and other fields, and the MEMS-IMU is the key component for realizing highly cost-effective attitude capture clothing. However, a precision turntable is needed to estimate a gyroscope's static bias, and the measurement errors of a MEMS-IMU are difficult to obtain. To address this problem, both the attitude and the gyroscope's static bias are estimated in the field through a Kalman filter, and a design method for the measurement errors is proposed to improve attitude accuracy. In the proposed method, a mathematical model of the measurement errors is given, and the optimal values of the measurement errors are obtained by analyzing the relationship between attitude accuracy and measurement errors. A MEMS-IMU (the MPU9250, with a size of 3 mm × 3 mm × 1 mm) is used to verify the feasibility of the proposed method. Experimental results show that pitch, roll and yaw deviate from the reference values by 0.008°, 0.006° and 0.6°, respectively, when the MEMS-IMU moves at a constant speed along an arbitrary trajectory, which is significant for promoting the application of MEMS-IMUs in attitude capture clothing.
Introduction
Attitude capture clothing is widely used in virtual anchors, virtual fitting and other fields. A solution for attitude capture clothing is given in Figure 1, including the clothing, a control unit, 15 micro-electromechanical systems inertial measurement units (MEMS-IMUs) and several conductors. The control unit includes a Bluetooth module, a micro-controller unit and a battery, in which the Bluetooth module transmits the attitude to a remote server. Based on our experience, the proposed solution for attitude capture clothing is much more cost-effective compared with camera-based methods [1]. Unfortunately, MEMS-IMU errors, including the gyroscope's static bias and measurement errors, result in large errors in the user's attitude. Many researchers estimate the gyroscope's static bias by multi-position calibration [2-4]; however, the rotation rate of the Earth is about 15°/h (roughly 0.004°/s), which is much less than the gyroscope's noise, so a precision turntable is needed to calibrate the gyroscope's static bias. Moreover, measurement errors can be obtained based on the Allan Variance (AV) or the Generalized Method of Wavelet Moments (GMWM) [5]. Owing to their complexity and large amount of computation, both the AV and the GMWM are hard to apply on mobile terminals. Usually, the user's pitch and roll are first estimated using a MEMS-IMU, and then the user's yaw is estimated using a magnetometer. However, the measurement results of a magnetometer are easily affected by magnetic materials. In order to avoid using a turntable and a magnetometer, the user's pitch and roll and the gyroscope's static bias are estimated by the MEMS-IMU, and the user's yaw is then calculated from the gyroscope's measurements with the static bias removed. The MEMS-IMU's measurement errors are crucial in deciding the attitude's accuracy; however, existing works directly provide experimental results or empirical values, which leaves the design of measurement errors without theoretical and experimental guidance. To address this problem, a design method for the measurement errors is proposed, in which a mathematical model of the measurement errors is given and the optimal values of the measurement errors are obtained by analyzing the relationship between attitude accuracy and measurement errors.
Figure 2 shows a block diagram of virtual fitting, in which the MEMS-IMU is used to detect the user's attitude in 3D space. The user's pose can then be displayed synchronously in the virtual fitting system through a 2D-3D alignment model. Attitude accuracy is therefore important for keeping the user's pose consistent with the avatar and thus improving the clothes simulation. Figure 3 shows the navigation frame, and the user's frame on the right of Figure 3 is the body frame; the direction cosine matrix C_b^n from the body frame to the navigation frame is used to update the avatar's pose [6]. The attitude update, sketched in Figure 4, is divided into three steps: the state vector X_i is propagated to X_{i+1} using the gyroscope output at time i+1; the accelerometer output a_{i+1} provides the measurement update, whose measurement error is M_g, where k is the impact factor describing the influence of a_{i+1} on M_g; finally, the user's attitude, i.e. the yaw ψ, pitch θ and roll φ at time i+1, is updated, with ψ_1, θ_1 and φ_1 the initial yaw, pitch and roll. The equations in Figure 4 are explained in detail in reference [7]. Design method The MEMS-IMU is kept level and still for a period of time, and the outputs of the gyroscope and accelerometer are sampled over this interval. Taking k as the independent variable and the difference between the estimated attitude and the reference attitude as the dependent variable, the optimal k is obtained when the dependent variable reaches its minimum. In this way all of the measurement errors can be estimated with the proposed method, and the estimated errors are corrected within the iterative process of the Kalman filter. Experimental results A MEMS-IMU (MPU9250, 3 mm × 3 mm × 1 mm) containing a 3-axis gyroscope, a 3-axis accelerometer and a 3-axis magnetometer is adopted to verify the proposed method. In this experiment the MEMS-IMU moved slowly at a constant velocity along an arbitrary trajectory, so the attitude calculated from the accelerometer and the magnetometer can be regarded as the reference attitude; it is used to evaluate the attitude calculated from the gyroscope and the accelerometer through the Kalman filter. The sampling frequency and g are set to 100 Hz and 9.8 m/s². The MEMS-IMU is first kept level and still for 1 minute, then moved slowly at constant velocity along an arbitrary trajectory for 3 minutes, and finally kept level and still again for 1 minute. The factor k is swept as the independent variable by stepping an index j from 0 to 40 (j = 5 corresponds to k = 0.4). The simulated relationship between the attitude errors and k is shown in Figure 5: the yaw error reaches its minimum of 0.1° when j equals 2 and 5, the minimum pitch error is -9.3e-4° when j equals 1, and the minimum roll error is 0.0024° when j equals 17. Since the variation ranges of pitch and roll are much smaller than that of yaw, j is finally set to 5, i.e. k = 0.4. Using the estimated measurement errors, the pitch, roll and yaw over the whole 5 minutes are shown in Figure 6, together with the reference values used to evaluate them. The estimated attitude is highly consistent with the reference attitude. In addition, during the final level-and-still period (sampling points 25000 to 30000), the pitch, roll and yaw deviate from the reference values by 0.008°, 0.006° and 0.6°, respectively, which demonstrates the feasibility of the proposed method. 
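To make the selection procedure concrete, the sketch below (Python) sweeps candidate values of k and keeps the one that minimises the deviation from the reference attitude. The routine estimate_attitude, the reference attitude and the step size k_step = 0.08 (chosen only so that j = 5 maps to k = 0.4, as reported above) are assumptions for illustration; the paper does not specify them.

```python
import numpy as np

def attitude_error(estimated, reference):
    """Mean absolute yaw/pitch/roll deviation (degrees) over a run."""
    return np.mean(np.abs(np.asarray(estimated) - np.asarray(reference)), axis=0)

def select_k(imu_data, reference_attitude, estimate_attitude,
             j_max=40, k_step=0.08):
    """Sweep k = j * k_step for j = 0..j_max and keep the value that
    minimises the total deviation from the reference attitude."""
    best_k, best_err = 0.0, np.inf
    for j in range(j_max + 1):
        k = j * k_step                        # candidate impact factor
        est = estimate_attitude(imu_data, k)  # hypothetical Kalman-filter run
        err = float(np.sum(attitude_error(est, reference_attitude)))
        if err < best_err:
            best_k, best_err = k, err
    return best_k, best_err
```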
Conclusions A design method for the measurement errors of a MEMS-IMU is proposed in this paper in order to improve the accuracy of the user's attitude. In the proposed method, a mathematical model of the measurement errors is given, and their optimal values are obtained. The theoretical analysis and experimental results presented in the paper demonstrate the feasibility of the proposed method.
2021-05-07T00:03:49.801Z
2021-03-01T00:00:00.000
{ "year": 2021, "sha1": "06a4e8795b91105bd536634ebf7e00c8fd2f8ada", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1827/1/012203", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "9ac633fa491344125863a69e897b4e5909010ce0", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Physics", "Computer Science" ] }
174802397
pes2o/s2orc
v3-fos-license
Robust Neural Machine Translation with Doubly Adversarial Inputs Neural machine translation (NMT) often suffers from the vulnerability to noisy perturbations in the input. We propose an approach to improving the robustness of NMT models, which consists of two parts: (1) attack the translation model with adversarial source examples; (2) defend the translation model with adversarial target inputs to improve its robustness against the adversarial source inputs. For the generation of adversarial inputs, we propose a gradient-based method to craft adversarial examples informed by the translation loss over the clean inputs. Experimental results on Chinese-English and English-German translation tasks demonstrate that our approach achieves significant improvements (2.8 and 1.6 BLEU points) over Transformer on standard clean benchmarks as well as exhibiting higher robustness on noisy data. Introduction In recent years, neural machine translation (NMT) has achieved tremendous success in advancing the quality of machine translation (Hieber et al., 2017). As an end-to-end sequence learning framework, NMT consists of two important components, the encoder and decoder, which are usually built on similar neural networks of different types, such as recurrent neural networks (Bahdanau et al., 2015), convolutional neural networks (Gehring et al., 2017), and more recently transformer networks (Vaswani et al., 2017). To overcome the bottleneck of encoding the entire input sentence into a single vector, an attention mechanism was introduced, which further enhanced translation performance (Bahdanau et al., 2015). Deeper neural networks with increased model capacities in NMT have also been explored and have shown promising results.
Table 1: An example of a Transformer NMT translation for an input and for its perturbed version obtained by replacing "他 (he)" with "她 (she)".
Input: 他(她)一个残疾人,我女儿身体好好地。
Original output: he is a handicapped person, my daughter is in good health.
Perturbed output: one of her handicapped people, my daughter is in good health. (incorrect)
Despite these successes, NMT models are still vulnerable to perturbations in the input sentences. For example, Belinkov and Bisk (2018) found that NMT models can be immensely brittle to small perturbations applied to the inputs. Even if these perturbations are not strong enough to alter the meaning of an input sentence, they can nevertheless result in different and often incorrect translations. Consider the example in Table 1: the Transformer model generates a worse translation (revealing gender bias) for a minor change in the input from "he" to "she". Perturbations originate from two sources: (a) natural noise in the annotation and (b) artificial deviations generated by attack models. In this paper, we do not distinguish the source of a perturbation and term perturbed examples as adversarial examples. The presence of such adversarial examples can lead to significant degradation of the generalization performance of the NMT model. A few studies in other natural language processing (NLP) tasks have aimed to tackle this issue in classification settings, e.g. (Miyato et al., 2017; Alzantot et al., 2018; Ebrahimi et al., 2018b; Zhao et al., 2018). As for NMT, previous approaches relied on prior knowledge to generate adversarial examples to improve robustness, neglecting specific downstream NMT models. For example, Belinkov and Bisk (2018) and Karpukhin et al. (2019) studied how to use synthetic and/or natural noise. Cheng et al. 
(2018) proposed adversarial stability training to improve robustness to arbitrary noise types, including feature-level and word-level noise. A further study examined homophonic noise for Chinese translation. This paper studies how to learn a robust NMT model that is able to overcome small perturbations in the input sentences. Different from prior work, our work deals with perturbed examples jointly generated by a white-box NMT model, meaning that we have access to the parameters of the attacked model. To the best of our knowledge, the only previous work on this topic is from (Ebrahimi et al., 2018a) on character-level NMT. Overcoming adversarial examples in NMT is a challenging problem, as the words in the input are represented as discrete variables, making them difficult to switch by imperceptible perturbations. Moreover, the characteristics of sequence generation in NMT further intensify this difficulty. To tackle this problem, we propose a gradient-based method, AdvGen, to construct adversarial examples guided by the final translation loss from the clean inputs of an NMT model. AdvGen is applied to both the encoding and decoding stages: (1) we attack an NMT model by generating adversarial source inputs that are sensitive to the training loss; (2) we then defend the NMT model with adversarial target inputs, aiming at reducing the prediction errors for the corresponding adversarial source inputs. Our contribution is threefold: 1. A white-box method to generate adversarial examples is explored for NMT. Our method is a gradient-based approach guided by the translation loss. 2. We propose a new approach to improving the robustness of NMT with doubly adversarial inputs. The adversarial inputs in the encoder aim at attacking the NMT model, while those in the decoder are capable of defending against errors in predictions. 3. Our approach achieves significant improvements over the previous state-of-the-art Transformer model on two common translation benchmarks. Experimental results on the standard Chinese-English and English-German translation benchmarks show that our approach yields an improvement of 2.8 and 1.6 BLEU points over state-of-the-art models including Transformer (Vaswani et al., 2017). This result substantiates that our model improves generalization performance over the clean benchmark datasets. Further experiments on noisy text verify the ability of our approach to improve robustness. We also conduct ablation studies to gain further insight into which parts of our approach matter most. Background Neural Machine Translation NMT is typically a neural network with an encoder-decoder architecture. It aims to maximize the likelihood of a parallel corpus S = {(x^(s), y^(s))}_{s=1}^{|S|}. Different variants derived from this architecture have been proposed recently (Bahdanau et al., 2015; Gehring et al., 2017; Vaswani et al., 2017). This paper focuses on the recent Transformer model (Vaswani et al., 2017) due to its superior performance, although our approach appears applicable to other models too. The encoder in NMT maps a source sentence x = x_1, ..., x_I to a sequence of I word embeddings e(x) = e(x_1), ..., e(x_I). The word embeddings are then encoded to their corresponding continuous hidden representations h by the transformation layers. Similarly, the decoder maps its target input sentence z = z_1, ..., z_J to a sequence of J word embeddings. For clarity, we denote the input and output of the decoder as z and y; z is a shifted copy of y in the standard NMT model, i.e. 
z = ⟨sos⟩, y_1, ..., y_{J-1}, where ⟨sos⟩ is a start symbol. Conditioned on the hidden representations h and the target input z, the decoder generates y as: P(y|x; θ_mt) = ∏_{j=1}^{J} P(y_j | z_{<j}, h; θ_mt), (1) where θ_mt is a set of model parameters and z_{<j} is a partial target input. The training loss on S is defined as: L_clean(θ_mt) = Σ_{(x,y)∈S} -log P(y|x; θ_mt). (2) Adversarial Examples Generation An adversarial example is usually constructed by corrupting the original input with a small perturbation such that the difference from the original input remains barely perceptible but dramatically distorts the model output. Adversarial examples can be generated by a white-box or a black-box model; the latter does not have access to the attacked model and often relies on prior knowledge, whereas white-box examples are generated using information from the attacked model. Formally, a set of adversarial examples Z(x, y) is generated with respect to a training sample (x, y) by solving an optimization problem of the form: Z(x, y) = {x' : argmax_{x'} J(x', y) subject to R(x', x) ≤ ε}, (3) where J(·) measures the possibility of a sample being adversarial, and R(x', x) captures the degree of imperceptibility of a perturbation. For example, in a classification task, J(·) is a function outputting the most probable target class y' (y' ≠ y) when fed with the adversarial example x'. Although it is difficult to give a precise definition of the degree of imperceptibility R(x', x), the l_∞ norm is usually used to bound the perturbations in image classification (Goodfellow et al., 2015). Approach Our goal is to learn robust NMT models that can overcome small perturbations in the input sentences. As opposed to images, where small perturbations to pixels are imperceptible, even a single word change in natural language can be perceived. NMT is a sequence generation model wherein each output word is conditioned on all previous predictions. Thus, one question is how to design meaningful perturbation operations for NMT. We propose a gradient-based approach, called AdvGen, to construct adversarial examples and use these examples both to attack and to defend the NMT model. Our intuition is that an ideal model would generate similar translation results for similar input sentences despite any small differences caused by perturbations. The attack and defense are carried out in the end-to-end training of the NMT model. We first use AdvGen to construct an adversarial example x' from the original input x to attack the NMT model. We then use AdvGen to find an adversarial target input z' from the decoder input z to improve the NMT model's robustness to adversarial perturbations in the source input x'. Thereby we hope the NMT model will be robust against both the source adversarial input x' and adversarial perturbations in the target predictions z'. The rest of this section discusses the attack and defense procedures in detail. Attack with Adversarial Source Inputs Following (Goodfellow et al., 2015; Miyato et al., 2017; Ebrahimi et al., 2018b), we study the white-box method to generate adversarial examples tightly guided by the training loss. Given a parallel sentence pair (x, y), according to Eq. (3), we generate a set of adversarial examples A(x, y) specific to the NMT model by: A(x, y) = {x' : argmax_{x'} -log P(y|x'; θ_mt) subject to R(x', x) ≤ ε}, (4) where we use the negative log translation probability -log P(y|x'; θ_mt) to estimate J(·) in Eq. (3). The formula constructs adversarial examples that are expected to distort the current prediction while retaining semantic similarity bounded by R. It is intractable to obtain an exact solution for Eq. (4). We therefore resort to a greedy approach based on the gradient to circumvent it. 
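For reference, the clean translation loss of Eqs. (1)-(2), on which the gradients used below are based, can be sketched as follows (Python/NumPy). The model callable and its output format are hypothetical; only the teacher-forcing bookkeeping is being illustrated.

```python
import numpy as np

def clean_translation_loss(model, x, y, sos_id=0):
    """-log P(y | x) under teacher forcing, following Eqs. (1)-(2).

    model(x, z) is assumed to return an array of shape (J, vocab) whose
    j-th row is P(. | z_<j, h); this interface is an assumption.
    """
    z = [sos_id] + list(y[:-1])   # shifted copy of y used as decoder input
    probs = model(x, z)           # per-position next-token distributions
    nll = 0.0
    for j, y_j in enumerate(y):   # sum_j -log P(y_j | z_<j, h)
        nll -= np.log(probs[j][y_j] + 1e-12)
    return nll
```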
For the original input x, we induce a possible adversarial word x'_i for the word x_i in x by: x'_i = argmax_{x ∈ V_{x_i}} sim(e(x) - e(x_i), g_{x_i}), (5) where g_{x_i} is a gradient vector with respect to e(x_i), computed from the translation loss as g_{x_i} = ∇_{e(x_i)} -log P(y|x; θ_mt), (6) V_x is the vocabulary of the source language, and sim(·, ·) denotes the similarity function computed as the cosine distance between two vectors. Enumerating all words in V_x in Eq. (5) would incur a formidable computational cost. We hence substitute V_x with a dynamic set V_{x_i} that is specific to each word x_i, defined as the n most probable words, i.e. the top n scores in terms of a likelihood Q(x_i, x), where n is a small constant integer and |V_{x_i}| ≪ |V_x|. For the source, we estimate this likelihood from a bidirectional language model: Q_src(x_i, x) = P_lm(x_i | x_{<i}, x_{>i}; θ^x_lm). (7) Here, P_lm is a bidirectional language model for the source language. The introduction of the language model has three benefits. First, it enables a computationally feasible approximation of Eq. (5). Second, the language model helps retain the semantic similarity between the original words and their adversarial counterparts, strengthening the constraint R in Eq. (4). Finally, it prevents word representations from degenerating, because replacements with adversarial words usually affect the context information around them. Algorithm 1 describes the function AdvGen for generating an adversarial sentence s' from an input sentence s. The function inputs are: Q, a likelihood function for candidate set generation, which for the source is Q_src from Eq. (7); and D_pos, a distribution over the word positions {1, .., |x|} from which the adversarial word positions are sampled, which for the source is the simple uniform distribution U. Following the constraint R, we want the output sentence not to deviate too much from the input sentence, and we therefore only change a small fraction of its constituent words, controlled by a hyper-parameter γ ∈ [0, 1]. Defense with Adversarial Target Inputs After generating an adversarial example x', we treat (x', y) as a new training data point to improve the model's robustness. These adversarial examples in the source tend to introduce errors which may accumulate and cause drastic changes to the decoder prediction. To defend the model against errors in the decoder predictions, we generate an adversarial target input with AdvGen, similar to what we discussed in Section 3.1. The decoder trained with the adversarial target input is expected to be more robust to the small perturbations introduced in the source input. The ablation study results in Table 8 substantiate the benefit of this defense mechanism. Formally, let z be the decoder input for the sentence pair (x, y). We use the same AdvGen function to generate an adversarial target input z' from z by: z' = AdvGen(z, Q_trg, D_trg, -log P(y|x'; θ_mt)). (8) Note that for the target, the translation loss in Eq. (6) is replaced by -log P(y|x'). Q_trg is the likelihood for selecting the target word candidate set V_z. To compute it, we combine the NMT model prediction with a language model P_lm(y; θ^y_lm) as follows: Q_trg(z_j, z) = λ P(z_j | z_{<j}, x'; θ_mt) + (1 - λ) P_lm(z_j | z; θ^y_lm), (9) where λ balances the importance of the two models. D_trg is a distribution for sampling positions in the target input. Different from the uniform distribution used for the source, in the target sentence we want to change those words influenced by the perturbed words in the source input. To do so, we use the attention matrix M learned in the NMT model, obtained on the current mini-batch, to compute a distribution over target positions given (x, y, x'): D_trg(j) ∝ Σ_i M_ij δ(x_i, x'_i), j ∈ {1, .., |y|}, (10) where M_ij is the attention score between x_i and y_j and δ(x_i, x'_i) is an indicator function that yields 1 if x_i ≠ x'_i and 0 otherwise. 
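A minimal sketch of the gradient-guided replacement of Eqs. (5)-(7) is shown below (Python/NumPy). The embedding table, the gradient and the bidirectional-LM scores are passed in as plain arrays and are placeholders; this is an illustration of the selection rule, not the authors' implementation.

```python
import numpy as np

def replace_word(i, x_ids, emb, grad_i, lm_probs_i, n=10):
    """Pick an adversarial replacement for position i.

    emb        : (vocab, d) word-embedding table
    grad_i     : gradient of the translation loss w.r.t. e(x_i)
    lm_probs_i : (vocab,) bidirectional-LM probabilities at position i,
                 used to build the dynamic candidate set V_{x_i} (Eq. (7))
    """
    x_i = x_ids[i]
    candidates = np.argsort(-lm_probs_i)[:n]   # top-n most likely words
    diffs = emb[candidates] - emb[x_i]         # e(x) - e(x_i)
    # cosine similarity with the loss gradient, Eq. (5)
    sims = diffs @ grad_i / (
        np.linalg.norm(diffs, axis=1) * np.linalg.norm(grad_i) + 1e-12)
    return int(candidates[int(np.argmax(sims))])
```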
Training Algorithm 2 details the entire procedure for calculating the robustness loss for a parallel sentence pair (x, y): set D_src to a uniform distribution; x' ← AdvGen(x, Q_src, D_src, -log P(y|x)); compute Q_trg as in Eq. (9); compute D_trg as in Eq. (10); z' ← AdvGen(z, Q_trg, D_trg, -log P(y|x')); loss ← -log P(y|x', z'; θ_mt); return loss. We run AdvGen twice to obtain x' and z'. We do not backpropagate gradients through AdvGen when updating parameters; it only plays the role of a data generator. In our implementation, this function incurs at most a 20% time overhead compared to the standard Transformer model. Accordingly, we compute the robustness loss on S as: L_robust(θ_mt) = Σ_{(x,y)∈S} -log P(y | x', z'; θ_mt). (11) The final training objective L is a combination of four loss functions: L(θ_mt, θ^x_lm, θ^y_lm) = L_clean(θ_mt) + L_robust(θ_mt) + L_lm(θ^x_lm) + L_lm(θ^y_lm), (12) where θ^x_lm and θ^y_lm are the two sets of model parameters for the source and target bidirectional language models, respectively. The word embeddings are shared between θ_mt and θ^x_lm, and likewise between θ_mt and θ^y_lm. Setup We conducted experiments on Chinese-English and English-German translation tasks. The Chinese-English training set is from the LDC corpus and comprises 1.2M sentence pairs. We used the NIST 2006 dataset as the validation set for model selection and hyper-parameter tuning, and NIST 2002, 2003, 2004, 2005 and 2008 as test sets. For the English-German translation task, we used the WMT'14 corpus consisting of 4.5M sentence pairs. The validation set is newstest2013, and the test set is newstest2014. In both translation tasks, we merged the source and target training sets and used byte pair encoding (BPE) (Sennrich et al., 2016c) to encode words through sub-word units. We built a shared vocabulary of 32K sub-words for English-German and created shared BPE codes with 60K operations for Chinese-English, which induce two vocabularies of 46K Chinese sub-words and 30K English sub-words. We report case-sensitive tokenized BLEU scores for English-German and case-insensitive tokenized BLEU scores for Chinese-English (Papineni et al., 2002). For a fair comparison, we did not average multiple checkpoints (Vaswani et al., 2017), and we only report results for a single converged model. We implemented our approach on top of the Transformer model (Vaswani et al., 2017). In AdvGen, we modified multiple positions in the source and target input sentences in parallel. The bidirectional language model used in AdvGen consists of left-to-right and right-to-left Transformer networks, a linear layer to combine the final representations of these two networks, and a softmax layer to make predictions. The Transformer network was built with six transformation layers, consistent with the encoder in the Transformer model. The hyper-parameters of the Transformer model were set to the default values described in (Vaswani et al., 2017). We denote the Transformer model with 512 hidden units as Trans.-Base and with 1024 hidden units as Trans.-Big. We tuned the hyper-parameters of our approach on the validation set via a grid search. Specifically, λ was set to 0.5. The n in top-n candidate selection was set to 10. The ratio pair (γ_src, γ_trg) was set to (0.25, 0.50), with the exception of Trans.-Base on English-German, where it was set to (0.15, 0.15). We treated each side of the parallel corpus as monolingual data to train the bidirectional language models, without introducing additional data. 
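Returning to the training procedure, the sketch below mirrors the per-sentence-pair robustness-loss computation of Algorithm 2 and Eq. (11) (Python). The advgen callable, the candidate/position distributions and the model interface are hypothetical stand-ins; only the control flow is illustrated.

```python
def robustness_loss(model, advgen, x, y, z, q_src, uniform, q_trg_fn, d_trg_fn):
    """-log P(y | x', z') for one sentence pair, in the spirit of Algorithm 2.

    advgen(s, Q, D, loss_fn) is assumed to return a perturbed copy of s;
    model.neg_log_prob(y, src=..., tgt_in=...) is a hypothetical accessor.
    """
    # Attack: perturb the source, guided by the clean translation loss
    x_adv = advgen(x, q_src, uniform,
                   lambda s: model.neg_log_prob(y, src=s, tgt_in=z))
    # Defend: perturb the decoder input, guided by the loss on x_adv
    q_trg = q_trg_fn(x_adv, z)   # Eq. (9): NMT prediction mixed with an LM
    d_trg = d_trg_fn(x, x_adv)   # Eq. (10): attention-weighted positions
    z_adv = advgen(z, q_trg, d_trg,
                   lambda t: model.neg_log_prob(y, src=x_adv, tgt_in=t))
    # Robustness loss on the doubly adversarial inputs, Eq. (11)
    return model.neg_log_prob(y, src=x_adv, tgt_in=z_adv)
```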
The model parameters in our approach were trained from scratch, except for the language-model parameters, which were initialized from models pre-trained on the respective side of the parallel corpus. The language-model parameters were still updated during robustness training. Table 3 shows the BLEU scores on the NIST Chinese-English translation task. We first compare our approach with the Transformer model (Vaswani et al., 2017) on which our model is built. As we see, adding our method to the standard backbone model (Trans.-Base) leads to substantial improvements across the validation and test sets. Specifically, our approach achieves an average gain of 2.25 BLEU points and up to 2.8 BLEU points on NIST03. Comparison to Baseline Methods To further verify our method, we compare it to recent techniques for robust NMT learning. For a fair comparison, we implemented all methods on the same Transformer backbone. Miyato et al. (2017) applied perturbations to word embeddings using adversarial learning in text classification tasks; we apply this method to the NMT model. Sennrich et al. (2016a) augmented the training data with word dropout; we follow their method and randomly set source word embeddings to zero with a probability of 0.1. This simple technique performs reasonably well on Chinese-English translation. Wang et al. (2018) introduced a data-augmentation method for NMT called SwitchOut that randomly replaces words in both source and target sentences with other words. Cheng et al. (2018) employed adversarial stability training to improve the robustness of NMT; we cite the numbers reported in their paper for the RNN-based NMT backbone and implemented their method on the Transformer backbone. We consider the two types of noisy perturbations in their method and use the subscripts lex. and fea. to denote them. Sennrich et al. (2016b) is a common data-augmentation method for NMT that back-translates monolingual data with an inverse translation model. We sampled 1.2M English sentences from the Xinhua portion of the GIGAWORD corpus as monolingual data, back-translated them with an English-Chinese NMT model, and re-trained the Chinese-English model on the back-translated data together with the original parallel data.
A qualitative example (clean vs. noisy input, where "密切" in the clean input is replaced by "紧密" in the noisy input):
Input & Noisy Input: 这体现了中俄两国和两国议会间密切(紧密)的友好合作关系。
Reference: this expressed the relationship of close friendship and cooperation between China and Russia and between our parliaments.
Vaswani et al. on Input: this reflects the close friendship and cooperation between China and Russia and between the parliaments of the two countries.
Vaswani et al. on Noisy Input: this reflects the close friendship and cooperation between the two countries and the two parliaments.
Ours on Input: this reflects the close relations of friendship and cooperation between China and Russia and between their parliaments.
Ours on Noisy Input: this embodied the close relations of friendship and cooperation between China and Russia and between their parliaments.
Table 7: BLEU scores computed using the zero-noise-fraction output as a reference.
Table 2 shows the comparisons to the above five baseline methods. Among all methods trained without extra corpora, our approach achieves the best result across datasets. After incorporating the back-translated corpus, our method yields an additional gain of 1-3 points over (Sennrich et al., 2016b) trained on the same back-translated corpus. 
Since all methods are built on top of the same backbone, this result substantiates the efficacy of our method on standard benchmarks that contain natural noise. Compared to (Miyato et al., 2017), we found that continuous gradient-based perturbations to word embeddings can be absorbed quickly, often resulting in a worse BLEU score than the proposed discrete perturbations by word replacement. Results on Noisy Data We have shown improvements on the standard clean benchmarks. This subsection validates the robustness of the NMT models against artificial noise. To this end, we added synthetic noise to the clean validation set by randomly replacing a word with a related word according to the similarity of their word embeddings. We repeated this process within each sentence according to a pre-defined noise fraction, where a noise level of 0.0 yields the original clean dataset while 1.0 yields an entirely altered set. For each sentence, we generated 100 noisy sentences, re-scored them with a pre-trained bidirectional language model, and picked the best one as the noisy input. Table 6 shows results on artificial noisy inputs; BLEU scores were computed against the ground-truth translations. As we see, our approach outperforms all baseline methods across all noise levels, and the improvement is generally more evident as the noise fraction becomes larger. To further analyze prediction stability, we compared the model outputs for clean and noisy inputs. To do so, we selected the output of a model on the clean input (noise fraction 0.0) as a reference and computed the BLEU score against this reference. Table 7 presents the results, where a value of 100 in the second column means that the output is exactly the same as the reference. The relative drop of our model as the noise level grows is smaller than that of the other baseline methods. The results in Table 6 and Table 7 together suggest that our model is more robust to input noise. In the qualitative example above, the clean and noisy inputs have essentially the same meaning, since "密切" and "紧密" both mean "close" in Chinese. Our model retains very important words such as "China and Russia", which are missing in the Transformer results. Ablation Studies Table 8 shows the importance of different components in our approach, which include L_clean, L_robust and L_lm. As for L_robust, we ablate the source adversarial input (i.e. setting x' = x) and the target adversarial input (i.e. setting z' = z). In the fourth row, with x' = x and z' = z, we randomly choose replacement positions in z, since without changes in x the distribution in Eq. (10) cannot be formed. We find that removing any component leads to a notable decrease in BLEU. Among these, removing the adversarial target input (z' = z) shows the greatest decrease of 1.87 BLEU points, while removing the language models has the least impact on the BLEU score. However, the language models are still important for reducing the size of the candidate set, regularizing word embeddings and generating fluent sentences. 
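Before turning to the hyper-parameter analysis, a minimal sketch of the synthetic-noise procedure used in the noisy-data experiments above is given below (Python). The embedding-similarity lookup and the language-model scorer are placeholder callables; the sampling-and-rescoring logic is the part being illustrated.

```python
import random

def make_noisy_input(tokens, similar_word, lm_score, noise_fraction,
                     n_samples=100, seed=0):
    """Corrupt a sentence and keep the most fluent corrupted candidate.

    similar_word(tok) -> an embedding-similar substitute for tok (assumed)
    lm_score(tokens)  -> fluency score from a pre-trained bidirectional LM
    """
    rng = random.Random(seed)
    n_replace = int(round(noise_fraction * len(tokens)))
    if n_replace == 0:
        return list(tokens)            # noise fraction 0.0 keeps the clean input
    candidates = []
    for _ in range(n_samples):
        noisy = list(tokens)
        for i in rng.sample(range(len(tokens)), n_replace):
            noisy[i] = similar_word(noisy[i])
        candidates.append(noisy)
    return max(candidates, key=lm_score)   # keep the candidate the LM prefers
```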
The hyper-parameters γ_src and γ_trg control the ratio of word replacement in the source and target inputs. Table 9 shows their sensitivity study, where the rows correspond to γ_src and the columns to γ_trg. As we see, the performance is relatively insensitive to the values of these hyper-parameters, and the best configuration on the Chinese-English validation set is obtained at γ_src = 0.25 and γ_trg = 0.50. We found that a non-zero γ_trg always yields improvements compared to γ_trg = 0. While γ_src = 0.25 increases BLEU scores for all values of γ_trg, a larger γ_src appears to be damaging. Related Work Robust Neural Machine Translation Improving robustness has been receiving increasing attention in NMT. For example, Belinkov and Bisk (2018) studied the effect of synthetic and natural noise on NMT. We note that Michel and Neubig (2018) proposed a dataset for testing machine translation on noisy text; they adopt a domain adaptation method that first trains an NMT model on a clean dataset and then fine-tunes it on noisy data. This is different from our setting, in which no noisy training data is available. Another difference is that one of our primary goals is to improve NMT models on standard clean test data, whereas the goal of Michel and Neubig (2018) is to improve models on noisy test data. We leave the extension to their setting for future work. Adversarial Examples Generation Our work is inspired by adversarial example generation, a popular research area in computer vision, e.g. (Szegedy et al., 2014; Goodfellow et al., 2015; Moosavi-Dezfooli et al., 2016). In NLP, many authors have applied similar ideas to a variety of tasks, such as text classification (Miyato et al., 2017; Ebrahimi et al., 2018b), machine comprehension (Jia and Liang, 2017), dialogue generation (Li et al., 2017), machine translation (Belinkov and Bisk, 2018), etc. Closely related to (Miyato et al., 2017), which attacked text classification models in the embedding space, ours generates adversarial examples based on discrete word replacements; the experiments show that ours achieves better performance on both clean and noisy data. Data Augmentation Our approach can be viewed as a data-augmentation technique using adversarial examples. In fact, incorporating monolingual corpora into NMT has been an important topic (Sennrich et al., 2016b; Cheng et al., 2016; Edunov et al., 2018). There are also papers that augment a standard dataset based on the parallel corpora by dropping words (Sennrich et al., 2016a), replacing words (Wang et al., 2018), editing rare words (Fadaee et al., 2017), etc. Different from these data-augmentation techniques, our approach is trained only on parallel corpora and outperforms a representative data-augmentation work (Sennrich et al., 2016b) trained with extra monolingual data. When monolingual data is included, our approach yields further improvements. Conclusion In this work, we have presented an approach to improving the robustness of NMT models with doubly adversarial inputs. We have also introduced a white-box method to generate adversarial examples for NMT. Experimental results on Chinese-English and English-German translation tasks demonstrate the capability of our approach to improve both translation performance and robustness. In future work, we plan to explore generating more natural adversarial examples that dispense with word replacements, as well as more advanced defense approaches such as curriculum learning (Jiang et al., 2015, 2018).
2019-06-06T07:02:04.000Z
2019-06-06T00:00:00.000
{ "year": 2019, "sha1": "e364a32064235297e6bcf7b86aeb73679527222c", "oa_license": "CCBY", "oa_url": "https://www.aclweb.org/anthology/P19-1425.pdf", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "f098060de964d239c8b7640581396acdfc226fc9", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
3333508
pes2o/s2orc
v3-fos-license
Computational Modelling of Rectangular Sub-Boundary Layer Vortex Generators : Vortex generators (VGs) are increasingly used in the wind turbine manufacture industry as flow control devices to improve rotor blade aerodynamic performance. Nevertheless, VGs may produce excess residual drag in some applications. The so-called sub-boundary layer VGs can provide an effective flow-separation control with lower drag than the conventional VGs. The main objective of this study is to investigate how well the simulations can reproduce the physics of the flow of the primary vortex generated by rectangular sub-boundary layer VGs mounted on a flat plate with a negligible pressure gradient with an angle of attack of the vane to the oncoming flow of β = 18 ◦ . Three devices with aspect ratio values of 2, 2.5 and 3 are qualitatively and quantitatively compared. To that end, computational simulations have been carried out using the RANS (Reynolds averaged Navier–Stokes) method and at Reynolds number Re = 2600 based on the boundary layer momentum thickness θ at the VG position. The computational results show good agreement with the experimental data provided by the Advanced Aerodynamic Tools of Large Rotors (AVATAR) European project for the development and validation of aerodynamic models. Finally, the results indicate that the highest VG seems to be more suitable for separation control applications. Introduction In order to achieve the aim of 100% renewable energy consumption, wind energy, as a leader toward the way of renewable energy, is developing rapidly all over the world.To decrease the levelized cost of energy (LCOE), the size of a single wind turbine has been increased to 10 MW nowadays, and it will increase further in the near future.Large wind turbines and their related wind farms have many challenges in aerodynamics, aero-elasticity and aero-acoustics.Therefore, the introduction of new designs and upgrading methods will be also needed to make this possible [1,2]. According to Aramendia et al. [3,4] and Fernandez-Gamiz et al. [5], either the use of passive controllers (vortex generators (VGs), microtabs, spoilers, etc.) or active controllers (trailing-edge flaps, synthetic jets and air jet vortex generators) can be introduced.One of the best options is the introduction of VGs on the blades to improve the aerodynamic performance needed.The use of this passive flow device helps to reduce the load distribution, and the blade has to undergo and obtain the optimal creation of energy.Øye [6] compared the measured power curves with VGs and without on a 1-MW wind turbine.Although quite rough methods were used for the VG design optimization, the experiment showed that, for a stall-regulated wind turbine, power increased nearly 24% by using VGs through field tests.Furthermore, Sullivan [7] conducted an experiment on a 2.5-MW wind turbine to test the effects of adding VGs on the power conversion performance.An increase of 11% in the annual energy production was found. In the work of Fernandez-Gamiz et al. 
[8], Blade Element Momentum BEM-based computations were carried out on the National Renewable Energy Laboratory (NREL) 5-MW baseline wind turbine with and without flow control devices.The results obtained from the clean wind turbine without any device for flow controlling were compared with the ones obtained from the wind turbine equipped with vortex generators and Gurney flaps.A best configuration case is proposed, which has the largest increase of the average power output.In that case, increments on the average power output of 10.4% and 3.5% have been found at two different wind speed realizations VGs were first introduced by Taylor [9], and their principle of operation relies on the increased mixing between the external stream and the boundary layer due to longitudinal vortices produced by the VGs.Fluid particles with high momentum in the streamwise direction mix with the low-momentum viscous flow inside the boundary layer; therefore, the mean streamwise momentum of the fluid particles in the boundary layer is increased.The process provides a continuous source of momentum to counter the natural boundary layer momentum decrease and the growth of its thickness caused by viscous friction and adverse pressure gradients (Doerffer et al. [10] and Steijl et al. [11]).Vortex generators can reduce or eliminate flow separation in moderate adverse pressure gradient environments (Velte et al. [12,13]).Even when separation does occur for cases of a large adverse pressure gradient, the mixing action of trailing vortices will restrict the reversed flow region in the shear layer and help maintain some pressure recovery along the separated flow.Thus, the effects of separation may be minimized or even removed. Fernandez-Gamiz et al. [14] and Urkiola et al. [15] studied the behavior of a rectangular VG on a flat plane and the streamwise vortices produced by them to investigate how the physics of the wake behind VGs in a negligible streamwise pressure gradient flow can be reproduced in Computational Fluid Dynamics CFD simulations and their accuracy in comparison with experimental observations.In the work carried out by Gao et al. [16] and Baldacchino et al. [17] on a 30% thick DU97-W-300 airfoil (Delft University of Technology, Delft, The Nederlands), the maximum lift coefficient was substantially increased due to the implementation of passive VGs.When the angle of attack increases, both lift and drag coefficients rise up to values higher than the ones reached in steady state conditions. The concept of micro vortex generators was most probably first introduced by Keuthe [18].In that study, wave-type micro VGs with a height of 27% and 42% of the local boundary layer (BL) thickness were mounted on an airfoil to decrease trailing edge noise by suppressing the formation of a Karman vortex street and by reducing the velocity deficit in the airfoil wake.Since the late 1980s, these devices appeared in the literature under different names such as sub-boundary layer vortex generator (SBVG), presented by Holmes et al. [19], the submerged vortex generator of Lin et al. [20], the low-profile vortex generator Martinez-Filgueira et al. [21] and the micro vortex generator (Lin [22]).As stated in the review of Kenning et al. [23], the potential applications of VGs and SBVGs include control of leading edge separation, shock-induced separation and smooth surface separation. 
Vortex generators are applied on wind turbine blades with the major aim to delay or prevent the separation of the flow and to decrease the roughness sensitivity of the blade.They are usually mounted in a spanwise array on the suction side of the blade and have the advantage that they can be added as a post-production fix to blades that do not perform as expected.Therefore, adding VGs is a straightforward solution to improve the rotor performance (Bragg et al. [24]). The main contribution of this work is the computational study of the primary vortex generated by three different rectangular VGs, where it has been found that the highest vane is the most suitable actuator for flow control applications.The goal of this study is to investigate how well the simulations can reproduce the physics of the flow behind a rectangular VG to characterize the primary vortex generated by the vane mounted on a flat plate with three different device heights.The aspect ratio (AR) of the vane defined as the relation between the height and length of the VG will consequently vary, maintaining the angle of attack constant.The baseline VG has an aspect ratio of AR H = 2.5, and two height variations are also investigated, those corresponding to AR H2 = 2 and AR H1 = 3, as sketched in Figure 1.For this purpose, computational simulations have been carried out using the RANS (Reynolds averaged Navier-Stokes) method and at a Reynolds number Re = 2600.The case consists of a single vortex generator on a flat plate with the angle of attack of the vane to the oncoming flow β = 18 • .The flow over the flat plate without the VG has been previously simulated to calculate the boundary layer thickness at the VG position. Baseline Experimental Data The current study is based on Advanced Aerodynamic Tools of Large Rotors (AVATAR) European project, and the parameters used for the validation of the computational results are the following ones: • Streamwise velocity profiles at different locations (AVATAR Task 3.1 [25]) Velocity fields at streamwise planes 10H, 25H and 50H (AVATAR Task 3.1 [25]) Fields of the normalized vorticity at different streamwise planes (AVATAR Task 3.2, [26]) Turbulence kinetic energy fields at streamwise planes (AVATAR Task 3.2 [26]) Taking into account the previous configuration and in order to characterize the primary vortex generated by the three different VGs, the following parameters have been proposed: • Streamwise velocity profiles at different streamwise locations. • Primary vortex vertical and lateral positions to define the vortex trajectory. 
According to the experimental data available in the AVATAR project, and in order to reproduce the experiments of the VG on a flat plate performed in that project, the numerical simulations have been carried out at a Reynolds number Re = 2600 based on the local BL momentum thickness θ = 2.4 mm and with a free stream velocity of U∞ = 15 m/s. A negligible pressure gradient on the plate is assumed. The angle of incidence of the vane to the oncoming flow is 18°. The local BL momentum thickness θ was calculated by applying Equation (1): θ = ∫ (u_x/U∞)(1 − u_x/U∞) dy, (1) where u_x is the streamwise velocity component, U∞ the free stream velocity and y the vertical coordinate normal to the wall. In order to validate the numerical model of the VG, the results of the numerical simulations are compared to the experimental data of the AVATAR Project [25,26]. The experiments were carried out in the Boundary Layer Wind Tunnel of the Delft University of Technology. The tunnel can attain a maximum speed of 38 m/s in the wide-walled test section of 1.5 × 0.25 m². The large separation of the side walls (1.5 m) minimizes end effects on the flow region of interest along the centerline zone. An adjustable back wall allows adjustment of the pressure gradient, ensuring a truly null pressure gradient when desired. The turbulence level at maximum free stream velocity was determined as 0.5%. 
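For illustration, the sketch below evaluates Eq. (1) numerically from a sampled boundary-layer profile (Python/NumPy). The synthetic 1/7th-power-law profile and the kinematic viscosity are assumptions used only to exercise the function, not AVATAR data.

```python
import numpy as np

def momentum_thickness(y, u_x, u_inf):
    """Integrate theta = ∫ (u/U∞)(1 − u/U∞) dy over a sampled profile."""
    ratio = np.asarray(u_x) / u_inf
    return np.trapz(ratio * (1.0 - ratio), np.asarray(y))

# Example with a synthetic 1/7th-power-law profile (illustrative only)
delta = 0.025                           # assumed BL thickness [m]
y = np.linspace(0.0, delta, 200)        # wall-normal coordinate [m]
u = 15.0 * (y / delta) ** (1.0 / 7.0)   # streamwise velocity [m/s]
theta = momentum_thickness(y, u, 15.0)  # momentum thickness [m]
Re_theta = 15.0 * theta / 1.5e-5        # Reynolds number based on theta
```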
Vortex generator dimensions were designed according to the previous research by Baldacchino et al. [27], where the rectangular vortex generators were designed according to the study of Godard et al. [28], and are summarized in Table 1. The placement of the VGs is such that the flow impinges on the vanes in a divergent manner, producing counter-rotating, common down-flow embedded vortices. For each test case, 25 pairs of rectangular VGs were mounted in an array configuration, side by side, as shown in Figure 2, covering a spanwise distance of 0.75 m and centered on the tunnel centerline. This was done in order to minimize end effects from the finite array. The vortex generator height is H = 5 mm, and a negligible pressure gradient is considered. Computational Setup The present study consists of a rectangular VG positioned on a flat plate at an incident angle to the oncoming flow. The height of the VG on the flat plate differs between the simulations, and the AR consequently changes, whereas the angle of attack remains constant. The ARs of the three cases are AR_H2 = 2, AR_H = 2.5 and AR_H1 = 3, and the corresponding heights are H2 = 4.16, H = 5 and H1 = 6.25 mm, respectively. The computational domain dimensions are represented in Figure 3, normalized by the baseline VG case height H. Wake symmetry behind the VGs is assumed in the current simulations; thus, the computational domain includes only one VG inclined to the oncoming flow instead of a pair. In that way, meshing and computational time can be considerably reduced. Figure 2 illustrates in green the spanwise location of the domain consisting of only one vane. The symmetry assumption used in the present computations can be justified by the previous studies of Sorensen et al. [29] and Velte et al. 
[12]. The computational domain is divided into 28 blocks. The mesh has been refined near the VG and in the corners of the vane, where the velocity gradients are large. In regions far away from the VG and the wake, the mesh density is lower. Five blocks are located around the VG and six blocks downstream of the vane to capture the generated primary vortex. The same block-based meshing strategy as in the study of Urkiola et al. [15] has been followed. The total number of cells is eight million, with a first-cell height of ∆y/H = 3.23 × 10⁻⁶, normalized by the baseline VG height. Around the vane the mesh has 1.7 million cells, while the mesh downstream of the VG for capturing the wake has approximately 2.4 million cells; see Figure 4. In order to resolve the boundary layer, cell clustering has been used close to the wall, and the wall dimensionless distance of the first layer of cells is y+ < 1. 
Each surface of the domain has been assigned a type of boundary condition. The vane's four patches and the bottom of the domain were defined as walls with a no-slip condition. The roof of the domain and the two lateral surfaces were defined as slip surfaces (symmetry hypothesis). A velocity inlet was chosen for the entry of the fluid and a pressure outlet for the exit of the fluid downstream of the VG. The quality of the mesh was evaluated by five main indicators to analyze whether it can be classified as a high-quality mesh; see Table 2. This set of parameters is a mix of industry standards, solver manuals and academic standards, and should consequently not be regarded as the only possible choice of parameters. Equiangular skewness is an indicator of how optimal the cell shape is with respect to the corner angles; for hexahedral cells, skewness should not exceed 0.85 to obtain a fairly accurate solution. The cell aspect ratio is typically the ratio between the width and the height of a cell; for critical flow areas, except those close to the wall, the cell aspect ratio should not exceed an average of 10. 
Figure 5 represents the wall y+ field distribution on the VG and bottom walls. For the current study, the maximum value of y+ is 0.366 and its average is 0.044. In this work, the open source code OpenFOAM (Version 2.4.0, The OpenFOAM Foundation Ltd., London, UK) [30] has been used for simulating a rectangular VG on a flat plate with a negligible pressure gradient. This CFD code is an object-oriented library written in C++ to solve computational continuum mechanics problems. The solver potentialFoam, which solves potential flows, was used to generate starting fields (field initialization) in order to speed up the convergence process. This solver is suitable for generating initial conditions for more advanced solvers such as the one used in the present work, simpleFoam. The simpleFoam solver has been applied for steady-state, incompressible and turbulent flows using the RANS (Reynolds averaged Navier-Stokes) equations. Second order discretization schemes were employed in the CFD simulations. The k-omega SST (shear stress transport) turbulence model developed by Menter [31] was used for all the computations. This turbulence model is a combination of two models: Wilcox's k-ω model for the near-wall region and the k-ε model for the outer region and free shear flows. Simulations were performed on a personal server-clustered parallel machine with an Intel® Core i7-6700 CPU at 3.40 GHz × 8 cores and 32 GB RAM on 64-bit Linux. The domain was automatically divided into eight subdomains to solve in parallel and reduce the simulation time. The computational cost was approximately 12 days per simulation. 
A mesh independency study has been carried out using the boundary layer velocity profile at the plane 10H downstream of the VG and at the z = 0 cross-wise position. Figure 6a shows a comparison between the CFD curves of the three studied meshes: 2 million (coarse), 4 million (medium) and 8 million (fine) cells for the baseline case with a VG height of H = 5 mm. Figure 6b represents the BL velocity profile of the fine mesh (blue) versus the experimental one (black) provided by [25]. Verification of sufficient mesh resolution was performed based on the Richardson extrapolation method, applied to the normalized peak vorticity ωx/(U∞/H) at the plane position 10H behind the vane. Table 3 shows the results of the mesh independency study. A monotonic convergence was achieved. The normalized streamwise velocity curve of the fine mesh and the experimental one presented in Figure 6b fit reasonably well. This higher-resolution mesh has been used for the current computations and applied to the three low-profile vanes H, H1 and H2. Results Numerical simulations of the vortex generating vane on a flat plate were performed. Three heights of a single low-profile vane, H = 5 mm, H1 = 6.25 mm and H2 = 4.16 mm, at an incident angle to the oncoming flow of β = 18°, were computed and compared with experimental results. The extraction of the data from the computations was conducted in a similar way to the procedure described in [14], on planes normal to the wall downstream of the VG, in order to capture the development of the vortex. Table 4 shows the height H of the baseline VG and the heights of the other simulated cases H1 and H2. The VG H1 has been designed with an aspect ratio AR = 2, which is the tallest vane, and the VG H2 with an aspect ratio AR = 3. 
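As a side note on the mesh-resolution verification described above, the sketch below applies Richardson extrapolation to three values of a monitored quantity on systematically refined meshes (Python). The sample values and the refinement ratio are illustrative placeholders, not the values in Table 3.

```python
import math

def richardson_extrapolation(f_coarse, f_medium, f_fine, r=2.0):
    """Estimate the observed order p and the extrapolated value from three
    systematically refined solutions (assumed effective refinement ratio r)."""
    e_cm = f_medium - f_coarse          # change coarse -> medium
    e_mf = f_fine - f_medium            # change medium -> fine
    if e_mf == 0 or e_cm / e_mf <= 0:
        raise ValueError("non-monotonic convergence")
    p = math.log(abs(e_cm / e_mf)) / math.log(r)    # observed order
    f_exact = f_fine + e_mf / (r**p - 1.0)          # extrapolated value
    return p, f_exact

# Illustrative normalized peak-vorticity values for coarse/medium/fine meshes
p, f_ext = richardson_extrapolation(1.80, 1.92, 1.97)
```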
Results

Numerical simulations of the vortex generating vane on a flat plate were performed. Three dimensions of a single low-profile vane, H = 5 mm, H1 = 6.25 mm and H2 = 4.16 mm, at an incident angle to the oncoming flow of β = 18° were computed and compared with experimental results. The extraction of the data from the computations was conducted in a similar way to the procedure described in [14], in planes normal to the wall downstream of the VG to capture the development of the vortex. Table 4 shows the height H of the baseline VG and the heights in the other simulated cases, H1 and H2. The VG H1, the tallest vane, has been designed with an aspect ratio AR = 2, and the VG H2 with an aspect ratio AR = 3.

First of all, a comparison between experimental and CFD streamwise velocity profiles has been carried out for the baseline case H at the planes 10H, 25H and 50H downstream of the vane; see Figure 7. Additionally, four different cross-wise locations are analyzed: z = 0, z = TE (VG trailing edge), z = D/3 and z = D/2, where D = 30 mm. At the crosswise location z = 0, the numerical results fit quite well to the experimental ones. As the flow moves away from the center z = 0 in the spanwise direction, some differences can be found between the experimental and the numerical streamwise velocity profiles.

According to the information available in the AVATAR project, three variables have been chosen for the qualitative comparison between the numerical and the experimental results: the streamwise velocity, the normalized streamwise vorticity and the turbulence kinetic energy. The experimental fields were only available at the downstream plane positions from 10H to 50H. A sketch of how such fields can be sampled on downstream planes is given below.
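The following is a minimal sketch of sampling a scalar field, given at scattered cell centres, onto a regular (z, y) grid on a wall-normal plane at a fixed streamwise station, mirroring the plane-extraction procedure described above. The array layout (an (N, 3) array of cell-centre coordinates ordered x, y, z), the slab half-thickness and the grid extents are assumptions for illustration.

```python
import numpy as np
from scipy.interpolate import griddata

def sample_plane(points, values, x_plane, H=5e-3, tol=None):
    """Interpolate a scalar field (e.g. ux, omega_x or TKE) given at
    scattered cell centres onto a regular (z, y) grid at x = x_plane."""
    tol = tol if tol is not None else 0.5 * H   # slab half-thickness (assumed)
    mask = np.abs(points[:, 0] - x_plane) < tol
    zy = points[mask][:, [2, 1]]                # (z, y) coordinates in the plane
    z = np.linspace(zy[:, 0].min(), zy[:, 0].max(), 200)
    y = np.linspace(0.0, 0.05, 200)             # assumed wall-normal extent [m]
    Z, Y = np.meshgrid(z, y)
    return Z, Y, griddata(zy, values[mask], (Z, Y), method='linear')

# e.g. the 10H plane: Z, Y, ux10 = sample_plane(points, ux, x_plane=10 * 5e-3)
```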
Figure 8 shows the out-of-plane streamwise velocity fields on spanwise planes at four different locations, 5H, 10H, 25H and 50H, downstream of the vortex generator. The comparison includes the results of the simulations corresponding to the three different VG heights and the experimental data. The height of the vortex generated by the vane increases as the vortex moves away from the flat plate in the streamwise direction. At the farthest plane position studied, 50H, the two vortexes that are visible in the near-wake plane positions appear to have merged into a single one. The roll-up process largely occurs before the 10H location. The wake evolution is typical of the induced flow field, where the velocity deficit is clearly observed in the region nearest the core of the vortex. Both the vortex size and the velocity at its center increase as the vortex moves downstream, as can be seen by the shrinking of the yellow (velocity-deficit) region around the core.

The normalized vorticity ωx/(U∞/H) fields at different streamwise planes are illustrated in Figure 9. The vorticity is a pseudovector field which describes the local spinning motion of a continuum near some point (the tendency of something to rotate), as would be seen by an observer located at that point and traveling along with the flow. As shown in Figure 9, the experimental data and the CFD case of H = 5 mm are very similar, endorsing the accuracy of the current computations. As expected, the vorticity in the core of the vortex decreases as the downstream distance increases, and that decrease is more significant at the planes 25H and 50H. The higher the vane, the larger the vorticity in the vortex core.
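Given the in-plane velocity components on a regular (y, z) grid such as the one produced above, the streamwise vorticity and its normalized form can be computed as follows. The grid spacings and the default free-stream velocity and vane height are assumptions taken from the flow conditions stated in this paper.

```python
import numpy as np

def streamwise_vorticity(uy, uz, dy, dz, U_inf=15.0, H=5e-3):
    """omega_x = d(uz)/dy - d(uy)/dz on a regular grid (axis 0 = y,
    axis 1 = z); returned both raw and normalized by U_inf / H."""
    duz_dy = np.gradient(uz, dy, axis=0)
    duy_dz = np.gradient(uy, dz, axis=1)
    omega_x = duz_dy - duy_dz
    return omega_x, omega_x / (U_inf / H)
```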
The comparison of the turbulence kinetic energy fields is represented in Figure 10. The turbulence kinetic energy (TKE) is the mean kinetic energy per unit mass associated with eddies in turbulent flow and is a measure of the intensity of the turbulence. It is defined as k = ½(u′x² + u′y² + u′z²), where u′x, u′y and u′z are the fluctuating components of the velocity field. As shown in Figure 10, the TKE fields of the experimental case and the simulations are very similar at the planes 25H and 50H, whereas at the plane 10H the experimental case does not develop as quickly as the simulation. On the one hand, for the case of the largest vane height H1, the intensity of turbulence appears at a higher y coordinate than for the baseline H and, in turn, the shortest VG H2. That means that, as a consequence of the height of the VG, the turbulent flow moves through at a higher elevation. On the other hand, as the flow moves away from the trailing edge of the actuator, the energy is dissipated to a greater extent for the smallest vane H2. Furthermore, it is clearly visible that for the case of H1 = 6.25 mm at the plane 5H behind the VG, the energy created as a consequence of the vortex is the maximum one in comparison with the other cases H and H2.
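In a steady RANS computation the TKE is available directly as the transported k field of the k-ω SST model; when instantaneous velocity samples are available (e.g., from the experiment), k can be evaluated from the definition above. A minimal sketch, assuming velocity sample arrays with the sample index on axis 0:

```python
import numpy as np

def tke(ux, uy, uz):
    """Turbulence kinetic energy per unit mass from fluctuating velocity
    samples: k = 0.5 * (u'^2 + v'^2 + w'^2), averaged over axis 0."""
    up = ux - ux.mean(axis=0)
    vp = uy - uy.mean(axis=0)
    wp = uz - uz.mean(axis=0)
    return 0.5 * ((up**2).mean(axis=0) +
                  (vp**2).mean(axis=0) +
                  (wp**2).mean(axis=0))
```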
Figure 11 shows the normalized streamwise velocity (ux/U∞) profiles for the three different VG heights, H = 5 mm, H1 = 6.25 mm and H2 = 4.16 mm, at the 10H, 25H and 50H downstream positions. As shown in Figure 11, at the downstream position 25H and spanwise position z = D/2, and at 50H and spanwise positions z = D/2 and z = D/3, the differences between the curves are notable.

In order to analyze the vortex evolution, up to three parameters were previously identified in past studies made by Fernandez-Gamiz et al. [32] and Godard et al. [28] on the streamwise longitudinal vortex embedded in turbulent boundary layers. The proposed parameters to characterize the primary vortex generated by a passive rectangular VG are therefore the streamwise peak vorticity ωx,max, the vortex path and the wall shear stress (WSS). The peak vorticity is used to locate the center of the streamwise vortex in each plane position downstream of the VG and, subsequently, to determine the vortex trajectory. In addition, the WSS is a parameter associated with the vortex-induced rate of mixing of the outer flow with the boundary layer. The trajectory of the primary vortex generated by a vane-type VG plays a determining role in the mixing performance.

Figure 12 shows the normalized peak vorticity (ωx,max·H)/U∞ development for the three different VG heights, which in fact represents the decay of the vortex along the streamwise direction. As expected, the vortex decay seems to follow an exponential law in all cases. At the positions closest to the VG, the largest value corresponds to the lowest VG height of 4.16 mm, but that trend is reversed at the distance of 50H, at which the highest value corresponds to the case with a 6.25-mm vane height. The values of the peak vorticity of the different VG cases almost overlap each other and, consequently, no significant variation was found.

By observing the location of the vortex center as a function of the downstream axis x, one can determine the path of the primary vortex in both the lateral (z) and vertical (y) directions. The vortex center is calculated by finding the point of maximum streamwise vorticity in the corresponding downstream plane position; see Fernandez-Gamiz et al. [32] and Martinez-Filgueira et al. [21], and the sketch below.
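A minimal sketch of the vortex-center localization just described, assuming the vorticity field and grid arrays produced by the earlier snippets:

```python
import numpy as np

def vortex_centre(omega_x, Z, Y):
    """Locate the primary-vortex centre on a downstream plane as the
    point of maximum streamwise vorticity."""
    i, j = np.unravel_index(np.nanargmax(omega_x), omega_x.shape)
    return Z[i, j], Y[i, j]

# Applying this plane by plane gives the lateral and vertical paths:
# path = [vortex_centre(om, Z, Y) for om in omega_planes]
```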
Figures 13 and 14 show a comparison between the trajectories described by the primary vortex generated by the different VG heights. The streamwise coordinate and both the lateral and vertical coordinates are normalized by the baseline VG height H. As illustrated in Figure 13, the lateral path of the three cases tends to follow the direction in which the device is inclined, and the trajectories roughly collapse at the streamwise plane positions studied. The vortex paths in the vertical direction are represented in Figure 14. They collapse in the near wake, from the formation of the vortex up to approximately the downstream plane 18H. As the distance increases farther in the streamwise direction, the highest vortex trajectory clearly corresponds to the highest VG, H1 = 6.25 mm. In the far-wake positions, a clear gap in the vertical paths is visible between the three VG cases, which does not occur in the lateral paths.

The WSS is the result of friction within the fluid and between the fluid and the walls and is related to the fluid viscosity. The magnitude of the WSS depends on how fast the velocity increases when moving away from the wall. The wall shear stress can be determined by Equation (3), τω = μ(∂ux/∂y) evaluated at the wall, where μ is the dynamic viscosity, ux is the axial flow velocity and y is the distance normal to the wall. Its SI unit is Pa.
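A first-order numerical estimate of Equation (3) from a no-slip wall, using the velocity at the first cell centre, might look as follows; the viscosity value and the sample inputs are assumptions for illustration.

```python
import numpy as np

def wall_shear_stress(ux_first_cell, y1, mu=1.8e-5):
    """First-order estimate of tau_w = mu * du_x/dy at a no-slip wall,
    from the axial velocity at the first cell centre at height y1."""
    return mu * ux_first_cell / y1

# e.g. tau = wall_shear_stress(np.array([0.03, 0.05]), y1=1e-6)
```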
Figure 15 represents the comparison of the wall shear stress for the three analyzed VG geometries on a flat plate. Data were extracted from the VG trailing edge up to the end of the domain, at a spanwise coordinate of z = 0. A dependency between the vane height and the wall shear stress produced in the VG wake is clearly seen in Figure 15. The largest value of the WSS is reached by the highest VG, H1 = 6.25 mm. According to Gad-el-Hak [33], transferring momentum towards the near-wall region means that the velocity is increased, which leads to an increase in the wall shear stress. In the study of Lin et al. [34] on an airfoil with micro-VGs, it was demonstrated that an increase of the WSS resulted in an increase in lift.

Conclusions

Vortices generated by a passive rectangular vane-type VG on a flat plate have been studied. Numerical simulations at a Reynolds number Re = 2600, based on the local BL momentum thickness θ = 2.4 mm and free stream velocity U∞ = 15 ms−1, have been carried out using the RANS method.
Computational fluid dynamics simulations are able to reproduce the physics of the primary vortex generated by a rectangular passive VG with reasonably good reliability. As numerical methods are the ones most frequently applied in optimizing VGs on wind turbine blades, the extensive information presented in the current study can be used as guidance for the successful design and implementation of VGs on wind turbine blades and for studying parametric dependencies of the VGs under different flow conditions. Because of the large range of parameters inherent in the problem (incident angle, interspacing between vanes, device orientation, vane height and length, etc.), an in-depth understanding of the fluid flow is needed for them to be correctly applied.

Furthermore, a qualitative comparison of the streamwise velocity, normalized vorticity and turbulence kinetic energy fields at different plane positions downstream of the VG has been introduced to see how the vortex develops. It is verified that, when using an appropriate mesh, fields such as the axial velocity, vorticity and TKE are determined with reasonable accuracy.

A correlation study and comparison of vortex parameters such as the vortex trajectory, peak vorticity and streamwise BL velocity profiles for the three different VG heights has also been carried out. It should be noted that, in the development of the vortex center trajectory, the points overlap each other in the lateral path for the three different VG heights, but not in the vertical path.

It has been demonstrated for the three analyzed geometries that, for the same level of vorticity, the vertical path of the primary vortex and the wall shear stress downstream of the vane are higher as the actuator height increases. The wall shear stress has a direct influence on the quantity of energy transfer. Hence, if a delayed stall is desired, the best geometry will be the one with the largest wall shear stress increment. Thus, it can be concluded that the highest VG, H1 = 6.25 mm, is the most suitable for separation control applications.

Acknowledgments: Funding from the Government of the Basque Country and the University of the Basque Country UPV/EHU through the SAIOTEK (S-PE11UN112) and EHU12/26 research programs, respectively, is gratefully acknowledged.

Author Contributions: Unai Fernandez-Gamiz, Iñigo Errasti and Ruben Gutierrez prepared and ran the numerical part. They also wrote the manuscript. The post-processing was done by Ruben Gutierrez. Ana Boyano and Oscar Barambones provided constructive instructions in the process of preparing the paper.
Figure 1. Sketch of the vortex generator (VG) dimensions with respect to the local boundary layer (not to scale).
Figure 3. Sketch of the computational domain. Dimensions are scaled by the baseline VG height H (not to scale).
Figure 4. View of the meshing around the rectangular VG.
Figure 5. Wall y+ field distribution on the VG and bottom walls.
Figure 6. Comparison of the normalized streamwise velocity profile at the 10H downstream plane position and spanwise position of z = 0 corresponding to the baseline case of H = 5 mm. (a) CFD curves for three different meshes: coarse (green), medium (red) and fine (blue); (b) boundary layer (BL) velocity profiles of the fine mesh (blue) versus the experimental one (black).
Figure 7. Normalized streamwise velocity profiles at different streamwise locations 10H (top), 25H (middle) and 50H (bottom) behind the VG. Solid black lines correspond to the experimental data and the blue ones to the numerical profiles.
Figure 8. Comparison of the out-of-plane streamwise velocity fields ux between experimental and numerical simulations of the three cases H, H1 and H2. Streamwise plane positions from top to bottom: 5H, 10H, 25H, 50H.
Figure 12. The normalized peak vorticity development for the different VG heights.
Figure 13. The development of the vortex center in the horizontal direction.
Figure 14. The development of the vortex center in the vertical direction.
Figure 15. Wall shear stress comparison in the streamwise direction from the VG trailing edge to 50H (spanwise distance z = 0) of the three different VG heights and without any vane on the wall.
Table 1. Vortex generator geometry in the Advanced Aerodynamic Tools of Large Rotors (AVATAR) project.
Table 3. Results of the mesh independency study. RE represents the extrapolated solution, R the ratio of errors and p the order of accuracy.
2018-02-18T04:25:44.651Z
2018-01-19T00:00:00.000
{ "year": 2018, "sha1": "b71d520c6d7280a70cb142ff3e1ba08bd08855db", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-3417/8/1/138/pdf?version=1516375794", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "a4595ba9d4ec992739f14fb8325b95771609a332", "s2fieldsofstudy": [ "Engineering", "Environmental Science" ], "extfieldsofstudy": [ "Engineering" ] }
12076702
pes2o/s2orc
v3-fos-license
On Graded Bialgebra Deformations

We introduce the graded bialgebra deformations, which explain Andruskiewitsch-Schneider's liftings method. We also relate this graded bialgebra deformation with the corresponding graded bialgebra cohomology groups, which is the graded version of the one due to Gerstenhaber-Schack.

Introduction

The classification of finite-dimensional pointed Hopf algebras is a basic problem in the theory of Hopf algebras. It is well-known that any pointed Hopf algebra H has a coradical filtration, with respect to which one associates a coradically-graded Hopf algebra grH. Following Andruskiewitsch and Schneider, the classification problem can be divided into two parts. One is the classification of all coradically-graded pointed Hopf algebras. The other is to find all possible pointed Hopf algebras H with grH isomorphic to a given coradically-graded pointed Hopf algebra. The second part is just the lifting method in [1] and [2]. One of our motivations is to relate the lifting method with a certain bialgebra deformation theory. The deformation theory for algebras was initiated by Gerstenhaber in [4], and its analogue for bialgebras appeared first in [5] (see also [6] and [10]). Inspired by the graded algebra deformation theory in [11] and [3], we develop in this paper the theory of graded bialgebra deformations and their corresponding cohomology groups. Moreover, this deformation theory can be used to explain Andruskiewitsch-Schneider's lifting method.

The paper is organized as follows. In Section 2, we first recall the notion of liftings and introduce the graded bialgebra deformations, and we show that a lifting is just the same as a graded bialgebra deformation in the sense of Theorem 2.2. The graded-rigid bialgebras are also studied; see Corollary 2.3 and Corollary 2.4. In Section 3, we introduce the notion of graded "hat" bialgebra cohomology groups for graded bialgebras, which control the graded bialgebra deformations; see Theorem 3.3.

Liftings and graded bialgebra deformations

We will work over a base field K. All unadorned tensors are over K. For the notions of graded bialgebras and filtered bialgebras we refer to [13], and for the notion of graded linear maps to [9] and [7].

2.1. Let us recall Andruskiewitsch-Schneider's lifting method; for more details, see [2]. Note that the lifting defined here is a slight generalization. Throughout, B = ⊕_{i≥0} B(i) will be a graded bialgebra over K, with identity element 1_B, multiplication map m, counit ε, and comultiplication ∆. Then B has a natural bialgebra filtration B_0 ⊆ B_1 ⊆ B_2 ⊆ · · ·, where B_n = ⊕_{j≤n} B(j) for any n ≥ 0. A lifting of the graded bialgebra B is a filtered bialgebra structure, denoted by U, on the underlying filtered vector space B with the above filtration such that grU = B as graded bialgebras, where grU is the graded bialgebra associated to the filtered bialgebra U ([13], p. 226). (By grU = B, we use the natural identification of the underlying space grU with B, that is, grU(n) = U_n/U_{n−1} is identified with B(n).)

For any lifting U of the graded bialgebra B, it follows from the definition that U and B have the same identity element and the same counit. Therefore, to give a lifting U, we just need to define the multiplication m_U and the comultiplication ∆_U. Two liftings U, V of the graded bialgebra B are said to be equivalent if there is a filtered bialgebra isomorphism θ : U −→ V such that grθ = Id_B, where grθ is the graded morphism associated to θ, and here again we use the identifications grU = B and grV = B (as graded bialgebras).
Denote by Lift(B) the set of equivalence classes of all the liftings of the graded bialgebra B.

2.2. In this subsection, we will study graded bialgebra deformations of the graded bialgebra B = ⊕_{i≥0} B(i). Let l ∈ N ∪ {+∞}. Consider the space B[t]/(t^{l+1}), which is viewed as a free module over K[t]/(t^{l+1}), and also as a graded K-space with deg t = 1 and deg b = n if b ∈ B(n). If l = +∞, then B[t]/(t^{l+1}) is understood to be B[t].

An l-th level graded bialgebra deformation of B consists of maps m_t^l : B[t]/(t^{l+1}) ⊗ B[t]/(t^{l+1}) −→ B[t]/(t^{l+1}) and ∆_t^l : B[t]/(t^{l+1}) −→ B[t]/(t^{l+1}) ⊗ B[t]/(t^{l+1}) (tensor products over K[t]/(t^{l+1})), which are K[t]/(t^{l+1})-linear and homogeneous maps of degree zero, such that (i) B[t]/(t^{l+1}) is a bialgebra over K[t]/(t^{l+1}) with identity element 1_B, multiplication m_t^l, counit ε_t^l and comultiplication ∆_t^l, where the counit ε_t^l : B[t]/(t^{l+1}) −→ K[t]/(t^{l+1}) is the K[t]/(t^{l+1})-linear extension of ε; and (ii) m_t^l ≡ m and ∆_t^l ≡ ∆ modulo t, where m and ∆ are the multiplication and comultiplication of B, respectively.

From now on, we will abbreviate l-th level graded bialgebra deformations as l-deformations, and +∞-deformations will be referred to simply as deformations. Denote by E_l(B) the set of all l-deformations of the graded bialgebra B; E_{+∞}(B) is written as E(B). Elements of E(B) will be written as (B[t], m_t, ∆_t). An isomorphism of two l-deformations is an isomorphism φ of the corresponding K[t]/(t^{l+1})-bialgebras such that φ is homogeneous of degree zero and φ ≡ Id_B modulo t. Denote by isoE_l(B) (resp. isoE(B)) the set of isoclasses of l-deformations (resp. deformations) of the graded bialgebra B, for l ∈ N.

2.3. Use the notation as above. Consider an element (B[t]/(t^{l+1}), m_t^l, ∆_t^l) ∈ E_l(B). Writing the structure maps out in their t-adic components, we have

m_t^l(a ⊗ b) = Σ_{s≥0} m_s(a ⊗ b) t^s,   (2.1)
∆_t^l(a) = Σ_{s≥0} ∆_s(a) t^s,   (2.2)

where a, b, c ∈ B, and m_s : B ⊗ B −→ B and ∆_s : B −→ B ⊗ B are homogeneous maps of degree −s, with m_0 = m and ∆_0 = ∆. It is easy to check that the associativity of m_t^l, the compatibility of m_t^l and ∆_t^l, and the coassociativity of ∆_t^l are equivalent to three families of identities (sketched after Theorem 2.2 below), one for each 1 ≤ n ≤ l, where we use Sweedler's notation ∆(a) = a(1) ⊗ a(2), a ∈ B; in the second identity we use the notation ∆_n(a) = a_l ⊗ a_r and ∆_n(b) = b_l ⊗ b_r, and the map τ_23 is the canonical flip map at the second and third positions.

Let (B[t]/(t^{l+1}), m_t^l, ∆_t^l) and (B[t]/(t^{l+1}), m′_t^l, ∆′_t^l) be two l-deformations with the maps m_s, ∆_s and m′_s, ∆′_s as in (2.1) and (2.2). An isomorphism φ between these deformations is given by φ = Σ_{s≥0} φ_s t^s, where each φ_s : B −→ B is homogeneous of degree −s and φ_0 = Id_B. The fact that φ is a morphism of K[t]/(t^{l+1})-bialgebras implies that φ preserves the identity element 1_B and the counit ε_t^l, and that it satisfies the corresponding comparison identities for each 1 ≤ n ≤ l and all a, b, c ∈ B. Note that the above discussion works for all l ∈ N ∪ {+∞}.

The analogue of the following lemma is well-known in classical deformation theory.

Lemma 2.1. There exist restriction maps r_{l,l′} : E_l(B) −→ E_{l′}(B) for every l > l′ ∈ N, and maps r_l : E(B) −→ E_l(B) for every l ∈ N.

Proof. The restriction map r_{l,l′} is given as follows: given an l-deformation (B[t]/(t^{l+1}), m_t^l, ∆_t^l), reduce the structure maps modulo t^{l′+1}. The map r_l is defined in a similar way, and then the result is obvious.

We say that the graded bialgebra B is graded-rigid provided that isoE(B) has only one element, i.e., any deformation of B is isomorphic to the trivial one.

2.4. We have the following observation, which says that the graded bialgebra deformations coincide with the liftings.

Theorem 2.2. There is a bijection F : Lift(B) −→ isoE(B).

Proof. We will construct a map F : Lift(B) −→ isoE(B). Given a lifting U of B, denote by m_U and ∆_U the multiplication and comultiplication maps of U. Since U is a filtered bialgebra, its structure maps respect the filtration. Therefore, for any s ≥ 0, there uniquely exist homogeneous maps of degree −s, say m_s : B ⊗ B −→ B and ∆_s : B −→ B ⊗ B, decomposing m_U and ∆_U. By grU = B as graded bialgebras, we have m_0 = m and ∆_0 = ∆. Define F(U) to be (B[t], m_t, ∆_t) with m_t = Σ_{s≥0} m_s t^s and ∆_t = Σ_{s≥0} ∆_s t^s. It is direct to check that F(U) is a deformation.

F is well-defined, i.e., it maps equivalent liftings to isomorphic deformations. In fact, for given liftings U and V, an equivalence θ of U and V is a filtered isomorphism; hence, for any s ≥ 0, there is a uniquely determined homogeneous map φ_s : B −→ B of degree −s decomposing θ. On the other hand, by (2.1) and (2.2), one obtains that F is a bijection. This completes the proof.
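The displayed identities referred to in 2.3 did not survive extraction. The following LaTeX block is a minimal sketch of the order-n form that such identities standardly take in bialgebra deformation theory; it is a reconstruction under standard conventions, and the signs, index names and exact shape of the compatibility identity are assumptions rather than a quotation of the original.

```latex
% Order-n identities for an l-deformation, 1 <= n <= l (reconstruction).
\begin{align*}
% associativity of m_t at order n:
&\sum_{i+j=n} m_i\bigl(m_j(a\otimes b)\otimes c\bigr)
  \;=\; \sum_{i+j=n} m_i\bigl(a\otimes m_j(b\otimes c)\bigr),\\
% compatibility of m_t and \Delta_t at order n:
&\sum_{i+j=n} \Delta_i\bigl(m_j(a\otimes b)\bigr)
  \;=\; \sum_{p+q+r+s=n} (m_p\otimes m_q)\,\tau_{23}\,
        \bigl(\Delta_r(a)\otimes\Delta_s(b)\bigr),\\
% coassociativity of \Delta_t at order n:
&\sum_{i+j=n} (\Delta_i\otimes \mathrm{Id})\,\Delta_j(a)
  \;=\; \sum_{i+j=n} (\mathrm{Id}\otimes \Delta_i)\,\Delta_j(a).
\end{align*}
```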
An immediate consequence of Theorem 2.2 is

Corollary 2.3. Let B = ⊕_{i≥0} B(i) be a graded bialgebra. If B is graded-rigid, then for any filtered bialgebra U such that grU ≃ B as graded bialgebras, we have U ≃ B as bialgebras. If we assume that B is coradically-graded, the converse is also true.

Proof. By Theorem 2.2, B is graded-rigid if and only if Lift(B) is a single-element set, i.e., every lifting of B is trivial. For the first statement, such a filtered bialgebra U with grU ≃ B gives rise to a lifting on B, denoted by U′, such that U ≃ U′ (as bialgebras). Since B is graded-rigid, we get U′ ≃ B, and thus we are done. For the second one, assume that B is coradically-graded. Let U be a lifting of B. By the assumption, there exists an isomorphism θ : U ≃ B. Note that θ preserves the coradical filtration, thus grθ can be viewed as a graded automorphism of B. Now take θ′ = (grθ)^{−1} ◦ θ : U ≃ B. Then θ′ realizes an equivalence between the lifting U and the trivial lifting. This proves that B is graded-rigid.

2.5. In this subsection, we assume that the base field K is algebraically closed of characteristic zero. One can define the variety Bialg_n of the bialgebra structures on n-dimensional spaces, which carries a natural GL_n(K)-action by base changes; see [12] and [8]. Recall that a bialgebra B is called rigid if the GL_n(K)-orbit of Bialg_n containing B is Zariski open. In fact, we have

Corollary 2.4. Let K be an algebraically closed field of characteristic zero, and let B = ⊕_{i≥0} B(i) be a finite-dimensional graded bialgebra over K. If B is rigid and coradically-graded, then B is graded-rigid in the sense of 2.3.

Proof. By Corollary 2.3, we only need to show that every filtered bialgebra U with grU ≃ B is isomorphic to B. Assume that the dimension of B is n. By Theorem 3.4 in [8], B is a degeneration of U, i.e., B lies in the closure of the orbit of U (in the variety Bialg_n). However, since the GL_n(K)-orbit of B is open, we obtain that B and U belong to the same GL_n(K)-orbit, i.e., B ≃ U as bialgebras, finishing the proof.

Graded bialgebra cohomology

In this section we will relate the graded bialgebra deformations with the corresponding cohomology groups, which are a graded (and normalized) version of the "hat" bialgebra cohomology groups introduced in [5] (see also [10]).

Let us recall the bicomplex in [5] (or [10], p. 619). For this end, we need the following maps, where p, q ≥ 1 and all b's are in B: the maps λ_p : B^{⊗(p+1)} −→ B^{⊗p} and ρ_p : B^{⊗(p+1)} −→ B^{⊗p}, given by the left and right diagonal actions of B on B^{⊗p}; dually, the maps σ_q : B^{⊗q} −→ B^{⊗(q+1)} and τ_q : B^{⊗q} −→ B^{⊗(q+1)}, given by the left and right codiagonal coactions of B on B^{⊗q}. In addition, we need ∆_i^p : B^{⊗p} −→ B^{⊗(p+1)} and µ_j^q : B^{⊗(q+1)} −→ B^{⊗q}, 1 ≤ i ≤ p and 1 ≤ j ≤ q, given by applying ∆ at the i-th position and m at the j-th position, respectively.

Let C^{p,q} = Hom_K(B^{⊗q}, B^{⊗p}), p, q ≥ 1. Define δ_h^{p,q} : C^{p,q} −→ C^{p,q+1} and δ_c^{p,q} : C^{p,q} −→ C^{p+1,q} by

δ_h^{p,q}(f) = λ_p ◦ (Id ⊗ f) + Σ_{j=1}^{q} (−1)^j f ◦ µ_j^q + (−1)^{q+1} ρ_p ◦ (f ⊗ Id),
δ_c^{p,q}(f) = (Id ⊗ f) ◦ σ_q + Σ_{i=1}^{p} (−1)^i ∆_i^p ◦ f + (−1)^{p+1} (f ⊗ Id) ◦ τ_q,

where Id denotes the identity map of B. It is direct to check that (C^{p,q}, δ_h^{p,q}, δ_c^{p,q}) is a bicomplex (see [10], p. 619).

We will introduce a sub-bicomplex of the above bicomplex. Let m = Ker ε. Denote by i : m −→ B the inclusion map, and let π : B −→ m be given by π(b) = b − ε(b)1_B. Set D^{p,q} = Hom_K(m^{⊗q}, m^{⊗p}), viewed as a subspace of C^{p,q} via f ↦ i^{⊗p} ◦ f ◦ π^{⊗q}. We have the following observation.

Lemma 3.1. Use the above notation. Then δ_h^{p,q}(D^{p,q}) ⊆ D^{p,q+1} and δ_c^{p,q}(D^{p,q}) ⊆ D^{p+1,q}.

Proof. Just note that f ∈ C^{p,q} lies in D^{p,q} if and only if f vanishes whenever its i-th argument equals 1_B and (Id ⊗ · · · ⊗ ε ⊗ · · · ⊗ Id) ◦ f = 0 (with ε in the j-th position), for any 1 ≤ i ≤ q, 1 ≤ j ≤ p and any b_i ∈ B. Then the lemma follows from the definition of δ_h^{p,q} and δ_c^{p,q} immediately.

3.2. From now on, B = ⊕_{i≥0} B(i) will be a graded bialgebra. In this case m ⊆ B is a graded subspace. Consider D^{p,q}_(l) := Hom_K(m^{⊗q}, m^{⊗p})_(l), l ∈ Z, whose elements are the homogeneous maps from m^{⊗q} to m^{⊗p} of degree l.
Note that D^{p,q}_(l) ⊆ D^{p,q} ⊆ C^{p,q}. We have the following

Lemma 3.2. δ_h^{p,q}(D^{p,q}_(l)) ⊆ D^{p,q+1}_(l) and δ_c^{p,q}(D^{p,q}_(l)) ⊆ D^{p+1,q}_(l) for each l ∈ Z, p, q ≥ 1.

Proof. Set C^{p,q}_(l) = Hom_K(B^{⊗q}, B^{⊗p})_(l). Clearly, D^{p,q}_(l) = D^{p,q} ∩ C^{p,q}_(l). From the definition of δ_h^{p,q} and δ_c^{p,q}, one sees that they preserve degrees, i.e., δ_h^{p,q}(C^{p,q}_(l)) ⊆ C^{p,q+1}_(l) and δ_c^{p,q}(C^{p,q}_(l)) ⊆ C^{p+1,q}_(l). Now the result follows from Lemma 3.1.

Denote by δ_{h,(l)}^{p,q} (resp. δ_{c,(l)}^{p,q}) the restriction of the map δ_h^{p,q} (resp. δ_c^{p,q}) to the subspace D^{p,q}_(l). Thus, by Lemma 3.2, we get a bicomplex (D^{p,q}_(l), δ_{h,(l)}^{p,q}, δ_{c,(l)}^{p,q}) for each l ∈ Z. There is a canonical way to construct a complex from a given bicomplex: set D̂^n_(l) = ⊕_{p+q=n+1} D^{p,q}_(l), with differential given on the component D^{p,q}_(l) by δ_{h,(l)}^{p,q} + (−1)^q δ_{c,(l)}^{p,q}, 1 ≤ q ≤ n. Hence, for each l ∈ Z, we get a complex D̂^1_(l) −→ D̂^2_(l) −→ D̂^3_(l) −→ · · ·. We define the n-th cohomology group of the above complex to be the n-th graded "hat" bialgebra cohomology of degree l of the graded bialgebra B, denoted by ĥ_b^n(B)_(l), n ≥ 1, l ∈ Z.

It is very useful to write out ĥ_b^2(B)_(l) and ĥ_b^3(B)_(l) explicitly from the definition. In what follows, we will use the maps δ_h^{p,q} and δ_c^{p,q} instead of δ_{h,(l)}^{p,q} and δ_{c,(l)}^{p,q} for simplicity. We have the following facts. An element of ĥ_b^2(B)_(l) is represented by a pair (f, g) with f ∈ D^{1,2}_(l) and g ∈ D^{2,1}_(l) satisfying δ_h^{1,2}(f) = 0, δ_c^{1,2}(f) + δ_h^{2,1}(g) = 0 and δ_c^{2,1}(g) = 0; these cocycle conditions can be written out elementwise for any a, b, c ∈ m. Two pairs (f, g) and (f′, g′) represent the same class in ĥ_b^2(B)_(l) if and only if there exists a homogeneous map θ : m −→ m of degree l such that the corresponding coboundary identities hold for any a, b, c ∈ m.

Theorem 3.3. (1) There is a bijection between the 1-deformations of B and ĥ_b^2(B)_(−1). (2) If ĥ_b^2(B)_(−s) = 0 for all s ≥ 1, then B is graded-rigid.

Proof. (1) Given a 1-deformation (B[t]/(t^2), m_t^1, ∆_t^1), write m_t^1 = m + f t and ∆_t^1 = ∆ + g t; the identities of 2.3 say precisely that (f, g) is a 2-cocycle. An isomorphism of two 1-deformations is of the form Id_B + θ t for some homogeneous map θ : B −→ B of degree −1 (note that the map θ may be viewed as a map from m to m). Now it is direct to check that θ realizes an equivalence of (f, g) and (f′, g′) in ĥ_b^2(B)_(−1). We have thus obtained a map from E_1(B) to ĥ_b^2(B)_(−1), sending (B[t]/(t^2), m_t^1, ∆_t^1) to (f, g). One can easily see that the correspondence is bijective, as required.

(2) To prove that B is graded-rigid, we just need to show that isoE(B) is a single-element set. Take a deformation (B[t], m_t, ∆_t) ∈ E(B). By the vanishing assumption in degrees −1 and −2, one finds homogeneous maps θ_1, θ_2 and corresponding isomorphisms φ_1, φ_2 yielding a deformation whose coefficients of t and t^2 vanish. In other words, the new structure maps have the form m + m_3′′ t^3 + · · · and ∆ + ∆_3′′ t^3 + · · ·. Similarly, we may view (m_3′′, ∆_3′′) as lying in ĥ_b^2(B)_(−3). By assumption and comparing (3.1)-(3.3), we have a homogeneous map θ_3 : m −→ m whose associated isomorphism φ_3 yields a deformation whose coefficients of t, t^2 and t^3 vanish. Now one can define θ_4 and φ_4, and so on. Finally, define the infinite composition · · · φ_3 ◦ φ_2 ◦ φ_1 to be φ. Note that the K[t]-linear isomorphism φ : B[t] −→ B[t] is well-defined on every a ∈ B, and it preserves the identity 1_B and the counit ε_t. (In fact, φ_s(a) = a + θ_s(a)t^s, where θ_s : m −→ m is homogeneous of degree −s; hence, for each fixed a ∈ B(i), φ_s(a) = a for s ≥ i. Consequently, φ(a) has nonzero coefficients only for t^s with 0 ≤ s ≤ i.) By the construction of each map φ_s, we obtain that the resulting deformation is trivial and also equivalent to the given deformation. Thus we have proved (2).
2014-10-01T00:00:00.000Z
2005-06-02T00:00:00.000
{ "year": 2005, "sha1": "0b361243992ead1d90b1c7d1b963fddee4a9acfe", "oa_license": null, "oa_url": "http://arxiv.org/pdf/math/0506030v1.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "0b361243992ead1d90b1c7d1b963fddee4a9acfe", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
259655724
pes2o/s2orc
v3-fos-license
Recent advances in small molecules for improving mitochondrial disorders

Mitochondrial disorders are observed in various human diseases, including rare genetic disorders and complex acquired pathologies. Recent advances in molecular biological techniques have dramatically expanded the understanding of multiple pathomechanisms involving mitochondrial disorders. However, the therapeutic methods for mitochondrial disorders are limited. For this reason, there is increasing interest in identifying safe and effective strategies to mitigate mitochondrial impairments. Small-molecule therapies hold promise for improving mitochondrial performance. This review focuses on the latest advances in developing bioactive compounds for treating mitochondrial disease, aiming to provide a broader perspective of fundamental studies that have been carried out to evaluate the effects of small molecules in regulating mitochondrial function. Novel-designed small molecules ameliorating mitochondrial functions are urgent for further research.

Introduction

Mitochondria are vital organelles that play essential roles in the life and death of the cell. They play a crucial role in energy metabolism and control of stress responses and are a hub for biosynthetic processes.1,2 Mitochondrial diseases are genetically determined metabolic disorders characterized by defects in oxidative phosphorylation (OXPHOS) and caused by mutations in genes in nuclear DNA (nDNA) and mitochondrial DNA (mtDNA) that encode structural mitochondrial proteins or proteins involved in mitochondrial function.3 The mitochondrial OXPHOS system is embedded in the mitochondrial inner membrane (MIM) (Fig. 1). It represents the final step in converting nutrients into energy by forming ATP.

Table 1 (mtDNA mutations and associated mitochondrial diseases; partially recovered):
- MTTL1 C3254T; MTT1 T4274C, T4285C, G4298A, G4309A; MTTA T5628C; MTTN T5692C, G5698A, G5703G; MTTK G8342A; MTTL2 G12294A, A12308G, T12311C, G12325A; MTND4 T11232C: ptosis, muscle weakness.
- Neuropathy, ataxia, and retinitis pigmentosa (NARP): MTATP6 T8993C, T8993G; blindness, cerebellar ataxia, seizures, cognitive impairment, and peripheral neuropathy.
- Leigh syndrome (LS): MTTV C1624T; MTND3 T10158C; MTND4 C11777A; MTND5 T12706C; MTATP6 T9176C, T9176G, T9185C, T9191C, T8993C; lactic acidosis, failure to thrive, myopathy, bilateral symmetrical lesions in the subcortical brain.

The OXPHOS complexes are encoded by both genomes; for instance, complex IV consists of 14 subunits (3 mtDNA, 11 nDNA), and complex V of 19 subunits (2 mtDNA, 17 nDNA).6,7 Mutations in mitochondrial DNA (mtDNA) and/or nuclear-encoded mitochondrial genes (nDNA) that affect OXPHOS function lead to a diverse group of debilitating conditions (Table 1). Besides, mitochondria are a major source of reactive oxygen species, such as superoxide, because electrons at complexes I and III of the respiratory chain are often offloaded to molecular oxygen. Increased ROS production in mitochondrial diseases can result in protein, lipid, and DNA damage, potentially leading to further cellular damage and dysfunction.8,9 Mitochondrial dysfunction contributes to numerous health problems, including neurological and muscular degeneration, cardiomyopathies, cancer, diabetes, and aging pathologies (Table 2). Mitochondrial diseases are clinically heterogeneous and can occur at any age. Treatment of mitochondrial disorders has been challenging because of the multi-organ involvement in various mitochondrial diseases.10 Small molecules play an essential role in drug development.
11,12 In this review, we will present small molecules that are beneficial for enhancing mitochondrial function and improving primary mitochondrial diseases. We propose that rapid preliminary screening of potential therapeutic compounds in individual patients' fibroblasts could direct and advance personalized medical treatment. Furthermore, novel-designed small molecules ameliorating mitochondrial functions are urgent for further research.

Molecules that bypass mitochondrial complex I deficiency

Mitochondrial complex I deficiency is the most prevalent defect in the respiratory chain in pediatric patients and often leads to severe or fatal neurological symptoms, such as Leigh syndrome. Succinate is a mitochondrial substrate that is metabolized through complex II. However, it is not cell membrane-permeable and is difficult to take up into cells. The Ehinger group reported that several cell membrane-permeable prodrugs 1-3 (Fig. 2) of the complex II substrate succinate increased ATP-linked mitochondrial oxygen consumption in complex I-deficient human blood cells, fibroblasts, and heart fibers. This therapeutic strategy provides a potential future intervention for patients with metabolic decompensation due to complex I dysfunction.13

Idebenone (4, Fig. 2) is a well-known compound, developed in the early 1980s by Takeda Pharmaceuticals against cognitive decline/dementia, which has been evaluated in several mitochondrial and neurodegenerative diseases.14,15 Idebenone has the potential to act as an electron carrier in the respiratory chain and as an antioxidant against membrane damage caused by lipid peroxidation. The antioxidant function of idebenone is attributed to the redox cycling between hydroquinone and quinone. NAD(P)H:quinone oxidoreductase 1 (NQO1) and mitochondrial complex III were identified as the major enzymes involved in idebenone activity.16 It was provisionally approved in Canada for the treatment of Friedreich's ataxia, but was withdrawn from the Canadian market in 2013 by Santhera Pharmaceuticals due to lack of efficacy. Leber's hereditary optic neuropathy (LHON), a rare genetic mitochondrial disease that causes rapid and progressive bilateral vision loss, is the only mitochondrial disease for which idebenone has been approved by the European Medicines Agency, to treat visual impairment in adolescents and adults. Several new insights into the mode of action of idebenone have been discovered, which may provide novel indications for this drug that might not have been considered previously.17

Bromodomain-containing protein 4 (BRD4) is a member of the bromodomain and extra-terminal domain (BET) family of proteins comprising BRD2-BRD4 and BRDT. BRD4 is a chromatin-bound transcriptional regulator linked to the expression of genes associated with different biological processes, including tumor progression and inflammation.18 A recent study demonstrates that I-BET 525762A (5, Fig. 2), a bromodomain inhibitor, can remodel the mitochondrial proteome to increase the levels and activity of OXPHOS protein complexes and increase the production and utilization of FADH2, leading to the rescue of the bioenergetic defects and cell death caused by mutations or chemical inhibition of mitochondrial complex I.19

Agents enhancing electron transfer chain function

Coenzyme Q10 (CoQ10, 6, Fig. 3) is a naturally occurring fat-soluble vitamin-like quinone, which plays a crucial role in mitochondrial oxidative phosphorylation and ATP production.20 CoQ10 is an endogenous antioxidant and a potent free radical scavenger in mitochondrial membranes.
CoQ10 exhibits potentially neuroprotective effects in neurodegenerative diseases with excess oxidative stress. However, about 50 clinical studies of CoQ10 revealed only a marginal, albeit real, treatment effect.20

Riboflavin (7, Fig. 3), a water-soluble B vitamin, is part of the functional group of the flavin mononucleotide (FMN) and flavin adenine dinucleotide (FAD) cofactors and is required for numerous flavoprotein-catalyzed reactions. Riboflavin shows critical antioxidant properties essential for correct cell functioning.21 Riboflavin deficiency has been demonstrated to impair the oxidative state of the body, especially in the nervous system. Riboflavin supplementation is used to treat migraine, Brown-Vialetto-Van Laere syndrome, Fazio-Londe disease, and some mitochondrial diseases.22,23 In the future, riboflavin may be a potential therapeutic intervention in many other neurological disorders.

Dichloroacetate (DCA, 8, Fig. 3) is an analog of pyruvate. DCA activates the E1 (pyruvate decarboxylase) subunit of the PDHC by inhibiting the PDH kinase, which normally phosphorylates and inhibits the enzyme, thus locking the enzyme in the active conformation and promoting the flux of pyruvate into the citric acid cycle.24 It is an investigational drug for the treatment of mitochondrial genetic diseases. Although it can effectively alleviate lactic acidosis in mitochondrial disorders,25 it can also cause peripheral neuropathy in individuals with MELAS syndrome.26

Thiamine (vitamin B1, 9, Fig. 3) can enhance pyruvate dehydrogenase activity, thus increasing the oxidative decomposition of pyruvate and the generation of reduced cofactors (NADH and FADH2). Thiamine has been used in mitochondrial diseases individually or with other agents. Supplementation with thiamine in a family with MELAS syndrome and thiamine deficiency improved the symptoms of myopathy and lactic acidosis.27 Combining thiamine with CoQ10, carnitine, and vitamins C and E can improve the clinical symptoms of adult patients with Leigh syndrome with subacute severe brainstem encephalopathy.28

Agents as antioxidants

Sonlicromanol (KH176, 10, Fig. 4), a chemical entity derived from the water-soluble form of vitamin E, is a blood-brain barrier-permeable ROS-redox modulator. Sonlicromanol hydrochloride is used in studies of mitochondrial disorders, and it maintains microstructural coherence in the brain of Ndufs4−/− mice.29 A clinical study is being conducted to evaluate the effect of KH176 in various cognitive domains and the impact of different doses of KH176.

Lipoic acid (α-LA, 11, Fig. 4) is a natural molecule showing excellent antioxidant and anti-inflammatory properties. It is a coenzyme of pyruvate dehydrogenase and α-ketoglutarate dehydrogenase and plays several roles in the pathogenesis of neurodegenerative diseases.30 One study suggests that the combination of lipoic acid, CoQ10, and creatine monohydrate effectively reduces plasma lactic acid content and oxidative stress markers in urine and improves muscle strength in patients with mitochondrial diseases, making it a beneficial therapeutic strategy for some mitochondrial disorders.31

Glutathione (L-γ-glutamyl-L-cysteinylglycine, 12, Fig. 4) is a crucial intracellular antioxidant that can protect the cell from reactive oxygen species (ROS). Loss of GSH is associated with several mitochondrial diseases.
Thus, supplementation with cysteine donors (13, Fig. 4), which supply the starting material for glutathione synthesis, can potentially restore glutathione levels and eliminate excessive ROS in mitochondrial diseases.32,33 A study revealed that N-acetylcysteine (14, Fig. 4) can also enhance muscle cysteine and glutathione availability and attenuate fatigue during prolonged exercise in endurance-trained individuals.34

Based on the noticeable results obtained with CoQ10 and idebenone, a novel para-benzoquinone compound, EPI-743 (15, Fig. 4), was designed and tested. EPI-743 is 1000 to 10 000 times more potent than CoQ10 or idebenone in patient fibroblast assays modeling the effects of mitochondrial disease. EPI-743 is now in clinical trials to treat Leigh syndrome and other inherited mitochondrial disorders.35

Agents enhancing mitochondrial biogenesis

Induction of mitochondrial biogenesis through transgenic overexpression of PGC-1α is being developed as a potential treatment for mitochondrial disorders. Bezafibrate (16, Fig. 5) is a pharmacological ligand for the transcriptional cofactor PGC-1α. It shows the most extensive pre-clinical evidence of efficacy in animal models and patient cell lines.36,37 It is now in clinical trials to evaluate the safety of inducing mitochondrial biogenesis in patients with the m.3243A>G MTTL1 mutation.38

(−)-Epicatechin (17, Fig. 5) is the main flavonoid present in dark chocolate. Relevant research confirmed that (−)-epicatechin can enhance fatigue resistance and oxidative capacity in mouse muscle, which would benefit clinical populations experiencing muscle fatigue.39

Omaveloxolone (RTA-408, 18, Fig. 5) is a novel synthetic oleanane triterpenoid analog. It shows significant cytoprotective effects due to its ability to activate the Nrf2 pathway.40 Studies also proved that the neuroprotective effects of RTA-408 can be attributed to Keap1 inhibition.41 The FDA has approved omaveloxolone as the first treatment for Friedreich's ataxia, a rare, inherited, degenerative disease that damages the nervous system and is characterized by impaired coordination and walking.42

Resveratrol (19, Fig. 5), a natural plant polyphenol, increased AMPK and PGC-1α activity, increased mitochondrial number, and improved motor function, which is beneficial for an overall improvement in health and survival.43 As a dietary supplement, it is in clinical studies in patients with mitochondrial myopathies and skeletal muscle fatty acid oxidation disorders.

Metformin (20, Fig. 5) is widely used for treating type 2 diabetes. Studies show that metformin exerts its anti-diabetic effects by inhibiting complex I of the mitochondrial respiratory chain.44 Metformin is also a potential activator of AMPK and of the stress-induced transcription factor SKN-1/nuclear factor erythroid 2-related factor 2 (Nrf2), and it shows great potential in aging-related diseases such as neurodegenerative disease and cancer in humans.45

Agents regulating the NADH/NAD+ ratio

Dysfunction of mitochondrial oxidative phosphorylation causes an increase in the NADH/NAD+ ratio, which impairs the activity of glyceraldehyde-3-phosphate dehydrogenase (GAPDH) in the glycolysis pathway. Treatment with pyruvate (21, Fig. 6) is expected to decrease this ratio and restore glycolysis. Therefore, it is a promising approach for treating mitochondrial diseases.46,47 A phase II clinical trial of sodium pyruvate for lactic acidosis associated with mitochondrial disorders has been conducted. AMP-activated protein kinase (AMPK) is crucial in regulating energy homeostasis.
AMPK regulates energy expenditure by modulating the NAD+-dependent type III deacetylase SIRT1.48 5-Aminoimidazole-4-carboxamide ribotide (22, AICAR, Fig. 6) is a pharmacological activator of AMPK. AICAR can improve growth and ATP content while decreasing ROS production, and it also increases mitochondrial biogenesis without altering the mitochondrial membrane potential.49

Nicotinamide riboside (23, NR, Fig. 6), a vitamin B3 and NAD+ precursor, was previously reported to increase NAD+ levels in mice and induce mitochondrial biogenesis.50 In a mitochondrial myopathy mouse model, NR robustly induced mitochondrial mass and function, cured structural abnormalities of mitochondria, and delayed the accumulation of mitochondrial DNA mutations, suggesting a promising treatment strategy for mitochondrial myopathy.51

Nicotinamide mononucleotide (24, NMN, Fig. 6), an NAD+ precursor, increased lifespan by normalizing the NAD+ redox imbalance and lowering HIF1α accumulation in Ndufs4-KO skeletal muscle without affecting the brain, and it attenuated lactic acidosis in Ndufs4-KO mice.52

Acipimox (25, Fig. 6), a nicotinic acid analog used to treat hyperlipidemia, has been shown to have a direct effect on NAD+ levels, mitonuclear protein imbalance, and mitochondrial oxidative capacity, demonstrating that acipimox can directly affect skeletal muscle mitochondrial function in humans.53 A randomized, double-blinded, placebo-controlled, adaptive-design trial of the efficacy of acipimox in adult patients with mitochondrial myopathy is now being conducted.54

Agents restoring nitric oxide production

Nitric oxide (NO) exerts various physiological functions in the central nervous system. There is growing evidence that NO deficiency in mitochondrial disease can complicate disease pathogenesis, as in MELAS (mitochondrial encephalomyopathy, lactic acidosis, and stroke-like episodes).55 NO deficiency can potentially play a significant role in the mechanism of the stroke-like episodes observed in MELAS syndrome.56 The amino acids arginine (26, Fig. 7) and citrulline (27, Fig. 7) both potentially act as NO precursors. Their administration may increase NO availability and hence can have therapeutic effects on stroke-like episodes in MELAS syndrome.55 Currently, a clinical study is being conducted to assess whether giving arginine or citrulline increases the formation of nitric oxide in individuals with MELAS. If arginine and/or citrulline are shown to increase the formation of nitric oxide, they could be used to prevent or treat strokes in patients with MELAS syndrome.57

Agents regulating autophagy

Rapamycin (28, Fig. 8) is a mechanistic target of rapamycin kinase (mTOR) inhibitor. It has been demonstrated that inhibition of mTOR improves survival and health in the Ndufs4−/− model of Leigh syndrome, which may offer therapeutic benefits to patients with Leigh syndrome and potentially other mitochondrial disorders.58 However, although it is a promising compound, the clinical use of rapamycin is limited by concerns about the side effects associated with the drug.

Urolithin A (UA, 29, Fig. 8), a first-in-class natural food metabolite, has been shown to stimulate mitophagy and improve muscle health in old animals and pre-clinical aging models.59 The results of a first-in-human clinical trial show that supplementation with UA as a nutritional intervention is safe, assists in managing the declining mitochondrial function accompanying aging, and promotes healthy muscle function throughout life.
Agents as cardiolipin protectors Cardiolipin is a unique phospholipid expressed exclusively on the inner mitochondrial membrane. It plays an essential structural role in cristae formation and in the organization of the respiratory complexes into supercomplexes for optimal oxidative phosphorylation. The interaction between cardiolipin and cytochrome c determines whether cytochrome c acts as an electron carrier or a peroxidase. 61 Cardiolipin has been identified as a target for drug development associated with energy deficiency. Elamipretide (Bendavia, MTP-131, SS-31, 30, Fig. 9) is an aromatic-cationic, cell-permeable tetrapeptide in a new class of mitochondria-targeted drugs. SS-31 binds selectively to cardiolipin via electrostatic and hydrophobic interactions. By interacting with cardiolipin, SS-31 prevents cardiolipin from converting cytochrome c into a peroxidase while protecting its electron-carrying function. 61 Treatment of explanted human hearts with SS-31 improves cardiac mitochondrial function. 62 Agents as an energy buffer Creatine (31, Fig. 10) is a naturally occurring bioenergetic compound. Creatine stabilizes the mitochondrial permeability transition pore and is vital in mitochondrial ATP production. Creatine also plays an essential role in shuttling Pi from the mitochondria into the cytosol to form phosphocreatine (32, Fig. 10), helping to maintain cellular bioenergetics. 63 A neuroprotective effect of oral creatine has been found in several animal models of neurodegenerative diseases. Creatine monohydrate supplementation can improve exercise capacity in some individuals with mitochondrial myopathies. 64 Taurine (33, Fig. 10) is a naturally occurring sulfur-containing amino acid found abundantly in excitable tissues, such as the heart, brain, retina, and skeletal muscle. It plays a crucial role in developmental processes, such as brain development, cardiac muscle regulation, and inflammation. Taurine shows protective activity in different neurodegenerative disease models, such as Parkinson's, Alzheimer's, and Huntington's diseases. [65][66][67] It is now in a clinical study for the treatment of mitochondrial encephalomyopathy. Conclusion Overall, there have been several interesting new approaches in the potential development of new drugs to treat mitochondrial diseases. Proteolysis targeting chimera (PROTAC) technology is a novel strategy for developing new drugs with small molecules that can make protein degradation more efficient and specific, thus creating new opportunities in drug development. A PROTAC is a small molecule that simultaneously binds a disease-associated protein and a ubiquitin-ligase complex, using the ubiquitin-proteasome system to eliminate mutated, denatured, and harmful cell proteins. It can effectively target and degrade proteins, including proteins that are difficult to identify and bind. Therefore, it has significant implications for drug development and for treating mitochondrial diseases. In summary, the current challenges and future goals for treating mitochondrial disease revolve around improving diagnosis, developing targeted therapies, discovering biomarkers, modifying disease progression, and exploring innovative approaches like mitochondrial replacement techniques. Continued research efforts and collaboration among scientists, clinicians, and patients are essential to overcome these challenges and achieve these goals. Encouragingly, there has been remarkable progress in mitochondrial disease over the past decade.
The increasing number of clinical trials in mitochondrial disorders aims for more specific and effective therapies. More importantly, the unmet clinical need for treating patients with mitochondrial diseases has stimulated academic and commercial interest in developing new treatments, as has an awareness of mitochondrial involvement in more common diseases. In this review, we discuss the bioactive compounds for treating mitochondrial disorders, with a focus on the different pathways involved. The efforts in this field to provide a more targeted approach are encouraging. Author contributions Liying Meng wrote the manuscript and drew the pictures. Guanzhao Wu is fully responsible for the study design, research fields, drafting, and finalizing of the paper. Conflicts of interest The authors declare no conflicts of interest.
Evidence for the medicinal value of Squama Manitis (pangolin scale): A systematic review Background Squama Manitis (pangolin scale) has been used in traditional Chinese medicine for thousands of years. However, its efficacy has not been systematically reviewed. This review aims to fill the gap. Methods We searched six electronic databases including PubMed, Embase, Cochrane Library, China National Knowledge Infrastructure Database (CNKI), WanFang Database and SinoMed from inception to May 1, 2020. Search terms included “pangolin”, “Squama Manitis”, “Manis crassicaudata”, “Manis javanica”, “Malayan pangolins”, “Manis pentadactyla”, “Ling Li”, “Chuan Shan Jia”, “Shan Jia”, “Pao Jia Zhu”, “Jia Pian” and “Pao Shan Jia”. The Cochrane Risk of Bias (RoB) assessment tool and the Newcastle-Ottawa Scale (NOS) were used to evaluate the risk of bias of the included randomized controlled trials (RCTs) and case control studies (CCSs). Results After screening, 15 articles that met the inclusion criteria were finally included. There were 4 randomized controlled trials, 1 case control study, 3 case series and 7 case reports. A total of 15 different diseases were reported in these studies, thus the data could not be merged to generate powerful results. Two RCTs suggested that Squama Manitis combined with herbal decoction or antibiotics could bring additional benefit for treating postpartum hypogalactia and mesenteric lymphadenitis. However, this result was not reliable due to low methodological quality and irrational outcomes. The other two RCTs generated negative results. None of the non-RCTs added any valuable evidence on the efficacy of Squama Manitis because of small samples, incomplete records, and non-standardized outcome measurement. In general, currently available evidence cannot support the clinical use of Squama Manitis. Conclusion There is no reliable evidence that Squama Manitis has special medicinal value. The removal of Squama Manitis from the Pharmacopoeia is rational. Introduction Coronavirus Disease 2019 (COVID-19), caused by Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), has become a global pandemic. 1 Its origin remains unclear. Several studies suggested that bats might be the natural host of SARS-CoV-2, 2,3 while snakes and pangolins were potential intermediate hosts of SARS-CoV-2 transmission to humans. [4][5][6][7][8] Various viruses hosted by wild animals can threaten human health and safety. Squama Manitis (pangolin scale) is a medicinal material in traditional Chinese medicine (TCM), commonly used to promote lactation in women and to reduce swelling. 9 Due to the exaggeration of its medicinal value, pangolins have been driven toward extinction by excessive killing. 10 In 2019, the pangolin was listed in the International Union for Conservation of Nature (IUCN) Red List of Threatened Species. To enhance the protection of the pangolin, China officially proclaimed it a national-level protected wild animal on June 3, 2020. 11 Squama Manitis has been removed from the Chinese Pharmacopoeia (2020 edition). To date, the clinical value of Squama Manitis has not been well assessed. Does Squama Manitis have a special clinical effect? Do researchers need to find substitutes? Answers to these questions should be based on clinical evidence. This review aimed to summarize all the clinical studies to generate reliable evidence for decision-making on the medicinal value of Squama Manitis.
Eligibility criteria We included all types of clinical studies in this review, consisting of randomized controlled trials (RCTs), case control studies (CCSs), case series and case reports. No restriction was applied on age, gender or ethnicity. For controlled studies, trials that applied Squama Manitis at any dosage in the intervention groups were included, either alone or in combination with herbal or western medicine. Routine treatment used in both the intervention and control groups was the same. As for case series and case reports, we only included studies which used Squama Manitis alone. There were no restrictions on the outcomes of included studies. Data selection and data extraction Two reviewers (KW, NL) screened all included studies by titles and abstracts according to the eligibility criteria. Any disagreement in the study selection was resolved by consulting a third reviewer (WZ). Eligible studies were reviewed by two researchers (KW, NL) and data were extracted independently. Data were extracted using a pre-designed Excel sheet including authors, years of publication, interventions, diseases and outcomes. After cross-checking, any inconsistency in data extraction was resolved by discussion or by consulting a third researcher (WZ). Assessment of risk of bias The quality of RCTs was assessed by the Cochrane Risk of Bias (RoB) assessment tool, 12 which included six aspects: (1) random sequence generation, (2) allocation concealment, (3) blinding of participants and personnel, (4) incomplete outcome data, (5) selective reporting, (6) other bias. The assessment results were presented as "low risk", "high risk" or "unclear risk". The quality of CCSs was assessed by the Newcastle-Ottawa Scale (NOS). 13 The NOS contained three components with eight questions: (1) selection of the groups of the study, (2) comparability, (3) assessment of the outcome. A scoring system of 9 points was used to assess the quality of CCSs; higher scores indicate better quality. Two researchers carried out the quality assessment independently. Disagreement was resolved by consensus or by consulting a third researcher. Data analysis Review Manager (RevMan) version 5.3 was used to perform statistical analysis. Risk ratio (RR) with a 95% confidence interval (CI) was adopted for dichotomous outcomes. Mean difference (MD) with a 95% CI was used for continuous outcomes. Cochran's Q and I² were used to explore the heterogeneity among studies. Due to the different diseases, interventions and outcomes reported in the included studies, no meta-analysis was performed to merge data. Results A total of 4640 records were identified as potentially eligible for inclusion. After excluding duplicates, 3026 studies were screened by their titles and abstracts. According to the eligibility criteria, 151 articles were preliminarily selected for further assessment and 136 of them were excluded as review articles, animal experiments, pharmaceutical research or theoretical analyses. Finally, 15 articles were included for analysis (Fig. 1). Risk of bias The risk of bias assessment for each included RCT is presented in Figs. 2 and 3. Among the included RCTs, two were assessed as having a high risk of selection bias due to their incorrect utilization of randomization methods. 14,15 The other two RCTs did not clearly report how the random sequence was generated.
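As an aside on the data analysis just described: the risk ratio (RR) with its 95% CI, the effect measure used for all dichotomous outcomes below, is conventionally computed on the log scale. The following is a minimal sketch only, and the 2 × 2 counts are hypothetical rather than taken from any included study.

```python
# Risk ratio with 95% CI via the standard log-RR method (hypothetical counts).
import math

def risk_ratio(events_tx, n_tx, events_ctrl, n_ctrl):
    """Return (RR, lower, upper) for a dichotomous outcome."""
    rr = (events_tx / n_tx) / (events_ctrl / n_ctrl)
    # Standard error of ln(RR) for two independent binomial proportions
    se = math.sqrt(1 / events_tx - 1 / n_tx + 1 / events_ctrl - 1 / n_ctrl)
    lower = math.exp(math.log(rr) - 1.96 * se)
    upper = math.exp(math.log(rr) + 1.96 * se)
    return rr, lower, upper

# Hypothetical example: 110/123 responders vs 89/120 responders
print(risk_ratio(110, 123, 89, 120))
```

Run on real trial counts, this yields values in the same form as the RRs and CIs reported for the four RCTs below.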
One RCT of Squama Manitis in the treatment of postpartum hypogalactia was assessed as having a high risk of selection bias due to its incorrect usage of allocation concealment methods, 14 and the remaining RCTs had unclear allocation concealment. There was also a lack of clear reporting on blinding of participants and personnel and on blinding of outcome assessment, resulting in unclear assessments of performance and detection bias. High attrition bias was assessed for one RCT using Squama Manitis in the treatment of mesenteric lymphadenitis in children, with a loss of 4 patients in the control group during follow-up. 17 The other 3 RCTs were assessed as having a low risk of attrition bias. Reporting bias for all included RCTs was assessed as unclear due to the lack of pre-registered study protocols. The quality of the one CCS was assessed by the NOS. 18 Though the study reported the selection of controls (both groups of patients were from the same hospital and the same department), the remaining NOS items lacked clarity; hence, the study was scored as 1 point. Clinical efficacy The results of the 4 RCTs were as follows. Liang (2018) tested the efficacy rate of Squama Manitis for treating postpartum hypogalactia. 14 They randomly allocated 243 patients to treatment with either Squama Manitis powder combined with herbal decoction (n = 123) or herbal decoction alone (n = 120). The results showed that Squama Manitis powder combined with herbal decoction had a higher efficacy rate than herbal decoction alone (RR 1.21, 95% CI 1.11-1.32, P < 0.00001). Jiang (2007) tested the efficacy rate of Squama Manitis for treating breast hyperplasia. 15 They randomly allocated 100 patients into Squama Manitis powder combined with herbal decoction (n = 50) and herbal decoction alone (n = 50). The result showed no statistically significant difference in efficacy rate between the two groups (RR 1.07, 95% CI 0.96-1.19, P = 0.24). Zhang (2011) tested the efficacy rate of Squama Manitis for treating acute mastitis. 16 They randomly allocated 96 patients into Squama Manitis powder combined with Cefuroxime Sodium for Injection (n = 48) and Cefuroxime Sodium for Injection alone (n = 48) (Table 1 presents the characteristics of the included RCTs). The result was inconclusive because there was no statistically significant difference between the two groups (RR 1.13, 95% CI 0.95-1.35, P = 0.16). Zhu (2013) tested the efficacy rate of Squama Manitis for treating mesenteric lymphadenitis. 17 They randomly allocated 86 patients into Squama Manitis powder combined with Ceftriaxone Sodium for Injection (n = 43) and Ceftriaxone Sodium for Injection alone (n = 43). The result showed that the effect of Squama Manitis powder combined with Ceftriaxone Sodium for Injection was better than that of Ceftriaxone Sodium for Injection alone (RR 1.51, 95% CI 1.09-2.09, P = 0.01). The CCS recruited 13 patients in the Squama Manitis group and 40 patients in the control group. 18 Leukocyte counts in both groups before and after treatment were reported. The total leukocyte count in the treatment group was maintained at about 6 × 10⁹/L, and that in the control group at about 5 × 10⁹/L. There were 3 case series 19-21 about prostatic hyperplasia, paronychia and hyperlipidemia, with sample sizes of 42, 100 and 62, respectively. According to the results, the efficacy rate of Squama Manitis powder for prostatic hyperplasia was 95%. 19 Squama Manitis powder (topical usage) combined with conventional disinfection had a 100% efficacy rate for treating paronychia. 20
The efficacy rates of Squama Manitis powder for lowering cholesterol and triglycerides were 74% and 65.5%, respectively. 21 There were 7 case reports, [14][15][16][17][18][19][20] which covered chronic leg ulcer, periarteritis nodosa, verruca plana, neurodermatitis, parkinsonian disorders, glomerulonephritis and periarthritis of the shoulder. The results of these case reports suggested that Squama Manitis had an effect on the above diseases (in the periarthritis case, for example, shoulder pain was significantly alleviated after 1 week and healed after 1 month, without recurrence for 3 years). However, these case reports and case series could not provide any reliable evidence for the efficacy of Squama Manitis. Discussion This review evaluated the current clinical evidence on Squama Manitis. There were 4 RCTs, 1 CCS, 3 case series and 7 case reports included in this review. The results of the RCTs suggested that Squama Manitis combined with herbal decoction or antibiotics could bring additional benefit for treating postpartum hypogalactia and mesenteric lymphadenitis, while for breast hyperplasia and acute mastitis, the results were inconclusive. Results from the non-RCTs suggested that Squama Manitis might be effective in treating leukopenia, prostatic hyperplasia, paronychia, hyperlipidemia, chronic leg ulcer, periarteritis nodosa, verruca plana, neurodermatitis, parkinsonian disorders, glomerulonephritis and periarthritis of the shoulder. Most of the diseases mentioned above have conventional effective treatments, so the use of Squama Manitis is deemed unnecessary. Furthermore, results from the above studies should be interpreted with caution due to their low methodological quality, which would lead to significant biases. The 4 RCTs included in this review had small sample sizes and focused on different diseases. As a consequence, no meta-analysis could be performed to generate powerful results. Moreover, there was a lack of rational and valuable outcome measures in the included studies, and the reporting quality of the included case reports was low due to incompleteness in the records of diagnosis and treatment and a lack of standardization in the outcome measurement for efficacy. Furthermore, case reports without controls could not test the efficacy of Squama Manitis. In general, none of the non-RCTs added any valuable evidence on the efficacy of Squama Manitis. There were also several limitations in this review. A protocol was not registered or published before conducting this systematic review. Although a comprehensive literature search was performed in commonly used databases, grey literature may have been omitted due to the lack of alternative search methods. Clinical studies using Squama Manitis alone or combined with other drugs were reviewed, while clinical studies using TCM prescriptions containing Squama Manitis were not included in this review. Hence, it was impossible to evaluate the efficacy of Squama Manitis within TCM prescriptions in this review. Based on the above analysis, there was no reliable evidence for the clinical value of Squama Manitis. Furthermore, consumption of Squama Manitis might carry a risk of anaphylaxis. Several studies reported that the use of Squama Manitis might trigger allergic reactions such as dizziness, chills, shortness of breath, sweating, pruritus, skin redness and rashes. [29][30][31] The pangolin is a wild animal on the verge of extinction. There is no reason to use Squama Manitis in clinical practice. Wild pangolins may also carry pathogens that can cause infections in human beings. 6
Hence, it is highly recommended to restrict the use of Squama Manitis as medicine or food in order to protect endangered wildlife species and human safety. 32 In the recent edition of the Chinese Pharmacopoeia, Squama Manitis has been removed. There are still several Chinese patent medicines containing Squama Manitis listed in the Chinese Pharmacopoeia (2015 edition), such as compound Tongbi capsule, Guilingji, Huixiangjuhe pill, Kangsuan (antithrombotic) Zaizao pill, Tongru granule and Jinpu capsule, which are used clinically. Therefore, it is necessary to carry out studies on substitutes for these medicines. Several studies using thin-layer, column and spectral chromatography analyses have shown similarities between the composition of pig hoof nails and Squama Manitis with regard to trace elements and inorganic contents. [33][34][35][36][37][38] In conclusion, there is no reliable evidence for the clinical value of Squama Manitis. It is rational to prohibit the use of Squama Manitis for treatment or healthcare purposes.
Inter-Specific Variation in the Potential for Upland Rush Management Advocated by Agri-Environment Schemes to Increase Breeding Wader Densities Encroachment of rush Juncus spp. in the United Kingdom uplands poses a threat to declining wader populations due to taller, denser swards that can limit foraging and breeding habitat quality for some species. Rush management via cutting, implemented through agri-environment schemes (AESs), could thus increase wader abundance, but there is insufficient assessment and understanding of how rush management influences upland waders. Across two upland regions of England [South West Peak (SWP) and Geltsdale nature reserve, Cumbria], we surveyed waders over four visits in fields where rush was managed according to AES prescriptions (treatment; n = 21) and fields without rush management that were otherwise ecologically similar (control; n = 22) to assess how the densities of breeding wader pairs respond to rush management in the short-term. We find evidence for regional variation in the response of waders to rush management, with densities of Common Snipe Gallinago gallinago significantly higher in treatment than control fields in the SWP, but not Geltsdale. There were no statistically significant responses to treatment on densities of Eurasian Curlew Numenius arquata or Northern Lapwing Vanellus vanellus. The 95% confidence intervals for the treatment parameter estimates suggest that this may be due to limited statistical power in the case of Lapwing. For Curlew, however, any potential increases in densities are negligible. There was no evidence that variation in rush cover, which ranged from 10 to 70%, influenced densities of any of our three focal species. Our results suggest that rush management through AES prescriptions delivered in isolation from other interventions may not lead to general increases in breeding wader densities in the short-term, but benefits may arise in some situations due to regional and inter-specific variation in effectiveness. Supporting rush management with interventions that improve soil conditions, and thus food availability, or reduce predation pressure may enable AES rush management to generate benefits. Additional research is required to maximise the potential benefits of rush management for each species through the development of prescriptions that tailor to individual species' optimum sward structure. INTRODUCTION Waders are one of several taxonomic groups of farmland breeding birds that have undergone severe Europe-wide declines in recent decades (PECBMS, 2020), with species threatened at both the national level (e.g., Common Snipe Gallinago gallinago; Amber-listed, United Kingdom; Eaton et al., 2015) and international level (e.g., Eurasian Curlew Numenius arquata and Northern Lapwing Vanellus vanellus; globally Near Threatened; IUCN, 2020). In the United Kingdom, wader population declines are occurring throughout the lowlands and uplands, with the latter containing the majority of remaining grassland-breeding wader populations (Balmer et al., 2013; Siriwardena et al., 2017). Poor nest and chick survival, primarily attributable to predation and habitat degradation, is thought to be the major driver of these declines (Roodbergen et al., 2012; Franks et al., 2018; Roos et al., 2018).
Land use change including agricultural intensification, and to a lesser extent land abandonment, is driving degradation of wader habitat in United Kingdom upland regions (Baines, 1988; Fuller and Gough, 1999; Amar et al., 2011; Silcock et al., 2012; Douglas et al., 2017; Johnstone et al., 2017). To prevent further degradation of grasslands, agri-environment schemes (AESs) encourage farming practices that benefit breeding waders through improved habitat quality (O'Brien and Wilson, 2011; Smart et al., 2013, 2014; Franks et al., 2018). Such AES prescriptions have exhibited mixed success; population trends of some wader species have been reversed at the local level, yet nationally wader population declines continue (O'Brien and Wilson, 2011; Smart et al., 2013, 2014; Siriwardena et al., 2017; Franks et al., 2018). One mechanism used in these AES prescriptions is the manipulation of vegetation structure (e.g., Natural England, 2018) with the aim of creating a mosaic of short and tall vegetation that is beneficial for foraging and breeding waders. Due to the substantial inter-specific variation in wader breeding habitat requirements, creation of a mosaic aims to simultaneously provide suitable habitat for several wader species. Lapwing, for example, favour short swards with a few tussocks (Baines, 1988; Milsom et al., 2000; Durant et al., 2008), whereas Curlew and Snipe may be more tolerant of a range of sward structures with a greater preference for taller vegetation (Baines, 1988; Pearce-Higgins and Grant, 2006; Hoodless et al., 2007; Durant et al., 2008). A recent threat to the maintenance of structurally heterogeneous grasslands in the United Kingdom uplands is encroachment by Juncus spp. (hereafter termed "rush"), with rush frequency approximately doubling between 2005 and 2018; multiple factors relating to agricultural grassland management and changes in climate have likely driven this encroachment (Ashby et al., 2020). Rush encroachment could significantly contribute to wader population declines by creating expanses of tall, dense, rush-dominated swards. It could consequently restrict physical access to the soil for foraging (Devereux et al., 2004; Robson and Allcorn, 2006), and reduce waders' ability to detect predators and thus their willingness to breed and forage in such locations (Whittingham and Evans, 2004; Robson and Allcorn, 2006). However, the taller, denser vegetation generated by increased rush cover could provide nests and chicks with greater concealment from predators (Valkama et al., 1998; Kelly et al., 2021). Rush management prescriptions within AES have been developed to address the adverse impacts of rush encroachment (Natural England, 2018). These prescriptions, which typically comprise a long-term aim to reduce the extent of dense rush swards within a field to <30%, involve mowing, aftermath grazing, and occasionally herbicide application (precise prescriptions deviate slightly between United Kingdom countries; Natural England, 2012; Welsh Government, 2017; Shellswell and Humpidge, 2018). In the short-term, rush management opens up the sward and reduces vegetation height and density (Kelly et al., 2021). There is, however, little published data supporting the assumed beneficial impacts of these changes in sward structure on breeding waders.
Whilst previous studies suggest that targeted rush management, or cutting of rank vegetation including rush, can increase wader abundance, these studies do not experimentally compare areas with and without rush management and in some cases rush management is combined with additional interventions (Holton and Allcorn, 2006; Robson and Allcorn, 2006; Douglas et al., 2017). Consequently, there is insufficient assessment and understanding of how rush management influences upland waders, despite the importance of evaluating the effectiveness of AES prescriptions (Kleijn and Sutherland, 2003). Here, we assess how the number of breeding wader pairs responds to rush management in the short-term by surveying waders in treatment fields (where rush is managed according to AES prescriptions) and control fields (without rush management) across two upland regions of England. We first test whether field size and environmental conditions that could influence wader abundance [rush cover, Holton and Allcorn (2006), Robson and Allcorn (2006); soil conditions (pH, moisture, and penetration resistance), Smart et al. (2006, 2008), Hoodless et al. (2007), McCallum et al. (2016); and woodland distance, Wilson et al. (2014)] are similar between control and treatment fields. We then test whether the density of breeding wader pairs differs between treatment and control fields whilst accounting for environmental conditions and region, and test if the effects of rush management vary between regions and with the amount of rush cover. Study Areas This study was conducted during the wader breeding season (April-June 2019) in the south-west of the Peak District National Park (South West Peak, hereafter "SWP"), and Geltsdale nature reserve in Cumbria (hereafter "Geltsdale"; Figure 1). Both regions support important breeding wader populations including Curlew, Lapwing, and Snipe (Carr, 2009; Balmer et al., 2013; Douglas et al., 2017). Temperatures during the winter period preceding our surveys were not below long-term averages and thus the densities of waders at our focal sites are unlikely to have been unduly influenced by cold weather-related mortality. Survey fields within the two regions were characteristic of United Kingdom upland farmed landscapes and were mostly semi-improved pasture with a smaller number of unimproved pasture, hay meadow and "white moor" fields (rough grassland with rush and Molinia). The dominant rush species was Juncus effusus with smaller amounts of other species present at some sites, particularly Juncus acutiflorus and Juncus conglomeratus at Geltsdale. The study design is described in full by Kelly et al. (2021). Field selection was performed without prior knowledge of wader use of the selected fields, and was based on fields meeting our criteria on rush management and spatial configuration and on obtaining permission from landowners to conduct the research. Treatment fields were selected if fields had received rush management between autumn 2018 and spring 2019 (fields may also have received management in previous years) following the EK4 and EL4 rush cutting prescriptions in Entry Level Stewardship (Natural England, 2012). These are standard AES prescriptions that are applicable to any field with at least one-third rush cover, including those on nature reserves. Rush management in treatment fields involved cutting one-third of the rush present once or twice annually on rotation (Supplementary Table 1 provides more information on the AES prescriptions).
Control fields were selected if fields had not undergone rush management in the previous 2 years, had a similar extent of rush cover to treatment fields (mean rush cover ± standard error, treatment = 46.70 ± 3.67%, control = 40.00 ± 4.21%; Mann-Whitney test: W = 186, P = 0.255) and were in close proximity to treatment fields (mean distance = 90 ± 34 m standard error; Figure 1 and Supplementary Table 2). Information on field rush cover was provided by the landowners prior to selection of our survey fields. For both treatment and control fields, we only selected those that had greater than one-third rush cover so that all survey fields, regardless of treatment, qualified for the AES rush management prescriptions. Control fields were not deliberately selected to contain different levels of rush cover than treatment fields and thus both control and treatment fields constituted a representative sample of the rush cover in fields with and without AES rush management. Rush cover in our survey fields was subsequently assessed during fieldwork and varied across fields from 10 to 70% (three control fields contained less than 30% rush cover). In the SWP, there were 12 treatment and 13 control fields (one treatment field that had initially been selected was excluded as insufficient rush cutting had been conducted) and at Geltsdale, there were 9 treatment and 9 control fields, giving a total sample size of 21 treatment and 22 control fields (Figure 1). Wader Surveys We estimated the number of breeding wader pairs using a modified version of the standard field-by-field survey method of O'Brien and Smith (1992). Four visits were made to each survey field -two visits in the early breeding season (SWP: 16th-28th April; Geltsdale: 5th-18th May) and two visits in the late breeding season (SWP: 28th May-18th June; Geltsdale: 21st-25th June). Successive visits were on average 7 days apart in the early breeding season and 6 days apart in the late breeding season. All visits were conducted by one researcher to ensure consistency of survey estimates; thus, both regions could not be surveyed concurrently. As Geltsdale is at a higher latitude than the SWP (Figure 1), the wader breeding season commences slightly later in the former region. Survey fields in the SWP were thus visited first in both the early and late breeding seasons. Moreover, surveys were not undertaken during the first hour after sunrise or the last hour before sunset, or in heavy rain, fog (<250 m visibility) or wind greater than Beaufort Force 5. Within each field, observations were made along a survey route that started 50 m from the field edge and took the observer to within 50 m of every part of the field. All individual waders were marked on a field map with symbols to note behaviour that indicates breeding status. Surveys recorded Lapwing, Curlew, Snipe, and Common Redshank Tringa totanus, but the latter was only detected in three fields at Geltsdale (two control and one treatment) and is not considered further. An index of the number of breeding pairs of each species per field was calculated using standard species-specific criteria. For all species, groups of more than four individuals were excluded as these may represent non-breeding flocks (following Sim et al., 2005; Douglas et al., 2021).
For Lapwing, we divided the maximum number of individuals across the two early breeding season visits by two (detectability of Lapwing is high and this approach follows O'Brien and Smith, 1992; Bolton et al., 2011; O'Brien and Wilson, 2011; Smart et al., 2014). For Curlew and Snipe, two birds together, or a single bird (detectability of these species is expected to be lower than that of Lapwing), either in a field or associating with the field (displaying or mobbing birds above the field), were treated as a pair (following O'Brien and Smith, 1992; Henderson et al., 2002; Hoodless et al., 2006; Pearce-Higgins and Grant, 2006). The number of pairs was then estimated as the maximum per-visit number across all four visits (following Green, 1985; Smart et al., 2008; O'Brien and Wilson, 2011; Buchanan et al., 2017; Douglas et al., 2017). The restricted date range of visits used for calculating Lapwing pairs, compared to Snipe and Curlew, follows standard protocols (O'Brien and Smith, 1992; Bolton et al., 2011; O'Brien and Wilson, 2011; Smart et al., 2014). Estimates of breeding Snipe pairs from diurnal records of Snipe heard (drumming or chipping) or seen, rather than crepuscular surveys of Snipe heard drumming, are likely to be reliable in regions such as our survey locations where Snipe do not occur at very high densities, as suggested by Hoodless et al. (2006); a simplified sketch of this pair index is given below. Environmental Variables Rush cover was estimated once per field to the nearest 10% from multiple vantage points during the early breeding season when more accurate estimates can be obtained due to lower vegetation height [note that whilst rush grows tall, it typically spreads relatively slowly in horizontal extent (Ashby et al., 2020) and thus any spread in extent of rush cover within a field is negligible during the survey period]. Field size (ha) was measured in ArcMap™ (v10.4.1; Esri, Redlands, CA, United States) using 1:25,000 Ordnance Survey maps (Ordnance Survey, 2019). Straight-line distance (km) from the centroid of each survey field to the nearest block of woodland (defined as areas with >20% tree cover, from Land Cover Map 2015; Rowland et al., 2017) was measured using the "Near (Analysis)" tool as woodland proximity can influence breeding wader distributions and abundance (Wilson et al., 2014). Soil conditions (penetration resistance, moisture content, and pH) were measured during one early, and one late, breeding season visit to account for potential seasonal variation. Soil penetration resistance (kgF) and soil moisture content (%) were recorded at three locations within each field (field centre and two randomly selected locations toward opposite ends of the field) and at two separate points (approximately 15 cm apart) at each of these three locations -giving six measurements per field on each of the two visits. Soil penetration resistance was measured, following Green (1988), using a soil penetrometer with a 5 mm diameter metal pressure rod (20 kg Pesola macro-line spring scale and pressure set, NHBS, Devon, England). Soil moisture content (%) was measured using a soil moisture sensor and readout meter (SM150T soil moisture sensor and HH150 readout meter, Delta-T Devices, Cambridge, England). This sensor had a maximum measurement threshold of 85% and when this threshold was exceeded, we used a value of 92.5% (the mid-point between this threshold and 100%).
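Returning briefly to the breeding-pair index referenced above: the species-specific rules can be expressed as a short function. This is a simplified sketch, not the study's code (the analyses were run in R); in particular, reading the "a duo or a lone bird counts as one pair" rule as ceil(count/2) per visit is our own illustrative interpretation, and the per-visit counts are assumed to have groups of more than four birds already excluded.

```python
# Simplified breeding-pair index per field (illustrative interpretation only).
import math

def pair_index(species, visits):
    """visits = per-visit individual counts [early1, early2, late1, late2]."""
    if species == "lapwing":
        # Maximum count across the two early-season visits, divided by two
        return max(visits[:2]) / 2
    # Curlew/Snipe: a duo or a single bird counts as one pair per visit;
    # the index is the maximum per-visit pair count across all four visits
    return max(math.ceil(count / 2) for count in visits)

print(pair_index("lapwing", [4, 2, 0, 1]))  # -> 2.0
print(pair_index("snipe", [1, 3, 2, 0]))    # -> 2
```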
Soil pH was recorded, using a direct soil pH meter (HI-12922 HALO wireless soil pH electrode, Hanna Instruments, Woonsocket, RI, United States), at one of the points at each of the three locations -giving three measurements per field on each visit and six measurements per field overall. Mean soil penetration resistance, soil moisture, and soil pH values were calculated per field for the early breeding season visit (for use in models of the number of Lapwing pairs as these are estimated using data from the early breeding season only), and across the overall breeding season (for use in Curlew and Snipe models as these use data from all site visits). We note, however, that early breeding season and overall breeding season soil conditions were highly correlated (soil penetration resistance: r = 0.958; soil moisture: r = 0.931; soil pH: r = 0.919; P < 0.001 and n = 43 in all cases). Statistical Analyses All analyses were conducted in R version 3.6.3 (R Core Team, 2020). Environmental Conditions in Treatment and Control Fields We tested whether treatment and control fields had similar environmental conditions. We fitted generalised linear models (GLMs) with a Gaussian error structure and identity link that modelled each environmental variable [rush cover (%), soil penetration resistance (kgF), soil pH, soil moisture (%), woodland distance (km), and field size (ha; natural logarithm transformed prior to inclusion in the models to remove the influence of outliers due to its skewed distribution)] as a function of treatment (treatment or control field) whilst accounting for region (SWP or Geltsdale). Wader Responses to Rush Management We modelled the density of breeding waders for each species by constructing GLMs with a response variable of the estimated number of breeding pairs per field with a Poisson error structure (log link) and field size (ha; natural logarithm transformed) as an offset in all models. This offset variable converts wader pairs into densities and ensures that field size is accounted for within the models. McFadden's pseudo-R² was calculated to represent model fit. We first ran preliminary checks for simple non-linear effects of our environmental variables [rush cover (%), soil penetration resistance (kgF), soil moisture (%), soil pH, and woodland distance (km)] by modelling each species' density as a function of the selected environmental variable linear term (linear models), and linear and quadratic terms (quadratic models), whilst including region as a fixed factor and field size (ha; natural logarithm transformed) as an offset. There was no strong evidence for non-linear associations, defined as the Akaike information criterion value corrected for small sample sizes (AICc) of a quadratic model being two points or more lower than that of a linear model (Supplementary Table 3), and all subsequent modelling thus used only linear terms. Following these preliminary checks, we followed Whittingham et al. (2006) and constructed a full model of the main effects to test the prediction that rush management increased wader density, i.e., that there were significantly higher densities in treatment than control fields, whilst accounting for other environmental variables (Supplementary Table 4). For each species, we modelled estimated breeding pairs as a function of treatment (treatment or control field), region (SWP or Geltsdale), rush cover, soil moisture, soil pH, soil penetration resistance, and woodland distance, with field size (ha; natural logarithm transformed) as an offset.
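To make the model specification concrete, here is a minimal sketch of a Poisson GLM with a log(field size) offset and a McFadden pseudo-R² calculation. The paper's analyses were run in R; this Python/statsmodels version uses a made-up data frame with placeholder column names, shows only three of the predictors (the soil and woodland covariates enter the formula the same way), and illustrates the structure rather than reproducing the authors' code.

```python
# Poisson GLM of breeding pairs with a log(area) offset (placeholder data).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "pairs":     [0, 1, 2, 0, 1, 3, 0, 2, 1, 0, 2, 1],
    "treatment": ["control", "treatment"] * 6,
    "region":    ["SWP"] * 6 + ["Geltsdale"] * 6,
    "rush":      [40, 50, 30, 60, 45, 55, 35, 70, 50, 40, 60, 30],
    "area_ha":   [3.1, 4.5, 2.8, 5.0, 3.6, 4.2, 2.5, 5.4, 3.9, 3.0, 4.8, 3.3],
})

# The log link is the Poisson default; the offset converts counts to densities
fit = smf.glm("pairs ~ treatment + region + rush", data=df,
              family=sm.families.Poisson(),
              offset=np.log(df["area_ha"])).fit()

# McFadden's pseudo-R2 = 1 - (model log-likelihood / null log-likelihood)
null = smf.glm("pairs ~ 1", data=df, family=sm.families.Poisson(),
               offset=np.log(df["area_ha"])).fit()
print(fit.params, 1 - fit.llf / null.llf)
```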
Early breeding season soil conditions were included in the Lapwing models, and overall breeding season soil conditions were included in the Curlew and Snipe models to match the survey dates that were used to estimate the number of breeding pairs of these species (see wader surveys section above). In addition to the main effects full model, we constructed two extra models that also included (1) the interaction between rush cover and treatment/control field (to test if the effects of rush management varied across different amounts of rush cover), or (2) the interaction between region and treatment/control field (to test if rush management effects differed between regions; which could be the case if the factors regulating population size or the capacity of populations to respond to management vary regionally). For each species, we compared the three full model types (main effects only, main effects plus treatment and rush cover interaction, and main effects plus treatment and region interaction) using each model's AICc and, when interaction terms were present, their statistical significance (using a P < 0.05 threshold) (Supplementary Table 5). For Curlew and Lapwing, the main effects only models had the lowest AICc values and interaction terms were not significant; inference is thus based solely on the main effects only model as there is no evidence that the effects of treatment varied with region or rush cover. For Snipe, the model with the lowest AICc value was that with the treatment and region interaction [ΔAICc relative to the model with the next lowest AICc (main effects only model) = 3.605; interaction term P = 0.009]. The interaction term's parameter estimate did, however, have a very large standard error (SE = 3621.325), demonstrating uncertainty in its effect size, and we thus also report the results from the full model that only contains the main effects (Table 1). Environmental Conditions in Treatment and Control Fields Environmental conditions (rush cover, soil penetration resistance, soil moisture, woodland distance, and field size) did not differ significantly between treatment and control fields, except for soil pH (Supplementary Tables 2, 6). In both the early and overall breeding season metrics, treatment fields had slightly more alkaline soil (mean soil pH ± standard error; early breeding season: treatment = 5.36 ± 0.15, control = 4.87 ± 0.13; overall breeding season: treatment = 5.34 ± 0.15, control = 4.88 ± 0.14). Effects of Rush Management on Breeding Wader Pair Densities Models that took region, rush cover, woodland distance, and soil conditions into account found no evidence that the density of Curlew pairs varied between treatment and control fields [Figures 2A,B and Table 1; profile 95% confidence interval (CI) for treatment parameter estimate = −0.73 to 1.12]. Rush cover, which varied from 10 to 70% (Supplementary Table 4), was not associated with breeding Curlew densities (Table 1). Similarly, there was no evidence that Lapwing pair densities differed between control and treatment fields or were influenced by rush cover (Figures 2C,D and Table 1; profile 95% CI for treatment parameter estimate = −0.76 to 1.91) -although it is important to note that Lapwings were extremely rare in the SWP survey fields, being observed in just a single control field (Supplementary Table 7).
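The Snipe result reported next rests on the AICc-based comparison of the three candidate models described in the Statistical Analyses. As a toy illustration of that small-sample correction (the AIC values here are invented; the real comparison is in Supplementary Table 5):

```python
# Small-sample corrected AIC; all AIC values below are hypothetical.
def aicc(aic, k, n):
    """AICc = AIC + 2k(k + 1) / (n - k - 1), for k parameters and n observations."""
    return aic + (2 * k * (k + 1)) / (n - k - 1)

n_fields = 43
main_effects = aicc(98.4, k=8, n=n_fields)   # main effects only
rush_x_trt   = aicc(99.1, k=9, n=n_fields)   # + rush cover x treatment
region_x_trt = aicc(92.9, k=9, n=n_fields)   # + region x treatment
# A model is preferred when its AICc is two or more points below the others
print(main_effects - region_x_trt)
```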
For Snipe, when treatment was modelled as an interaction with region, there were higher Snipe densities in treatment fields than control fields in the SWP but similar densities in the treatment and control fields at Geltsdale, and no evidence that rush cover was associated with Snipe densities (Figure 2G and Table 1). When the interaction between treatment and region was excluded from the model, there was no evidence that Snipe pair densities differed between treatment and control fields or were influenced by rush cover (Figures 2E,F and Table 1). TABLE 1 | Generalised linear models of breeding wader pair density for Curlew, Lapwing, and Snipe -treatment, region, rush cover, soil conditions (moisture, pH, and penetration resistance), and woodland distance were included as predictor variables, with field size (natural logarithm transformed) included as an offset. Following preliminary tests, Snipe densities were modelled with and without the interaction term between treatment and region; the densities of the other species were modelled with the main effects only. Parameter estimates (β) and profile 95% confidence intervals (CIs; in brackets) are presented, with significant effects highlighted with an asterisk. CIs cannot be generated for the Snipe model with the interaction term and thus standard errors are presented for this model. Geltsdale is the reference level for region. Control is the reference level for treatment. McFadden's pseudo-R² values are presented for each model. *P < 0.05. Effects of Woodland Distance, Soil Conditions, and Region on Breeding Wader Pair Densities There were trends, albeit only marginally non-significant ones, for higher densities of Curlew and Snipe in fields with more alkaline soil conditions (Table 1). Snipe densities were also higher in fields with wetter soils and in the SWP than Geltsdale (Table 1, Figure 2, and Supplementary Figures 1, 2). No other environmental variables influenced breeding wader pair densities (Table 1). DISCUSSION Our results reveal potential regional variation in the short-term response of Snipe breeding densities to AES rush management prescriptions, with benefits arising from rush management in the SWP but not Geltsdale. Certainty around the strength of this effect is, however, limited by the large standard error around the interaction term's parameter estimate. When regional variation is omitted, there is no firm evidence for benefits of rush management. Breeding Lapwing densities also did not appear to be significantly influenced in the short-term by rush management. Yet, positive impacts on breeding densities cannot be excluded for either Snipe or Lapwing as the 95% CIs for the treatment parameter estimate suggest that the largest plausible values are approximately two (Snipe = 2.09; Lapwing = 1.91). For Curlew, we found negligible evidence for positive effects of rush management on breeding densities in the short-term (no significant effect; 95% CIs indicate that the largest plausible treatment parameter estimate is 1.12). Whilst our findings are not indicative of strong, and regionally uniform, increases in Lapwing and Snipe breeding densities arising from rush management, they do suggest that these species are more likely to respond positively than Curlew, especially in the case of Snipe.
This is perhaps logical given (1) the preference of nesting Lapwing for short, open vegetation (Baines, 1988; Milsom et al., 2000; Durant et al., 2008) that is generated by rush cutting (Robson and Allcorn, 2006; Kelly et al., 2021), and (2) that smaller- and medium-bodied species (Snipe and Lapwing, respectively) may be particularly negatively impacted by taller and denser swards that obscure their view, and thus their ability to detect predators (limiting their willingness to forage and nest in such habitats), to a greater extent than larger species such as Curlew (Devereux et al., 2004; Whittingham and Evans, 2004). There is evidence for regional variation in Snipe responses to AES rush management, with Snipe densities being higher in treatment than control fields in the SWP but not Geltsdale. Such situations are expected to arise if there is regional variation in the extent to which habitat availability regulates Snipe populations. Snipe densities were significantly higher in the SWP than Geltsdale. This situation could arise if most of the habitat with structurally suitable vegetation is occupied in the SWP, whilst other regulating factors limit the Geltsdale population and prevent it from occupying all suitable habitat, including that created through AES rush management. Indeed, the general lack of strong evidence for beneficial impacts of rush management could highlight that habitat improvements alone will not enable breeding densities to increase because other factors are regulating population sizes (Smart et al., 2013). This links to the buffer effect, through which there is a higher likelihood that high quality habitat remains unoccupied (Kluyver and Tinbergen, 1954; Brown, 1969; Gunnarsson et al., 2005). Thus, our results will be most applicable to wader populations at similar or lower densities to those at our study sites and we cannot exclude the possibility that rush management impacts would be greater in populations whose size is regulated by availability of fields with suitable vegetation structure. Given that increased nest and chick predation rates are a key driver of wader declines (Roodbergen et al., 2012; Roos et al., 2018), management may be required that simultaneously tackles rush encroachment and predation pressure to enable wader populations to recover and respond positively to AES rush management -especially for Curlew, which exhibited negligible evidence for increased densities in response to rush management. FIGURE 2 | Poisson model (main effects only models) predicted breeding wader pair densities in control fields and treatment fields within Geltsdale [left hand column; n = 18 fields (9 control and 9 treatment)] and the SWP [right hand column; n = 25 fields (13 control and 12 treatment)] when taking into account rush cover, woodland distance, and soil conditions (moisture, pH, and penetration resistance) for Curlew (A,B), Lapwing (C,D), and Snipe (E,F). Bars represent model predicted densities, and errors represent model predicted 95% confidence intervals. The best fitting model (judged by AICc values) for Snipe densities included an interaction between region and treatment, with model predicted densities (G) represented by triangles (SWP) and circles (Geltsdale); error bars again represent 95% confidence intervals but note that for Snipe densities in SWP control fields these are infinite due to singularity issues with the model as no Snipe were observed in such fields.
Alternatively, rush management may not be creating sufficiently optimal conditions for some of our focal wader species to generate consistent and detectable increases in breeding densities. Some AES prescriptions aim to reduce rush cover within a field to <30% (e.g., Natural England, 2018), yet all treatment fields had >30% rush cover due to the study design. Nevertheless, we found no evidence that rush cover (which ranged between 30 and 70% in all survey fields barring three control fields with 10-30% rush cover) influenced wader densities. As our study spanned a single breeding season, results are most applicable to the influence of rush management on breeding waders through its short-term impact on vegetation structure (i.e., vegetation height and density). Given that Lapwing favour short grass swards with sparse tussocks comprising rush and grass (Baines, 1988; Milsom et al., 2000), rush management may need to ensure a large proportion of short vegetation is retained throughout the breeding season to generate substantial increases in breeding Lapwing densities, which in the long-term could be achieved by reducing rush cover to lower than 30%. Curlew require a heterogeneous sward structure for breeding (Pearce-Higgins and Grant, 2006; Durant et al., 2008), with taller areas to provide chicks with concealment from predators (Valkama et al., 1998), more open areas for foraging (Robson and Allcorn, 2006; Fisher and Walker, 2015), and a range of vegetation heights for nesting (Valkama et al., 1998; Fisher and Walker, 2015; Zielonka et al., 2019). It is plausible that current AES rush management prescriptions are not delivering sufficient within-field heterogeneity in sward structure to provide the complex habitat matrix required by Curlew. Such a situation could arise either because the current prescriptions to cut one-third of the rush within a field on an annual basis are insufficient, or because such prescriptions are too difficult for farmers to follow as they feel that they should cut a larger proportion of the field when they are able to access fields for rush cutting (this is often difficult in winter due to waterlogged conditions). It is also important to note that rush cutting through AES prescriptions has been found to increase the risk of artificial wader nest predation (Kelly et al., 2021) and thus birds may be avoiding nesting in such fields due to perceived, or realised, increases in nest predation risk. Implications for Management of Wader Breeding Habitat Accounting for variation in field size via its inclusion as an offset in the models, our results suggest that rush management through AES prescriptions can increase Snipe breeding densities in some but not all regions, and such benefits could also arise for Lapwing (although low population sizes, especially in the SWP, limited our ability to detect such effects). In contrast, we found evidence that Curlew are, of our focal species, the least likely to respond to the implementation of AES rush management prescriptions. Whilst ideally surveys would be repeated in subsequent years due to the potential for environmental conditions to vary between years, this was not possible due to logistical constraints. Our study of two distinct upland regions, however, enables testing of rush management across different environmental conditions and population densities of our focal species (see Figure 2).
Moreover, our results provide a snapshot of wader densities in fields with and without rush management, with results revealing the potential for rush management to increase densities of Snipe and Lapwing in the short-term. Our study thus advocates further research exploring both the short- and long-term impacts of AES rush management prescriptions on upland breeding waders. We also found evidence that more alkaline soils were associated with higher Curlew and Snipe breeding densities, which for Curlew is consistent with previous research showing lower densities where soil organic carbon (assumed to indicate more acidic, peaty soils) is higher (Franks et al., 2017). These patterns are presumably due to higher pH increasing the abundance of soil invertebrates such as earthworms (McCallum et al., 2016). Wetter soils also increased Snipe densities. Combining rush management with additional interventions to improve habitat quality may thus be beneficial, such as installation of wetter depressions or flushes and blocking of drainage ditches (Smart et al., 2006; Douglas et al., 2021), or liming of more acidic soils (but with targeted use; McCallum et al., 2016). Rush management prescriptions may, however, benefit from revision to increase their efficacy. Potentially beneficial changes that merit further investigation include researching the optimal total area and spatial configuration of cut and uncut rush within fields, thus ensuring heterogeneity in sward structure, perhaps particularly for Curlew (as shown by beneficial mosaic grassland management for Black-tailed Godwits Limosa limosa in The Netherlands; Schekkerman et al., 2008), and, in the case of Lapwing, contrastingly ensuring rush cover is below 30% (yet retaining some taller vegetation patches; Laidlaw et al., 2017), which will limit heterogeneity in the sward. FUNDING The National Lottery Heritage Fund provided funding for this research.
Bilateral Reduction Mammaplasty as an Oncoplastic Technique for the Management of Early-Stage Breast Cancer in Women with Macromastia. OBJECTIVE Lumpectomy may result in contour deformities or breast asymmetry in women with breast cancer and macromastia. This study investigates the use of bilateral reduction mammaplasty, with the tumor and margins included within the reduction specimen. METHODS Twenty-four patients who underwent lumpectomy with immediate bilateral reduction mammaplasty for unilateral breast cancer were included. Patient medical records were reviewed for demographic, oncological, and surgical characteristics. RESULTS Mean patient age was 57 years, and mean body mass index was 32.2 kg/m². Mean tumor size was 1.7 cm. All tumor margins were free of neoplastic involvement. No difference was noted between the ipsilateral and contralateral resection weights (P = .81). Adjuvant radiation therapy was delivered to 21 patients (88%). There were no significant differences in postoperative total (P = .36), major (P = .44), or minor (P = .71) complications between the tumor and nontumor sides. Only 1 patient required additional revision surgery following the initial lumpectomy with bilateral reduction mammaplasty. CONCLUSION Lumpectomy with bilateral reduction mammaplasty did not compromise surgical margins. Lumpectomy with bilateral reduction mammaplasty may allow for adequate surgical treatment of breast cancer while avoiding significant breast asymmetry in women with macromastia. For women with a diagnosis of early-stage breast cancer, breast conservation therapy (BCT)-defined as lumpectomy, followed by adjuvant radiation therapy-is the preferred treatment method. 1 As the treatment of breast cancer has evolved, the guidelines for BCT have become more inclusive of larger tumors, more advanced disease, and anatomical tumor locations that were previously contraindications for BCT. 2 Compared with other women, treatment with lumpectomy in women with macromastia can result in greater degrees of breast asymmetry, contour irregularities, and poor aesthetic outcomes that often require additional reconstructive procedures to correct. Furthermore, it is difficult to achieve radiation dose homogeneity in women with large, pendulous breasts, yielding higher rates of radiation fibrosis, chronic pain, and radiation skin changes that further exacerbate any preexisting breast asymmetry and deformity. [3][4][5][6] Given the factors confounding lumpectomy outcomes in women with macromastia, the use of lumpectomy and immediate bilateral reduction mammaplasty has been proposed as an oncoplastic alternative to traditional BCT. 3,[6][7][8][9][10][11][12] In this coordinated approach, the tumor resection occurs within the expected glandular resection of the reduction mammaplasty. Following tumor resection, the reduction mammaplasty is carried out on the ipsilateral breast and the unaffected, contralateral breast. The utilization of lumpectomy and immediate bilateral reduction mammaplasty has been slow to gain acceptance due to concerns regarding oncological safety and the potential for postoperative complications. Spear et al 7 and Losken et al 11,13 have previously shown that this technique is safe and feasible at a large academic center and that it does not negatively affect radiation delivery.
However, there remains a relative paucity of data describing the oncoplastic reduction technique when compared with the amount of surgical literature surrounding traditional techniques such as mastectomy or lumpectomy. The need for more studies describing oncoplastic surgery following lumpectomy is underscored by the fact that more women undergo lumpectomy for breast cancer treatment than undergo mastectomy. Finally, most of the studies describing oncoplastic surgery have been based on the experiences of large academic institutions, thereby limiting applicability to smaller or "community" institutions. 4,7,8,11 In this study, we present our institution's experience with lumpectomy and immediate bilateral reduction mammaplasty for the treatment of breast cancer in women with macromastia.

Data analysis
An institutional review board-approved (HUM: R-12-1455) retrospective medical record review was conducted on all patients undergoing lumpectomy and immediate bilateral reduction mammaplasty. Patients were identified from the senior surgeons' (P.H.I., R.J.B., D.G.S.) breast reconstruction practices between 1995 and 2012. All patients were treated at St Joseph Mercy Hospital in Ann Arbor, Mich. St Joseph Mercy Hospital is a 537-bed community hospital that serves the greater southeastern Michigan region. Standard criteria were utilized by the referring surgical oncologists in selecting patients for BCT. All patients were initially referred for separate discussions regarding breast oncology management and breast reconstruction following a biopsy-proven primary diagnosis of breast cancer. Only patients with breast cancer localized to a single breast quadrant or central region of the breast as determined on preoperative mammographic imaging were included. Patients with multicentric disease, bilateral breast cancer, or recurrent breast cancer, or patients with a prior lumpectomy presenting for delayed reduction mammaplasty or mastopexy, were excluded. Medical record review of patients meeting inclusion and exclusion criteria was completed evaluating patient demographics, medical comorbidities, tumor size, tumor histology, nodal status, tumor location, skin resection pattern, pedicle type, glandular resection weights, intraoperative alterations to the predetermined reconstructive plan, and perioperative complication rates. Intraoperative breast resection weights from the ipsilateral and contralateral breasts were compared using the Student pairwise t test. Perioperative complication rates (total, major/operative, and minor/nonoperative) were analyzed using the χ² test.

Surgical method
All patients underwent coordinated preoperative planning by the surgical oncologist and senior plastic surgery staff. In all cases, a keyhole-pattern skin resection was marked preoperatively by the senior reconstructive surgeon (P.H.I., R.J.B., or D.G.S.). The tumor resection was first performed by the surgical oncologist through a keyhole-pattern skin marking. Patients undergoing sentinel node biopsy at the time of lumpectomy and reduction mammaplasty received preoperative injection of a radionuclide tracer and intraoperative injection of isosulfan blue dye. Sentinel lymph node biopsy was carried out following tumor resection. Frozen sections were utilized following tumor resection to verify margin-free status of the primary specimen. After the completion of the cancer resection, the reconstructive surgery team completed the reduction mammaplasty of the ipsilateral breast.
Pedicle type for the ipsilateral breast was selected on the basis of tumor size and location such that the glandular resection would coincide with the tumor. The glandular resection was dictated by the planned area of tumor resection, not vice versa. Pedicle type on the contralateral breast was selected on the basis of plastic surgeon preference and degree of preoperative breast ptosis. Pedicle types included inferior (n = 12), superomedial (n = 7), central (n = 1), bipedicled (n = 1), or medial (n = 3) for the ipsilateral breast and inferior (n = 15), superomedial (n = 7), central (n = 1), or medial (n = 1) for the contralateral breast. Modifications to the glandular resection and pedicle orientation were performed as needed on the basis of the size and location of the tumor resection to prevent compromise of the skin flaps or nipple. All additional glandular tissue removed from the ipsilateral breast was oriented and sent for pathological evaluation. As a final step, the contralateral reduction mammaplasty was completed in the standard fashion and both breasts were evaluated for symmetry on the operating table.

Patient characteristics
Twenty-four patients underwent lumpectomy and immediate bilateral reduction mammaplasty for the treatment of primary breast cancer. The mean age at the time of surgery was 57 (SD = 9.6) years. Preoperative breast cup size ranged from C to G as determined on preoperative assessment. The mean body mass index of the group was 32.2 (SD = 6.4) kg/m². No patients were carriers of the BRCA gene mutation (Table 1).

Ipsilateral breast findings
On the basis of preoperative mammographic imaging, there were 13 tumors of the right breast and 11 of the left (Fig 1). Tumor size ranged from 0.1 to 4.1 cm; all specimens were confirmed to have uninvolved margins on intraoperative frozen sections. Complete tumor resection was confirmed on permanent pathology in all cases; no patients required reoperation for oncological management. Tumor histology included ductal carcinoma in situ (10/24), invasive ductal carcinoma (12/24), and invasive lobular carcinoma (2/24) (Table 1).

Postoperative findings
Postoperative complications including hematoma, partial nipple necrosis, seroma, fat necrosis, and delayed wound healing were recorded (Fig 3). Complications were dichotomized into major complications, which required operative intervention, and minor complications, which were managed without surgical intervention. There was no statistically significant difference in total, major, or minor complications between the ipsilateral and contralateral breasts (Fig 4). Adjuvant radiation therapy was utilized in 21 of 24 patients. Of the 3 patients who did not receive adjuvant radiation therapy, 2 patients had a low-risk oncotype and received hormonal therapy alone and 1 patient deferred adjuvant treatment despite the recommendations from the treating surgical oncologist. Of the 21 patients who received radiation therapy, 8 patients developed observable breast changes related to radiation therapy as assessed by the treating senior plastic surgeon. None of these patients required operative intervention to address any secondary breast deformity related to radiation therapy. One patient required reoperation due to postoperative asymmetry. No patients experienced a delay in receiving adjuvant radiation or chemotherapy after undergoing lumpectomy and immediate bilateral reduction mammaplasty. One patient required an additional revision procedure to improve symmetry between both breasts.
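The statistical comparisons described in the data-analysis section can be sketched as follows; this is a minimal illustration using SciPy, and the resection weights and complication counts below are placeholders, not the study's data.

import numpy as np
from scipy import stats

# Paired (pairwise) t test: ipsilateral vs. contralateral resection weights,
# one pair per patient. Values are illustrative placeholders only.
ipsilateral = np.array([610.0, 540.0, 720.0, 480.0, 655.0, 590.0])
contralateral = np.array([595.0, 555.0, 705.0, 470.0, 650.0, 600.0])
t_stat, p_weights = stats.ttest_rel(ipsilateral, contralateral)

# Chi-square test on complication counts by side. Rows: tumor side,
# non-tumor side; columns: complication, no complication (illustrative).
counts = np.array([[7, 17],
                   [5, 19]])
chi2, p_complications, dof, expected = stats.chi2_contingency(counts)

print(f"paired t test: p = {p_weights:.2f}")
print(f"chi-square test: p = {p_complications:.2f}")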
DISCUSSION
In this study, we demonstrate that lumpectomy with immediate bilateral breast reduction can be safely performed in patients with macromastia to address a primary breast cancer. No patients had positive margins on final pathology or required re-resection. In addition, the combination of lumpectomy with bilateral breast reduction did not alter the timing or course of radiation therapy or chemotherapy as part of the oncological management. Patients did not experience increased postoperative complications of the ipsilateral breast when compared with the contralateral breast. We noted similar resection volumes of the ipsilateral and contralateral breasts intraoperatively upon pairwise analysis, suggesting that tumor resection did not significantly alter the reduction mammaplasty strategy. The use of BCT in women with macromastia may lead to suboptimal results secondary to breast asymmetry, poor aesthetic outcomes, and difficulty with adjuvant radiation dose delivery to large breast size. 3,6,7,9 In the absence of a collaborative approach to surgical planning, patients with macromastia may be offered mastectomy instead. Patients may then undergo reconstruction with additional revision procedures of the ipsilateral and/or contralateral breasts, including reduction mammaplasty. The decision to offer patients oncoplastic surgery requires a collaborative approach between breast surgeons and plastic surgeons during the initial discussions regarding surgical management to identify appropriate candidates and inform patients about coordinated techniques for the management of breast cancer. 14 The need for close coordination between surgical teams also extends into the perioperative and intraoperative periods for the planning of surgical incisions, expected tumor resection, and intraoperative modification technique to ensure optimal outcomes for patients. Although mastectomy with immediate breast reconstruction has certainly increased intraoperative collaboration between surgical oncologists and plastic surgeons, these 2 remain separate entities, both clinically and operatively. However, oncoplastic surgery should be viewed not as 2 separate phases performed by 2 separate surgeons but rather a continuous process designed and executed by a single team. 11 Spear et al 7 and Losken et al have previously demonstrated that lumpectomy with bilateral breast reduction can be safely performed at a large academic institution. 10 Recently, Egro et al 15 have demonstrated that patients who undergo reduction mammaplasty at the time of lumpectomy experience fewer complications and obtain improved symmetry when compared with patients who undergo a delayed procedure. It should be noted, however, that 10% to 15% of patients with breast cancer, and an even smaller proportion with early-stage breast cancer, are treated at academic centers. 16 Therefore, there is a need to demonstrate that oncoplastic surgery can effectively be performed at community hospitals providing care to a significant proportion of patients with breast cancer. Here, we describe the experience of 3 reconstructive surgeons at a 537-bed community hospital. Although this institution does have a loose affiliation with a larger academic medical center, it remains a separate institution with separate staff surgeons. The collaborative approach between the breast surgeons and reconstructive surgeons serves as a model that can be replicated at community hospitals, with results consistent with those reported by others.
Kronowitz et al 8 have similarly demonstrated lower complication rates associated with immediate reconstruction of partial mastectomy defects. However, they do not advocate simultaneous reduction of the contralateral breast due to concern for changes in ipsilateral breast appearance after radiation therapy. 17 In our experience, patients were generally satisfied after a single procedure to address both breasts. Only 1 patient sought further revision to obtain improved symmetry at a separate procedure. Typically, patients with macromastia who undergo mastectomy may require tissue expander reconstruction, followed by implant placement, and subsequent revision surgery to improve the symmetry between the larger contralateral breast and the reconstructed breast; this would yield at least 3 surgical procedures in addition to the initial mastectomy. By performing reduction mammaplasty at the time of lumpectomy, we are able to avoid the multiple procedures required for traditional reconstruction and subsequent revision procedures. Consistent with our results, Imahiyerobo et al 4 have shown that oncoplastic reduction does not result in a higher rate of intraoperative complications than traditional breast reduction for symptomatic macromastia. By planning the tumor resection to coincide with the glandular component of the reduction mammaplasty specimen, additional breast tissue can be obtained, allowing for even wider margins than with a traditional lumpectomy. In our series, all patients (24/24) achieved tumor-free margins based on frozen sections and final permanent histological evaluation. Use of frozen section has allowed us to proceed with immediate reconstruction, confident that there will be a low risk for positive margins. During preoperative consultation, however, all patients were informed that they could require reexcision lumpectomy or a completion mastectomy if final pathological margins were positive. Chang et al 3 have previously reported on the use of lumpectomy with bilateral reduction mammaplasty for patients with unilateral breast cancer, noting that 1 patient did require completion mastectomy due to positive margins on final pathology. However, in their experience, the excised tumor from the ipsilateral breast underwent only gross examination intraoperatively, did not undergo intraoperative frozen section evaluations, and was excised en bloc with the breast reduction specimen. In our reported experience, the tumor was resected separately from the breast reduction specimen, and all resected tumor specimens underwent intraoperative frozen section evaluations. From an aesthetic standpoint, patients benefit by achieving symmetrically smaller breasts during a single oncoplastic procedure. Although symmetry is determined by what remains, not by what is resected, the fact that the resection weights of the ipsilateral and contralateral breasts were similar in each patient suggests that the surgeon was comfortable obtaining symmetry in the patient. Furthermore, patients at follow-up were satisfied with their operative outcome based on discussion between the patient and the treating plastic surgeon, and only 1 patient required reoperation to improve symmetry. The reduction of breast size in patients with macromastia may also improve upon the known functional sequelae of macromastia, including neck pain, shoulder pain and grooving, and poorly fitting clothing.
In addition, adjuvant radiation therapy is facilitated by smaller breast size, with the potential for improved radiation dose homogeneity in the ipsilateral breast. 18 Two patients (8%) were found to have a synchronous lobular carcinoma in situ of the contralateral breast, which was contained within the reduction mammaplasty resection. Neither patient required further intervention for the previously undiagnosed lobular carcinoma in situ. Chang et al 3 found a similar rate (5.5%) of synchronous tumors in women undergoing concurrent lumpectomy with reduction mammaplasty. Although this approach may provide a further opportunity to examine additional breast tissue, we do not advocate contralateral breast reduction as a method for contralateral breast screening. Fat necrosis was the most common complication, followed by delayed wound healing and cellulitis. Nipple necrosis and hematoma were less frequent, occurring once each. We report a relatively high total complication rate of 54.2%, which is likely due to our inclusion of minor complications such as all forms of delayed wound healing, fat necrosis, and cellulitis, which did not require operative intervention. Upon pairwise comparison of the ipsilateral and contralateral breasts for complications, we found no statistically significant difference in the total (P = .36), major (P = .43), or minor complications (P = .71) between the ipsilateral and contralateral breasts. This suggests that tumor resection at the time of reduction does not increase the complication risk. However, we note that the power of our study is limited, as we include only 24 patients. Nonetheless, the ability to combine cancer resection with immediate reconstruction may additionally mitigate the morbidity associated with having multiple operations. In our series, 21 of 24 patients (88%) underwent adjuvant radiation therapy after combined lumpectomy and reduction mammaplasty. Eight patients were noted to have significant changes in their breast skin and contour following radiation therapy, but none required operative intervention to address these changes. Postradiation sequelae have been demonstrated in previous studies, 3,7 occurring with a frequency of up to 53%. 10 However, no patients in previous series required further operative intervention related to radiation changes, which is consistent with our experience. No patients experienced delays in time to radiation or chemotherapy in our study. Our study has the usual limitations associated with a retrospective review at a single institution. Although we demonstrate no significant differences between the contralateral and ipsilateral breasts in terms of complication rates, we note that this may be due to a small sample size. However, we demonstrate that oncoplastic surgery in the form of lumpectomy with immediate bilateral reduction is an oncologically safe technique in women with macromastia. No women had positive margins or required reexcision in our experience. Only 1 patient required additional revision surgery, in contrast to the number of operations required for patients undergoing traditional reconstruction following mastectomy. This technique is feasible not only at large academic institutions, as has been previously demonstrated, but also at hospitals in the community setting. As patient care and even legislation demand a collaborative approach between plastic surgeons and surgical oncologists, this technique deserves further evaluation to encourage broader implementation.
Efficient continuous wave noise spectroscopy beyond weak coupling

The optimization of quantum control for physical qubits relies on accurate noise characterization. Probing the spectral density $S(\omega)$ of semi-classical phase noise using a spin interacting with a continuous-wave (CW) resonant excitation field has recently gained attention. CW noise spectroscopy protocols have been based on the generalized Bloch equations (GBE) or the filter function formalism, assuming weak coupling to a Markovian bath. However, this standard protocol can substantially underestimate $S(\omega)$ at low frequencies when the CW pulse amplitude becomes comparable to $S(\omega)$. Here, we derive the coherence decay function more generally by extending it to higher orders in the noise strength and discarding the Markov approximation. Numerical simulations show that this provides a more accurate description of the spin dynamics compared to a simple exponential decay, especially on short timescales. Exploiting these results, we devise a protocol that uses an experiment at a single CW pulse amplitude to extend the spectral range over which $S(\omega)$ can be reliably determined to $\omega=0$.

I. INTRODUCTION
The problem of a qubit interacting with a noisy environment (bath) is of fundamental importance in the field of quantum information processing. Choosing the optimal strategy to fight decoherence depends on the noise characteristics of a particular qubit implementation. For many solid-state qubits, single-axis phase noise is dominant, and treating the environment in a stochastic semi-classical approximation suffices to describe the dephasing process. For example, a system-environment Hamiltonian of the form $H_{SE} = \sigma_z^{(s)} \sum_i \lambda_i \sigma_z^{(i)}/4$, where $\sigma^{(s)}$ ($\sigma^{(i)}$) is the Pauli operator for the system (environment) and $\lambda_i$ is the coupling strength, is approximated as the semi-classical $H_{sc}(t) = f(t)\,\sigma_z/2$ by tracing over the environmental degrees of freedom [1]. In the limit of many environment qubits forming a spin bath, with intrabath couplings strong compared to $\lambda_i$, $f(t)$ can be treated as a stationary, Gaussian-distributed function with zero mean, i.e. $\langle f(t) \rangle = 0$. These properties will be assumed throughout the remainder of the paper. In the context outlined above, knowledge of the spectral density function $S(\omega)$, the Fourier transform of the two-point correlation function for $f(t)$, can be used to optimize quantum control, such as dynamical decoupling (DD) and dynamically corrected gates [2]. The spectral information can also be used in decoherence suppression techniques such as hole-burning [3]. One way to estimate $S(\omega)$ is to monitor the response of the qubit as it undergoes DD pulse sequences with certain spectral properties [4-12]. This can be understood intuitively using the overlap integral approach [1,12-21]. For example, under a series of equally spaced, instantaneous $\pi$ pulses, the bath-traced Hamiltonian becomes $H_{sc}(t) = y(t)\,f(t)\,\sigma_z/2$, where $y(t)$ alternates between $+1$ and $-1$ at a period corresponding to the pulse spacing $\tau$, and the frequency of the decoupling cycle is $\Omega = \pi/\tau$.
In this case, an exponential decay of qubit coherence is predicted, $\langle \sigma_x(T) \rangle = \langle \sigma_x(0) \rangle\, e^{-\chi(T)}$, where the time-dependent decay rate is determined by the overlap integral of the noise spectral density and the frequency-domain filter function (the Fourier transform of the time-domain filter $y(t)$), $|F(\omega, T)|^2$:

$\chi(T) = \frac{1}{2\pi} \int_{-\infty}^{\infty} d\omega\, S(\omega)\, |F(\omega, T)|^2 \qquad (1)$

In other words, the external control sequence acts as a bandpass filter and can be tailored such that the qubit is most sensitive to certain spectral bands of the noise power. The same formalism can be applied in quantum sensing, such as spin-based magnetometry for oscillating fields using nitrogen-vacancy centers in diamond [22-27], further motivating the development of accurate DD-based spectroscopy over an extended bandwidth. If the function $|F(\omega, T)|^2$ is spectrally broad or peaked at many frequencies, extracting $S(\omega)$ becomes challenging, particularly if the functional form of $S(\omega)$ is not known a priori. $|F(\omega, T)|^2$ can be made spectrally narrow if there are sufficiently many decoupling cycles, i.e., $T \gg 2\tau$, where $2\tau$ is the decoupling period. The filter function then approaches a delta function at the decoupling cycle frequency and its harmonics $k\Omega$, where $k$ is a nonzero integer. When $T$ is much longer than the time scale of the bath correlation decay, $S(\omega)$ can be regarded as constant within the probed spectral width, allowing equation 1 to be written in a discrete form. A protocol for estimating $S(\omega)$ using the discrete form of equation 1 and by taking the harmonics into account was designed and implemented experimentally in Ref. [5]. The pulsed method becomes disadvantageous at high probe frequencies such that finite pulse width effects cannot be ignored and limit the minimum pulse spacing. Moreover, the lowest frequency (given by the maximum pulse delay) dictates the frequency resolution with which the spectral density function is probed [5]. This makes the protocol inefficient in certain situations. For example, probing an unknown noise spectrum over a wide frequency window requires a very large number of experiments. An alternative approach is to monitor coherence decay under a continuous wave (CW) "spin-locking" pulse, also known in the NMR literature as a $T_{1\rho}$ measurement. In this case, the qubit dynamics can only be studied perturbatively due to the non-commutativity of the effective Hamiltonian. $T_{1\rho}$ experiments have been used in NMR to probe slow atomic motions that give rise to fluctuations in the dipolar field [28-31]. The NMR literature, however, has not directly addressed the problem of extracting an unknown and arbitrary $S(\omega)$ from a series of $T_{1\rho}$ measurements. This was first addressed in the context of the generalized Bloch equations (GBE) formalism [8,32]. The generalized Bloch equations were derived to describe the relaxation dynamics of a system simultaneously interacting with a heat bath and an arbitrarily strong excitation field. The derivation is based on the following assumptions: (1) the system and the bath are weakly coupled, and are initially in a product state; (2) the time scale of the relaxation of the system is much slower than that associated with the decay of the bath correlation functions and the period of the driving field; (3) the bath-induced coherent system dynamics are negligible compared to that induced by the system Hamiltonian; (4) the rotating wave approximation (RWA). The weak coupling assumption means that we keep terms only up to second order in the system-bath coupling strength $f(t)$.
The second assumption leads to the aforementioned delta-function approximation. For the noise model described earlier, the GBE predicts an exponential decay of coherence $\langle \sigma_x \rangle$ in the $T_{1\rho}$ experiment. The decay rate is directly proportional to $S(\Omega)$, where $\Omega$ is the pulse amplitude (Rabi frequency). The steady-state coherence is negligible as long as the high temperature limit $k_B T \gg \Omega$ is satisfied. Note that the decay rate here is time independent and thus cannot capture non-Markovian dynamics. The CW approach can often perform well to higher frequencies than the pulsed method, since finite pulse width effects tend to appear before the maximum excitation power is reached or before the RWA is violated. Moreover, the CW protocol can be more efficient than pulsed methods, since a single coherence decay measurement yields the spectral density of noise at the target frequency. $T_{1\rho}$ noise spectroscopy was demonstrated experimentally in Ref. [6] for optically-trapped ultracold atoms coupled to a collisional bath, and in Ref. [10] in the context of superconducting qubit decoherence. In the latter case, the analysis was based on the GBE but included more general noise (relaxation) than considered here. Neither the CW nor the pulsed method can probe to arbitrarily low frequencies using the standard analyses above. In these analyses, the number of drive field periods (decoupling cycles) should be large enough to justify the delta function approximation, i.e., $\Omega T/(2\pi) \gg 1$. Since the signal decay timescale is $T \sim 1/S(\Omega)$, the minimum probe frequency is limited by the condition $\Omega \gg 2\pi S(\Omega)$ (where $\Omega = \pi/\tau$ in the pulsed method). The main goal of this paper is to study spin dynamics under CW excitation beyond approximations (1) and (2) above, so that the signal decay at low frequencies $\Omega \sim 2\pi S(\Omega)$ can be better modeled. This information is then used to increase the spectral range over which noise spectroscopy produces valid results. We describe the state evolution in the Liouville representation [33] and apply the cumulant expansion method [34,35] to calculate the ensemble average, finding the functional form of the coherence decay up to fourth order in $f(t)$ (or second order in $S(\omega)$). The resulting equations are derived without any assumptions about the CW pulse length or the bath correlation time, in order to capture non-Markovian behaviour. These results are used to design a CW noise spectroscopy protocol that extends the range for which $S(\omega)$ can be accurately determined down to $\omega = 0$. The remainder of the paper is organized as follows. Section II derives the coherence decay function in the $T_{1\rho}$ experiment up to fourth order in $f(t)$ (i.e., second order in $S(\omega)$). Section III compares our results to the standard exponential decay function and shows that our model predicts the signal decay significantly better in the short time regime. Section IV A presents the noise spectroscopy protocol exploiting the coherence decay function derived in Sec. II, and the improved accuracy in the $S(\omega)$ determination is demonstrated in Sec. IV B via numerical simulations. Section V concludes.

II. COHERENCE DECAY FUNCTION
In this section we derive the coherence decay function of a system under CW driving as a function of the spectral density, $S(\omega)$, of Gaussian, zero-mean semi-classical phase noise as introduced above.
In the interaction frame of the CW pulse of amplitude $\Omega$ along $\sigma_x$ in the rotating frame, the semi-classical stochastic Hamiltonian transforms in time $t$ as

$\tilde{H}_{sc}(t) = \frac{f(t)}{2}\left[\cos(\Omega t)\,\sigma_z + \sin(\Omega t)\,\sigma_y\right] \qquad (2)$

The derivation of the coherence decay function under this Hamiltonian must involve a perturbation series, since $[\tilde{H}_{sc}(t_1), \tilde{H}_{sc}(t_2)] \neq 0$. Using stochastic Liouville theory and superoperator formalism [33], the ensemble-averaged qubit evolution can be described as $\overline{\hat{\rho}(T)} = \overline{\Lambda(T)}\,\hat{\rho}(0)$, where $\rho(T)$ is the density matrix describing the qubit at time $T$, the overbar denotes ensemble averaging over noise realizations, and $\hat{\cdot}$ denotes vectorization that stacks the rows of a $d \times d$ matrix into a $d^2 \times 1$ vector; in the expression for $\Lambda(T)$ (equation 3), $1\!\mathrm{l}$ is the unit matrix and $\mathcal{T}$ is the Dyson time-ordering operator [34]. The ensemble average of the noisy operator $\Lambda(T)$ can be evaluated with the cumulant expansion

$\overline{\Lambda(T)} = e^{K(T)}, \qquad K(T) = \sum_{n} k_n \qquad (4)$

where $K(T)$ is called the cumulant function and $k_n$ is called the $n$th cumulant [34,35]. By Taylor-expanding and comparing equations 3 and 4, the cumulants can be found. For the Hamiltonian in equation 2, the powers of $L(t)$ are linear combinations of the operators from the following set:

$N = \{\, 1\!\mathrm{l} \otimes 1\!\mathrm{l},\ \sigma_x \otimes \sigma_x,\ \sigma_y \otimes \sigma_y,\ \sigma_z \otimes \sigma_z \,\} \qquad (5)$

Moreover, $k_n$ is proportional to the $n$th power of $L(t)$. As a result, $k_n$ is a linear combination of the operators in $N$. Thus, $\overline{\Lambda(T)}$ can be expressed as

$\overline{\Lambda(T)} = \sum_{m} a_m N_m \qquad (6)$

where $N_m$ is the $m$th element in $N$. In the spin-locking experiment, the normalized signal is given by the appropriate matrix element of $\overline{\Lambda(T)}$ (equation 7), where $\langle \cdot \rangle_{i,j}$ is the element of an operator at row $i$ and column $j$. Combining equations 6 and 7 yields equation 8, which it is convenient to rewrite as equation 9, where $\sum_n a_{m,n} = a_m$, and the index $n$ indicates that the contribution is linked to the $n$th cumulant. The above equation has several significant implications. First, the average signal in the $T_{1\rho}$ measurement is an exponential function whose argument (decay rate) is given as a perturbation series. Second, the $n$th order decay rate is proportional to $k_n$, and therefore proportional to the ensemble average of products of Liouvillians, $\langle \prod_{j=1}^{n} L(t_j) \rangle$, and hence proportional to the average over products of the classical Gaussian-distributed random variable, $\langle \prod_{j=1}^{n} f(t_j) \rangle$. Third, the $n$th order decay rate can simply be calculated from the coefficients of the $1\!\mathrm{l} \otimes 1\!\mathrm{l}$, $\sigma_x \otimes \sigma_x$, $\sigma_y \otimes \sigma_y$, and $\sigma_z \otimes \sigma_z$ terms corresponding to the $n$th order cumulant, independently from other cumulants. Moreover, since $f(t)$ is Gaussian-distributed with zero mean, for an integer $n$, $\langle f(t_1) \cdots f(t_{2n-1}) \rangle = 0$ according to Isserlis' Gaussian moment theorem [36]. Consequently, only even-order cumulants are non-zero; the first few terms are given in equations 12 and 13. In the following, we present the analytical calculation of the first two non-zero decay terms. We also show that in the limit of $T \to \infty$, the 2nd order decay rate is identical to the GBE result. To go beyond the GBE-based analysis, we omit the large-$T$ approximation, and describe the qubit dynamics including the 4th order decay rate.

A. Second-order decay rate
By expanding equation 12 (the 2nd order cumulant) using the operators in equation 5, we find the coefficients $a_{m,2}$. Then from equation 11 we acquire the decay rate attributed to the 2nd order cumulant, $\chi_2(T)$, as an overlap of $S(\omega)$ with a filter function $F_2(\omega, \Omega, T)$ (equations 15 and 16). The imaginary part of $F_2(\omega, \Omega, T)$ is an odd function with respect to $\omega$, and $S(\omega)$ is an even function; thus equation 16 can be further simplified (equation 17). Figure 1 shows the real parts of $F_2(\omega, \Omega, T)$ for fixed values of (a) $\Omega$ and (b) the number of Rabi cycles $l = \Omega T/2\pi$. The even function $\mathrm{Re}(F_2(\omega, \Omega, T))$ behaves like $\delta(\omega \pm \Omega)$ as $l \to \infty$ (i.e., as $T \to \infty$).
Thus, for a fixed value of $\Omega$ and in the limit of large $T$, the 2nd order decay rate can be approximated as

$\chi_2(T) \approx S(\Omega)\, T/2 \qquad (18)$

Therefore, the normalized spin signal attributed to the 2nd order cumulant in the limit of large $T$ is a simple exponential decay:

$\overline{\sigma_x(T)}/\sigma_x(0) = e^{-S(\Omega)\, T/2} \qquad (19)$

This expression is equal to the result derived from the GBE in the high temperature limit $k_B T \gg \Omega$.

B. Fourth-order decay rate
Following the same steps as for the 2nd order term evaluation, the 4th order cumulant $k_4$ can be expressed in terms of the operators in equation 5, and the coefficients $a_{m,4}$ can be found. The decay rate attributed to the 4th order cumulant, $\chi_4(T)$, can be expressed in terms of the 4th order correlation function of the noise (equation 20). We can use Isserlis' Gaussian moment theorem again, and write the 4th order correlation function as products of the 2nd order correlation functions: $\langle f(t_1) f(t_2) f(t_3) f(t_4) \rangle = \langle f(t_1) f(t_2) \rangle \langle f(t_3) f(t_4) \rangle + \langle f(t_1) f(t_3) \rangle \langle f(t_2) f(t_4) \rangle + \langle f(t_1) f(t_4) \rangle \langle f(t_2) f(t_3) \rangle$. The products of the 2nd order correlation functions can be expressed in terms of the spectral density $S(\omega)$ as before. Then, the 4th order decay rate can be rewritten compactly as a double overlap of $S(\omega_1) S(\omega_2)$ with a two-dimensional filter function $F_4(\omega_1, \omega_2, \Omega, T)$. The time integration defining $F_4$ can be carried out analytically. Symmetries in the filter function $F_4$ result in the imaginary component of the integral going to zero, as was the case for the 2nd order decay rate. Finally, using the symmetry of $\mathrm{Re}(F_4)$, the expression for the 4th order decay rate can be expressed in terms of

$\tilde{F}_4(\omega_1, \omega_2, \Omega, T) = \mathrm{Re}\left(F_4(\omega_1, \omega_2, \Omega, T) + F_4(\omega_1, -\omega_2, \Omega, T)\right)$

III. ACCURACY OF COHERENCE DECAY
To evaluate the accuracy of the cumulant expansion decays, we simulate a set of experiments with known input noise spectra, $S_{\mathrm{input}}(\omega)$. The simulated signal decays are generated by time-discretized unitary evolution of an initial state density matrix, using N = 10,000 randomly generated noise realizations. A cosine-series representation, as described in reference [37], is used to generate the noise realizations $f(t)$ in equation 2. This simulation method accurately represents stationary, Gaussian noise matching the input noise spectrum, and converges to the correct spectrum at a rate of 1/N, where N is the number of noise samples used. For all noise spectra in this paper, $S_{\mathrm{input}}(\omega)$ plateaus below $\omega = 1$ rad/s, i.e. $S_{\mathrm{input}}(\omega') = S_{\mathrm{input}}(1\ \mathrm{rad/s})$ for $\omega' < 1$ rad/s. These simulated signal decays are used in the following sections to represent experimental data. Since the number of noise realizations N is finite, we recalculate $S_{\mathrm{in}}(\omega)$ based on the simulated signal decays, and this is the $S_{\mathrm{in}}(\omega)$ that appears in the plots below. Using the simple exponential expression from equation 19, $S(\omega)$ can be accurately determined when the assumptions listed in section I are valid, i.e. when the relaxation timescale is long compared to the drive field period ($\Omega/2\pi \gg S(\Omega)$). When this condition is not met, the signal decay can be non-exponential and the standard analysis can give inaccurate values for $S(\omega)$. Figure 2a shows the result of applying a least-squares exponential fit and using equation 19 to determine $S(\omega)$ for a 1/f input noise spectrum. In this example, $S_{\mathrm{input}}(\omega) = 30\ \mathrm{Hz}^2/\omega$, and $S_0(\omega)$ was obtained by fitting simulated CW noise spectroscopy experiments at $\omega = 1, 2, 4, \ldots$ The non-exponential signal decay displayed in the inset of figure 2a can be understood based on the shape of the filter function with respect to $S(\omega)$ when the δ-function approximation no longer holds. Figure 2d shows a non-exponential signal decay, with $\chi_2$ displayed in insets at certain time points in the decay, along with the noise spectrum $S(\omega)$. The finite width of the main filter function peak, as well as its satellite peaks, overlaps with large values of $S(\omega)$ at low frequencies, producing oscillations in the signal decay.
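The cosine-series noise generation underlying these simulations can be sketched as below. This is a minimal illustration of a spectral-representation sampler, not the authors' code from reference [37]; the frequency grid, cutoff, and PSD normalization convention are assumptions.

import numpy as np

def noise_realization(S, t, n_terms=2000, w_max=2000.0, rng=None):
    """One stationary, zero-mean Gaussian-like trace f(t) whose two-sided
    spectral density approximates S(w). Assumed convention:
    <f(t) f(t+tau)> = (1/2pi) * integral of S(w) exp(i w tau) dw."""
    rng = np.random.default_rng() if rng is None else rng
    dw = w_max / n_terms
    w = (np.arange(n_terms) + 0.5) * dw            # frequency grid, avoids w = 0
    amps = np.sqrt(2.0 * S(w) * dw / np.pi)        # amplitudes matching the variance
    phases = rng.uniform(0.0, 2.0 * np.pi, n_terms)
    return (amps[:, None] * np.cos(w[:, None] * t[None, :]
                                   + phases[:, None])).sum(axis=0)

# Example: 1/f spectrum with a plateau below 1 rad/s, as in the text
# (the unit conversion of "30 Hz^2 / omega" is left as an assumption).
S_in = lambda w: 30.0 / np.maximum(w, 1.0)
t = np.linspace(0.0, 2.0, 2001)
f = noise_realization(S_in, t)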
The finite width of the main filter function peak, as well as its satellite peaks, overlaps with large values of S(ω) at low frequencies, producing oscillations in the signal decay. Figure 3 shows the error, as a function of decay time and drive field amplitude, between the simulated "experimental" signal decays and theoretical decays calculated using one of three different methods. Each calculation uses the same S in (ω) that generated the simulated decays. Figure 3a shows the result of applying the simple introduce errors in the intermediate (20-35 rad/s) Ω region. Using the cumulant expansion up to fourth order (χ 2 + χ 4 ), the mismatch is reduced at all times over a large portion of the drive amplitudes (Ω 20 rad/s for the parameters chosen in this example). IV. NOISE SPECTROSCOPY BASED ON THE CUMULANT EXPANSION The χ 2 + χ 4 cumulant expansion method can be used to improve CW noise spectroscopy when the experimental signal decays become non-exponential. The accuracy of a given noise spectrum estimate S ′ (ω) can be tested by comparing the cumulant expansion signal decay, calculated using S ′ (ω), and the experimental decay. Furthermore, the non-exponential, oscillatory behaviour observed at short timescales is the result of a wide frequency filter that overlaps with S(ω) across a range of frequencies, sometimes extending to ω = 0. This short-time behaviour thus contains broad spectral information and can be used to extend the range over which S(ω) can be determined. In particular, one can choose a drive frequency for which the signal decay is well-matched by the χ 2 + χ 4 calculation (e.g. Ω > 20 rad/s in figure 3) and extract information about S(ω) for ω < 20 rad/s from detailed fitting of the short-time behaviour. To illustrate this, figure 4 shows the short-time behaviour of a χ 2 + χ 4 calculated signal decay for two different noise spectra. One spectrum is labelled 'correct', while the other represents an error in which S(ω) at low frequencies has been changed. Here, the error is introduced for ω < 10 rad/s, while the drive amplitude Ω = 32 rad/s. This shows that the decay is sensitive to variation in S(ω) at frequencies far below the probing frequency Ω. A. Noise spectroscopy protocol To take advantage of the accuracy of the χ 2 + χ 4 cumulant expansion for improving noise spectroscopy, we propose a gradient ascent protocol based on matching to the experimental signal decay using a single chosen pulse amplitude, Ω P . An initial estimate of the noise spectrum, S 0 (ω), is obtained from the standard approach of fitting to exponential decays for a group of probe frequencies. Then, by accurately fitting a detailed signal decay at Ω P using the cumulant expansion method, particularly in the short-time regime, the full noise spectrum can be determined. The signal decay given by the cumulant expansion method is We discretize the above expression, and define a fitness function as the root-mean square error between the experimentally measured decay, σ x (t j ) , and the calculated decay s ′ (t j ) for a given S ′ (ω): We can then calculate the gradient of the fitness function, ∂Φ ∂S ′ (ωi) , for any target frequency ω i . The gradient is used to update the estimate of S ′ (ω) towards a closer matching of the experimental and calculated decays. The full protocol is: 1. To obtain an initial estimate, S ′ 0 (ω), use exponential fits of decays over a range of Ω, and matching to σ x (t) = exp(−S ′ 0 (ω)t/2). 2. Select a drive amplitude, Ω P , for detailed matching of the decay curve. 
$\Omega_P$ should be low enough to display non-exponential features at short times, but not so low that the $\chi_2 + \chi_4$ calculation is inaccurate. Otherwise, the following steps will not converge to a high fitness function $\Phi$.
3. Iteratively update $S'(\omega_i)$ along the gradient $\partial \Phi / \partial S'(\omega_i)$ until the fitness function converges.

To improve the speed of calculation/convergence, we can use the knowledge that the simple exponential decay fitting is accurate when the signal decays are smooth exponentials, and only update $S'(\omega_i)$ for $\omega_i$ where that condition is not satisfied.

B. Demonstration of protocol
The cumulant expansion noise spectroscopy protocol described above was applied to the simulated experiments presented in section III, corresponding to three different noise spectra. Figure 5 shows the result obtained using the simulation with the input noise $S_{\mathrm{input}}(\omega) = 30\ \mathrm{Hz}^2/\omega$ (with a plateau as $\omega \to 0$). The initial estimate, $S_0(\omega)$, uses the standard exponential fitting method at 11 pulse amplitudes in the range of 20-125 rad/s. The final estimate was obtained by detailed fitting to a single decay curve as described above. For comparison, this final step was done with three different choices for the parameters ($\Omega_P$, $T$), where $T$ is the total pulse duration. The final noise spectrum estimate $S_{\mathrm{final}}(\omega)$ is a much better fit in the low frequency regime to the correct (input) spectrum. Some artifacts are introduced in the form of oscillations in the intermediate frequency range ($\omega$ = 8-30 rad/s). These artifacts have characteristic periods of order $\sim 2\pi/T$ and are a consequence of remaining error between the $\chi_2 + \chi_4$ decay and the true decay, such as contributions from $\chi_6$ and higher terms. These oscillations are not a consequence of the gradient optimization and we have not found a straightforward way to remove them. However, given that $S_0(\omega)$ is typically a smooth function, and that these oscillations are confined to a certain band of frequencies, smoothing or sliding-window averaging can be used to suppress the oscillations in the final estimate. Alternately, they can be fully removed if the noise spectrum can be fit to a certain functional form, such as $1/f^k$. Figure 6 shows the results of applying the cumulant expansion noise spectroscopy protocol to the same three simulated experiments shown in figure 2. The upper panels show the input, initial, and final $S(\omega)$ determined by fitting the signal decay at pulse amplitude $\Omega_P$ and total time $T$. The lower panels of figure 6 show the result of fitting $S_{\mathrm{final}}(\omega)$ from the upper panel with a general power law $C \cdot \omega^{\alpha}$, where $C$ and $\alpha$ are free parameters. Note that this power law fit is applied in a frequency range that excludes the plateau region in $S_{\mathrm{final}}(\omega)$. To obtain the initial estimate, $S_0(\omega)$, figure 6a uses the same experiment set as figure 5, figure 6b uses experiments at 15 pulse amplitudes in the range 1-120 rad/s, and figure 6c uses 9 pulse amplitudes in the range 10-1250 rad/s.
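The gradient-based refinement step of the protocol (Sec. IV A) can be sketched as follows. The forward model decay_model standing in for the χ2 + χ4 calculation, the finite-difference gradient, the 1 − RMSE fitness, and the step size are all illustrative assumptions, not the authors' implementation.

import numpy as np

def refine_spectrum(S0, decay_model, signal_exp, n_iters=200,
                    lr=0.5, eps=1e-3, update_mask=None):
    """Gradient-ascent refinement of a discretized spectrum estimate.

    S0          : initial estimate S'(w_i) on a fixed frequency grid
    decay_model : maps S' to the calculated decay s'(t_j) (chi2 + chi4)
    signal_exp  : measured decay <sigma_x(t_j)>
    update_mask : boolean array restricting updates to frequencies where
                  the exponential fit is unreliable (speeds up convergence)
    """
    S = S0.copy()
    if update_mask is None:
        update_mask = np.ones_like(S, dtype=bool)

    def fitness(S):
        # Phi = 1 - RMSE between measured and calculated decays (assumed form)
        return 1.0 - np.sqrt(np.mean((signal_exp - decay_model(S)) ** 2))

    for _ in range(n_iters):
        grad = np.zeros_like(S)
        base = fitness(S)
        for i in np.flatnonzero(update_mask):      # finite-difference gradient
            dS = np.zeros_like(S)
            dS[i] = eps * max(S[i], 1e-12)
            grad[i] = (fitness(S + dS) - base) / dS[i]
        S = np.maximum(S + lr * grad, 0.0)         # keep S'(w) non-negative
    return S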
We present a two-step protocol: (1) estimate S(ω) using the standard exponential fitting approach by probing over a set of frequencies (low-resolution signal decay experiments); (2) refine S(ω) based on fitting a single, high-resolution signal decay using the fourth order cumulant expansion calculation. Results shown are for a simulated experiment with Sin = 30Hz 2 /ω. The final spectrum estimates, S f inal (ω), are shown for three different (ΩP , T ) conditions, where T is the total pulse duration. The initial estimate S0(ω) is determined using the standard method of exponential fits to equation 19. The oscillations that appear in the range of 8-30 rad/s are artifacts (discussed in the text) and can be removed by fitting S f inal (ω) to a functional form such as 1/f k . Since this second step consists of probing at a single frequency, it is efficient in terms of experimental resources. For the cases of 1/f and 1/f 2 noise (with low frequency cutoff plateau), we have shown that this protocol allows for accurate determination of S(ω) to zero frequency, i.e. the low frequency regime where standard CW and pulsed noise spectroscopy fail. While the examples given were noise spectra of the form 1/f α , the theoretical analysis and protocol are applicable to arbitrary spectra, and in future work we plan to test this applicability in simulations and real experiments. In addition, inhomogeneous broadening is typical in physical spin systems, and should also be included. This can be expressed as an additional Hamiltonian component H(t) = βσ z , where β is a static random variable. Thus, inhomogeneous broadening yields a peak in S(ω) at ω = 0, which should enhance the oscillations in signal decay at short timescales for low probing frequencies. Our protocol should therefore reveal such broadening. Additional work could also extend the cumulant expansion noise spectroscopy protocol to include multi-axis noise and/or higher order cumulants (χ 6 ) for more general applications. FIG. 6. The result of applying the cumulant expansion noise spectroscopy protocol to three different noise spectra. The noise spectrum used to simulate each experiment is shown in blue (Sin(ω)). The spectrum obtained from the standard exponential fitting (S0(ω)) is shown in red, and the result of the cumulant expansion protocol (S f inal (ω)) is shown in green. A fitting of S f inal to a general power law form C · ω α is shown in the lower panels. The input noise spectrum, pulse amplitude ΩP , and total pulse duration are:
Predictive Factors of Response to Streptozotocin in Neuroendocrine Pancreatic Neoplasms

Pancreatic neuroendocrine neoplasms (Pan-NENs) may exhibit a heterogeneous clinical course, ranging from indolent to progressive/metastatic behavior. In the latter scenario, streptozocin (STZ) is considered the cornerstone of systemic treatment; however, response to STZ-based chemotherapy may vary among individuals. In this narrative review, we aimed to identify the predictive factors of response to STZ in advanced Pan-NENs. We performed an extensive search in international online databases for published studies and ongoing clinical trials evaluating STZ in Pan-NENs. We found 11 pertinent studies evaluating 17 patient-, tumor-, or treatment-related factors. Age, CgA blood levels, tumor grade, Ki-67% index, anatomical location of the primary tumor, tumor stage, site of metastasis origin, liver tumor burden, extrahepatic spread, functional status, O6-methylguanine-methyltransferase (MGMT) status, line of therapy, and response to previous treatments were all statistically associated with radiological response and/or survival. The identified predictors may help clinicians make appropriate treatment decisions, in this way improving clinical outcomes in patients with advanced Pan-NENs.

Introduction
Pancreatic neuroendocrine neoplasms (Pan-NENs) are a heterogeneous group of neoplasms that arise from the neuroendocrine cells of the pancreas. These tumors can exhibit diverse clinical behaviors, ranging from indolent, slow-growing tumors to aggressive and rapidly progressive malignancies [1-3]. The classification of NENs has changed over time. The WHO 2022 Classification of Endocrine and Neuroendocrine Tumors divides NENs into well-differentiated neuroendocrine tumors (NETs) and poorly differentiated neuroendocrine carcinomas (NECs). NETs are graded as G1, G2, and G3 based on increasing Ki-67 index, whereas NECs are, by definition, high grade [4]. In addition, Pan-NENs can be classified as functioning or non-functioning tumors, depending on their hormone-secreting activity.

Surgical resection of localized disease is the mainstay of therapy [5], whereas in the setting of advanced disease, systemic treatment is the standard of care. For well-differentiated NETs, the therapeutic armamentarium has progressively increased in recent decades, and comprises biotherapy (somatostatin analogues, SSAs), targeted agents (such as everolimus and sunitinib), interferon, chemotherapy, and radiopharmaceuticals (peptide receptor radionuclide therapy, PRRT). For aggressive, poorly differentiated, metastatic NECs, cytotoxic chemotherapy is the only widely available treatment. In the context of Pan-NENs, streptozocin (STZ)-based chemotherapy is considered an important option in the systemic treatment [6].

STZ (2-deoxy-2-({[methyl(nitroso)amino]carbonyl}amino)-β-D-glucopyranose) is an antibiotic and anticancer drug that was isolated for the first time from Streptomyces achromogenes in 1960. STZ has well-known diabetogenic effects due to the selective destruction of pancreatic islet β-cells, and for this reason has been largely used to induce diabetes in experimental animals [7,8]. Moreover, STZ is an alkylating agent with an established efficacy in Pan-NENs that has led to the approval of this compound on the basis of historical randomized trials [9,10].
STZ has demonstrated its activity as an anticancer drug in Pan-NENs when administered as monotherapy [11], and in combination with other chemotherapeutic agents, including 5-fluorouracil (5-FU) [12] and doxorubicin (DOX) [13]. The indication for STZ use in Pan-NENs therapy varies according to the different international guidelines for Pan-NENs, including the European ones, namely the European Neuroendocrine Tumor Society (ENETS) and the European Society for Medical Oncology (ESMO) guidelines, the National Comprehensive Cancer Network (NCCN), and the Japanese Neuroendocrine Tumor Society (JNETS) guidelines. ENETS, ESMO, and JNETS guidelines indicate STZ use for metastatic pancreatic NETs (Pan-NETs) G1, G2, and G3 but not for pancreatic NECs (Pan-NECs) [14-17]. Otherwise, according to NCCN guidelines, the level of recommendation for STZ as a therapeutic option for Pan-NENs is lower than in the European guidelines [18]. These recommendations are summarized in the Supplementary Table S1.

Moreover, given the lack of head-to-head comparative studies between the abovementioned therapies, the position of STZ in the treatment algorithm is mainly based on safety/toxicity profiles and comorbidities. The identification of predictive factors of response would then be crucial in the appropriate treatment choice of Pan-NENs. In this context, this narrative review aims to critically evaluate the available data on the potentially relevant predictive factors of response to STZ in Pan-NENs.

Materials and Methods
We performed an extensive search for published clinical studies employing STZ in Pan-NENs in international online databases (PubMed, Web of Science, and Scopus) using the following terms: neuroendocrine pancreatic tumor, neuroendocrine pancreatic neoplasm, streptozotocin. We included all the studies evaluating STZ (alone or in combination) with a robust statistical methodology (e.g., studies providing the statistical significance and the p-value to sustain the results achieved) and excluded data originating from the cumulative analysis of both Pan-NENs and NENs of other anatomic sites, regarding it as not informative/potentially misleading. From 1980 to 2022, different editions of the WHO classification of Pan-NENs have been redacted, containing differences in the nomenclature and grading of tumors. We chose to maintain the terminology provided in each of the selected studies. A schematic overview of the main changes in the subsequent editions of the WHO classification of Pan-NENs is provided in the Supplementary Table S2.

With the same keywords used to retrieve published articles, we conducted a search for possible ongoing registered clinical trials (RCTs) on the registries of the US National Institutes of Health, ClinicalTrials.gov, and the European Medicines Agency, EudraCT. The search was last updated on 1 September 2023.

Results
We detected 11 pertinent published clinical studies. The results are summarized in Table 1. As for RCTs, ClinicalTrials.gov and EudraCT did not report any trial having as its main or secondary objective the identification of predictive factors of response to STZ in Pan-NENs.
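A query of the kind described in the Methods can be sketched with Biopython's Entrez utilities; the e-mail address, term string, and retmax value below are illustrative assumptions, and Web of Science and Scopus, which the review also searched, are not covered by this interface.

from Bio import Entrez  # Biopython

# NCBI asks for a contact e-mail with E-utilities requests (placeholder).
Entrez.email = "reviewer@example.org"

# Search terms mirroring those stated in the Methods.
term = ('("neuroendocrine pancreatic tumor" OR "neuroendocrine pancreatic neoplasm") '
        "AND streptozotocin")

handle = Entrez.esearch(db="pubmed", term=term, retmax=200)
record = Entrez.read(handle)
handle.close()

print(f'{record["Count"]} PubMed records; first IDs: {record["IdList"][:5]}')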
Age
We found three studies in which age was evaluated as a predictive factor. A recent retrospective analysis was performed on 243 well-differentiated advanced Pan-NETs, selected from a database in the timeframe 1992-2013, who received the 5-FU, DOX, and STZ combined chemotherapy regimen (FAS) [19]. The study aimed to assess the objective response rate (ORR) according to the Response Evaluation Criteria in Solid Tumors (RECIST) criteria version 1.1. Survival outcomes were also considered. A total of 220 patients were evaluable for ORR and progression-free survival (PFS), whereas all 243 patients were evaluable for overall survival (OS). The median age of the study population was 56 years, and, in terms of stage, 223 patients (92%) presented a metastatic disease, with the remaining 20 (8%) presenting with locally advanced, unresectable disease. Unfortunately, data about patients' tumor grade are lacking. Univariate and multivariable Cox regression analyses for OS suggested that age > 55 vs. age ≤ 55 years correlated with a worse prognosis (p = 0.018 and p = 0.01). In the same study, in the Cox regression model for PFS, carried out on 220 evaluable cases, age was not statistically significant. Another retrospective study included 28 advanced Pan-NETs, treated with STZ/5-FU between 2002 and 2018 [20]. The data about tumor grade were available in 25 patients, 5 of them NET G1, 19 G2, and 1 G3 (grade missing in 3 cases). As regards the tumor stage, 26 (92.8%) were metastatic, 22 cases (79%) presented synchronous liver metastases (LM), and 4 metachronous LM. In two cases, the data on tumor stage were lacking. In this study, the median age was 63 years and the analysis of patients' outcome revealed no significant difference in PFS according to patients' age > or ≤ 65 years. Finally, in a retrospective study performed on 84 patients with locally advanced or metastatic pancreatic endocrine carcinoma (islet cell carcinoma) according to the 2000/2004 WHO classification (G1 and G2 NET in the latter WHO 2010 classification) treated with the FAS regimen [21], the 2-year PFS was significantly different, i.e., 26% for patients < 54 years and 51% for patients ≥ 54 years (p = 0.04), whereas the 2-year OS showed no significant difference (65% vs. 76%). Notably, age showed a significant impact on PFS both at univariate and at multivariable analysis (p = 0.03 and p = 0.005, respectively), with an age lower than the median value (equal to 54 years) being associated with worse survival.

Blood Markers
Four studies have investigated the impact of serum biomarkers on patients' response to STZ therapy. In a first work, a CgA decrease of more than 30% was associated with a significantly improved ORR (69% vs. 23%; p = 0.004) [22]. Rogers found CgA (elevated vs.
normal) not to be associated with PFS and OS at multivariate analyses (p = 0.20 and p = 0.29, respectively) [19]. Another retrospective study included 96 Pan-NETs who received the combination STZ/5-FU between 1998 and 2014 [23]. Tumor grade (classified according to the 2010 WHO classification) was G1 in 11 patients (11.5%), G2 in 76 (79.2%), G3 in 6 (6.3%), and missing in 3 cases (3.1%); tumor stage was III in 6 patients (6.3%) and IV in 90 patients (93.8%). In this study, a reduction of CgA > 30% (observed in 28 cases), compared to a reduction < 30%, correlated (p = 0.001) with treatment response measured according to RECIST criteria (version 1.0). Statistical significance was not achieved at univariate and multivariate analysis for time to progression (TTP) and OS (for TTP, p = 0.909 and p = 0.651; for OS, p = 0.117 and p = 0.741). In line with this finding, in the study by Kouvaraki, a decrease of CgA > 30% correlated with the response to treatment (ORR) (p = 0.04) [21], whereas pretreatment CgA values, normal vs. increased (available for 60 patients), had no significant impact on 2-year PFS and OS.

Associated Genetic Syndromes
We found a single study evaluating the possible role of genetic syndromes in predicting the response to STZ, namely the study by Antonodimitrakis [24], performed on 133 Pan-NENs (2010 WHO, G1 = 50, G2 = 48, G3 = 8, unknown = 27; stage I = 2; II = 4; III = 10; IV = 117) treated with a combination of STZ/5-FU in the years 1981-2014. In this retrospective study, the presence of a condition of multiple endocrine neoplasia type 1 (MEN1) did not significantly modulate any of the outcomes evaluated, namely radiological response, OS, and PFS.

Tumor Grade
Tumor grade has been evaluated as a predictor of survival in three studies, all with a retrospective design. In a series of 20 patients, all with unresectable or metastatic Pan-NENs (2017 WHO, NET G1 = 3; NET G2 = 13; NET G3 = 3; NEC = 1; stage III = 2, IV = 18) who underwent weekly STZ and oral S1 fluoropyrimidine derivative combined therapy for at least 2 months, Ono found that PFS was not significantly different in NET-G3/NEC-G3 patients compared with NET-G1/G2 patients (p = 0.4126) [25]. In another series, the presence of a G3 tumor had a negative impact on survival from the start of treatment at both univariate (p < 0.001) and multivariate analyses (p = 0.002) [24]. Also, G3 tumors had a significantly shorter PFS (6 months) than G2 (13 months) and G1 (31 months). In the study by Kouvaraki, out of 30 patients for whom histological grade information was available, high-grade tumors correlated with shorter PFS (p < 0.003 by log-rank test), while the OS did not significantly differ between patients with low- and high-grade tumors [21].

Anatomic Primary Tumor Site
A recently published retrospective study performed on 84 patients with unresectable (stage IV) Pan-NENs, treated with STZ/5-FU between 2002 and 2018 (histology available in 28 patients; G1 = 5, G2 = 19, according to the 2010 WHO classification; G3 = 4, according to the 2017 WHO classification) [27], revealed that a partial response (PR) in primary tumors was more frequent among tumors located in the pancreatic tail than those located in the pancreatic head (49% vs. 14%; p = 0.03). A trend toward prolonged OS (without reaching statistical significance) was also observed, whereas PFS was not significantly different.

Primary Tumor Size
In the above-mentioned study [27], ORR, PFS, and OS did not significantly differ according to the tumor size (≤50 mm vs. >50 mm). In another retrospective study [25], the tumor size (≤50 mm vs.
> 50 mm) was not a predictor of response to treatment, based on PFS (p = 0.175).

Tumor Stage

Tumor stage has been evaluated as a predictor of response in three studies. Rogers showed tumor stage not to be a predictor of PFS and OS [19]. In the study by Antonodimitrakis [24], stage IV emerged as a negative predictive factor at multivariate analysis for PFS (p < 0.032), whereas the impact of metastatic disease was of borderline significance at multivariate analysis for OS (p = 0.051). In the study by Kouvaraki, patients with locally advanced tumors (n = 2) did not differ from those with metastatic tumors (n = 31) in terms of ORR (25% vs. 41%; p > 0.05) [21].

Site of Metastasis Origin

In the above-mentioned study by Reher [27], it was also observed that metastases originating from the pancreatic tail achieved a PR to STZ/5-FU more frequently than metastases originating from the pancreatic head (88.5% vs. 41.7%; p = 0.005).

Liver Tumor Burden

The severity of liver involvement has been evaluated in three studies. In a multi-center evaluation, Shibuya applied 10%, 25%, and 50% as cut-off values for liver tumor burden, with no evidence of any trend for radiological response [26]. Similarly, in another study, patients with higher (> 10%) liver tumor burden did not exhibit a statistically higher ORR than those with lower (≤ 10%) involvement (p = 0.086, Fisher's exact test). In the same study, higher liver involvement had no significant impact on TTP and was associated with a statistically significant deterioration of OS at univariate (HR, 2.2; p = 0.024) but not at multivariate analysis [23]. In the study by Kouvaraki, minor (defined as ≤ 75%) liver involvement was not statistically associated with higher ORR, although it was found to be an important prognostic factor related to survival. Indeed, at univariate analysis, the PFS rate at 2 years was 41% (95% CI, 24% to 57%) in the group of patients with LM ≤ 75% (n = 61), whereas all patients with LM of more than 75% (n = 12) experienced PD by 14.2 months (p < 0.01 by log-rank test). The OS rate at 2 years was 83% for patients with LM ≤ 75%, whereas all patients with LM > 75% died by 15.5 months (p < 0.0001). The multivariate survival analysis confirmed that the extent of liver disease was independently associated with shorter PFS and OS [21].

Extrahepatic Spread

The impact of extrahepatic spread has been evaluated in two studies. In the study carried out by Lahner, the involvement of more than two metastatic sites and the presence of bone lesions were not associated with response to treatment (p = 0.244 and p = 0.237, respectively) [22]. However, the presence of bone metastases emerged as a negative prognostic factor in terms of OS at univariate and multivariate analyses (p = 0.009 and p = 0.015, respectively). This impact was not confirmed at univariate and multivariate analysis for PFS (p = 0.663 and p = 0.711, respectively). In the study by Kouvaraki, the ORR was 19% for the group of patients with extrahepatic metastases with or without liver involvement (n = 21), compared with 47% (p = 0.03 by Fisher's exact test) for the group of patients with liver metastases only (n = 55) [21].
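The 2 × 2 comparisons reported above can be checked directly. The short sketch below is a minimal illustration rather than the authors' actual analysis: it reproduces the Kouvaraki comparison of ORR between patients with extrahepatic metastases and those with liver-only disease, with the responder counts (4/21 and 26/55) back-calculated from the reported percentages.

```python
from scipy.stats import fisher_exact

# Kouvaraki et al. [21]: ORR 19% with extrahepatic metastases (n = 21)
# vs. 47% with liver metastases only (n = 55); counts are back-calculated.
table = [[4, 17],    # extrahepatic disease: responders, non-responders
         [26, 29]]   # liver-only disease: responders, non-responders
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"OR = {odds_ratio:.2f}, p = {p_value:.3f}")  # p should come out near the reported 0.03
```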
Functional Status

The potential predictive value of the presence of a hormonal syndrome (functioning Pan-NEN) was evaluated in five studies. A retrospective analysis failed to find a significant difference in ORR between functioning (n = 9) and non-functioning (n = 41) Pan-NETs (p = 0.452) [22]. Another study showed that functional status was a significant factor at univariate Cox regression analysis for OS (p = 0.034) but not in the multivariate model for OS (p = 0.36) and PFS (p = 0.15) [19]. In one more study [24], on 100 Pan-NETs evaluable for ORR and PFS, 57 were non-functioning tumors and 43 cases were associated with a hormonal syndrome (gastrinoma in 14 cases, glucagonoma in 8, insulinoma in 6, VIPoma and PTHrP-producing in 3 each, and serotonin-producing in the remaining 2 patients). In terms of PFS, functional status had a significant impact at univariate (p = 0.044) but not at multivariate analysis (p = 0.088). Among the 28 cases with a radiological response to the study treatment, 11 had a functioning Pan-NET (1 CR and 10 PRs were achieved in these patients). The study by Dilz, including 74 non-functioning and 22 functioning tumors, failed to demonstrate a significant impact of tumor functionality on ORR (p = 0.625) [23]. In the study by Kouvaraki, the diagnosis of gastrinoma (n = 11) correlated with a statistically significantly reduced ORR compared with other tumor types (n = 73), with 0% response in the case of gastrinomas vs. 45% for the remaining cases (33 responders among the other functioning tumor types together with the non-functioning tumors; p = 0.002) [21]. The diagnosis of gastrinoma did not have a significant impact on 2-year PFS and 2-year OS, nor on PFS at either univariate or multivariate analysis.

Mechanisms of DNA Repair

Only one study evaluated the possible predictive role of O6-methylguanine-DNA methyltransferase (MGMT), a protein involved in the repair of DNA damage, in Pan-NEN. In this retrospective study [28], Hijioka aimed to assess the impact of MGMT deficiency on the ORR to STZ, administered as monotherapy or as a doublet with 5-FU. The study included 13 patients with advanced well-differentiated Pan-NETs, with a tumor grade of G1 in three cases, G2 in eight, and G3 in two, according to the 2017 WHO classification, who received STZ alone (n = 3) or STZ/5-FU (n = 10). The study population consisted of 54% of cases with and 46% without MGMT expression (determined by immunohistochemistry). MGMT-negative cases had a significantly higher percentage of PRs, assessed through RECIST criteria version 1.1, compared with MGMT-positive cases (83.3% vs. 14.3%; p = 0.013) [28].

Line of Therapy

The line of therapy was evaluated in four studies. The study by Lahner [22] included patients who received the combination of STZ and 5-FU as first-line treatment in 27 cases (54%), as second-line treatment in 13 (26%), and beyond the second line in 10 (20%). The impact of STZ/5-FU, evaluated using Fisher's exact test, as first-line (n = 27) vs.
> first-line (n = 23) therapy was not significant for ORR (p = 0.387). The authors pointed out that patients receiving first-line STZ had an OS of 89 months, which dropped to 22 months for second-line treatment, a result that was statistically significant for first- vs. subsequent-line therapy (p = 0.001, log-rank test). However, the authors specify that OS was calculated from the time the drug was administered, an approach that does not permit a correct interpretation of the data. Shibuya et al., in their retrospective study, observed no statistically significant difference in ORR between STZ-based chemotherapy as first- or second-line treatment (p = 0.490 at univariate analysis and p = 0.475 in the multivariate model), with an ORR of 27.3% for first-line vs. 20.5% for second-line treatment [26]. In the study performed by Dilz, 54 Pan-NETs were treatment-naïve (56.3%) and received STZ/5-FU as first-line treatment. The other 42 included patients had received previous systemic treatment at the time of study start, the majority of them (n = 30) somatostatin analogues (SSAs), 6 other kinds of chemotherapy, and the remaining 6 patients other, unspecified treatments. The impact of STZ/5-FU as first-line vs. > first-line treatment was not statistically significant (p = 0.413). In addition, univariate and multivariate analyses confirmed that the treatment line had no significant impact on TTP (p = 0.706 and p = 0.878, respectively) [23]. In another study, univariate analysis revealed that patients who received FAS as second-line chemotherapy showed a statistical trend toward a worse 2-year PFS rate compared with patients who had not received previous chemotherapy for their disease (p = 0.08 by log-rank test), but OS did not significantly differ. Interestingly, multivariate analysis using the Cox proportional hazards model revealed that prior chemotherapy was independently associated with shorter PFS (p = 0.01) [21].

Response to Prior Treatments

The influence of previous treatments on the efficacy of STZ is another potentially relevant issue to be considered. In a study of 45 patients with advanced well-differentiated pancreatic endocrine carcinoma [29], the authors found that treatment with STZ and DOX, following a previous course of chemotherapy, had a negative prognostic effect on both objective response (p = 0.0033) and OS (p = 0.008). Moreover, they showed a further negative effect on OS in patients undergoing chemoembolization (p = 0.005). This study, however, has some weaknesses, as only 11 of the 45 patients had received previous chemotherapy, and only 4 embolization.

As a final point, given the fundamental role of ORR as an early and accurate indicator of response to treatment, we outlined (Figure 1) the criteria employed to select the eight predictors with a significant impact on this key endpoint.

Discussion

Our study identifies several potential predictors of response to STZ with at least one endpoint reaching statistical significance: age, CgA blood levels, tumor grade, Ki-67 index, anatomical location of the primary tumor, tumor stage, site of metastasis origin, liver tumor burden, extrahepatic spread, functional status, MGMT status, line of therapy, and response to previous treatments.

OS and PFS have been evaluated as outcome measures in the majority of data sources, while the correlation with radiological response has been investigated less extensively. Moreover, for some of the predictors, the findings are conflicting, possibly reflecting heterogeneity in the design of the clinical studies.
For Pan-NENs, age is considered one of the most relevant and well-known prognostic factors. It has been validated by several studies, including patients with either localized or advanced disease. Specifically, higher age has been demonstrated to correlate with reduced OS [30-32]. Interestingly, the age cut-off value differs among studies, being 60 years in many works [30,31], and 65 [32,33] or 75 years [34] in others. In line with the literature data, the study by Rogers confirmed a significant impact of higher age (> 55 years) on OS [19]. Conversely, in the study by Kouvaraki [21], age did not have a statistically significant impact on OS, albeit a trend for better OS was observed for patients older than the median value. In both cases, the regimen was FAS, but a relevant difference in terms of sample size (243 for Rogers' study vs. 84 for Kouvaraki's) should be considered, suggesting a different statistical power for the two studies. Regarding the impact of age on PFS, literature evidence is also conflicting [35,36] but, overall, shows a worse PFS in older patients. Two of the works included in our analysis did not find a statistically significant effect of age on PFS [19,20], whereas the study by Kouvaraki demonstrated that patients younger than the median value (54 years) had a worse PFS [21]. In this latter case, however, the limited sample size should be taken into account.
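To make the dichotomized-age analyses concrete, the sketch below fits a univariate Cox proportional hazards model to synthetic survival data with an age > 55 years indicator, mirroring the cutoff used by Rogers. The cohort size matches that study for scale, but the survival times, event rate, and hazard ratio are invented for illustration only.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(42)
n = 243                                      # cohort size on the scale of Rogers' study
age = rng.normal(56, 10, n)                  # median age ~56 years, as reported
age_gt_55 = (age > 55).astype(int)           # dichotomise at the published cutoff
# Assumed hazard ratio of ~1.6 for the older group; purely illustrative
os_months = rng.exponential(60, n) / np.where(age_gt_55, 1.6, 1.0)
event = (rng.uniform(size=n) < 0.7).astype(int)   # ~70% deaths, the rest censored

df = pd.DataFrame({"os_months": os_months, "event": event, "age_gt_55": age_gt_55})
cph = CoxPHFitter()
cph.fit(df, duration_col="os_months", event_col="event")
cph.print_summary()     # reports the hazard ratio and p-value for age_gt_55
```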
CgA is a protein commonly secreted by NENs, including Pan-NENs. CgA is a clinically useful biomarker of NENs, with a sensitivity of 66%, a specificity of 95%, and an overall accuracy of 71% in Pan-NENs [37,38]. However, it is important to note that CgA levels can be influenced by various factors, including concomitant medical conditions and medications. CgA levels have been linked to tumor burden in Pan-NENs [39,40], whereas its prognostic role is more controversial. Interestingly, CgA can serve as a biomarker for the evaluation of therapeutic response in GEP-NETs [41]. Overall, the studies included in our analysis demonstrated that a decrease in CgA levels was associated with an improved ORR. These data are also supported by another retrospective study that included 133 well-differentiated Pan-NETs (2010 WHO classification) treated with the combination of STZ and 5-FU [24]. In this study, 28 of the 100 cases that were radiologically evaluable for assessment of the response to treatment displayed an objective response; specifically, 3 patients had a CR and 25 a PR. In 18 (64%) of these 28 Pan-NETs, a biochemical response, with CgA levels reduced by > 50%, was also observed.

Tumor grade, which is determined using measures of tumor proliferation (mitotic index and Ki-67), is often used as a surrogate for the biological aggressiveness of NENs. Indeed, increasing tumor grade correlates with a decrease in OS and PFS in Pan-NENs [42]. The results of our search show that this relationship remains substantially valid for STZ-treated patients. Indeed, the only study reporting PFS not to be related to tumor grade is hampered by the small number of participants with NET/NEC G3 [25]. By contrast, the role of tumor grade in determining the radiological response to STZ is yet to be investigated.

The Ki-67 index is a measure of cellular proliferation, and its role as a prognostic marker in NENs is well established [4]. Several studies support a different response to chemotherapy in NENs according to the Ki-67 value [43]. One of the most relevant in this context is the NORDIC study [44], which demonstrated a significant difference in the response to platinum-based chemotherapy according to Ki-67 in advanced gastrointestinal NENs. In this study, patients with Ki-67 < 55% had a lower response rate compared with cases with Ki-67 > 55% (15% vs.
42%; p < 0.001). Further studies have supported this evidence, suggesting a Ki-67 cut-off of 55% to separate patients who mostly benefit from platinum-based chemotherapy from patients who should preferably be treated with other therapeutic options (such as targeted agents or other chemotherapy drugs) [45]. With regard to STZ, a work by Shibuya showed a higher ORR for G2 (23%) compared to G1 (20%) and G3 (18.2%) Pan-NETs. The results of our review cannot establish a definitive role for Ki-67 as a predictive marker in Pan-NENs. However, among the five studies analyzed, three were able to assign a significant role to Ki-67, namely, a better ORR for Ki-67 > 5% [26], a worse TTP and OS for Ki-67 > 15% [23], and a trend (p = 0.070) for a better PFS for Ki-67 < 10% [20]. Conversely, the remaining two studies failed to demonstrate a significant impact on ORR [22] or on PFS [25]. Moreover, in a retrospective study performed on 77 NENs (mostly pancreatic, n = 65, 84.4%) treated with STZ in combination with 5-FU or DOX, multivariate analysis indicated that PFS was longer in patients with Ki-67 < 10% than in patients with Ki-67 ≥ 10% (p = 0.034) [46]. A possible explanation for these differences in results and significance across the studies could therefore lie in the chosen Ki-67 cut-off, which is largely heterogeneous across the selected works. We can speculate that a Ki-67 value of 6 to 9% might represent the best cut-off for evaluating the response to STZ in Pan-NEN patients.

The reason for the different response of primary tumors (and related metastases) according to the anatomical location of the primary tumor [27] is unclear. Several factors might affect the outcome, such as different site-related genetic profiles and local factors (i.e., the tumor microenvironment).

In patients with Pan-NENs, stage is a well-established predictor of prognosis regardless of any other variable [47,48]. However, significant correlations with survival were found in only one of the two studies reporting PFS and OS as outcome measures. In STZ-treated patients, tumor stage also seems not to be of value for radiological response, although this conclusion is supported by a single study.

In patients with Pan-NENs, the liver is the most common site of metastasis, with approximately 28-77% of patients either presenting with synchronous LM or developing metachronous LM in their lifetime [49]. Clinical studies consistently indicate that the occurrence of LM has a detrimental effect on patient prognosis [50,51], and their extent is linked to survival [52,53]. Chemotherapy regimens including STZ are widely recommended in patients with advanced tumors when the tumor burden is high [5,16]. The results of our review show that a lower liver tumor burden is associated with better survival outcomes in Pan-NEN patients treated with STZ-based chemotherapy, irrespective of the cut-off values applied to define the extent of involvement [21,23].
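The liver tumor burden findings above rest on Kaplan-Meier estimates compared by log-rank test. A minimal sketch of such a comparison is shown below; the survival times are invented to mimic the qualitative Kouvaraki result (LM ≤ 75% vs. LM > 75%, with all high-burden patients progressing by 14.2 months) and are not the published patient-level data.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(7)
# Invented PFS times (months); the >75% group progresses much earlier
pfs_low = rng.exponential(30, 61)                         # LM <= 75%, n = 61
pfs_high = np.clip(rng.exponential(7, 12), None, 14.2)    # LM > 75%, n = 12
evt_low = rng.uniform(size=61) < 0.7                      # some censoring in the low-burden group
evt_high = np.ones(12, dtype=bool)                        # all high-burden patients progressed

km = KaplanMeierFitter()
km.fit(pfs_low, event_observed=evt_low, label="LM <= 75%")
print(km.predict(24))                                     # estimated 2-year PFS probability

result = logrank_test(pfs_low, pfs_high,
                      event_observed_A=evt_low, event_observed_B=evt_high)
print(f"log-rank p = {result.p_value:.4f}")
```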
The presence of extrahepatic metastases has been demonstrated to have a significant impact on Pan-NEN survival, and this is a crucial point to be considered in the management of these patients [5]. Among the extrahepatic sites, secondary bone lesions are considered a rather rare occurrence in Pan-NEN. However, according to the literature, about 13% of patients affected by GEP-NET develop bone metastases [54]. Of note, bone metastases have been identified as a negative prognostic factor for Pan-NEN, although only limited evidence is available. A retrospective monocentric study evaluated 314 Pan-NENs, showing that the survival of patients with bone metastases was significantly reduced compared with patients without bone lesions (p = 0.016) [55]. Another study, including NENs from different primary sites, showed a higher proportion of bone metastases in high-grade NECs than in Pan-NETs (20% vs. 8%) [56]. In this study, the impact of bone metastasis on Pan-NET survival was not significant, despite a trend suggesting a negative role for this disease localization (p = 0.222; OS of 62.1 months for patients with bone metastases vs. 75.4 months for patients without bone lesions). In the study by Lahner, included in our review, a significant impact of bone metastases on patients' OS was demonstrated, while significance was not reached in terms of PFS [22]. To date, the role of bone metastases in the therapeutic approach to Pan-NETs has not been clarified [57]. In this context, PRRT has emerged as an effective therapeutic option, also providing an improvement in associated symptoms such as bone pain [58,59]. Moreover, the presence of bone lesions has been postulated to be a negative predictor of response to chemotherapy [57], even if there are no conclusive data on this issue.

Focusing on STZ, the studies included in our analysis report conflicting results: one of them showed a non-significant impact of more than two metastatic sites, as well as of the presence of bone metastases, on ORR [22], while the other detected a lower ORR in Pan-NETs with extrahepatic secondary lesions [21]. In the work by Kouvaraki, the site of the extrahepatic lesions is not further specified. Therefore, a specific interpretation of the impact of bone lesions on STZ efficacy is not feasible.
The functional status of Pan-NETs, based on hormone secretion, has been postulated to influence the response to STZ. Specifically, non-functioning Pan-NETs are expected to have a better response to STZ treatment. A potential rationale behind this difference could be that functioning tumors are often well differentiated and exhibit slower growth rates, making them less susceptible to the cytotoxic effects of STZ. Few works have evaluated the impact of tumor functionality on the response to STZ administered both as a single agent and in combination schemes for Pan-NENs: specifically, STZ monotherapy and STZ/5-FU in the study by Moertel [12], STZ/5-FU or STZ/DOX in the study by Eriksson [60], and STZ/5-FU in the study by Schrader [20]. In these three studies, detailed statistical data are lacking, preventing a correct interpretation of the provided data. However, both Eriksson and Schrader found different ORRs to STZ-based chemotherapy according to the type of hormonal syndrome (specifically, VIPoma and insulinoma were associated with increased ORRs compared with other hormonal syndromes). In our analysis, two of the included studies failed to demonstrate a significant impact of tumor functional status on ORR [22,23], in line with the available literature data [12,20,60]. Only one work demonstrated a significantly lower ORR in patients with gastrinoma vs. other functioning and non-functioning tumors [21], supporting a differential activity of STZ according to the type of functioning Pan-NET.

MGMT loss has been advocated as a possible positive predictive factor of response to STZ in Pan-NENs [28]. Interestingly, in a study performed on NENs of different anatomic sites, including the pancreas, PFS and OS from first alkylating-agent use (temozolomide, dacarbazine, and STZ) were longer in patients with MGMT protein loss (20.2 vs. 7.6 months, p < 0.001, and 105 vs. 34 months, p = 0.006, respectively), suggesting that MGMT status is associated with the response to alkylating-agent-based chemotherapy in NENs [61]. In a study performed in 2023 by Yagi, of the 19 cases treated with STZ with known MGMT status, 6 had SD and 4 PD among MGMT-positive patients (n = 10), while 5 had PR and 4 SD among MGMT-negative patients (n = 9); these data support the role of MGMT status in modulating the response to STZ [62]. While the data reported in our review are intriguing, confirmation in prospective controlled studies is awaited. The importance of this field is also testified by the high percentage of Pan-NENs that are MGMT-deficient [63,64]. Moreover, additional mechanisms of DNA damage repair should be explored.
The response to STZ may be influenced by the previous treatments received by the patient. However, there are limited data in the literature on the efficacy of STZ in Pan-NENs in patients who have already received previous therapies, probably because STZ-based chemotherapy regimens have long been the first line of treatment in patients with NENs. In Delaunoit's study [29], patients who had not been previously exposed to chemotherapy, or who had received limited prior treatments, had a better response to STZ. In the same direction, the study by Kouvaraki confirmed the line of therapy (specifically, FAS as second-line chemotherapy) as a negative independent prognostic factor for PFS; however, these data were not confirmed for OS [21]. One possible explanation is that patients who have been extensively treated with other chemotherapy agents might have developed resistance mechanisms to other anticancer agents, including STZ. Patients who had received chemoembolization with DOX also showed reduced OS; however, the small number of patients treated with chemoembolization makes the interpretation of these data difficult [29]. Finally, a brief observation should be made about the role of prior surgery as a potentially impactful factor. In this context, a retrospective study demonstrated, at both univariate and multivariate analysis (p = 0.004 and p = 0.009, respectively), an increase in OS in a population of 133 patients who had previously undergone surgery [24]. These data were not confirmed at the PFS evaluation conducted on 100 of the included patients (univariate p = 0.817; multivariate p = 0.754). Notably, only 38 of the 133 patients underwent surgery of the primary tumor, and the criteria for choosing surgery are not described in detail. Furthermore, this study also included some patients with NEN G3, and the characteristics of the surgically resected patients are not reported [24]. A potential positive prognostic impact of previous surgery was not confirmed by Kouvaraki, who, at both univariate and multivariate analyses, found no statistically significant differences in PFS and OS in Pan-NEN patients with previous surgery [21]. In their retrospective study, Rogers et al. also found a non-significant increase in PFS (p = 0.57) and OS (p = 0.25) in patients with advanced Pan-NENs who had received surgery of the primary tumor and were subsequently treated according to the FAS scheme [19]. The same was found in Delaunoit's study [29]. However, the results of these studies are not easily comparable, both because of the different sample sizes and because of the different chemotherapy schedules used across the various studies. Furthermore, Antonodimitrakis's study also included NEN G3 patients; given their high proliferative index, these patients generally respond better to chemotherapy and surgery, ultimately resulting in a reduction of the disease burden, which could have a positive impact on PFS and OS.

The following features, on the contrary, failed to show any statistically significant endpoint: performance status (PS), association with genetic syndromes, primary tumor size, and somatostatin receptor expression.
It is well known that a patient's PS can influence treatment response. Generally, patients with good PS and fewer comorbidities tend to have better treatment outcomes in different types of cancer, including NENs [44]. Literature data confirm this observation for NENs treated with chemotherapy [65]: in this study of 57 NENs (66.7% Pan-NENs) receiving chemotherapy (the FOLFOX scheme, a combination of 5-FU and oxaliplatin) plus the antiangiogenic bevacizumab, a PS of 0 correlated with a higher ORR (p = 0.034). In the study included in our analysis [22], PS was not found to have a significant impact on the response to STZ-based chemotherapy. However, patients with lower PS scores presented a higher ORR and a decreased PD rate compared with cases with PS = 2. Therefore, we cannot rule out that, in this case, the limited sample size reduced the statistical power to detect significant outcomes.

MEN1 syndrome apparently has no effect on radiological response, OS, or PFS; however, the low number of cases (8 of 133 subjects, 6%) in the study by Antonodimitrakis [24] might have masked a possible predictive role.

As for primary tumor size, no difference in STZ response was found. Coupling these data with the limited capacity of STZ to significantly reduce primary Pan-NEN volume, the benefit of STZ in symptomatic patients with large primary tumors remains to be determined.

Finally, somatostatin receptor imaging (SRI) did not correlate with PFS, although this conclusion comes from a single study [20] in which the low number of subjects (n = 28) may have obscured a possible statistically significant difference. Interestingly, in a work by Krug [46] on 77 NEN patients, mostly Pan-NENs (n = 65, 84.4%), a positive Octreoscan (56 of 70 evaluable patients) predicted a better ORR (p = 0.046). Lastly, we might speculate that newer SRI techniques, with higher sensitivity and specificity, could represent a powerful tool for predicting responses.

Study Limitations

The current study has some limitations. Most of the evaluated studies did not have the identification of predictors of response to STZ as the primary outcome, and the sample size was therefore not properly calculated for this specific aim. Other limitations are the retrospective nature of many of the selected works, the heterogeneity of the included populations, and the different versions of the WHO classification, which has changed over the years. Finally, the chemotherapy regimens employed also differ among the included studies, ranging from STZ administered as monotherapy to combinations of STZ with other anticancer drugs (mainly the antimetabolite 5-FU and the anthracycline DOX as doublets, but also in triplets in the FAS scheme).

Conclusions

In our review, we have identified, summarized, and critically evaluated the possible predictive factors available in the scientific literature, with the hope of helping clinicians to maximize the chances of response to STZ in patients with Pan-NENs. Future clinical trials, specifically aimed at elucidating the value of the already-identified factors and eventually identifying novel ones, are warranted.

Supplementary Materials: The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/jcm12247557/s1: Table S1: Streptozotocin recommendations according to key guidelines for pancreatic neuroendocrine neoplasms; Table S2: WHO classifications of pancreatic neuroendocrine neoplasms from the 1980 to 2022 editions.

Table 1. Characteristics of the selected studies evaluating STZ for pancreatic neuroendocrine neoplasms. The key study data are listed in the order in which they are reported in the manuscript.
Identification and characterisation of anti-IL-13 inhibitory single domain antibodies provides new insights into receptor selectivity and attractive opportunities for drug discovery

Interleukin-13 (IL-13) is a cytokine involved in T-cell immune responses and is a well validated therapeutic target for the treatment of asthma, along with other allergic and inflammatory diseases. IL-13 signals through a ternary signalling complex formed with the receptors IL-13Rα1 and IL-4Rα. This complex is assembled by IL-13 initially binding IL-13Rα1, followed by association of the binary IL-13:IL-13Rα1 complex with IL-4Rα. The receptors are shared with IL-4, but IL-4 initially binds IL-4Rα. Here we report the identification and characterisation of a diverse panel of single-domain antibodies (VHHs) that bind to IL-13 (KD 40 nM-5.5 μM) and inhibit downstream IL-13 signalling (IC50 0.2-53.8 μM). NMR mapping showed that the VHHs recognise a number of epitopes on IL-13, including previously unknown allosteric sites. Further NMR investigation of VHH204 bound to IL-13 revealed a novel allosteric mechanism of inhibition, with the antibody stabilising IL-13 in a conformation incompatible with receptor binding. This also led to the identification of a conformational equilibrium for free IL-13, providing insights into the differing receptor signalling complex assembly seen for IL-13 compared to IL-4, with formation of the IL-13:IL-13Rα1 complex required to stabilise IL-13 in a conformation with high affinity for IL-4Rα. These findings highlight new opportunities for therapeutic targeting of IL-13 and we report a successful 19F fragment screen of the IL-13:VHH204 complex, including binding sites identified for several hits. To our knowledge, these 19F-containing fragments represent the first small molecules shown to bind to IL-13 and could provide starting points for a small-molecule drug discovery programme.

Introduction

Interleukin-13 (IL-13) is a Th2-type cytokine exhibiting both proinflammatory and anti-inflammatory effects that is produced by several cell types, including activated T-helper type 2 cells, basophils, eosinophils and mast cells (1-3). It displays pleiotropic behaviour by acting on different cell types, including B-cells, fibroblasts, macrophages and endothelial cells (4). While IL-13 is a key mediator in the onset of asthma, playing an active role in IgE production, mucus hypersecretion, airway fibrosis and hyperreactivity to inhaled spasmogens (5-7), it is also the dominant cytokine in the induction of tissue fibrosis associated with chronic inflammation (8). In addition, IL-13 has crucial immunosuppressive and anti-inflammatory functions, inhibiting the production and release of other proinflammatory cytokines, such as IL-1, IL-6 and IL-12, as well as downregulating inflammatory mediators such as leukotrienes and prostaglandins (9). IL-13 shares 25% amino acid sequence homology with interleukin-4 (IL-4), and their close functional relationship is evidenced by the sharing of two cell surface receptors. IL-4 signalling is mediated via the assembly of either type I receptor complexes, comprising IL-4Rα and the common gamma-chain (γc), or via the formation of type II receptor complexes with IL-4Rα and IL-13Rα1, the latter also being the functional receptor complex for IL-13 (10). IL-4 and IL-13 show distinct, sequential signalling complex formation. IL-4 initially interacts with IL-4Rα (KD 1 nM), followed by binding of this binary complex to either γc (KD 559 nM) or IL-13Rα1 (KD 487 nM).
In contrast, IL-13 has negligible affinity for IL-4Rα alone and must first bind to IL-13Rα1 (KD 30 nM), which greatly enhances the interaction with IL-4Rα (KD 20 nM) (11). IL-13 and IL-4 are prototypical four-helix bundle, short-chain cytokines, with an up-up-down-down topology of the helices (12-14). There are a few important structural differences between IL-13 and IL-4, including an additional disulphide bond between helices A and D in IL-4 and an extended helix C. In the formation of type II receptor signalling complexes, both IL-4 and IL-13 interact with IL-13Rα1 via a hydrophobic cleft between helices A and D, which contains different amino acids in the two cytokines but retains the same surface shape and hydrophobicity (11). There are two charged residues in IL-4 (E9 and R88) that are essential and form key interactions with IL-4Rα. These residues are conserved in IL-13 (E12 and R65); however, the surrounding receptor binding surface of IL-13 differs from that of IL-4. The IL-4Rα binding site of IL-4 is predominantly positively charged, with the corresponding surface of IL-4Rα negatively charged (15), but this charge complementarity is not conserved in the interface between IL-13 and IL-4Rα, which in part probably accounts for the negligible affinity between free IL-13 and IL-4Rα.

Currently, there are several anti-IL-13 therapeutic antibodies available for the treatment of allergic and inflammatory diseases (16-18). The IL-13 and IL-4 signalling pathways have also been implicated in cancer biology (19), which further highlights IL-13 as a well-validated therapeutic target. Here, we report the results of a pioneering antibody-assisted drug discovery approach applied to IL-13, which identified attractive new options for small molecule drug discovery, together with providing new insights into receptor selectivity and allosteric regulation of IL-13. We describe the identification and characterisation of a diverse panel of inhibitory llama single-domain antibodies (VHHs) targeting IL-13, including NMR-based mapping of the antibody epitopes, which led to the identification of several previously unknown allosteric regulatory sites on IL-13. The allosterically acting VHH204 was found to have an interesting mechanism of inhibition, involving stabilisation of a conformation of IL-13 unable to bind IL-13Rα1 (the receptor-incompetent state). This led to the identification of a functionally important conformational equilibrium present in free IL-13, which provides a molecular basis for the differing receptor selectivity and staged ternary signalling complex assembly seen for IL-13 and IL-4. To assess the potential to identify small-molecule fragments that bound specifically to the receptor-incompetent conformation of IL-13, we screened a 19F fragment library against the IL-13:VHH204 complex. The screening of small molecule modulators against cytokines has previously been implemented against IL-2 (21), whereby the small molecule inhibitors targeted allosteric sites that are inherently adaptive between the free and receptor-bound forms of IL-2, and also induced structural features incompatible with IL-2:IL-2Rα complex formation (22). Similarly, in this study the 19F fragment screen against the IL-13:VHH204 complex identified a number of hits that were shown to bind to several functionally significant regions on IL-13 and that could potentially be developed to stabilise the receptor-incompetent state of the cytokine.

Materials and methods

Protein expression and purification

The coding region for human IL-13 (1-113) was cloned into pET3a(+).
IL-13 was expressed as inclusion bodies in BL21 (DE3) pLysS E. coli cells (Novagen). For NMR studies, uniformly 15N- and 15N/13C-labelled IL-13 was expressed in BL21 (DE3) pLysS E. coli cells cultured in modified Spizizen's minimal media (23) containing 15N-NH4Cl and 13C-glucose. For non-isotopically labelled IL-13, cells were cultured in LB media at 37°C and protein expression was induced with 0.5 mM IPTG at an optical density of 0.7 at 600 nm. The cells were cultured for 16 h before harvesting by centrifugation. Insoluble IL-13 was refolded and purified by optimising the procedure described by Moy et al., 2001 (13). Cell pellets were resuspended in 50 mM Tris-HCl, pH 8.0, supplemented with cOmplete protease inhibitors (Roche), 1 mM MgCl2, benzonase (MilliporeSigma) and 0.5 mg/mL lysozyme, and lysed by cell disruption (Constant Systems) at 30 kpsi before inclusion bodies were collected by centrifugation. Inclusion bodies containing IL-13 were washed twice with 100 mM Tris-HCl, pH 7.8, 5 mM DTT, 5 mM EDTA, 2 M urea, 1.0% v/v Triton X-100, and once in the same buffer without detergent and urea. Following washing, inclusion bodies were resolubilised at 2 mg/mL in 6 M guanidine-HCl, 50 mM Tris-HCl, pH 8.5, 1 mM EDTA and 20 mM DTT. Resolubilised IL-13 was refolded by dropwise 1:20 dilution into refolding buffer (50 mM Tris-HCl, pH 8.2, 100 mM NaCl, 3 M guanidine-HCl, 1 mM oxidised glutathione). IL-13 was refolded at room temperature for 24 h. The refolding mixture was concentrated by tangential flow filtration (Sartorius) and then dialysed into 25 mM Tris-HCl, pH 7.5, 100 mM NaCl, or NMR buffer (25 mM sodium phosphate, pH 6.0, 100 mM NaCl), prior to purification by size exclusion chromatography using a Superdex-75 column (GE Healthcare).

VHH coding sequences were cloned into pET21a(+) with an N-terminal hexa-histidine tag. VHHs were expressed as inclusion bodies in BL21 (DE3) E. coli cells (Novagen). Cells were cultured at 37°C in LB media and protein expression was induced with 0.5 mM IPTG at an optical density of 0.7 at 600 nm. The cells were then cultured overnight before harvesting by centrifugation. Cell pellets were resuspended in 50 mM Tris-HCl, pH 8.0, supplemented with cOmplete protease inhibitor (Roche), 1 mM MgCl2, benzonase (MilliporeSigma) and 0.5 mg/mL lysozyme, and lysed by cell disruption at 27 kpsi before inclusion bodies were collected by centrifugation. The inclusion bodies were washed twice with 100 mM Tris-HCl, pH 7.8, 5 mM EDTA, 2 M urea, 1.0% v/v Triton X-100 and 5 mM DTT, and once in the same buffer without detergent and urea. Washed inclusion bodies were resolubilised at 0.5 mg/mL in 6 M guanidine-HCl and 2 mM DTT and refolded by dialysis. Resolubilised inclusion bodies were dialysed twice against a 10X volume of 50 mM Tris-HCl, pH 8.5, 1 M guanidine-HCl, 0.2 mM oxidised glutathione and 1 mM reduced glutathione, and then twice against a 10X volume of 1X PBS, pH 7.4. The refolded N-His-tagged VHHs were initially purified by affinity chromatography on a Ni-NTA Superflow column (QIAGEN). The column was washed with 5 column volumes of 1X PBS, pH 7.4, containing 20 mM imidazole, followed by elution of the VHHs over a 10-column-volume gradient of 20 to 500 mM imidazole. Final purification of the VHHs was performed by size exclusion chromatography using a Superdex-75 column (GE Healthcare) previously equilibrated in 25 mM Tris-HCl, pH 7.5, 100 mM NaCl, or NMR buffer (25 mM sodium phosphate, pH 6.0, 100 mM NaCl).
VHH phage display biopanning of IL-13

In order to perform phage display biopanning of IL-13, purified IL-13 was biotinylated on its positively charged surface residues using a Lightning-Link Rapid Biotin Conjugation Kit (Type B), following the manufacturer's instructions (Innova Biosciences). A naïve llama VHH phage library, provided by UCB Biopharma, was used to perform enrichment of phage that bound to IL-13, using a protocol adapted from Wilkes et al., 2020 (24). 500 nM biotinylated IL-13 was immobilised on a Nunc MaxiSorp ELISA plate (Thermo Fisher) coated with 5 μg/mL neutravidin (Thermo Fisher) in 1X PBS, pH 7.4, for round one (R1) of biopanning, and 5 μg/mL streptavidin (Thermo Fisher) in 1X PBS, pH 7.4, for round two (R2). In parallel, neutravidin- or streptavidin-only plates were used as controls. Following incubation, all unbound phage were removed with 5 to 20 washes in PBS-T (0.05% v/v Tween-20 in 1X PBS, pH 7.4) followed by two washes in 1X PBS, pH 7.4, to remove the detergent. Bound phage were eluted with 100 mM HCl and subsequently neutralised with 1 M Tris-HCl, pH 8.0. Monoclonal rescue was performed on individual phage colonies resulting from R2 of biopanning, as previously described (24). Subsequently, monoclonal ELISA assays were performed to confirm binding of the VHHs to IL-13. Briefly, 10 μg/mL biotinylated IL-13 in E-Blocking buffer (1% w/v BSA in 1X PBS, pH 7.4) was immobilised on a Nunc MaxiSorp ELISA plate previously coated overnight at 4°C with 5 μg/mL streptavidin and then blocked for 1 h with E-Blocking buffer. In parallel, a streptavidin-only plate was used as a control. Monoclonal rescued phage were blocked with P-Blocking buffer (2% w/v BSA and 2% w/v milk in 1X PBS, pH 7.4), added to individually washed wells and incubated for 1 h. HRP-linked anti-M13 antibody (Thermo Fisher), at a final dilution of 1:10,000 in P-Blocking buffer, was added to each washed well and incubated for 1 h. 50 μL of 1-Step Ultra TMB-ELISA substrate solution (Thermo Fisher) was added to each well and the reaction was allowed to proceed for 10 min at room temperature. The reaction was quenched by adding 50 μL/well of 2.5% w/v sodium fluoride in water. A microplate reader (Versamax) was used to read the target absorbance at 630 nm and the background at 490 nm. VHH clones that showed binding to IL-13 in the monoclonal ELISA were sequenced by Eurofins Genomics. The cDNAs of the identified unique binders were then reformatted from the phagemids into a pET21a(+) vector by ligation-independent cloning (LIC) using the In-Fusion HD Cloning Kit (Takara Bio) following the manufacturer's recommendations.

Bio-layer interferometry experiments

Binding between IL-13 and the VHHs was assessed by bio-layer interferometry using an Octet QKe system (Sartorius). Protein samples were diluted into 1X HBS-EP+ buffer (10 mM HEPES, pH 7.4, 150 mM NaCl, 3 mM EDTA and 0.005% v/v Tween-20) and experiments were carried out at 25°C with constant shaking at 1,000 rpm. Ni-NTA biosensors (Sartorius) were loaded with each N-terminally His-tagged VHH at a concentration of 100 nM until a binding response of approximately 1.5 nm was achieved. The VHHs were titrated with increasing concentrations of IL-13, with a total time of 300 s for association followed by 300 s for dissociation. Raw data were double referenced and aligned using the Octet Data Analysis Software (Sartorius) and analysed using Prism 7 software (GraphPad).
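The steady-state analysis of such titrations reduces to fitting the plateau response at each IL-13 concentration to a one-site saturation model, R_eq = Rmax*C/(KD + C). The sketch below shows this fit in Python with hypothetical response values; the published analysis was performed in Prism, so this is an illustration of the model rather than the authors' exact workflow.

```python
import numpy as np
from scipy.optimize import curve_fit

def one_site(conc_nM, rmax, kd_nM):
    """Steady-state one-site saturation binding: R_eq = Rmax*C/(KD + C)."""
    return rmax * conc_nM / (kd_nM + conc_nM)

# Hypothetical plateau responses (nm) from an IL-13 titration over a VHH-loaded sensor
conc = np.array([62.5, 125, 250, 500, 1000, 2000, 4000])       # nM
r_eq = np.array([0.05, 0.10, 0.17, 0.28, 0.41, 0.52, 0.60])    # nm shift

popt, pcov = curve_fit(one_site, conc, r_eq, p0=[0.8, 1000])
rmax, kd = popt
kd_err = np.sqrt(np.diag(pcov))[1]
print(f"Rmax = {rmax:.2f} nm, KD = {kd:.0f} +/- {kd_err:.0f} nM")
```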
In-cell activity assays

The effect of the VHHs on IL-13 signalling was investigated using an in-cell activity assay in HEK-Blue IL-4/IL-13 cells (InvivoGen), specifically designed to monitor the activation of the STAT6 signalling pathway induced by IL-13 via the expression of a soluble reporter gene, secreted embryonic alkaline phosphatase (SEAP). Cells, maintained in a static incubator at 37°C under a 5% CO2 atmosphere, were cultured according to the manufacturer's recommendations in Growth Medium: Dulbecco's Modified Eagle Medium (DMEM; Sigma-Aldrich) supplemented with 10% v/v fetal bovine serum (FBS; Sigma-Aldrich), 50 U/mL penicillin (Sigma-Aldrich), 50 μg/mL streptomycin (Sigma-Aldrich), 2 mM L-glutamine (Sigma-Aldrich), 100 μg/mL normocin (InvivoGen), 10 μg/mL blasticidin (InvivoGen) and 100 μg/mL zeocin (InvivoGen). The assays were performed in Test Medium: modified Growth Medium lacking blasticidin and zeocin, and with FBS replaced with heat-inactivated FBS (Sigma-Aldrich). All proteins used for the experiments were diluted in Test Medium, with purified IL-13 at between 0 and 100 ng/mL used to build a calibration curve reflecting cytokine activity. IL-13 at 5 ng/mL, in the absence or presence of a saturating concentration of each VHH (10 times the KD determined by BLI), was used to evaluate the effects of the VHHs on signalling activity. In addition, IL-13 from OriGene was used as a positive control and IL-6 from InvivoGen as a negative control. For the assays, 180 μL/well of cells were seeded on a 96-well microwell plate at a cell density of 280,000 cells/mL. 20 μL protein samples (pre-incubated for 30 min at room temperature) were dispensed into each well and the cells were incubated for 24 h in a static incubator at 37°C with a 5% CO2 atmosphere. Next, 20 μL of the cell supernatant was transferred to a new 96-well plate and 180 μL/well of QUANTI-Blue substrate (InvivoGen), pre-warmed to 37°C, was added. Following a static incubation at 37°C for 10 min, to allow the SEAP to metabolise the QUANTI-Blue into a colorimetrically detectable product, the 96-well plate was transferred to a microplate reader (Versamax) to measure the absorbance at 640 nm. The assay results were analysed using Prism 7 software (GraphPad).

NMR spectroscopy: VHH epitope mapping

For epitope mapping of the VHHs, 15N/1H TROSY-HNCO (25) spectra were acquired from uniformly 15N/13C-labelled IL-13 (110 μM) with a 10% molar excess of VHH in 25 mM sodium phosphate, pH 6.0, 100 mM NaCl, 10 mM EDTA, 0.02% w/v sodium azide buffer containing 5% D2O/95% H2O. TROSY-HNCO experiments were collected using either a Bruker Avance II 800 MHz spectrometer or a Bruker Avance III 600 MHz spectrometer, equipped with a cryoprobe, at 25°C, with acquisition times of 80 ms in 1H, 21 ms in 15N and 25 ms in 13C. Non-uniform sampling (NUS) of 25% was used during data collection and datasets were reconstructed using the IST algorithm within NMRPipe (26). NMRPipe was used for data processing and spectra were analysed using NMRFAM-Sparky (27). The minimum chemical shift change for each backbone amide NMR cross-peak between free and VHH-bound IL-13 was determined by calculating the lowest possible combined shift change to any peak in the bound spectrum, using a weighted combination of the differences in 1H, 15N and 13CO chemical shifts between the free and bound spectra, with the 15N and 13CO shift differences scaled down to account for the larger chemical shift ranges of these nuclei.
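In practice, the minimal shift approach takes, for each assigned free-state peak, the smallest weighted combined shift to any peak in the bound-state spectrum. A sketch of this calculation is given below; the 15N and 13CO weighting factors shown are common literature choices and are not necessarily those used in this work.

```python
import numpy as np

# Scale factors to account for the different chemical shift ranges of each nucleus;
# these weights are conventional choices, not necessarily those used in the paper.
W_N, W_CO = 0.2, 0.3

def combined_shift(free_peak, bound_peak):
    """Weighted combined shift between two (1H, 15N, 13CO) peak positions in ppm."""
    dh = free_peak[0] - bound_peak[0]
    dn = W_N * (free_peak[1] - bound_peak[1])
    dc = W_CO * (free_peak[2] - bound_peak[2])
    return np.sqrt(dh**2 + dn**2 + dc**2)

def minimal_shift(free_peak, bound_peaks):
    """Smallest possible combined shift from a free-state peak to any bound-state peak."""
    return min(combined_shift(free_peak, b) for b in bound_peaks)

# free_peaks: dict of residue -> (1H, 15N, 13CO); bound_peaks: list of (1H, 15N, 13CO)
# min_shifts = {res: minimal_shift(pk, bound_peaks) for res, pk in free_peaks.items()}
```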
NMR spectroscopy: sequence-specific backbone resonance assignments of IL-13 when bound to VHH204

Sequence-specific backbone resonance assignments for IL-13 bound to VHH204 were determined using a combination of TROSY-HNCACB, TROSY-HN(CO)CACB, TROSY-HNCA, TROSY-HN(CO)CA, TROSY-HNCO and TROSY-HSQC spectra (25). 3D NMR experiments were acquired from a 240 μM sample of uniformly 15N/13C-labelled IL-13 bound to unlabelled VHH204 (10% molar excess of the VHH), in 25 mM sodium phosphate, pH 6.0, 100 mM NaCl, 10 mM EDTA, 0.02% w/v sodium azide buffer containing 5% D2O/95% H2O. NMR experiments were collected on a Bruker Avance III 600 MHz spectrometer, equipped with a cryoprobe, at 25°C. Typical acquisition times were 90 ms in 1H, 18 ms in 15N and 8 ms in 13C (25 ms for CO). Triple resonance experiments were acquired using NUS at 32% and datasets were reconstructed using the IST algorithm within NMRPipe. NMRPipe was used for data processing, with the effective acquisition time in 1H reduced to 60 ms for the TROSY-HNCA and TROSY-HN(CO)CA experiments. Analysis of all spectra was carried out manually using NMRFAM-Sparky. For the detection of selected NOEs for IL-13 when bound to VHH204, a 15N/1H NOESY-TROSY experiment was collected from a 300 μM sample of uniformly 15N-labelled IL-13 bound to VHH204 (10% molar excess of the VHH) under the same conditions as described above (28). The experiment was collected with acquisition times of 70 ms in direct 1H, 18 ms in 15N and 18 ms in indirect 1H, with an NOE mixing time of 600 ms. NUS of 32% was used during data collection and the data were reconstructed using the IST algorithm within NMRPipe. During processing, the effective acquisition time in direct 1H was cut back to 45 ms. Analysis of all spectra was carried out manually using NMRFAM-Sparky.

19F fragment screening of the IL-13:VHH204 complex

A library of approximately 1100 fluorine-containing fragments was cocktailed into groups of 12, ensuring no overlap of 19F signals. Cocktails were initially prepared at a concentration of 4.2 mM in d6-DMSO and diluted to 800 µM in 50 mM Tris-HCl, pH 7.5, 100 mM NaCl before a final dilution to a 40 µM ligand concentration (1% d6-DMSO) in either 50 mM Tris-HCl, pH 7.5, 100 mM NaCl containing 10% D2O (for control samples), or the same buffer with the addition of 20 µM IL-13:VHH204 complex (for protein complex samples). NMR spectra were acquired at 25°C on a Bruker 600 MHz Avance Neo spectrometer fitted with a 5 mm QCI-F CryoProbe and a SampleJet sample changer. Data were collected using a 19F Carr-Purcell-Meiboom-Gill (CPMG) pulse sequence (29, 30) with a total echo time of 160 ms across a sweep width of 126 ppm, with an acquisition time of 1 s. All spectra were processed using TopSpin 4.0.9. Fragments were considered binders when the 19F signal intensity was significantly reduced in the spectra with protein present compared to the spectra recorded in the absence of protein. The initial screen of 1100 fragments using 19F CPMG NMR resulted in 40 fragments that showed binding to the IL-13:VHH204 complex. These were further investigated using 1H Saturation Transfer Difference (STD) NMR (31). STD NMR samples were prepared with a ligand-to-protein ratio of 100:1 (1 mM ligand, 10 µM protein) in 500 µL of 50 mM Tris-HCl, pH 7.5, 100 mM NaCl (90% H2O, 10% D2O) with 5% d6-DMSO to help solubilise the ligand. STD NMR spectra were recorded using a Bruker 600 MHz Avance Neo spectrometer equipped with a 5 mm QCI-F CryoProbe.
Data were acquired and processed using the standard Bruker software and collected at 25°C. The protein was saturated in the methyl region of the spectrum at 0 ppm and off-resonance saturation was performed at 33 ppm. A series of 120 EBurp2 pulses of 50 ms was applied, with a 4 µs delay between each pulse, resulting in a total saturation time of 6 s. Protein signals were removed by applying a 100 ms spinlock. Interleaved on- and off-resonance data were recorded and processed separately, and the difference spectra were then obtained by subtracting the on- from the off-resonance spectra. Data were zero-filled once and an exponential multiplication window function applied (LB 2 Hz). To rank the relative strength of binding, percentage STD values were determined for each fragment by taking the ratio of the integral area of the peaks in the difference spectrum to those in the off-resonance spectrum. Eight hits from the initial 19F NMR screen showed a positive response for binding to the IL-13:VHH204 complex by 1H STD NMR. These were followed up with protein-observed 15N/1H NMR experiments to locate their binding sites on the IL-13:VHH204 complex.

Results

Generation of inhibitory VHHs targeting IL-13

A naïve llama VHH phage library, provided by UCB Biopharma, was used to perform phage display biopanning against biotinylated IL-13. Out of 96 VHH clones selected as hits after 2 rounds of panning, a total of 30 showed binding towards IL-13 in a monoclonal ELISA assay (Figure 1A). The 30 hits were sequenced by Eurofins Genomics and analysis of the amino acid sequences was carried out using Clustal Omega integrated in the MEGA X software (33). The 30 VHHs were grouped into 16 families, with some differences seen in CDRs 1 and 2, and a greater degree of diversity in CDR3 (Figure 1B). The VHH families showed diversity in CDR3 in terms of both length (5-19 residues) and biochemical properties, with examples of families with an additional disulphide bond, as well as CDRs with a markedly acidic or basic nature. Despite the relatively low number of anti-IL-13 VHHs identified, the VHHs selected through biopanning showed a high degree of sequence diversity (Figure 1B). Following reformatting, bacterial expression and purification of 16 VHHs representative of the families identified, a series of biophysical and functional studies was undertaken. Initially, binding of the VHHs to IL-13 was characterised by bio-layer interferometry (BLI (34)) using an Octet QKe system (Sartorius). His-tagged VHHs were loaded onto Ni-NTA biosensors and titrated with increasing concentrations of untagged IL-13. As illustrated by the example sensorgrams shown for VHH204 binding to IL-13 in Figure 1C, IL-13 was confirmed to bind to all the VHHs except VHH206, with a typical concentration-dependent association followed by complete dissociation. Moreover, the steady-state binding curves obtained were consistent with a one-site saturation binding model (Figure 1D and Supplementary Figure 1A), as shown for a number of VHHs representative of distinct sequence families in Supplementary Figure 1A. Table 1 summarises the dissociation constants (KD) determined for the panel of anti-IL-13 VHHs, which range from 40 nM to 5.5 μM, as expected for VHHs from a naïve library. The effects of the selected VHHs on IL-13 signalling were investigated by an in-cell activity assay using HEK-Blue IL-4/IL-13 cells (InvivoGen), specifically designed to monitor the activation of the STAT6 pathway induced by IL-13 signalling.
IL-13 was preincubated with each VHH at a saturating concentration, corresponding to 10 times the KD, before addition to the assay. Figure 1E shows the dose-response curve obtained for VHH204, revealing inhibition of IL-13 signalling with an IC50 of 26.2 ± 1.0 μM. All VHHs showed complete inhibition of IL-13 signalling, with IC50 values ranging from 0.2 μM to 53.8 μM (Supplementary Figure 1B and Table 1). In general, the IC50 values determined for the VHHs are in agreement with the corresponding affinities, with a lower concentration of a tight binder and a higher concentration of a weak binder required to fully inhibit IL-13 signalling through IL-13Rα1 and IL-4Rα.

Epitope mapping of inhibitory VHHs by NMR

We performed NMR chemical shift perturbation mapping studies to determine the binding epitopes for the panel of inhibitory anti-IL-13 VHHs. TROSY-HNCO spectra of uniformly 15N/13C-labelled IL-13 were collected in the presence of a 10% molar excess of each unlabelled VHH. Minimal backbone chemical shift changes (N, NH, CO) induced by the binding of the VHHs to IL-13 were determined (35) and mapped onto the structure of IL-13, revealing multiple different binding epitopes decorating the surface of IL-13. These could be grouped into 7 distinct VHH binding sites. The largest group, typified by VHH235, contains 8 VHHs, which induce chemical shift perturbations throughout helices A and D and are likely to sterically block the main interface between IL-13 and IL-13Rα1 (Supplementary Figures 2B, G, L). Three VHHs, including VHH227, interact with the site III region of IL-13 and clearly block the binding of IL-13Rα1 domain 1 (Supplementary Figures 2C, H, M). VHH245 appears to uniquely bind to the bottom of helices A and D, blocking the interaction sites for both IL-13Rα1 and IL-4Rα (Supplementary Figures 2D, I, N). The remaining VHH interaction sites identified on IL-13 appear to be inhibitory through non-steric blocking of receptor binding and therefore represent previously unknown allosteric regulatory sites on IL-13. For example, the NMR mapping results for VHH238 highlight that it binds to the C-terminus of helix A and the N-terminus of helix D, which suggests a potential allosteric mechanism of action, or possibly steric inhibition through slight overlap with Site I and Site II (Supplementary Figures 2E, J, O). Finally, the chemical shift results for VHH204 clearly show that it binds to the long CD loop and helix B of IL-13, which indicates a fully allosteric mechanism of inhibition (Figures 2B, C and Supplementary Figures 2A, F, K). Given the complete inhibition of IL-13-mediated signalling by VHH204, through binding to a novel allosteric site, we chose to study the IL-13:VHH204 complex in more detail. Comprehensive sequence-specific backbone NMR assignments were determined for IL-13 when bound to VHH204. Assignments were made for 72% of the non-proline IL-13 residues, with residues G78-R86 and A46-I52, from the CD loop and helix B, respectively, not assigned, as the peaks from many of these residues are missing from 3D spectra of the complex due to exchange broadening. This is consistent with the initial identification of these residues as the region involved in VHH204 binding by NMR minimal shift mapping (Figure 2A), and as VHH204 has a KD of 2.6 μM, the NMR signals from many residues at the antibody binding site are likely to be in intermediate exchange and substantially broadened/missing (36, 37).
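The link between the measured KD and exchange broadening can be illustrated with a back-of-the-envelope calculation: with the VHH in excess, the exchange rate is roughly kex ≈ koff = KD·kon, and the regime follows from comparing koff with the chemical shift difference Δω between the free and bound states. The kon values and shift differences in the sketch below are assumptions chosen to span plausible ranges, not measured quantities.

```python
import numpy as np

KD = 2.6e-6         # M, VHH204 affinity measured by BLI
B0_1H = 600.13e6    # Hz, 1H frequency on a 600 MHz spectrometer
GAMMA_15N = 0.101   # |gamma(15N)| / |gamma(1H)|

for kon in (1e5, 1e6, 1e7):                     # M^-1 s^-1, assumed association rates
    koff = KD * kon                             # s^-1; kex ~ koff with VHH in excess
    for dppm, nucleus, frac in ((0.05, "1H", 1.0), (0.5, "15N", GAMMA_15N)):
        dnu = dppm * (B0_1H / 1e6) * frac       # shift difference in Hz
        dw = 2 * np.pi * dnu                    # rad/s
        # Rough regime call: broadening is worst when koff is comparable to dw
        if koff < dw / 10:
            regime = "slow"
        elif koff > 10 * dw:
            regime = "fast"
        else:
            regime = "intermediate"
        print(f"kon={kon:.0e}: koff={koff:6.2f} s^-1 vs "
              f"dw({dppm} ppm {nucleus})={dw:7.1f} rad/s -> {regime}")
```

For modest shift differences, koff approaches Δω once kon nears the diffusion limit, consistent with the intermediate-exchange broadening observed at the binding interface.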
The assignment of backbone NMR signals for the majority of residues in IL-13 when bound to VHH204 allowed us to determine the actual NMR chemical shift changes induced by VHH204 binding for residues assigned in both the free and VHH204-bound IL-13 spectra. Figures 2B, C show the combined actual and minimal backbone chemical shift changes induced by VHH204 binding mapped onto the structure of IL-13, with minimal shift changes shown for residues for which no assignments could be obtained in the VHH204-bound state. Substantial chemical shift changes observed for residues within the helical bundle of IL-13 clearly indicate that conformational changes are induced at the receptor binding sites of IL-13 when VHH204 binds to the allosteric site (Figures 2D, E).

Mechanism of inhibition of the allosteric VHH204

Several solution structures and associated NMR data have been reported for IL-13 (PDB: 1GA3 [BMRB: 4843], 1IK0 [BMRB: 5004]). We carefully evaluated both reported IL-13 structures and the associated NMR constraints, which revealed that in solution IL-13 exists in two conformations characterised by substantial differences in interhelical angles and the 'flipping' of F107 between the surface of the protein and a position buried within the hydrophobic core (12, 13). The presence of two distinct conformations is reflected both in differences between the reported IL-13 structures and in a significant number of deposited NOE constraints that are not satisfied by any single conformation, such as NOEs involving residues A9, L10 and F107 (13). This conformational heterogeneity probably explains the lack of a reported crystal structure for IL-13 alone. Crystal structures reported for the ternary IL-13 signalling complex show the importance of a surface-exposed F107 in making contacts with IL-13Rα1, followed by the positioning of IL-13 residues E12 on helix A and R65 on helix C for interaction with IL-4Rα (14). Given that IL-13 alone is known to have no affinity for IL-4Rα and must first bind IL-13Rα1, we propose that the two solution conformations of IL-13 represent a receptor-incompetent and a receptor-competent state, respectively. More specifically, as determined by QHELIX (38), the receptor-incompetent conformation is characterised by interhelical angles of -163.6° between helices A and D and -138.6° between helices A and C (Figure 3A), while the receptor-competent conformation exhibits A-C and A-D interhelical angles of -151.3° and -159.3°, respectively (Figure 3B), in line with the crystal structure of the ternary complex. This latter conformation of free IL-13 is closely comparable to IL-13 bound to its receptors, for which interhelical angles of -146.2° and -154.9° were calculated between helices A and D and between helices A and C, respectively (Figure 3C). It seems likely that binding to IL-13Rα1 stabilises the conformation of IL-13 consistent with binding to IL-4Rα.

Analysis of the combined actual and minimal backbone NMR chemical shift changes induced by VHH204 binding to IL-13 revealed large chemical shift changes for residues at the receptor binding sites on helices A, C and D, including F107 and surrounding residues. Interestingly, residue A9 on helix A showed a large combined backbone chemical shift change of 0.5 ppm.
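Combined backbone chemical shift changes of the kind quoted for A9 are typically computed as a weighted Euclidean norm of the individual nuclei's shift changes, with the ¹⁵N and ¹³CO changes scaled down to account for their larger ppm ranges. A minimal sketch is shown below; the weighting factors (0.2 for N, 0.3 for CO) are one common convention and are an assumption here, since the paper defers to its cited method (35) for the exact definition.

```python
import math

def combined_shift(d_hn, d_n, d_co, w_n=0.2, w_co=0.3):
    """Weighted combined backbone chemical shift change in ppm.

    d_hn, d_n, d_co: shift differences (bound minus free) for the amide
    proton, amide nitrogen and carbonyl carbon, respectively. The weights
    scale the 15N and 13CO changes down to the 1H ppm range.
    """
    return math.sqrt(d_hn**2 + (w_n * d_n)**2 + (w_co * d_co)**2)

# Illustrative example (values invented for demonstration): a residue
# with a 0.35 ppm 1H shift, a 1.6 ppm 15N shift and a 0.5 ppm 13CO
# shift gives a combined change of roughly 0.5 ppm.
print(f"{combined_shift(0.35, 1.6, 0.5):.2f} ppm")
```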
This large upfield shift is possibly due to a shielding effect arising from complete localisation of the F107 side chain in a buried position adjacent to A9 as VHH204 binding stabilises IL-13 in the receptor-incompetent conformation, consistent with the complete inhibition of IL-13 signalling seen in the cell-based assay. To investigate further whether VHH204 stabilises IL-13 in the receptor-incompetent conformation, we characterised the NOE cross-peaks visible between backbone NHs and side chain protons in 3D NOESY-TROSY spectra of the IL-13:VHH204 complex. Because of the differences in interhelical angles between the receptor-competent and receptor-incompetent conformations of IL-13, selected residues will have significantly different backbone amide to side chain proton distances in the two conformations, as illustrated in Figures 3D, E, and will therefore give rise to different NOE cross-peak patterns (39). Representative examples of the differences in NOE cross-peak patterns observed for residues in free compared to VHH204-bound IL-13 are shown in Figure 3, and the observed NOE cross-peaks are summarised in Table 2. The changes seen in both backbone chemical shifts and NOE patterns associated with specific backbone amides of IL-13 strongly suggest that binding of VHH204 stabilises the receptor-incompetent conformation of IL-13, resulting in the complete inhibition of the cytokine.

¹⁹F fragment screen of the IL-13:VHH204 complex

A ¹⁹F fragment library of ~1100 fragments was screened against the IL-13:VHH204 complex with the aim of identifying hits that bound to the receptor-incompetent conformation of IL-13. Hits from the ¹⁹F screen were followed up by protein-observed backbone amide chemical shift perturbation mapping, which revealed three fragment binding sites on VHH204-bound IL-13 (Figures 4A-C). The magnitude of the shifts seen was consistent with relatively weak binding, with fragment affinities predicted to be in the micromolar to millimolar range. The first of these binding sites is formed by residues C29-W35 on the AB loop and residues N53-G56 on the BC loop (Figure 4A). Interestingly, fragments that bind to this site induce chemical shift changes in several residues whose backbone amides are orientated towards the interior of the helical bundle, on helices A, D and C. This suggests that small molecules binding to this site have the potential to alter the interhelical angles of IL-13 and to act as allosteric modulators of IL-13. The second fragment binding site identified is predominantly made up of residues on helix C (Figure 4B). Small molecules binding here have the potential to be functional inhibitors by sterically blocking the interaction with IL-4Rα. The final fragment binding site identified on VHH204-bound IL-13 consists of residues at the N-terminus of helix B and the C-terminus of helix C, together with adjacent residues on helices A and D (Figure 4C). Again, the chemical shift changes induced in residues with backbone amides facing into the helical bundle, on helices A, C and D, suggest that small molecules binding to this site could also alter the interhelical angles of IL-13 and have the potential to act as allosteric modulators of IL-13 activity.

Discussion

Phage display biopanning of a naïve library of llama VHHs, intrinsically characterised by high sequence variability (40), resulted in the identification of a diverse panel of VHHs that bound to IL-13. BLI experiments showed that the panel of VHHs identified exhibited a broad range of affinities for IL-13, with K_D values ranging from low nanomolar to low micromolar (Table 1), perhaps reflecting the high CDR3 diversity.
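The interhelical angles that underpin the conformational arguments above were computed in the paper with QHELIX (38), but the essential idea can be sketched by fitting an axis to each helix's Cα coordinates and taking the angle between the oriented axes. In the sketch below the coordinates are synthetic placeholders rather than residues from the deposited IL-13 structures, and sign conventions (which produce the negative angles quoted) differ between tools, so this simplified version will not reproduce the QHELIX values exactly.

```python
import numpy as np

def helix_axis(ca_coords):
    """Fit a line through helix C-alpha coordinates (N x 3 array) via
    SVD, oriented from the first to the last residue of the helix."""
    centred = ca_coords - ca_coords.mean(axis=0)
    _, _, vt = np.linalg.svd(centred)
    axis = vt[0]
    if np.dot(axis, ca_coords[-1] - ca_coords[0]) < 0:
        axis = -axis  # make the axis point N-terminus -> C-terminus
    return axis

def interhelical_angle(ca1, ca2):
    """Angle in degrees (0-180) between two oriented helix axes."""
    cosang = np.dot(helix_axis(ca1), helix_axis(ca2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Tiny synthetic check: two straight chains at an obtuse angle stand in
# for helix C-alpha traces extracted from a PDB model such as 1GA3.
t = np.linspace(0, 10, 8)[:, None]
ca_helixA = t * np.array([0.0, 0.0, 1.0])
ca_helixD = t * np.array([0.0, 0.5, -1.0])
print(f"{interhelical_angle(ca_helixA, ca_helixD):.1f} degrees")  # ~153.4
```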
Cell-based activity assays allowed the effect of each VHH on IL-13 signalling to be evaluated and revealed that all of the VHHs inhibited IL-13 signalling. NMR chemical shift perturbation mapping studies showed that the panel of anti-IL-13 VHHs bound to a range of different epitopes decorating the surface of IL-13 (Supplementary Figure 2). Interestingly, a number of VHHs were found to bind to previously unknown allosteric sites on IL-13, with the allosteric mechanism of inhibition by VHH204 characterised in detail. The novel allosteric regulatory sites identified on IL-13 could reflect currently unknown mechanisms of regulation in vivo.

The identification of a number of novel allosteric VHH inhibitors of IL-13 prompted careful evaluation of the two previously reported solution structures of IL-13, together with the associated NMR-derived structural constraints. This analysis revealed that IL-13 exists in two distinct conformations in solution, with one form consistent with the crystal structure of IL-13 reported in complex with its receptors, IL-13Rα1 and IL-4Rα (PDB: 3BPO). The second conformation of free IL-13 shows substantial differences in the interhelical angles between helices A and D (the IL-13Rα1 binding site) and between helices A and C (the IL-4Rα binding site), as shown in Figure 3A. Our analysis of the previously reported NMR data and associated IL-13 structures appears to indicate that IL-13 in solution interconverts between these two distinct conformations on a relatively slow (s⁻¹) timescale. This conformational equilibrium strongly favours the initial binding of IL-13Rα1 to IL-13, resulting in stabilisation of the conformation of IL-13 with interhelical angles consistent with binding to IL-4Rα. This provides a molecular basis for the inability of IL-13 alone to bind IL-4Rα, which requires initial binding of IL-13 to IL-13Rα1 before this binary complex can bind IL-4Rα with a K_D of 20 nM (11). In contrast to IL-13, IL-4 binds to IL-4Rα with an affinity of 1 nM, with IL-4 first binding IL-4Rα in the formation of the ternary IL-4 signalling complex (11).

Comparison of the deposited structures for IL-13 and IL-4 shows several important differences. Firstly, in the reported NMR structures IL-13 residue F107 flips between a receptor-incompetent buried position (Figure 5A) and a receptor-competent surface-exposed position (Figure 5B). The corresponding residue in IL-4 is Y124; the more polar nature of this side chain would make it energetically less favourable for Y124 to be buried in the hydrophobic core of IL-4. IL-4 also has an additional disulphide bridging helices A and D, stabilising the interhelical angle between these helices, together with an extended helix C that could serve to stabilise the orientation of helices A and C (Figure 5C). The structural differences between IL-4 and IL-13 further support our hypothesis that a conformational equilibrium in isolated IL-13, involving substantial changes in interhelical angles, is critical in determining the distinct receptor binding selectivity and assembly of the ternary cytokine-receptor signalling complexes.

[Table 2 (excerpt): expected and observed ¹H-¹H NOE cross-peaks for the two IL-13 conformations. For example, for the K104NH-L17Hδ pair (helices A-D), the distance is 6.9 Å (✗) in one conformation and 3.2 Å (✓) in the other; NOE observed: yes. For the two distinct conformations of IL-13, selected inter-residue proton distances are marked with a (✓) or (✗) depending on whether a ¹H-¹H NOE cross-peak would be expected in NOE-based NMR spectra, with an expected NOE indicated by (✓).]
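The logic behind Table 2 can be made explicit: a ¹H-¹H NOE cross-peak is typically observable only for proton pairs closer than roughly 5 Å, so a distance computed in each candidate conformation predicts whether the cross-peak should appear. A minimal sketch under that assumption is below; the coordinates are invented placeholders chosen to reproduce the Table 2 distances, not values from the deposited structures.

```python
import numpy as np

NOE_CUTOFF = 5.0  # angstroms; approximate upper limit for observable NOEs

def expected_noe(coord_a, coord_b, cutoff=NOE_CUTOFF):
    """Return True if a 1H-1H NOE cross-peak would be expected for two
    protons at the given 3D coordinates (in angstroms)."""
    dist = np.linalg.norm(np.asarray(coord_a) - np.asarray(coord_b))
    return bool(dist <= cutoff)

# Invented coordinates giving distances like the K104NH-L17Hdelta pair
# in Table 2: ~6.9 A in one conformation, ~3.2 A in the other.
conf1 = {"K104HN": (0.0, 0.0, 0.0), "L17HD": (6.9, 0.0, 0.0)}
conf2 = {"K104HN": (0.0, 0.0, 0.0), "L17HD": (3.2, 0.0, 0.0)}

for name, conf in (("conformation 1", conf1), ("conformation 2", conf2)):
    ok = expected_noe(conf["K104HN"], conf["L17HD"])
    print(f"{name}: NOE expected = {ok}")
# An observed cross-peak then discriminates between the two conformations.
```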
A structural investigation of IL-13 in complex with VHH204 by NMR showed that VHH204 binds to a previously unknown allosteric site on IL-13, consisting of helix B and a portion of the long CD loop (Figure 2). The inhibitory activity of VHH204 (Figure 1E) appears to arise from conformational changes induced at the receptor binding sites of IL-13, on helices A, C and D (Figure 2). Important differences in the long-range NOE cross-peaks seen for free and VHH204-bound IL-13 suggest that binding of VHH204 stabilises a conformation of IL-13 with interhelical angles incompatible with binding to IL-13Rα1.

To identify small molecules that bind to the receptor-incompetent conformation of IL-13, we screened the IL-13:VHH204 complex against a ¹⁹F fragment library. Backbone amide NMR chemical shift perturbation mapping of hits from this screen confirmed that a number of ¹⁹F-containing fragments bound relatively weakly to functionally relevant sites on IL-13, or to regions that could potentially induce changes in the interhelical angles of IL-13. This points to novel opportunities to develop small molecule therapeutics targeting IL-13.

To conclude, we have identified and characterised a diverse panel of anti-IL-13 inhibitory VHHs, which has resulted in the identification of several previously unknown allosteric regulatory sites on IL-13. The work reported here has also revealed a novel inhibitory mechanism of action for the allosterically acting VHH204, in which binding to IL-13 prevents any interaction with IL-13Rα1 by stabilising a non-receptor-binding conformation of IL-13 present within the conformational equilibrium seen for free IL-13 in solution. This provides new molecular insights into the differing receptor selectivity and assembly of the ternary signalling complexes seen for IL-13 and IL-4. We also report the identification of possibly the first small molecules shown to bind to functionally significant regions of IL-13, which points to the potential to develop small molecule therapeutics targeting IL-13.

[FIGURE 4: NMR chemical shift perturbation mapping of ¹⁹F fragments binding to the IL-13:VHH204 complex. (A-C) Combined backbone amide (¹⁵N and ¹H) minimal shift changes induced by binding of selected ¹⁹F fragments to IL-13, shown on equivalent backbone ribbon and surface representations of free IL-13 (PDB: 1GA3). A gradient from white to red (0.01 to 0.03 ppm) indicates the size of the minimal shifts observed, with shifts ≤ 0.01 ppm shown in white and ≥ 0.03 ppm in red. Residues P3-T8, P27, I37, T40, M43, Y44, A46-I52, S58 and C71-V85, for which no minimal shift data were obtained, are coloured in yellow. Likely ¹⁹F fragment binding sites are highlighted by blue circles. Examples are shown for fragment binding sites localised on the AB and CD loop region of IL-13 (A), predominantly on helix C (B), and apparently involving the N-terminus of helix B, the C-terminus of helix C and adjacent residues on helices A and D of IL-13 (C).]

Data availability statement

The original contributions presented in the study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding authors.