The Persistent Challenge of Pneumocystis Growth Outside the Mammalian Lung: Past and Future Approaches
The pathogenic fungi in the genus, Pneumocystis, have eluded attempts to continuously grow them in an ex vivo cultivation system. New data from transcriptomic and genomic sequencing studies have identified a myriad of absent metabolic pathways, helping to define their host-obligate nature. These nutrients, factors, and co-factors are acquired from the mammalian host and provide clues for further supplementation of existing media formulations. Likewise, a new appreciation of the pivotal role of the sexual cycle in the survival and dissemination of the infection suggests that Pneumocystis species are obligated to undergo mating and sexual reproduction in their life cycle, with a questionable role for an asexual cycle. The lack of ascus formation in any previous cultivation attempt may explain the failure to identify a sustainable system. Many characteristics of these ascomycetes suggest a biotrophic existence within the lungs of their mammalian hosts. In the present review, previous attempts at growing these fungi ex vivo are summarized. The significance of their life cycle is considered, and a list of potential supplements based on the genomic and transcriptomic studies is presented. State-of-the-art technologies such as metabolomics, organoids, lung-on-a-chip, and air lift cultures are discussed as potential growth systems.
INTRODUCTION
Fungi in the genus, Pneumocystis, are host-obligate pathogens that can cause lethal pneumonia in mammals with impaired immune function, including humans. During previous decades, HIV+ patients made up the largest proportion of hospitalized cases of Pneumocystis pneumonia within the United States. However, recent studies have shown that malignancies are now the most prevalent underlying host factor for hospitalizations with Pneumocystis jirovecii pneumonia, accounting for 46% of cases in the United States compared to 17.8% with HIV+ status as the underlying illness (Kanj et al., 2021). Moreover, a recent assessment of pneumonias in children aged 1-59 months in African and Asian countries revealed that P. jirovecii was a significant cause of infection, especially in children younger than 1 year of age (The Pneumonia Etiology Research for Child Health (PERCH) study group, 2019). The increase in susceptible populations and the dearth of treatment options signal a desperate need for new therapies. The search for new treatments for this fungal infection, as well as other aspects of investigation, has been hindered by the lack of a continuous in vitro cultivation system.
Such an ex vivo cultivation system has remained elusive since the identification of these fastidious organisms over a century ago. As a result, the scientific community lacks a genetic system to explore gene function by knock-out/knock-in technology; a drug screening assay that can discern pneumocysticidal vs. static outcomes and that can be used to screen the drug-induced phenotype of the infecting Pneumocystis species; and epidemiological studies that can identify drug-resistant genotypes and track them throughout populations.
The naming of the Pneumocystis species was fraught with problems. The organism was first identified in 1909 as part of the life cycle of the parasite, Trypanosoma cruzi, in animals co-infected with the trypanosomes and Pneumocystis (Redhead et al., 2006). Subsequent attempts at taxonomic classification further confused its identity by applying the International Code of Zoological Nomenclature rules for naming these microbes, which resulted in acceptance of "Pneumocystis" as a zoonosis and thus a single species, Pneumocystis carinii, thought to infect several mammalian species including humans and rats. The identity of Pneumocystis as fungal or protozoan was also controversial. Its true nature as a fungal pathogen was not fully resolved until well into the 21st century. In 2006, invalid names were eliminated, and the different species of Pneumocystis were validated with typification and named according to the Botanical Code of Nomenclature rules used for fungi (Redhead et al., 2006).
Gene and genome sequencing provided confirmation that "Pneumocystis" was a genus comprised of many species, each usually associated with a single mammalian host species. Efforts to correctly name the species led to the valid names and descriptions of 5 formally described species to date: Pneumocystis jirovecii, which infects humans (Homo sapiens) (Redhead et al., 2006); P. murina, which infects mice (Mus musculus); P. carinii (Redhead et al., 2006) and P. wakefieldiae, which infect rats (Rattus norvegicus); and P. oryctolagi, which infects rabbits (Oryctolagus cuniculus) (Dei-Cas et al., 2006).
Such efforts are not purely academic exercises, as an understanding of whether the organism is fungal or protozoan could suggest different approaches for cultivation outside the mammalian lung. Indeed, investigators applied techniques associated with the culture of both types of microbes as well as other tissue culture approaches, but none led to the "Holy Grail" of continuous passage and growth. This failure was not surprising considering the general lack of systematic assessments in the various trials and, more importantly, the lack of understanding of the reduced metabolic capabilities of these host-obligate fungi.
Although little progress has been made in this area, strides in understanding the role of the life cycle and new technology provide avenues for further progress towards this goal. In this review, the previous attempts to cultivate these fungi will be summarized; the life cycle and implications on growth outside the lung will be discussed; the lack of metabolic capacity as revealed by genome sequencing will be examined; and novel, potential in vitro approaches will be presented.
PREVIOUS CULTIVATION ATTEMPTS
Review of the published attempts to culture rodent-derived and human-derived Pneumocystis on cell monolayers and in cell-free media clearly reveals the lack of continuous passage and the abbreviated growth in primary (host-derived organisms that were not passaged) culture (Tables 1, 2). The assessment of "growth" varied widely, including microscopic enumeration with various tinctorial staining methods, ATP content, total DNA quantification and quantitative PCR methods targeting specific genes, but in toto, no system has emerged as a reproducible method that has stood the test of time using any quantification technique. The failure of various laboratories to replicate published methods has not been well documented in the literature, as negative studies are not given priority in journals. Two studies garnered high interest: the results reported by Merali et al. (1999) using Transwell inserts and by Schildgen et al. (2014) using a novel cell line, CuFi-8, in an air-liquid interface system. Both reports were discounted when other laboratories conducted serious attempts at replication and were unsuccessful (Liu et al., 2018). Such publications are quite valuable to the community, avoiding lost time and expensive reagents. The most practical test would be the widespread adoption of a successful technique by many laboratories, which has clearly not occurred post-publication of any report.
Mostly guided by the requirements of other fungi or pathogens, culture attempts have relied on cell monolayers flooded by liquid media or on cell-free media into which the Pneumocystis species (spp.) are directly inoculated. Neither environment adequately mimics the unique location in the lung where these fungi grow: the "hypophase" of the alveoli, a thin continuous layer of about 200 nm that likely covers the entirety of the alveolar surface (Fehrenbach, 2001). The composition of the hypophase includes pulmonary surfactant, which is regulated by pH and Ca2+ proximally and provides low surface tension. The Type II pneumocyte or Alveolar Epithelial Cell Type 2 (AEC2) secretes a myriad of other factors including epidermal growth factor, VEGF, adhesion molecules, lipids (especially dipalmitoyl phosphatidylcholines), entactin, laminin, fibronectin, and proteoglycans. The presence of this liquid lining layer means the alveolar epithelial cells (AEC1 and AEC2) are not directly exposed to air (Knudsen and Ochs, 2018). Most of these factors have been added to one in vitro system or another, without success.
THE PROPOSED LIFE CYCLE
A better understanding of the life cycle and the host-obligate nature of these microscopic fungi is essential to understanding the Pneumocystis ouroboros that may include an asexual cycle, a sexual phase, an immune-debilitated mammalian host and dissemination of the infection. First guided by images provided by light, fluorescence, and transmission electron microscopy, proposed life cycles included asexual replication via binary fission; asexual and sexual replication leading to the production of asci (once referred to as "cysts"); exit of the asci from the host; and release of spores ("daughter forms") to initiate infection in a new host. A distillation of these hypotheses and proposed life cycle stages is shown below in Figure 1. Recent studies using molecular technology such as RNA-seq and high-throughput sequencing have provided strong evidence for a sexual cycle in the lung with primary homothallism as the mode, but these and other reports have cast doubt on whether an asexual cycle is necessary or even operational (Hauser and Cushion, 2018). That the ascus is the agent of transmission was shown by experiments in the mouse model of Pneumocystis infection. Treatment with commercially available echinocandins does not eliminate Pneumocystis pneumonia; rather, these drugs inhibit the ability of Pneumocystis to produce asci, owing to their inhibition of β-1,3-D-glucan biosynthesis. Our laboratory showed that these mice without asci, but with large organism populations that did not produce β-1,3-D-glucan, were unable to transmit the infection. Inoculation of these same organisms into P. murina-naïve, immunosuppressed mice reconstituted the infection, with the re-emergence of asci.
BIOFILM FORMATION
Formation of biofilms is a strategy used by pathogenic microbes as well as microbes found throughout the environment. Organized microbial communities of fungi and bacteria attach to biotic or abiotic matrices to exchange plasmids, reduce susceptibility to antimicrobial agents and/or host immune responses, or to protect members from other environmental stresses. These structures are also used to seed environments by dispersion from the community. Many pathogenic fungi utilize biofilms to survive within the host environment, including Candida spp. (Ramage et al., 2001a,b; Mowat et al., 2007), Cryptococcus neoformans (Martinez and Casadevall, 2007), and Aspergillus fumigatus (Mowat et al., 2007). The organization of the Pneumocystis cells within the alveoli of the lung follows the characteristics of a biofilm as they are closely enmeshed, contain exopolymeric components, and spread throughout the lung (Cushion et al., 2009). We showed that P. carinii and P. murina could produce macroscopically visible and reproducible biofilms on inserts composed of hydrophilized PTFE (Biopore-CM from Millipore) and Millicell-HA cellulose (Cushion et al., 2009). These fungi rapidly formed tightly adherent biofilms that were able to maintain ATP levels over 2-3 weeks. Notably, the organisms completely changed morphology in biofilms. Figure 2A shows an organism cluster from an RPMI-1640-based cell-free culture stained with a rapid variant of the Wright-Giemsa stain. Note the changes in structure as the biofilm matures (Figures 2B-L). Although all samples were stained with Wright-Giemsa, many structures excluded the dyes and were refractile under light microscopy. (Culture conditions excerpted from the original tables: Minimal Essential Medium with Earle's salts (MEME) with 20% horse serum, 500 µg/ml S-adenosyl-methionine sulfate (added twice per day), 80 µg/ml of p-aminobenzoic acid, putrescine, ferric pyrophosphate, L-cysteine, L-glutamine, and N-acetyl-glucosamine, with penicillin and streptomycin; 31 °C, normal atmosphere. P. carinii were inoculated into 0.4 µm pore, collagen-coated Transwells and suspended in the medium described above. DNA was stained with Hoechst dye 33258 and analyzed using an HPLC system.)
The biofilm cultures were then used to assess the effects of drugs on their ATP content. ATP measurement is used as a surrogate readout for viability (Cushion and Collins, 2011). These studies revealed several differences between the standard cell-free suspension cultures and the biofilms. P. carinii organisms were more resistant to the echinocandins in mature biofilms than those in the RPMI 1640-based cell-free suspension assay. Newly forming (nascent) biofilms were more susceptible than established biofilms, and the populations in the nonadherent phase of the biofilms were generally more susceptible to echinocandin activity than the adherent populations. Notably, higher serum concentrations (10-20%) abrogated the efficacy of the echinocandins, especially anidulafungin, in suspension or biofilm assay systems. Exposure to anidulafungin consistently and significantly reduced ATP levels more than did caspofungin or micafungin in either in vitro assay system. Though promising, the biofilm cultures failed to propagate outside the primary culture.
WHAT ELEMENTS ARE MISSING FROM ALL THESE CULTIVATION EFFORTS?
The systems summarized above and other unreported failures to identify an environment where these fungi can thrive beg the question: what was lacking in this myriad of culture methods? With some exceptions, there have not been many systematic evaluations of media and supplements for Pneumocystis spp. growth. The standard approach of adding supplements in limited concentrations based on various rationales has not proven to be fruitful. Our laboratory embarked on a yearlong study, supported by the United States National Institutes of Health, to systematically evaluate nutrients, trace metals, lipids, cofactors, and other compounds. We used an RPMI 1640-based cell-free system with a 10-20% serum supplement, which seems to be indispensable for viability in our hands. In some cases, the rationale for supplements was based on a comparative genomics study that revealed the lack of ability to synthesize vital compounds, e.g., myo-inositol (Porollo et al., 2014; Cushion et al., 2016), while others were more exploratory in nature, e.g., trace metals. Our experiences revealed a lack of batch-to-batch consistency from the cryopreserved Pneumocystis spp. we used. However, the addition of myo-inositol most consistently improved the ATP content vs. the un-supplemented cultures.
Another apparent consistency across most of the studies was a peak of replication, growth, and viability after a few days in primary culture, with declining values thereafter. Such a clear signal suggests the following: (1) the small increases could be explained by a "coasting" of the Pneumocystis spp. life cycle stages, where some stages completed a replication or spores were released from asci using intracellular nutrients previously gleaned from the host environment; (2) the artificial media did not replenish critical nutrients, or supplied them at concentrations that were insufficient or even inhibitory to further growth; (3) there was no ascus production, leading to a cessation of the life cycle; (4) the total environment could not sustain replication, lacking a sufficient carbon dioxide level, support phase (e.g., liquid, gel, solid), or substrate; (5) stimuli for ascus production/sexual reproduction were absent. This last point is likely critical for a sustainable culture system, as we have shown recently that the sexual cycle is required for replication in rodent models of Pneumocystis infection (Miesel et al., 2021).
Much of the life cycle of these fungi occurs in the mammalian lung and specifically in the alveoli. Within the alveoli, Pneumocystis preferentially and specifically attaches to the AEC1 cells, which raises the question, why? Perhaps there are certain receptors on the AEC1 surface that form the tight interdigitation with these fungi. If so, what might be the purpose? Two thoughts come to mind that are not mutually exclusive. One is that there is an intimate exchange of nutrients that Pneumocystis requires. The second suggests a more cunning reason. By binding tightly to the very cell necessary for gas exchange, the fungal parasite directs a change in its immediate environment by altering the gas mixture, favoring a more hypoxic environment. There is some evidence that supports this hypothesis. The Pneumocystis genomes lack homologs to carbonic anhydrase genes (Ma et al., 2016). Carbonic anhydrases catalyze the interconversion between carbon dioxide and water and the dissociation of carbonic acid, bicarbonate, and hydrogen ions. These enzymes maintain acid-base balance and help to transport carbon dioxide. In the lungs, carbon dioxide is being released, so its concentration is lower than in tissue. The yeast genome contains a carbonic anhydrase encoded by NCE103 (Martin et al., 2017). The nce103 null mutant exhibits impaired growth under aerobic conditions, as do bacteria lacking these enzymes. However, growth under these conditions can be restored if augmented with high levels of carbon dioxide, which apparently satisfies the need for bicarbonate formation otherwise supplied by the carbonic anhydrases. Might these obligate fungal pathogens be facilitating their survival by increasing the carbon dioxide in their immediate environment? There is some evidence that this might occur. Our laboratory explored the effects of 3 different gas mixtures on the ATP content of P. carinii in an RPMI 1640-based medium (Joffrion et al., 2006). A significant increase in ATP content was observed for fungi grown under microaerophilic conditions (10-15% O2; 7-15% CO2) when compared to standard conditions of 5% CO2. Anaerobic conditions resulted in sharp decreases of ATP by 24 h, suggesting that oxygen is required. Interestingly, these different atmospheres resulted in distinct responses to trimethoprim-sulfamethoxazole. Whereas the ATP levels of P. carinii in standard medium with 5% CO2 decreased by 75% with treatment, those treated under microaerophilic conditions fell by only 50% vs. the untreated controls. While the increase in carbon dioxide levels did not permit continuous culture, such a condition may be a key factor in future experiments with a more supportive culture structure. It should also be noted that the pulmonary epithelial pneumocytes do not have detectable carbonic anhydrases on their apical surface, accessible to the fluid lining, but activity is very abundant on the pulmonary endothelial cell surface facing the plasma (Effros, 2008). Thus, in such a bicarbonate-starved environment, perhaps these fungi can find an alternative mode of acquisition.

FIGURE 1 | Recent studies suggest that the cyst/ascus (containing eight spores) is the agent of infection (inward arrow). After inhalation, the spores ultimately take residence in the terminal portion of the respiratory tree, the alveoli (enlarged bundles of alveoli shown in the illustration). Neither the mechanism of migration to the alveoli nor the form in which the organism arrives in the alveoli (intact ascus or individual spores) is known. It is speculated that the spores are released by exhalation (outward arrow). (B) Asexual phase. Haploid trophic forms are thought to replicate asexually by binary fission, a process whereby a single trophic form duplicates its genetic material and creates two daughter forms of roughly equal sizes. (C) Sexual phase. Two presumptive mating types conjugate, undergo karyogamy, and produce a diploid zygote that progresses through meiosis and then an additional mitosis to produce eight nuclei. The nuclei are packaged into spores by invagination of the ascus cell membranes. After completion, excystment occurs via a protunicate release by unknown mechanisms, which may involve a pore or opening in the cyst wall (yellow oval). The released spores become the vegetative forms, the haploid trophic forms, that can then undergo asexual or sexual replication (Images of man, alveoli purchased from Superstock Photos, http://www.superstock.com).

FIGURE 2 | The morphology of Pneumocystis changes dramatically during biofilm formation. (A) P. carinii from the supernatant of a 3-day-old standard short-term culture stained with Hema3, illustrating the differences in morphology from the biofilm structures. (B-H) Images were taken from 16-day-old biofilms inoculated with P. murina (obtained as a fresh isolate). The images were obtained from films on inserts that were scraped with a pipette tip, aspirated, air dried, and stained with Hema3, a rapid Wright-Giemsa stain. Images were viewed under oil immersion. Bars, 10 µm. (B) P. murina cluster showing a cyst-like structure with a stalk (arrow).
The recent publications of the genomes of P. carinii, P. murina, and P. jirovecii identified metabolic cycles and pathways that are lacking in these fungi and are likely to lead to more rational supplementation studies (Cisse et al., 2014; Ma et al., 2016). Concomitant with such insights is the requirement for a balance of the supplement concentrations. Advances in metabolomics technology can suggest appropriate levels based on metabolic flux analyses, while newer approaches to cell-based culture, e.g., air lift cultures and alveolar organoids, offer alternatives not available until recently. Lastly, a better understanding of the life cycle of these fungi and their apparent reliance on sexual reproduction strongly suggests that factors which stimulate this mode of replication need to be considered in future in vitro culture systems. These considerations are discussed below.
BIOTROPHY
Several publications have revealed the host-obligate nature of Pneumocystis spp. (Cushion et al., 2007;Cisse et al., 2014;Hauser, 2014;Porollo et al., 2014;Ma et al., 2016). An early analysis of Expressed Sequence Tags (ESTs) from infected rat lungs demonstrated the absence of critical genes in such pathways as the pyruvate bypass and glyoxylate cycle, but the presence of genes necessary for carbohydrate metabolism, and suggested that Pneumocystis spp. may be obligate biotrophs (Cushion et al., 2007).
Biotrophy has been observed in fungi that invade plants and derive their energy from the host cells but do not kill them. Obligate biotrophs complete their entire life cycle within the plant host, including the sexual cycle, and are incapable of growth outside the host (Lorrain et al., 2019). Pneumocystis spp. only grow within the lungs of mammals and complete their life cycle therein; are unable to grow outside the lungs (at present); and do not invade host cells. Pneumocystis do not produce hyphae in the lung and maintain an extracellular existence. Thus, Pneumocystis spp. fulfill these criteria for a biotrophic existence, including perhaps the lack of pathogenic effects. Though Pneumocystis spp. cause disease in immunocompromised hosts, it is currently held that mammals with intact immune systems are often transiently infected and may even enter a commensal lifestyle with the fungi without associated illness. It is only when the host tips the balance and loses its ability to control the infection that the organisms grow unchecked, causing the disease state. It is also well known that Pneumocystis spp. grow slowly, even in severely immunocompromised hosts, suggesting they have adapted well to their hosts and are averse to killing them.
Another intriguing attribute of some obligately biotrophic fungi, such as the plant pathogenic rust fungi, is the formation of haustoria from their network of intercellular hyphae (Struck, 2015). Haustoria are structures considered to facilitate the acquisition of nutrients from the host cell. Haustoria form after the fungus penetrates the cell wall of the plant cell but do not wound the plant plasma membrane (Figure 3A). The haustoria grow in the living plant cells and are in intimate contact with the plant cell cytoplasm (Figure 3B). The cytoplasm of the host and fungus are separated by the host plasma membrane, the fungal plasma membrane and a matrix called the "extrahaustorial matrix" (Figures 3A,B). Unlike many other fungi, the species within the genus Pneumocystis are not known to produce hyphae. Rather, they are relegated to the extracellular environment of the mammalian lung. It is not difficult to imagine that the intimate interaction of the Pneumocystis trophic form with its alveolar host cell (Figure 1C) may be similar to that of the obligate biotrophic fungi, with a structure that could conceivably facilitate nutrient acquisition. Early studies reported activation of the plasmalemmal vesicular system in the alveolar cells associated with trophic forms near the site of attachment to the AEC1 (Settnes and Nielsen, 1991), suggesting an approximation of a feeding complex. The trophic forms have long been considered the vegetative form of these fungi and this complex could be a novel manner of acquiring necessary nutrients lost during evolution of their parasitism.

FIGURE 3 | Haustorial complex, a specialized feeding organ of biotrophic fungal parasites of plants. To move from host cell to fungus, nutrients must traverse the extrahaustorial membrane, the extrahaustorial matrix, the haustorial wall, and the haustorial plasma membrane. A neckband seals the extrahaustorial matrix from the plant cell wall region so that the matrix becomes a unique, isolated, apoplast-like compartment. The haustorium connects to intercellular fungal hyphae by way of a haustorial mother cell. A proton symport system in the haustorial plasma membrane drives sugar transport from plant to parasite.
LOSS OF GENES IN METABOLIC PATHWAYS SUGGESTS EXTENSIVE SUPPLEMENTS ARE NECESSARY
Comparative genomic analyses do not suffer from problems associated with the analysis of the transcriptome, such as transient expression of genes involved in metabolic pathways. Using genomic analyses, Cisse et al. documented a dearth of enzymes necessary for amino acid synthesis in P. carinii (Cisse et al., 2014), with only 2 enzymes representing potential aspartic acid and glutamic acid synthesis as compared to Schizosaccharomyces pombe's 54 enzymes involved in amino acid biosynthesis. While our study also did not detect any gene homologs involved in isoleucine, leucine, lysine or valine biosynthesis, homologs for phenylalanine, tyrosine, and tryptophan biosynthesis were identified in all 3 species (Porollo et al., 2014). In addition, homologs for metabolism of glycine, serine, threonine, alanine, aspartate, glutamate, arginine, proline, histidine, tyrosine, phenylalanine, and tryptophan were detected, as well as for valine, leucine, isoleucine, and lysine degradation. Counter-intuitively, given the need for host amino acids, Ma et al. reported a dramatic reduction in amino acid permeases in the 3 sequenced genomes when compared to free-living and pathogenic fungi: 1 in each Pneumocystis genome vs. 10 to 32 in the others (Ma et al., 2016). This finding could mean that the single permease in Pneumocystis is promiscuous or that there are other means to import their amino acid requirements. In fact, Ma et al. note that the numbers of transcription factors and transporters in Pneumocystis are among the lowest in fungi (Ma et al., 2016).
In a study which compared the genomes of P. jirovecii, P. murina and P. carinii with S. pombe, we noted the absence of inositol-1-phosphate synthase (INO1) and inositol monophosphatase (INM1) from all the Pneumocystis genomes as well as from S. pombe (which had previously been reported) (Porollo et al., 2014). These genes encode the enzymes necessary for myo-inositol synthesis, a compound necessary for life in eukaryotes. This notable absence led us to the identification of homologs to the S. pombe inositol transporter genes (ITR1 and ITR2) and the characterization of a highly specific myo-inositol transporter in the genomes of all 3 species (ITR1), with a second homolog (ITR2) in the 2 rodent species (Porollo et al., 2014; Cushion et al., 2016). Notably, there were more genes involved in inositol phosphate metabolism, starch and sucrose metabolism, and amino sugar and nucleotide sugar metabolism when compared to the genome of the free-living fission yeast. Also of interest were the 5 additional genes encoding enzymes in the tryptophan metabolism pathway in P. carinii and P. murina, which were absent in P. jirovecii (Porollo et al., 2014). To evaluate myo-inositol auxotrophy, supplementation with myo-inositol for both rodent species in a cell-free in vitro system resulted in a notable increase in ATP and a longer period of viability (Porollo et al., 2014). Table 3 is a compilation of predicted nutritional supplements or conditions from published genomics studies.
SEXUAL REPRODUCTION HOLDS THE KEY FOR IN VITRO REPLICATION
Until recently, it was assumed by most of the scientific community that, like other fungi, Pneumocystis species could replicate both asexually and sexually (Vossen et al., 1977; Vossen et al., 1978; Cushion, 2010; Skalski et al., 2015). Attempts to propagate these fungi in vitro typically led to apparent increases in trophic form numbers, and it was accepted that asexual replication was the preferred mode in these less-than-optimal culture systems. Other microbes, like Giardia duodenalis, provided precedent for this thinking, as only the trophozoites of these protozoans can be cultured in vitro (Perrucci et al., 2019), and those in the field settled for this shortcoming. With more current studies illustrating the reliance on the sexual cycle and ascus formation as required for proliferation, the absence of sexual replication in any in vitro system may be a key reason for the lack of continuous growth. Notably, the use of a long-acting echinocandin, rezafungin, as well as anidulafungin and caspofungin, when given in a prophylactic model, could prevent the infection (Miesel et al., 2021). Thus, we surmise that not only are these fungi dependent upon the host for the metabolic requirements they can no longer synthesize, but they must also undergo sexual replication to survive.
INSIGHTS FOR FUTURE SUCCESS
Thoughtful re-consideration of the previous attempts to propagate Pneumocystis species outside the mammalian lung environment indicates a clear role for a host cell component. Alveolar cells may offer an initial substrate for the trophic forms newly released from the asci, or they may contribute directly to the nutrient pool via an intimate feeding structure as suggested above. Alternatively, secreted molecules from the alveolar cell encapsulated in extracellular vesicles or exported through simple exocytosis may also provide another source of required compounds. In our laboratory, we are exploring both two-dimensional and three-dimensional cell-based systems as potential growth systems. We provide some preliminary results and discussions below.
THREE-DIMENSIONAL ALVEOLAR ORGANOIDS
Methodology for the routine culture of enteric organoids is now quite advanced, but alveolar organoids remain a challenge (Li et al., 2020). Our laboratory explored a mouse lung cell organoid approach for anticipated inoculation of P. murina, the species that infects the mouse. This reasoning was based on evidence of species specificity of Pneumocystis for its mammalian hosts, e.g., P. carinii only infects rats, P. murina only infects mice. Results were promising and will be published in a separate report.
AIR-LIQUID INTERFACE CELL CULTURES
Air-liquid interface (ALI) cultures are used for respiratory research. Both primary cells from donors and immortalized cell lines have been used. The ALI system is defined by contact of the basal surface of the cells with liquid culture medium while the apical surface is exposed to air. Such a juxtaposition permits differentiation, as in the case of human bronchial epithelial cells transitioning to a pseudostratified mucociliary phenotype. To initiate the cultures, cells are typically placed onto a permeable membrane of a cell culture insert with medium within the insert and in the cell well below. Once confluent, the medium is only provided to the basal chamber, causing the "air lift" of the cells in the insert. Indeed, Schildgen et al. used such an ALI system with human airway cells, CuFi-8, which were derived from the bronchus of a patient with cystic fibrosis (Schildgen et al., 2014). As in previous attempts, P. jirovecii could not be continuously cultured in this system, and in this case, it may have been due to the type of cell used for the ALI: bronchus-derived cells rather than alveolar epithelial cells. Our laboratory used this approach with the A549 cell line (ATCC CCL-185), which is described as an adenocarcinomic human alveolar basal epithelial cell line derived from cancerous lung tissue of a 58-year-old Caucasian male (Giard et al., 1973). After air lift, the cultures were inoculated with P. murina, since the species infecting humans, P. jirovecii, is not widely available. There was not a high expectation for growth; this was rather an observational exercise. Somewhat surprisingly, the A549 cultures produced AEC1-like cells that stained with podoplanin, a marker for this cell type, and not with Surfactant Protein B, a marker for AEC2 cells. Using a marker specific for P. murina, the Major Surface Glycoprotein (Msg) superfamily of surface antigens, we could show that the fungi were attached to the apical surface and to the AEC1 cells. An optimal ALI would use host cells from the natural host wherein the Pneumocystis species reside. This approach demands more investigation as a potential growth system.
Alveolar Lung Chips
Significant advancements in lung-chip models also offer promise. Such systems are commercially available and offer support platforms for epithelial and vascular channels, cell-cell interactions, immune cells, extracellular matrix and mechanical forces (Benam et al., 2016). Importantly for the potential growth of Pneumocystis spp., alveolar epithelial cells (AEC) can maintain AEC1 and AEC2 cell structure with expression of specific markers. Such systems can be used to evaluate drug response and inflammatory responses and may even be able to provide insights into factors responsible for species specificity. One drawback is that these systems are currently limited in their life span, which is about 2 weeks.
METABOLOMICS
Metabolomics is a growing field that enables the unbiased detection and measurement of metabolites resulting from cell metabolism. Commonly, NMR spectroscopy is used to build profiles of the physiology of cells at the time of sampling. Application of this technology to identify and assess the utilization of host products by Pneumocystis in cell culture or even in vivo should provide additional detailed information related to the cell cycle requirements of these obligate fungi. Our lab used 1H-NMR spectroscopy to assess the dynamics of uptake and secretion of metabolites by P. murina within the extracellular medium. We learned that there were notable differences in the metabolite profiles between organisms that were isolated from the rodent lungs and used immediately afterwards and those that had undergone cryopreservation and were subsequently thawed and reconstituted. We gathered that the source of organisms needs to be accounted for in future culture supplementation attempts. The freshly isolated organisms completely depleted the glucose available in the media by day 3, whereas cryopreserved fungi had glucose levels comparable to the day of inoculation into the medium. The metabolic byproduct acetate accumulated in greater concentrations and more rapidly in cultures inoculated with the freshly isolated P. murina vs. the cryopreserved fungi. Such information can be used to direct choices of culture supplements, supplement concentrations, and regimens for refeeding.
DISCUSSION
The establishment of a continuous cultivation system for Pneumocystis spp. may lie within our grasp, though many parameters must be considered prior to embarking on this ambitious goal. Systematic evaluations that titrate concentrations of appropriate supplements will be necessary. Metabolomics and metabolic flux analyses can be used to guide the amounts and timing of additives. Serious consideration must be given to cellular or other support matrices. Alveolar organoids for studies of pathology and perhaps growth, as well as air-liquid interface (ALI) cultures, are exciting avenues that should be explored. Recent genomic, animal model, and transcriptomic studies clearly reveal a requirement for sexual reproduction resulting in the formation of asci for survival of these fungi. Stimulation of the sexual cycle will be paramount in determining a successful culture milieu.
AUTHOR CONTRIBUTIONS
MC, NT, SS, and AP contributed to conception and design of the study. NT and SS performed and wrote the ALI and organoid sections. AP conceived and analyzed the metabolomics data. All authors performed the statistical analysis. MC wrote the first and final draft of the manuscript. NT, SS, and AP wrote sections of the manuscript. All authors contributed to manuscript revision, read, and approved the submitted version.
"Medicine",
"Biology",
"Environmental Science"
] |
An Information Theory-Based Approach to Assessing Spatial Patterns in Complex Systems
Given the intensity and frequency of environmental change, the linked and cross-scale nature of social-ecological systems, and the proliferation of big data, methods that can help synthesize complex system behavior over a geographical area are of great value. Fisher information evaluates order in data and has been established as a robust and effective tool for capturing changes in system dynamics, including the detection of regimes and regime shifts. The methods developed to compute Fisher information can accommodate multivariate data of various types and require no a priori decisions about system drivers, making it a unique and powerful tool. However, the approach has primarily been used to evaluate temporal patterns. In its sole application to spatial data, Fisher information successfully detected regimes in terrestrial and aquatic systems over transects. Although the selection of adjacently positioned sampling stations provided a natural means of ordering the data, such an approach limits the types of questions that can be answered in a spatial context. Here, we expand the approach to develop a method for more fully capturing spatial dynamics. The results reflect changes in the index that correspond with geographical patterns and demonstrate the utility of the method in uncovering hidden spatial trends in complex systems.
Introduction
Today's digital landscape presents a world full of information with more access to geotagged datasets. Accordingly, in this era, analysts are less likely to struggle with a lack of available data. Instead, they are taxed with overwhelming amounts of information and charged to make good use of the data. Large-scale datasets present a variety of challenges (e.g., storage, data integrity, security), yet offer great opportunities for pivotal discoveries. From detecting medical outbreaks to mining social media data to developing management options for impaired ecosystems, there is a great need for methods that not only provide insight on observable phenomena but can uncover latent characteristics and emergent properties in a veritable data haystack [1].
Geospatial assessment is a well-developed and growing field. Spatial analyses typically involve the visual assessment of mapped parameters, use of zonal/spatial statistics (e.g., Moran's I), or regression and data aggregation approaches (e.g., principal components analysis) [2,3]. Visual analytics, data mining and data discovery are fields in statistics catalyzed by high speed computing, enhanced data storage capability, machine learning techniques and visualization tools [4,5]. These approaches are often highly interactive, and applications may range from simple data exploration and visualization to pattern recognition and model development. They have been used to study a variety of problems from land use change and text mining to intelligent transportation and network security (e.g., [6][7][8]). Szewrański et al. [9] demonstrate the utility of combining GIS and business intelligence (BI) to enhance visual data discovery by linking ArcGIS and Tableau. Similar to the use of ArcGIS with programming tools (e.g., R, Python) or BI platforms (e.g., Microsoft Power BI, Qlik Sense), such an approach capitalizes on the unique strengths of each tool. While this is a useful approach, researchers note that complex problems often necessitate the use of highly complicated tools and techniques which may limit a broader application of the approaches [4,10]. Still, researchers are faced with the need to understand complex systems, capture patterns and trends in multiple variables, and identify system drivers. Furthermore, there is a growing emphasis on identifying patterns in underlying dynamics before a system shifts in its overall condition, which can result in costly, long-term effects. A large and thriving literature presents the development and use of statistical approaches to detect early warning signals of regime shifts and tipping points in time series data, but there is a relative lack of such studies on spatial regimes.
Classic early warning indicators (EWI) are based on the concept of critical slowing down (CSD), the phenomenon whereby a system's rate of return to equilibrium slows down in the proximity of a bifurcation point [11,12]. These CSD-derived indicators assess univariate data for changes in variance, autocorrelation, conditional heteroskedasticity, density ratio and spectral reddening, among others [13,14]. Their appeal lies in their generality and ability to be widely applied without requiring equations, models, or even a mechanistic understanding of the key system processes. However, when applied to real data, inconsistencies in the ability of CSD-derived indicators to detect regime shifts have been problematic [15][16][17][18][19]. Their general applicability was also reduced when researchers found that not all bifurcations are preceded by CSD [20], giving a false negative, and that it is possible to detect critical slowing down in systems that exhibit nonlinearity but do not have a bifurcation point, giving a false positive [21]. Spatial correlates of CSD-derived indicators (e.g., spatial variance, near-neighbor autocorrelation, spatial skewness and spatial spectral density) have been developed and offer many of the same benefits and fewer concerns than their temporal correlates [11,22,23]. Their utility is being confirmed in empirical studies [24][25][26][27], but their trends may not be consistent in self-organized, patterned spatial systems because environmental changes other than an impending regime shift may be driving trends in the indicator [28]. Spatially heterogeneous stressors also appear to confound the detection of a CSD-signal [29,30].
Alternative spatial EWIs that aim to avoid the issues associated with CSD-derived indicators have largely been based on vegetative patch size distributions, with the expectation that they fit a power law function unless an environmental stressor changes the patch size distribution by truncating it [31,32]; thus, a changing power law fit acts as an EWI. There has been controversy over the biological reasonableness of this approach [33]. Regardless of the merits of the debate, the method was developed for terrestrial drylands, so it may not be appropriate for other types of spatial systems, particularly if they are not heterogeneously distributed across space. Rather than track the patch size distribution, several methods focus on other patch size properties such as time fluctuations in the largest cluster size, variance in the size of the largest patch in proportion to the area of the system, variance in the proportion of the largest patch to the total area occupied by the same species, and the probability that a cluster will grow or shrink as a function of its size [34][35][36][37]. However, as with the spatial correlates of critical slowing down, most authors are evaluating these variables over time, requiring temporal data to document the changes to spatial metrics [38]. Few methods can detect a critical transition with only 2-3 temporal snapshots, as Weissmann et al. [36,37] attempted with their model of probability of cluster growth. Other spatial EWIs are being developed, such as the recovery length method of Rindi et al. [39], and network-based indicators such as degree, assortativity, and clustering [40]. The recovery length refers to the spatial distance from a perturbation at which a population recovers and may be less data intensive than classic indicators [41]. As a system moves closer to a critical transition, the recovery length increases. However, this metric is only appropriate for systems that have a sharp boundary between habitats, such as algal canopies, mussel beds, shallow lakes, salt marshes, and forest-savannah [39], and is not suitable for highly spatially heterogeneous systems. Network-based indicators such as those in Yin et al. [40] may be more general in their adaptability to a system type but require long term data with high frequency measurements. A conclusion of most EWI studies is that multiple methods will always be required to account for key differences between the ecosystem types and inconsistencies of the signal detection within a given indicator. Coupled with these issues is the challenge of capturing a regime shift using univariate data (monitoring one variable). Unless the system is exquisitely well understood, there is the risk that the variable chosen to represent the system's response to a perturbation is insufficient or inaccurate; this is a core issue for traditional indicators [42]. Using traditional EWIs for multivariate systems requires tracking trends in the indicator separately for each individual variable (i.e., examining 50 bird species requires the computation and tracking of 50 variance patterns). However, there has been limited success with such an approach. Although Litzow et al. [43] found that monitoring an increasing variance in pooled fisheries catch data greatly increased the detection of a collapse, other researchers noted inconsistent trends in univariate EWIs (e.g., variance, autocorrelation) as a system approaches a critical transition [13,19,44].
Multivariate methods thus become highly desirable, as they are more likely to capture the realistic complexity inherent in human and natural systems [12,16,45]. The variance index was developed by Brock and Carpenter [46], and it detects the dominant variance component in a multivariate system. It is computed using the largest eigenvalue of the covariance matrix and should spike prior to a transition; however, the results from this index are sometimes unclear [16,17].
Information theory (IT) may offer a useful alternative to the methods mentioned above. IT-based approaches have been useful for understanding ecosystem function, structure and complexity [47][48][49][50]. In a spatial context, entropy has been applied to geography, geoinformatics (e.g., for city zoning, visualization and modelling), landscape diversity and cognitive development [51][52][53][54][55][56]. Fisher information has been demonstrated as an effective tool for capturing trends in complex systems. It can be employed to assess univariate and multivariate systems using a variety of data types (e.g., economic, social, environmental). There is no strict data requirement, minimal assumptions are necessary, and it is agnostic with regards to the degree of heterogeneity it can handle [57][58][59].
Fisher information was developed as a measure of disorder in data [60] and provides a means of quantifying organizational dynamics in complex systems [61]. It has been adapted into an index that reflects the dynamic order within a system by collapsing patterns in the underlying system variables into a measure that can be tracked to assess systemic change [58]. This form of Fisher information has been used to assess sustainability, political instability and resilience, and it has been proposed as an EWI in a variety of human and natural systems at multiple spatial scales (e.g., [17,57,59,[62][63][64][65][66]). However, it has primarily been employed to evaluate temporal dynamics with time as a natural ordering parameter. In the first foray into geospatial assessments, Sundstrom et al. [67] used Fisher information to assess spatial regimes in avian and zooplankton communities. Abundance data was gathered from historical records for over 200 species collected from routes along transects through multiple terrestrial ecoregions and aquatic domains. The Fisher information detected spatial regimes in both systems and delivered additional details about changes in the communities not provided by other multivariate approaches. Selecting adjacent routes along each transect afforded the ability to use linear proximity (i.e., the next station) to order the data; however, such an approach limits the types of questions that can be explored or the assessments that can be performed.
Here, our goal is to adapt the computation of Fisher information to develop a general method for handling geospatial data in a way that does not require conceptualizing the study area as a series of transects. The approach intends to offer an assessment of patterns across a landscape by capturing the trends in the variables that characterize the condition at each sampling location. Using simulated and real data, we test the utility of the method and identify mechanisms for detecting signals of geospatial change. This effort is an extension of the spatial regimes work [67] and involves examining methods at the nexus of information theory, systems thinking and geographical information systems.
Fisher Information
Fisher information was developed as a statistical measure of the amount of information inherent in data useful for estimating a parameter [60]. Accordingly, it relates to the order and, therefore, the patterns in data [17]. The form of Fisher information used in this work is based on the probability of observing states (s) of a system, p(s) [61,68]. From Equation (1), note that the Fisher information (I) is proportional to the slope of the probability of observing a system state p(s) with respect to the state (dp(s)/ds); hence, the higher the probability of observing a state (i.e., more consistent patterns), the higher the Fisher information:

I = \int \frac{1}{p(s)} \left[ \frac{dp(s)}{ds} \right]^{2} ds    (1)

System states reflect the system condition using a set of measurable variables (x_i). When assessing temporal trends, the trajectory of a system is defined by a series of points over time, e.g., p(t_j) : [x_1(t_j), x_2(t_j), ..., x_n(t_j)]. Systems may experience a nominal variation within a particular state or dramatically change due to internal dynamics (e.g., variation in linked mechanisms or in response to external perturbations). Given measurement uncertainty and the fact that systems randomly fluctuate, the points within a finite range may be viewed as observations of the same state; hence, the likelihood of a specific state relates to the number of points that fit within a specified range (or tolerance) [58]. Karunanithi et al. [58] adapted Equation (1) to handle empirical data using this grouping strategy or "binning" approach, and Fisher information (henceforth, denoted as FI) is numerically estimated as:

FI \approx 4 \sum_{s} \left[ q(s) - q(s+1) \right]^{2}    (2)

where q(s) ≡ √p(s). Interpreting FI is predicated on the fact that distinct processes and patterns control different system regimes. Since the deviations in FI indicate changes in the system condition, tracking FI provides a means of capturing this behavior. Increasing FI signifies a rising dynamic order and suggests possible movement to more consistent (stable) patterns. Conversely, decreases in FI denote instability, resilience loss and may warn of an impending regime shift [16,58,66]. When comparing the stability of different systems, regions or periods of interest, the mean (µFI), standard deviation (σFI) and coefficient of variation of FI (cvFI) may be used to help distinguish stable regimes from critical transitions (or regime shifts). Stable regimes are defined by relatively high FI with little to no variation (↑µFI and ↓σFI) [62,69]. The coefficient of variation (σ/µ) is a measure of the dispersion around the mean and is typically low for more stable systems (↓cvFI; [67]). Although transitions may be defined as declines in FI between two stable regimes [58], we identify them as periods characterized by a relatively high standard deviation and coefficient of variation in FI (↑σFI, ↑cvFI; [67]). The details on the derivation, calculation and interpretation of FI may be found in [58,61,67,70,71].
For temporal studies, the basic steps for computing FI include: (1) gathering measurable variables for the study period; (2) dividing the time series data into moving windows that advance forward one time step for each iteration. The size of the window is based upon the amount of data; however, it is suggested that each window contain at least 8 points [70]; (3) determining the measurement uncertainty for each variable (size of states), which becomes the boundary (tolerance) around each system state. The size of states (sost) may be estimated by using the amount of variation in a stable portion of the study dataset or within a similar system as a proxy [70]; (4) in each window, binning points into states of the system using sost; (5) counting the number of points grouped in each state and dividing this value by the total number of points in the window to produce p(s); (6) computing q(s) = √p(s) and calculating FI using Equation (2). This process is repeated to provide a FI result for each window, thereby producing index values over time. Using the binning approach, FI ranges from 0 to 8 [58]. The algorithm has been coded in Matlab and Python [1,70].
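To make these steps concrete, the sketch below shows one way the windowed, binned FI estimate could be implemented in Python. It is a minimal illustration under stated assumptions, not the published Matlab/Python code [1,70]: the greedy binning routine, the zero-padding convention used to reproduce the 0-8 range of FI noted above, and all function and variable names are our own choices. In the toy run at the end, windows drawn from the tight (ordered) regime should return FI values near the maximum of 8, while windows in the noisier regime should return much lower values.

import numpy as np

def fisher_information(window, sost):
    """Binned FI estimate for a single window of observations.

    window : (n_points, n_vars) array of measured variables
    sost   : (n_vars,) array of measurement uncertainties (size of states)
    Two points are treated as the same state when they differ by no more
    than sost in every variable.
    """
    window = np.asarray(window, dtype=float)
    sost = np.asarray(sost, dtype=float)
    n_points = len(window)

    # Greedy binning: each unassigned point seeds a state that collects all
    # remaining points lying within +/- sost of it in every variable.
    unassigned = list(range(n_points))
    counts = []
    while unassigned:
        seed = unassigned.pop(0)
        members, leftovers = [seed], []
        for idx in unassigned:
            if np.all(np.abs(window[idx] - window[seed]) <= sost):
                members.append(idx)
            else:
                leftovers.append(idx)
        unassigned = leftovers
        counts.append(len(members))

    p = np.array(counts) / n_points                   # p(s) for each occupied state
    # Amplitudes padded with "empty" states at both ends, so a window whose
    # points all fall in one state returns the maximum FI of 8.
    q = np.concatenate(([0.0], np.sqrt(p), [0.0]))
    return 4.0 * np.sum(np.diff(q) ** 2)              # Equation (2)

def fi_over_windows(data, sost, window_size=8, step=1):
    """Slide a window (>= 8 points, per the text) through the ordered data."""
    return [fisher_information(data[i:i + window_size], sost)
            for i in range(0, len(data) - window_size + 1, step)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy two-variable series: a tight (ordered) regime, then a noisy one.
    ordered_regime = rng.normal(0.0, 0.05, size=(40, 2))
    noisy_regime = rng.normal(2.0, 1.0, size=(40, 2))
    series = np.vstack([ordered_regime, noisy_regime])
    fi = fi_over_windows(series, sost=np.array([0.3, 0.3]))
    print(np.round(fi, 2))   # high FI early, lower FI after the shift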
Assessing Geospatial Patterns with FI
Adapting FI to assess spatial dynamics involves first understanding that the core of the approach involves tracking system states. The system condition may change both temporally and spatially, where the condition at a location (l) is defined by Sundstrom et al. [67] as C(l_j) : [x_1(l_j), x_2(l_j), x_3(l_j), ..., x_n(l_j)]. Since the goal is to evaluate patterns over a geospatial area defined by latitude, longitude and possibly elevation (or depth for aquatic systems), the challenge then becomes: What ordering principle should be used for this type of data? How do we capture patterns over an entire area?
The initial dilemma was determining the optimal way to traverse the area ensuring that all sampling stations are included in the assessment and a FI value could be assigned to specific locations over the area. We wanted to examine the data based on the proximity of the survey sites; however, separating the area into a series of transects would not afford the ability to include adjacent stations that are not on a fixed path. Furthermore, processing the data in this manner is complicated by determining where the transects begin and end. Clustering approaches provide an interesting option but would, in effect, partition the area into discrete groups, thereby limiting the assessments to "regions" (one FI value per cluster) rather than providing unique FI values for each location over the entire geospatial area. Moving window techniques or kriging (a method of interpolation to fill data gaps or rasterize one-dimensional data) require a specific data structure (i.e., evenly distributed observations). With the aim of developing a method that uses raw data, accounts for the sampling location and is robust to the resolution, quality and type of data, we opted to use a distance measure to set up moving windows for the data.
Distance as an Ordering Parameter
We considered distance (d) metrics computed from the Pythagorean theorem and the Haversine formula. The three-dimensional Pythagorean formula (also known as the Euclidean metric) measures the orthogonal distance between two points in linear space:

d = √[(x₂ − x₁)² + (y₂ − y₁)² + (z₂ − z₁)²]    (3)

The Haversine ("great circle") formula uses spherical coordinates to account for the curvature of the Earth's surface (principally important when covering large areas) and is particularly useful when using latitudes and longitudes [72]:

d = 2r arcsin(√[sin²((ϕ₂ − ϕ₁)/2) + cos ϕ₁ cos ϕ₂ sin²((λ₂ − λ₁)/2)])    (4)

Incidentally, an equirectangular (map) projection of the Pythagorean formula can be used to capture the curvature as well:

d = r √[((λ₂ − λ₁) cos((ϕ₁ + ϕ₂)/2))² + (ϕ₂ − ϕ₁)²]    (5)

In Equations (4) and (5), λ and ϕ are the longitude and latitude, respectively, and the mean radius (r) is approximated at 6371 km [72].
To examine these methods, each approach was used to compute the distance from a reference location to the location where the data were collected (i.e., the survey location). We define the reference as the point closest to the origin (or the minimum latitude and longitude). We created a short algorithm to compute the Euclidean distance and used an existing function (lldistkm.m) from the Matlab file exchange to calculate both the equirectangular-projected Pythagorean (EpPythagorean) and Haversine distances [73]. Compared to Haversine calculations, the distance estimates from the Euclidean metric are computationally "lighter" (simpler formula); however, computational speed was not an issue for either method, as the Matlab code produced results within seconds. Since the Haversine distance is largely viewed as a very robust, "well-conditioned" approach [72], we opted to use it to order the data. Note: the Pearson correlation coefficients between the Haversine, Euclidean and EpPythagorean distances were high and statistically significant (rho ≥ 0.99, p-value ≤ 0.05) for both the model and real data.
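For reference, both geodesic variants are compact to implement. The sketch below is a plain NumPy analogue of the lldistkm.m function mentioned above (the function names are our own); it assumes coordinates in decimal degrees and the 6371 km mean radius of Equations (4) and (5).

```python
import numpy as np

R_EARTH_KM = 6371.0  # mean Earth radius used in Equations (4) and (5)

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance, Equation (4); inputs in decimal degrees."""
    phi1, phi2 = np.radians(lat1), np.radians(lat2)
    dphi, dlam = np.radians(lat2 - lat1), np.radians(lon2 - lon1)
    a = np.sin(dphi / 2) ** 2 + np.cos(phi1) * np.cos(phi2) * np.sin(dlam / 2) ** 2
    return 2.0 * R_EARTH_KM * np.arcsin(np.sqrt(a))

def equirectangular_km(lat1, lon1, lat2, lon2):
    """Equirectangular-projected Pythagorean distance, Equation (5)."""
    x = np.radians(lon2 - lon1) * np.cos(np.radians(lat1 + lat2) / 2)
    y = np.radians(lat2 - lat1)
    return R_EARTH_KM * np.hypot(x, y)
```

Both functions broadcast over NumPy arrays, so the distance from one reference point to every survey station can be computed in a single call.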
Below are the basic steps for using FI to assess geospatial data:
1. Gather data for the study area. Data should include the route (survey station) number, route location (latitude and longitude) and values for measured variables.
2. Use the latitude and longitude for each station to compute the distance from a reference location. Here, the reference location is defined as the minimum latitude and longitude from the data. The Haversine distance from the reference location is computed for all routes.
3. Order the data into a sequence of points by the Haversine distance from the reference location (from close to far).
4. Divide the data into windows which capture small geographical "sections" of the area based on the proximity to the reference station. Essentially, the first window will contain the data from the stations that are closest to the reference site. The following window will advance forward to the next closest station, and so on. As noted in Section 2.1, each window will contain at least 8 stations.
5. Estimate the measurement uncertainty for each variable (size of states) using the amount of variation in a stable portion of the study dataset or within a similar system as a proxy [70].
6. In each window, bin points into states of the system using the sost.
7. Count the number of points grouped into each state and divide this value by the total number of points in the window to produce p(s).
8. Compute q(s) = √p(s) and calculate FI using Equation (2).
9. Repeat steps 6-8 for each window.
As in temporal studies, this process results in a FI value for each window which is plotted at corresponding route locations (latitude and longitude) over the geospatial area. For this study, the data was managed in Excel, and short Matlab algorithms were developed or employed to compute the distance metrics (Equations (3)-(5)). The existing FI code [70] was used to compute FI from the data "ordered" in step 3. The visualizations of the data and results were done in Matlab (R2018b) and ArcGIS Pro.
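Putting these steps together, the spatial pipeline reduces to a few lines of Python, reusing haversine_km and fi_series from the sketches above; plotting each window's FI at the last station entering the window is our own illustrative convention, not a prescription.

```python
import numpy as np

def spatial_fi(lats, lons, values, sost, hwin=10):
    """Order stations by Haversine distance from the reference point
    (minimum latitude and longitude), then compute FI in moving windows
    of hwin stations, following steps 1-9 above."""
    lats, lons = np.asarray(lats), np.asarray(lons)
    d = haversine_km(lats.min(), lons.min(), lats, lons)    # step 2
    order = np.argsort(d)                                   # step 3: close to far
    fi = fi_series(np.asarray(values)[order], sost, hwin)   # steps 4-9
    anchor = order[hwin - 1:]   # station at which each window's FI is plotted
    return fi, lats[anchor], lons[anchor]
```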
Case Studies
The spatial patterns for system variables may fluctuate in a variety of ways. They may remain relatively the same, increase (or decrease), deviate dramatically from location to location (or region to region), or exhibit some behavior between these extremes. As a rudimentary test of the ability of FI to discriminate between these basic patterns, we created four spatial surfaces generated by simulating data to mimic variables with geospatial patterns that are homogeneous (HoG), heterogeneous (HeT), symmetrically differentiated (HnH: half homogeneous and half heterogeneous) and patchy (Patch: heterogeneous patch surrounded by a homogeneous surface). The data for these surfaces were generated using the 'rand' function in Matlab (R2018b). We also used a combination of the simulated variables to test the method for assessing spatial patterns in multivariate systems.

Finally, we employed FI to examine spatial patterns in avian community structure. The breeding bird survey data on the total species richness and total population (or number of individuals) detected at each route across the state of Louisiana were gathered from the USGS North American Breeding Bird Survey (BBS) for the years 1990 and 2014 [74]. To provide a sense of how FI performs in normal and extreme cases for discrete data, we initially tested the method on both the raw (actual) data and data simulated to mimic homogeneous and heterogeneous patterns across the state. We then compared the FI results for the raw BBS data for 1990 and 2014, and evaluated the FI values against an ecoregion map of Louisiana and a USGS land cover map [75]. The ecoregion map provides a general expectation for the community structure in that bird communities within an ecoregion should be more similar than bird communities from different ecoregions. The ecoregion map, however, is based on the potential vegetation as a function of underlying geological and climatic variables, so it does not always represent the on-the-ground reality. Therefore, we also visually assessed the changes in FI against a 2001 land use map which more accurately reflects the actual habitat types across the state. These comparisons are only meant to highlight the possible utility of a spatial assessment using FI. The BBS case study presents the basic ability of Fisher information to detect broad changes in community structure across large spatial scales, where the community structure is largely expected to be spatially autocorrelated (the bird community structure in nearby sampling locations should be more similar than that in distant sampling locations), as well as broad shifts in community structure resulting from differences between the underlying habitats in which the routes are found.

Table 1 summarizes the patterns and parameters (i.e., mean µ and standard deviation σ) used with the 'rand' function in Matlab to generate the data, along with the expected FI results for the simulated case studies. A plot of the surfaces shows that the primary axes (x, y) use Cartesian coordinates from 1 to 20, and z reflects the simulated data values (Figure 1). The point closest to the origin (1,1) was used as the reference point, and Haversine distances were computed from this location. The survey route coordinates and variable values were ordered by the distance from the reference into a 400 × 1 array.
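The four test surfaces described above are easy to regenerate. The NumPy sketch below mimics the Matlab 'rand'-based construction; the amplitudes and the patch placement are illustrative stand-ins for the exact parameters of Table 1.

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = np.meshgrid(np.arange(1, 21), np.arange(1, 21))  # 20 x 20 grid of "stations"

hog = 1.0 + 0.5 * rng.random((20, 20))     # homogeneous: low random variation
het = 10.0 * rng.random((20, 20))          # heterogeneous: high variation
hnh = np.where(x <= 10, hog, het)          # half homogeneous, half heterogeneous
patch = hog.copy()                         # heterogeneous patch in a homogeneous field
patch[7:13, 7:13] = 10.0 * rng.random((6, 6))
```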
The windows were established based on the proximity to the reference location, so that window 1 contained the "stations" located at (1,1), (1,2), (2,1), (2,2), (1,3), (3,1), (2,3), (3,2), (3,3) and (1,4). Since the homogeneous patterns simulate relatively low random variation, the range of the values generated for the homogeneous case was used as an estimate of the measurement uncertainty (sost = 2) for these initial case studies. FI values computed using a window size of 10 (hwin = 10) produced spatial patterns in line with the expected results. The homogeneous patterns reflect a high steady FI (µFI = 8, σFI = 0, cvFI = 0), and the FI for the heterogeneous case is low and noisy (µFI = 2.30, σFI = 0.69, cvFI = 0.30) (Figure 2a,b). The results for the patch (µFI = 6.91, σFI = 1.56, cvFI = 0.23) and the half and half (µFI = 4.19, σFI = 2.10, cvFI = 0.50) demonstrate how FI captures shifting spatial patterns and corresponds with changing parameter dynamics (Figure 2c,d). Furthermore, because FI is computed in overlapping windows, the trends in the index begin to change prior to "reaching" the outstanding feature (e.g., the patch). The true power of FI is highlighted when the method is used to assess multivariate data. We used a combination of the simulated cases to mimic a multivariate system comprised of two (HoG and Patch) and three (HoG, HeT and Patch) variables. Note that the characteristics of the underlying variables remained intact and showed through even when in combination with other distinct patterns (Figure 3).
Figure 2. The FI results for the (a) homogeneous (HoG), (b) heterogeneous (HeT), (c) patch and (d) half and half (HnH) data. FI ranges from low (blue) to high (red), where the FI value at each location represents the change in dynamic order (i.e., patterns) from one location to the next. High steady FI indicates stable patterns and low FI suggests more variable patterns from location to location.
Case Study: Breeding Bird Survey Data
A comparison of the raw 1990 Breeding Bird Survey data and simulated data representing homogeneous (HoG) and heterogeneous (HeT) patterns demonstrates the performance of FI on discrete data. Figure 4 provides a plot of the raw and simulated total species (TS) and total population (TP) data for each survey route (see Table 2). FI values were plotted at the route locations (latitude, longitude) corresponding to each window. The results from the raw BBS data (µFI = 5.72, σFI = 1.20, cvFI = 0.23) indicate an increasing FI (and stability) at the survey routes from southwest to east across the landscape, with a clear reduction in FI from central west to southeast separating the state, as well as a high FI near the eastern border (Figure 5a). As expected, the FI for the homogeneous data (µFI = 8, σFI = 0, cvFI = 0) is high and steady, and the exact opposite is true for the heterogeneous data (µFI = 4.09, σFI = 1.14, cvFI = 0.28), where FI is relatively low and highly variable for much of the area (Figure 5b,c).
To compare the 1990 results to more recent trends in the avian community structure, we also evaluated 2014 BBS data (Figure 6), and the FI values were again evaluated against the Louisiana ecoregions (Figure 8a,b). An examination of the land use map (Figure 9) provides a visual assessment of the degree of habitat heterogeneity within each ecoregion, and roughly confirms these findings. For example, the Southeastern Plains consists of a heterogeneous intermixture of pasture/hay and medium intensity human developments in woody wetland/forest, whereas the Texas-Louisiana Coastal Plain is more homogeneous and largely dominated by cultivated crops and pasture/hay.

Figure 5. The FI results for a multivariate assessment of the bird community structure. FI was analyzed using both the total species and total population data for (a) raw 1990 Louisiana BBS data and simulated (b) homogeneous (HoG) and heterogeneous (HeT) patterns. The FI values range from low (blue) to high (red). High steady FI indicates stable patterns, while low FI suggests more variable patterns from location to location.
Discussion and Concluding Remarks
With the rise in availability of large-scale geospatial datasets coupled with the complexity of challenges in a more connected global society, there is a need for methods that afford the ability to examine patterns and trends in multiple variables without requiring the use of modelling, restrictive methods or stringent data requirements. Fisher information has been used to study patterns in a variety of human and natural systems. Researchers have effectively demonstrated the utility of the method and compared it to contemporary approaches, noting that the approach often delivers unique information regarding the patterns of change in complex system dynamics not present in other methods [17,67]. While it has been used to explore temporal change in social and ecological systems of various scopes and scales, its limited application to spatial data showed promise [67]. To examine such data, it was necessary to adapt the method to capture the dynamic order over a geospatial area. The previous version of Fisher information was constrained because the approach involved ordering data along one dimension. In other words, the data was either ordered by time (e.g., [17]), or by using geographically sequential sampling locations that fell along a "straight-line" transect [67].
To develop a means of assessing the dynamic order in a spatial context, we considered a variety of methods, including a cluster analysis and complex moving window techniques. However, upon revisiting the theory and framing the quandary in its most basic terms (changing condition from location to location), we found a simple solution: order the sampling locations by distance. Euclidean and Pythagorean metrics are well-known approaches. However, because the methods measure orthogonal distances, they produce "errors", particularly when approaching meridians [72]. The Haversine formula accounts for the curvature of the Earth's surface and is generally seen as the most efficient method for assessing distance based on latitude and longitude; accordingly, it was used for the analyses. As a side note, we found that the equirectangular projection of the Pythagorean distance provided an approximation that closely resembled the Haversine results, and the distance computed from all three methods was highly correlated.
Distance as an ordering parameter was quite useful for adapting FI for spatial assessments. The approach afforded the ability to use moving windows (which capture small geographical sections of data) to traverse the geospatial area by organizing the data based on the distance from a reference location. The case study results for the simulated data reflected changes that corresponded with geographical dynamics and matched the expected results based on an understanding of patterns from previous FI work (e.g., [77]). FI from the breeding bird case study highlighted multiple ways in which the method could be useful for spatial assessments: to monitor change over a geographical area, within a spatial region, or even to compare homogeneity/heterogeneity among regions. In addition, the method could be used in longitudinal studies to determine how patterns changed over time (e.g., pre- and post-Hurricane Katrina). Due to changes in sampling techniques, resources, catastrophic events, topographical changes, etc., it is not uncommon for sampling sites to vary over time. Typically, alternate locations are chosen which capture important variables at sites that are accessible to surveyors and adequately cover particular regions or features of interest. Spatial Fisher information computations are based on the data collected at sampling sites, and while the approach is not limited by static sampling locations, as with any method, it is important to ensure that the survey sites available during the periods of interest capture the same area. As demonstrated by the comparative assessment of breeding bird survey patterns in 1990 and 2014, while the sampling sites were not exactly the same in both years, the locations still covered the same area; in fact, the number of survey sites increased from 33 in 1990 to 44 in 2014. Still, we were able to comparatively assess how the breeding bird community structure changed in the region during these two periods.
FI could also be used to identify the presence or spatial extent of transition zones when moving from one spatial region to another, though we lacked data of sufficient spatial resolution to test this in our BBS case study. There is no limit on the size of the area (global, national, regional, city, or community), nor on the number of sampling sites used to capture the area. While a higher resolution is ideal, even sparse datasets afford the ability to capture behavior useful for assessing aggregate spatial (or temporal) patterns and trends. The case studies presented demonstrate the utility and versatility of the method through its ability to detect patterns in both continuous and discrete data. Note that while the data resolution in the initial simulated cases was much higher than that of the breeding bird data, the Fisher information trends were distinctive and comparable. For example, Figure 2a,b and Figure 5b,c show that the method successfully identified patterns and trends (e.g., homogeneous and heterogeneous) in both relatively high and lower resolution cases. The case studies were used for illustrative purposes, serving merely to highlight the possible uses of spatial Fisher information in an ecological context rather than to draw any ecological conclusions.
Furthermore, an application to multivariate data highlights the core strength of the method in capturing distinct trends in the index based on patterns in the underlying data. This is particularly important for the complex problems we face today, where drivers and management options are unknown or difficult to identify (e.g., harmful algal blooms). Future work includes exploring other distance approaches (e.g., nearest neighbor) or adding a spatial autocorrelation weighting factor to test the proximity between points. It would also be useful to examine the impact of the reference location (e.g., min vs. max latitude and longitude, closest to a particular feature) and to evaluate other approaches for estimating measurement uncertainty. Measurement uncertainty is a universal issue for data collection efforts, with data accuracy information often not being provided. Accordingly, it is critical that approaches be developed to handle this uncertainty. As noted in Section 2.1, when developing the computational approach for Fisher information [58,70], strategies were developed for estimating uncertainty by using the variation (e.g., standard deviation) of the measured variables in a similar system or in a relatively stable portion of the variables from the study dataset, as an approximation of measurement uncertainty. In this study, we used the range of simulated homogeneous data as a proxy for stable dynamics; however, as in temporal studies, the use of a "stable" (relatively low standard deviation) region in the raw dataset may preclude the need for a proxy.
Other forthcoming activities involve applying the method to other datasets (e.g., human, natural, social), particularly where there are known spatial shifts, comparing the index results to other approaches (e.g., principal components analysis, Moran's I, early warning indicators), finding a means of combining both space and time into the assessment, and developing methods to identify which variables drive changes in the index to facilitate identification of management options. This paper is a proof of concept and serves as a springboard for extending Fisher information to geospatial assessments. There are many questions left to be answered, yet this effort demonstrates a method that could provide a valuable tool for mining spatial data to detect latent patterns and signals in complex systems.
Author Contributions: T.E., H.C. and S.S. conceptualized the study. T.E. and W.-C.C. processed the data. T.E. developed the approach, adapted the algorithm and performed the analyses. T.E. and S.S. wrote the first draft and all authors contributed to the revision and approved the final manuscript.
Funding: This research received no external funding.
"Environmental Science",
"Geography",
"Computer Science"
] |
Collaborative Robot Safety for Human-Robot Interaction in Domestic Simulated Environments
Human-robot interactions carry several challenges, the most important being the risk of injury to the human. In industrial robotic systems, robots are mostly caged and isolated from humans behind safety guards. However, as domestic robots have emerged, research on robot safety in domestic settings has become increasingly necessary. Human-robot collaboration is still at an initial stage; thus, safety assessments in domestic environments are critical in the field of collaborative robots, or cobots, with simulations being the first stage of research. In this study, a preliminary investigation on simulating human safety during human-robot interactions in home surroundings with no safety fence is presented. A simulation model is designed and developed with Gazebo in the Robot Operating System (ROS) to simulate the human-robot interaction. Safe interaction can be simulated along the robot trajectory: for example, the robot's speed can be reduced before a collision with a human is about to happen, minimizing the risk of the collision or reducing the resulting damage. After successful simulation, the approach can be applied to a real robot in a domestic working environment.
Introduction
Robots have succeeded in increasing productivity and performing risky or monotonous activities in industrial settings. Research has recently focused on the potential use of robots to assist people in medical, workplace, or home environments beyond a purely "industrial" setting. The aging population in the developing world [1,2] is an important motive for the use of utility or personal robots. Robots are intended for daily living tasks [3] including dish clearing [4], load-carrying cooperation [5,6], feeding [7,8], and social interaction [2,9]. The marketing of robots for amusement [10] and home maintenance [11] is also growing. As robots switch from isolated working cells to unstructured and collaborative environments, knowledge about their environment needs to be better acquired and interpreted [12]. Security [13] and, more broadly, reliability [14] are among the critical issues which hinder the entry of robotics into unstructured, human environments. Dependability includes physical safety as well as operational robustness, as described by Lee [14]. Some robots, mainly built for social interaction [9,10,15], avoid safety problems by virtue of their small size and mass and minimal manipulation capability. Figure 1 indicates how the lack of a vital robot safety program could lead to serious or fatal injuries to humans as well as loss of capital investment in machinery [21].
Related Works
Industrial safety standards [16] are designed to ensure protection by separating the robot from human beings, and thus do not extend to encounters between robots and humans. Industrial experience has, however, shown that mechanical re-engineering is always the most powerful safety method for reducing risks [17]. This technique is often used for collaborative robots. For example, a whole-body viscoelastic robot was created by Yamada et al. [18]. Zinn et al. [19,20] suggested lowering the effective inertia of the robot by using distributed parallel actuation. Although these and other mechanical re-design methods reduce the impact force during a collision, the collision itself is not avoided. Additional safety steps, such as the use of device control and planning, as illustrated in the following section, are necessary for secure and human-friendly interaction in unstructured environments. Olawoyin [21] studied safety and automation in the working environment of a collaborative robot system and concluded that efficiency must be optimized while respecting safety constraints in automation and robotics. Dombrowski et al. [22] stressed the particular importance of planning human-robot cooperation (HRC) with automated factory devices. Weitschat and Aschemann [23] established a new approach that improves robot performance while still meeting the international safety requirements of collaborative robotics. The method is focused on the projection of human arm movements onto the robot's path to estimate a possible collision with the robot, refining the method to achieve the target under human-in-the-loop limitations. Zhu et al. [24] addressed situations in which robots become stuck at a local minimum before achieving their target: a simulated annealing (SA) approach was evaluated with an artificial potential field method as one of the effective local-minimum escape techniques, and simulated annealing for local and global route schemes was implemented. Demir and Durdu [25] reviewed the objective of human-robot interaction research: to establish models of human expectations for robot interaction that guide robot design and algorithm creation, making interactions between humans and robots more natural and efficient. Svante Augustsson et al. [26] demonstrated how versatile safety zones can be enforced. In their case study, the atmosphere of a wall construction site is emulated by an industrial robot cell, with a robot executing nailing routines. Tests showed people entering the areas monitored by the Protection Eye system.
When a zone violation was detected, new warning zones were activated: the robot retracts, but continues its function at low speed and within a reasonable distance.
Methodology
Path planning for safety is a key component of an overall safe human-robot interaction policy. By incorporating safety requirements at the planning stage, the robot is better able to respond to unforeseen safety incidents. Planning is employed to improve the control outcome, enhancing monitoring through smooth path design [27,28]. A similar approach to [29,30] is taken here; the potential risk requirements are, however, developed and evaluated using the motion planning system proposed in [31,32]. The risk criteria specifically take the manipulator's inertia and centre of mass into account in determining hazards. A two-stage approach to planning is meant to deal with possibly overlapping planning requirements. The proposed planner is tested in a simulation to compare the parameters and show their performance in an example handling mission. Figure 2 shows the flow chart of the system overview. Human-robot interaction can involve two kinds of robots, cobots and industrial robots, and a cobot can be a mobile robot or an arm robot. This research study focuses on the arm robot in domestic environments. The system will be realized using Gazebo, a simulator for robotics research. The safety model will be designed and developed using the Python language, and the trajectory planning for collisions with humans will be investigated. In the end, the safety of human-robot interaction will be assessed, and the model validated against a standard benchmark problem.
Figure 2.
Overview of the human-robot interaction. Figure 3 illustrates the steps of this research work. In order to locate the safest configuration, the arm robot needs to pass successfully to the end stage. If the arm robot does not find any obstacle, it moves forward. If it finds an obstacle, it analyzes what kind of obstacle it is: if it is non-human, the robot moves backward; if it is human, the robot reduces its speed and checks the risk factor of the interaction. If there is no risk, the robot moves forward; if there is a risk, the robot tries to minimize the danger for that interaction, then moves forward and checks the safety of the interaction. If the interaction is not safe, the robot reduces its speed and stops after a while; if the interaction is safe, it moves forward. In the end, this algorithm allows the robot to reach its goal while maintaining a safe interaction; a minimal sketch of this decision flow is given below. In order to realize the methodology proposed in this research work, a test simulation system is designed based on the experimental setup, with the simulation recorded for the motion sensor without the object. Figure 4 shows a domestic environment, figure 5 represents the arm robot motion simulation on its trajectory in a domestic environment, and figure 6 illustrates the human and robot interaction in a domestic environment. A domestic simulation environment is developed and illustrated in figure 6. In this environment, a human is placed which can rotate 360° and step forward and backward; an arm robot is also placed which can rotate 360°, move along the three directions (x, y and z axes) and step forward and backward.
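As a concrete illustration, the decision flow reduces to a small pure-Python policy. Everything below (the thresholds, the obstacle labels and the assess_risk/is_safe callables) is a hypothetical sketch of the logic in figure 3, not part of the ROS or Gazebo API.

```python
SLOW_FACTOR = 0.3     # assumed speed scaling once a human is detected
STOP_DISTANCE = 0.25  # assumed hard-stop distance in metres

def next_command(obstacle, distance, nominal_speed, assess_risk, is_safe):
    """Return (speed, action) for one planning step of the figure 3 flow."""
    if obstacle is None:
        return nominal_speed, "move_forward"       # no obstacle: proceed
    if obstacle != "human":
        return nominal_speed, "move_backward"      # non-human obstacle: back off
    speed = nominal_speed * SLOW_FACTOR            # human detected: slow down
    if distance <= STOP_DISTANCE:
        return 0.0, "stop"                         # too close: hard stop
    if assess_risk(distance) and not is_safe(distance):
        return 0.0, "stop"                         # risky and not safe: stop
    return speed, "move_forward"                   # safe, or risk minimized

# Example: a human 0.8 m away with simple distance-based risk/safety checks.
print(next_command("human", 0.8, 0.5,
                   assess_risk=lambda d: d < 1.0, is_safe=lambda d: d > 0.5))
```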
Simulating Robotics Hardware
Gazebo (Player/Stage) in ROS is a common simulation platform for robots. This system allows multiple robots to be tested in complex outdoor environments, and a large range of sensors can be modeled for each robot, providing realistic feedback. The setting is modeled as a 3D world made of static objects which can, however, be moved by the robots. This ability is based on a simulation of rigid-body physics, which is included in the framework and allows for physically realistic interactions.
Simulating Human Characters
Human characters are commonly needed in robot simulation environments. Two main things have to be addressed in such a simulation: firstly, the human model has to be animated, and secondly, the model has to be controlled so as to achieve realistic behavior. In many existing simulations, the simulated characters are not interaction partners; they represent the robot itself. Likewise, motion capture devices do not merely store rudimentary animation; recorded movements can be extended to a large number of human characters for the animation of interaction partners. There are various approaches to regulating human behavior, in which environmental information is used to evaluate the human character's actions. The proposed system currently does not have a method to achieve autonomous actions of the simulated human character. These features can, however, be considered a useful extension and realized in future research.
Simulation Framework
Figure 7 illustrates the simulation framework of this research: depending upon the environment, the arm robot's state is updated, a reward is issued, and the agent takes the necessary action, as briefly described below (a toy environment with these ingredients is sketched after this list):
- The environment consists of where the two arm joints are in space.
- The reward is the negative of the gap between the fingers and the target.
- The actions consist of a real-valued upward or downward movement of either of the two joints.
- The states (raise the cup, hold the cup, lower the cup) are used to activate protection.
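The list above amounts to a small two-joint reaching environment. A toy NumPy version is sketched below; the link length, target position and action clipping are assumed values, and the protection-activation states are omitted for brevity.

```python
import numpy as np

class TwoJointArmEnv:
    """Toy planar two-joint arm matching the environment/reward/action list above."""

    def __init__(self, link=1.0, target=(1.2, 0.8)):
        self.link, self.target = link, np.asarray(target, dtype=float)
        self.theta = np.zeros(2)              # where the two arm joints are

    def fingertip(self):
        t1, t2 = self.theta
        return self.link * np.array([np.cos(t1) + np.cos(t1 + t2),
                                     np.sin(t1) + np.sin(t1 + t2)])

    def step(self, action):
        """action: real up/down increment for each joint, clipped for safety."""
        self.theta += np.clip(np.asarray(action, dtype=float), -0.1, 0.1)
        reward = -np.linalg.norm(self.fingertip() - self.target)  # negative gap
        return self.theta.copy(), reward
```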
Expected Outcomes
The safety of human-robot interactions can be simulated: in figure 6, a human and robot interaction in a domestic environment is created. If the robot detects a human as an obstacle, it reduces its speed and checks the risk factor of the interaction; if there is no risk, it can move forward, but if any risk remains, it tries to minimize the danger or stops there. The motion speed can be reduced: using the Gazebo environment, the motion speed of an arm robot can be increased or decreased in real time by using ROS-control and the necessary Gazebo plug-in adapter. The authors of [33] accomplished collision avoidance by assuming that each robot takes some responsibility for each pairwise conflict, with the resulting constraints providing a range of feasible speeds from which to choose using linear programming.
Conclusions
Visualization can be the first step towards gathering research results before any implementation in the real world. In order to achieve the objective of this research study, a systematic process will be developed for ensuring safety during human-robot interaction in a domestic environment, based on an explicit quantification of the level of danger in the interaction using the ROS-based Gazebo simulator. Specifically, a method for assessing the level of risk at both the planning and control stages will be developed. Further, to accomplish the desired task of moving to the goal with a probability of collision with the human, evaluation of trajectory fitness will be applied. In the end, the novel method will be integrated into physical system components and validated on a robot platform during real-time human-robot interaction.
"Computer Science"
] |
Features of electroweak symmetry breaking in five dimensional SUSY models
We explore the phenomenological predictions of a supersymmetric standard model, with a large extra dimension and unifying gauge couplings. The modified five dimensional renormalisation group equations make it possible to obtain light, maximally mixed stops, with a low scale of supersymmetry breaking and a low unification scale. This allows the fine-tuning to be lowered right down to the barrier coming directly from experimental lower limits on the stop masses. We also show that attempts at modifying the SUSY breaking pattern to obtain more natural soft terms at the high scale do not give the expected fine-tuning relaxation, and only RGE effects turn out to be effective in generating a lower fine-tuning.
Introduction
The discovery of the Higgs boson, of mass m_h ∼ 125 GeV, at the first LHC run [1,2] and various null results in searches for super-particles (sparticles) appear to imply that the sparticles of minimal theories of supersymmetry (such as the MSSM) are likely to be out of reach of the LHC altogether. Considerations regarding electroweak symmetry breaking and the naturalness aesthetic cast further doubt on whether supersymmetry is a symmetry of nature after all.
At this time it is also worthwhile to consider non-minimal models that share much of the well-grounded theoretical elegance of the (four dimensional) MSSM: supersymmetry, gauge coupling unification, anomaly cancellation, a minimal matter content to achieve these, unification of matter representations, an explanation of the big hierarchy problem, and various dark matter candidates. The model we consider is five dimensional and brings with it additional interesting features: the possibility to observe Kaluza-Klein states of gauge (and other) fields [3], the possibility to achieve the observed Higgs mass with sparticles within reach of the next LHC run, and a much lower unification scale and supersymmetry breaking scale than is normally possible in four dimensions.
In this paper we explore electroweak symmetry breaking in a particular class of five dimensional supersymmetric theories [4,5]. We also wish to address the possibility of constructing a 'natural theory' where the stops are lighter than their first and second generation counterparts, and which is not spoiled by the renormalisation group effects that would usually make the 3rd generation similarly heavy at a low scale after a long period of RG running. Studying models that differ in how matter fields are located among branes and bulk opens the possibility of exploring the effect that different SUSY breaking patterns and modified five dimensional RGEs have on fine-tuning with respect to a four dimensional theory. Unfortunately, modifying the breaking pattern to obtain more natural soft terms at the high scale does not give the expected fine-tuning relaxation. However, we show that the modified RGEs enable us to obtain light, maximally mixed stops. This allows us to lower the fine-tuning down to the barrier coming directly from observational lower limits on the stop mass.
In our analysis we used the renormalisation group equations outlined in [5,6] and adapted a C++ based spectrum generator originally intended for the (four dimensional) MSSM [7]. A similar modification may be carried out with any publicly available spectrum generator [8-10]. (To date, five dimensional theories are one class of models that cannot yet be explored using SARAH [11-14], although SARAH can still be a powerful tool for determining the RGEs of the low energy four dimensional effective theory that the five dimensional theory runs to [5].) The RGEs used in this paper may be found in [5] and further conventions in [6] and [15-19]. For earlier phenomenological studies of five dimensional theories see for example [20].
The outline of the paper is as follows: in section 2 we outline the two base models that we wish to explore; in section 3 we explore the behaviour of the running parameters of the theories; in section 4 we look at benchmark models of supersymmetry breaking, and in section 4.3 we outline and explore a "natural" model in which the 3rd generation is spatially located on a different brane to the first two generations and in which supersymmetry breaking is gauge mediated directly to the first two generations but only indirectly to the 3rd, which ideally would allow for light stops; in section 5 we look at naturalness and electroweak symmetry breaking as we vary the radius of the extra dimension; in section 6 we conclude. In appendix A we outline the implementation of the spectrum generator and RG solver in C++ code.
2 The 5D-SSM+(F±) Model

The first model that we wish to explore is a five dimensional supersymmetric theory with the field content outlined in table 1, pictured in figure 1 (left). In this model the Higgs fields (H_u, H_d), the gauge fields and additionally F_± are bulk fields [21]. This matter content is necessary for gauge coupling unification, as we shall explore further later. All five dimensional bulk matter fields are supersymmetric hypermultiplets which, due to even and odd boundary conditions, lead to a four dimensional chiral multiplet as the zero mode of the Kaluza-Klein expansion; such details are well documented, for instance in [15-19]. The second model we wish to explore is outlined in table 2 and pictured in figure 1 (right). In model 2 only the third generation is located on a brane; the first and second generations are in the bulk along with the Higgs multiplets and the F_± fields. The superpotential is the same for both models. It would be very worthwhile to consider the generation of the term μF₋F₊ in the superpotential, although for this paper we will not need to consider it, and we postpone that to later work. We will now explore the running parameters of these two theories as one changes the scale of the extra dimension.
Running parameters
It is particularly interesting to understand and compare the behaviour of the various running parameters of these theories with the more usual four dimensional MSSM. The behaviour of the various parameters as a function of renormalisation scale for model 1 is pictured in figure 2. Of particular note is that unification happens much earlier than in the usual four dimensional case if the size of the extra dimension is large [22]. One also finds that the top Yukawa coupling reduces rather significantly and becomes of similar order to the other Yukawa couplings near the unification scale. In addition, one finds that even for initially vanishing A-terms the A_t term may become multi-TeV in value at the electroweak scale, which is encouraging from the perspective of obtaining the observed 125 GeV Higgs mass. It is also the case (bottom left) that the gluino can become much heavier than the other gauginos, allowing the theory to have a light bino and wino whilst keeping the gluino above current exclusions. The first model may be compared with model 2, similarly presented in figure 3 and in table 2. In these figures it is notable that the gauge couplings quite nearly unify, but rise rather than fall after the KK modes start to take effect in the RGEs. The Y_t still decreases in value, although now, rather interestingly, A_t becomes negative so quickly that it can overcompensate the effect of the gluino soft mass, and for very large radius the A_t running may even turn back on itself. Again, the wino and bino soft terms can be much smaller than that of the gluino, even starting from the same initial value. A toy illustration of the power-law running of a gauge coupling is sketched below.
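To see the power-law effect concretely, the one-loop running of a single gauge coupling can be integrated with a KK-enhanced beta coefficient. The snippet below is a toy illustration only: the coefficients b4 and b_kk, the initial coupling and the sharp matching at µ = 1/R are placeholders, not the RGEs of models 1 and 2 from [5,6].

```python
import numpy as np
from scipy.integrate import solve_ivp

INV_R = 1.0e4  # assumed compactification scale 1/R in GeV

def dg_dt(t, g, b4=6.6, b_kk=1.0):
    """Toy one-loop RGE with t = ln(mu/GeV); above 1/R the effective beta
    coefficient grows with the number of accessible KK levels, roughly mu*R."""
    mu = np.exp(t)
    b_eff = b4 + b_kk * max(mu / INV_R - 1.0, 0.0)
    return b_eff * g ** 3 / (16.0 * np.pi ** 2)

# Run from M_Z up to 10^6 GeV (kept below the strong-coupling regime);
# the power-law acceleration of the running sets in at mu = 1/R.
sol = solve_ivp(dg_dt, [np.log(91.19), np.log(1.0e6)], [0.46],
                dense_output=True, rtol=1e-8)
```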
Supersymmetry breaking in benchmark models
So far our exploration has been reasonably agnostic about how supersymmetry is broken, since the main feature of the models presented in the previous sections is their RGEs. In what follows we will simply refer to the sets of RGEs we used as models.
There are however a number of ways that have been proposed for the parametrisation of supersymmetry breaking in a five dimensional scenario. In this section we wish to identify these scenarios and look at their patterns of supersymmetry breaking which define their possible high scale spectra.
Gravity mediation (CMSSM)
Our first benchmark scenario is the simple CMSSM spectrum; however, since easier generation of A-terms during running is a key feature of five dimensional running, we will always take the A_i = 0 case, for which the difference between five and four dimensional theories is most visible. This implies a very simple type of spectrum with just two free parameters, M_1/2 and m_0, defined at the unification scale.
Minimal Gauge Mediation (MGM) in five dimensions
In gauge mediation there is an additional characteristic scale at which SUSY is broken, which for brevity we will label M. For the five dimensional RGEs to have an impact on the spectrum, and for the theory not simply to be an effective four dimensional theory with a low SUSY breaking scale, we require that M is at least O(1/R) and possibly nearer M_unification. The soft terms in five dimensional GMSB at the breaking scale are then functions of F and M, the free parameters we will scan over, with the scalar soft masses suppressed by the effect of the extra dimension. This paper is the first implementation of five dimensional GMSB soft masses [16,17,23-25] with five dimensional RGEs [5,6]. In both model 1 and model 2 we take the supersymmetry breaking to be on the opposite brane to the matter, so both brane and bulk matter soft masses are suppressed by the effect of the extra dimension.
Realising natural SUSY with GMSB in five dimensions (nMGM)
The renormalisation group equations of model 1 may be used to explore a natural SUSY scenario, as pictured in figure 4. In this model the 3rd generation is located on one brane and the 1st and 2nd generations on another, along with the supersymmetry breaking sector. The effects of supersymmetry breaking are mediated by gauge forces [26] (one can also easily consider gravity mediation in this context), and the result is that the 1st and 2nd generations and also the gauginos receive normal (4D) GMSB soft mass contributions, while those of the 3rd generation are heavily suppressed [5,6,17,25]. The soft mass matrix for squarks and sleptons is flavour diagonal, with the 3rd generation entries suppressed relative to the first two, leading to an interesting natural SUSY spectrum of lighter 3rd generation squarks. This scenario suggests that natural SUSY soft terms are imprinted by the 'geometry' of the theory. (Table 3 summarizes the experimental exclusion limits used, e.g., a gluino mass bound of 1200 GeV.) We will consider such a natural spectrum in the context of minimal gauge mediation; the resulting soft terms are similar to those of the MGM scenario above, however now only the third generation sfermions are suppressed by 1/(MR)². In the text we will refer to this as an nMGM spectrum. Needless to say, a similar model may be constructed using brane-to-brane gravity mediation. It would also be interesting to discuss models with H_u and H_d localised alongside the 3rd families; however, this would require a much more serious modification of the RGEs of our models 1 and 2, so we postpone that discussion to future work.
Electroweak symmetry breaking and naturalness
One important feature of a model is whether its parameter space can accommodate electroweak symmetry breaking. Figure 5 shows the regions in the parameter space of our models where the breaking does not occur or where the direct detection bounds summarised in table 3 [27] are violated. Exclusions corresponding to varying sizes of the extra dimension (including the 4D case) are plotted together. For standard CMSSM and MGM boundary conditions, model 1 predicts rather standard spectra of sparticles quite similar to the 4D case. However, model 2, due to much lower gaugino masses compared to the A-terms, allows us to obtain very light stops and maximal mixing even with A-terms vanishing at the unification scale. In fact, for large R = 10⁻⁴ the peculiar shape of the CMSSM excluded region in model 2 comes from obtaining stops so light that they would have already been observed. The MGM excluded region comes from the interplay between the large scalar masses we obtain at the scale M when M = 1/R, and those generated during the 5D modified running between 1/R and M >> 1/R. The minimal stop mass is obtained between these two situations, and results in the excluded part on the left-hand side of the middle row in Figure 5, where the small stop soft mass fails to push m_Hu to negative values and break electroweak symmetry. This is also visible in the nMGM plot on the bottom row of Figure 5. However, here the problem is more severe, since m_Hu is not suppressed by 1/(MR) at the SUSY breaking scale, and a much bigger part of the parameter space is excluded. For the nMGM spectrum this problem appears also for very small 1/(MR), because in this part of the parameter space the difference between the Higgs and stop soft masses is largest. These two effects lead to the appearance of a window of allowed parameter space, which is very interesting since it is in that window that we obtain the highest Higgs mass.
Exploring naturalness in benchmark scenarios
In MSSM-like theories, electroweak symmetry breaking is radiatively induced at finite loop order: the up-type Higgs soft mass m_Hu² is driven to negative values, leading to the minimization condition

m_Z²/2 = (m_Hd² − m_Hu² tan²β)/(tan²β − 1) − µ²    (5.1)

At leading order, the running of this soft mass in four dimensions follows

d m_Hu²/d ln µ ≃ (3 y_t²/8π²)(m_Hu² + m_Q3² + m_u3² + |A_t|²)    (5.2)

In five dimensional models the RGEs are rather different due to the power law contributions, and above the compactification scale one finds the KK-enhanced form

d m_Hu²/d ln µ ≃ (3 y_t²/8π²)(µR)(m_Hu² + m_Q3² + m_u3² + |A_t|²),  µ > 1/R    (5.3)

One might have expected a significant contribution to fine-tuning from the power law contribution. However, four and five dimensional theories actually have similar fine-tuning, as the much faster power law contribution can dominate the running only over a very small range of scales if the spectra we are comparing are similar. And so the final amount of fine-tuning for a given scenario depends mostly on the resulting spectrum rather than on the amount of power law running. This is quantified in figure 6, where in the numerical calculations we use a standard fine-tuning measure with respect to a parameter a, defined as follows [28-30]:

Δ_a = |∂ ln m_Z² / ∂ ln a|    (5.4)

The fine-tuning connected with a set of independent parameters a_i is then

Δ = max_i Δ_{a_i}    (5.5)

Figure 6 shows the resulting fine-tuning as a function of the Higgs mass for different sizes of the extra dimension, as well as the result one would obtain from 4D running. The top row shows results obtained assuming CMSSM-like soft terms (with A_i = 0), the middle row shows gauge mediated boundary conditions, and the bottom plot shows the nMGM ones.
The results in the left panels show model 1, which gives a rather standard prediction despite the power law contribution to the running. However, model 2, shown on the right-hand side, allows us to reduce fine-tuning very significantly. The reason is the gaugino masses, which decrease during the 5D part of the running (as shown in Figure 3). This protects the soft terms from the usual increase due to the heavy gluino. Since the A-terms do not grow proportionally to the scalar masses, we can easily achieve the maximal mixing scenario for light stops, and their direct detection bound is precisely what gives the lower bound on fine-tuning we can see in model 2 with R = 10⁻⁴.
The bottom plot shows the nMGM result, which turns out quite similar to the MGM and CMSSM model 1 results. The reason for this is that in model 1 the least fine-tuned results are those for which M >> 1/R. Thus the scalar masses are initially very small and have to be generated by the modified running, so the 3rd family parts of the spectra are very similar. The correction introduced by nMGM relies only on larger subleading corrections to the Higgs mass from the first two families and the other Higgs sector scalars. Unfortunately, the fine-tuning price of these corrections is larger than their contribution to the Higgs mass, and the results are slightly more fine-tuned than those from standard MGM or CMSSM soft terms.
A large qualitative difference between MGM and nMGM becomes visible for Higgs masses slightly higher than the observed one. This comes from the part of the parameter space which predicts successful electroweak symmetry breaking in nMGM. As explained at the beginning of this section, the problem results from the exclusion appearing in nMGM for very small 1/(MR), where we cannot break electroweak symmetry because the radiative correction to the unsuppressed soft Higgs mass, coming from the highly suppressed stop mass, is too small, and the former never runs negative. This becomes visible for higher Higgs masses because very small 1/(MR) is the part of the parameter space where we obtain the highest Higgs masses. Another very important feature of 5D models is the possibility of bringing superpartner masses within LHC reach for points predicting minimal fine-tuning. This is illustrated in Table 4, which shows the spectra corresponding to the lowest fine-tuning obtained for m_h = 125 GeV.
Conclusions
In this paper we explored the implementation of the five dimensional renormalisation group equations of a number of supersymmetric extensions of the MSSM into a full C++ spectrum generator, along with self-energy corrections for the Higgs mass.
Our key result is showing that the modified five dimensional RGEs can result in spectra very different from the usual 4D case. This is because in 5D the heavy gluino does not necessarily dominate the running of the other soft terms during power law running, as in our model 2. Thus we can easily obtain maximal stop mixing and much less fine-tuned spectra, even with standard sets of soft terms at the SUSY breaking scale. This is also very interesting because in 5D models the least fine-tuned spectra with the correct Higgs mass can easily predict soft superpartner masses within LHC reach, even for standard patterns of soft terms (Table 4 lists the masses of superpartners, in TeV, for the spectra which minimize fine-tuning for m_h = 125 GeV). Interestingly, this means the most interesting parts of the parameter space can be probed during the next run of the LHC, which is not usually the case in 4D models.
We explored models where the 1st and 2nd generations are in the bulk, and a model in which the 1st and 2nd generations are on the same brane as the supersymmetry breaking sector while the 3rd generation is located on the opposite brane, resulting in a spectrum of stops lighter than the other squarks. Obtaining lighter stop soft terms at the SUSY breaking scale did not result in a more natural spectrum. The reason is the non-negligible fine-tuning price of the heavier first two generations and the heavier Higgs sector, which give only a subleading correction to the light Higgs mass.
The final advantage is a low scale of unification of the gauge couplings and a low supersymmetry breaking scale, together with much better unification of the Yukawa couplings (especially in model 2), which gives hope for a very interesting five dimensional UV completion of such models.
A Numerical procedure
In this appendix we outline the implementation of the RG solver and spectrum generator used in this paper. The numerical procedure we use is similar to the ones used in existing codes [8-10]. We work with quantities renormalized in DR and use renormalization group equations (RGEs) to iteratively find the low energy parameters for a given set of high energy soft terms.
A.1 M Z Scale
At the scale M_Z we include radiative corrections to the couplings. We set the Yukawa couplings using the tree-level relations

y_t = √2 m_t/(v sin β),  y_b = √2 m_b/(v cos β),  y_τ = √2 m_τ/(v cos β),

where m_t, m_b, m_τ are the fermion masses and v is the Higgs vev. At the first iteration we use the physical masses and the SM Higgs vev v ≈ 246.22 GeV. During subsequent iterations the above quantities are renormalized in the DR scheme and one-loop SUSY corrections are included.
To calculate the top mass we use 2-loop QCD corrections [31] and 1-loop corrections from superpartners from the appendix of [32]. When calculating the bottom mass we follow the Les Houches Accord [33], starting from the running MS-bar mass in the SM. Next, applying the procedure described in [34], we find the DR mass at M_Z, from which we get the MSSM value by including the corrections described in appendix D of [32]. When calculating the tau mass we include only the leading corrections approximated in [32]. We calculate the Higgs vev in the MSSM including the Z self-energy corrections described in appendix D of [32]. To calculate g_1, g_2, g_3 in DR in the MSSM we use the procedure described in appendix C of [32].
A.2 RGE and M u scale
After calculating the coupling constants at the scale M_Z we numerically solve the RGEs [35], [36] to find their values at the scale M_u, at which we include the soft breaking terms. Then we solve the RGEs again to find the soft terms, coupling constants, tan β and Higgs vev v at the scale M_EWSB = sqrt(m_stop1(M_EWSB) m_stop2(M_EWSB)). At the first iteration we take µ = sgn(µ) · 1 GeV and Bµ = 0 and run to the scale at which the above equation is fulfilled.
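A minimal sketch of the iteration described in A.1-A.3, with each step replaced by a stub so the control flow can be followed (none of these names come from the actual code):

```cpp
// Sketch of the iterative boundary-value solver of appendix A (illustrative only).
#include <cmath>
#include <iostream>

struct Params {
    double mu = 1.0, Bmu = 0.0, tanb = 10.0;   // EWSB parameters (placeholders)
    double mst1 = 1.0, mst2 = 1.5;             // stop masses in TeV (placeholders)
};

// Stubs standing in for the steps described in A.1-A.3.
Params matchAtMZ(Params p)              { return p; }  // DR-bar couplings at M_Z
void   runRGEs(Params&, double, double) {}             // RG running between two scales
void   applySoftTerms(Params&)          {}             // soft terms imposed at M_u
double ewsbScale(const Params& p)       { return std::sqrt(p.mst1 * p.mst2); }
bool   solveEWSB(Params&)               { return true; } // would update mu, Bmu

int main() {
    const double MZ = 0.0912, Mu = 20.0;       // scales in TeV, illustrative values
    Params p;
    for (int i = 0; i < 50; ++i) {
        p = matchAtMZ(p);                      // A.1: radiative corrections at M_Z
        runRGEs(p, MZ, Mu);                    // run up to the matching scale M_u
        applySoftTerms(p);                     // impose high-scale soft terms
        const double Mewsb = ewsbScale(p);     // M_EWSB = sqrt(m_stop1 * m_stop2)
        runRGEs(p, Mu, Mewsb);                 // run down to the EWSB scale
        const bool done = solveEWSB(p);        // mu, Bmu from minimization conditions
        runRGEs(p, Mewsb, MZ);                 // back to M_Z for the next iteration
        if (done) break;
    }
    std::cout << "mu = " << p.mu << " TeV\n";
    return 0;
}
```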
A.3 Electroweak symmetry breaking
In order to obtain correct electroweak symmetry breaking we use the minimization conditions for the scalar potential to find new values of µ and Bµ. We include radiative corrections in these equations through the tadpole contributions t_u and t_d: we include the full one-loop corrections to t_u and t_d presented in appendix E of [32] and the leading two-loop corrections [37][38][39][40][41]. Since these corrections depend on sparticle masses, which in turn depend on the µ parameter that we aim to calculate, an iterative calculation is performed to obtain the new values of µ and Bµ. If the new values differ significantly from the ones obtained in the previous repetition of the whole algorithm described above, we run back to the M_Z scale and repeat the whole calculation once again. If, however, the values of µ and Bµ have converged, we can move on to the calculation of physical masses.
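For reference, the tree-level minimization conditions being solved here are presumably the standard MSSM ones, with the soft Higgs masses replaced by their tadpole-corrected counterparts as described above:
$$ \mu^{2} = \frac{m_{H_d}^{2} - m_{H_u}^{2}\tan^{2}\beta}{\tan^{2}\beta - 1} - \frac{m_Z^{2}}{2}, \qquad B\mu = \frac{\sin 2\beta}{2}\left(m_{H_u}^{2} + m_{H_d}^{2} + 2\mu^{2}\right). $$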
A.4 Calculation of physical masses
To calculate physical masses we use only the leading corrections described in [32] everywhere except the Higgs sector. In the calculation of the Higgs masses we use full one-loop corrections from [32] and the leading two-loop corrections described in [37][38][39][40][41].
A.5 Fine-tuning
After the calculation of the spectrum is finished, one has a whole set of parameters and couplings that predict correct electroweak symmetry breaking. In order to calculate the fine-tuning we solve the RGEs from the M_u scale down to M_EWSB with one of the fundamental parameters a_i changed slightly at the high scale M_u. Then at the scale M_EWSB we recalculate the spectrum and use the minimization conditions to calculate a new value of tan β and to obtain our new prediction for m_Z^2; in other words, we calculate numerically the derivative in the definition of fine-tuning (5.4). We repeat that procedure for all parameters a_i and obtain our final result as the maximum of the results obtained for each of those parameters (as in (5.5)).
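Schematically, and assuming the usual logarithmic-derivative measure referred to in (5.4)-(5.5), the numerical derivative can be taken by a finite difference; the helper names below are placeholders, not the actual code:

```cpp
// Sketch of the fine-tuning scan of A.5 (illustrative placeholders only).
#include <cmath>
#include <vector>
#include <algorithm>
#include <iostream>

// Stub: rerun the RGEs and EWSB minimization with parameter i rescaled by (1 + eps)
// at M_u, and return the resulting prediction for m_Z^2 (in GeV^2).
double mZ2Prediction(const std::vector<double>& highScaleParams, int i, double eps) {
    std::vector<double> p = highScaleParams;
    p[i] *= (1.0 + eps);
    // ... run p from M_u down to M_EWSB, recompute tan(beta) and m_Z^2 ...
    return 91.1876 * 91.1876;   // placeholder value
}

double fineTuning(const std::vector<double>& a) {
    const double eps = 1e-3, mZ2 = 91.1876 * 91.1876;
    double delta = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        // Delta_i = | d ln m_Z^2 / d ln a_i |, via a forward finite difference.
        const double dmZ2 = mZ2Prediction(a, static_cast<int>(i), eps) - mZ2;
        delta = std::max(delta, std::fabs(dmZ2 / (mZ2 * eps)));
    }
    return delta;               // maximum over all fundamental parameters
}

int main() {
    std::vector<double> params = {1000.0, 2000.0, 500.0};  // illustrative soft terms
    std::cout << "fine-tuning = " << fineTuning(params) << "\n";
}
```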
"Physics"
] |
Taxonomic reassessment of Zale lunifera (Hübner) (Erebidae, Erebinae)
In light of the recent discovery of an unrecognized species within nominal Zale lunifera (Hübner), the taxonomy of Z. lunifera is reassessed. Zale intenta (Walker), stat. rev. is the name that applies to the widespread species previously called Z. lunifera. Zale lunifera sensu stricto is the species previously thought to be undescribed; it occurs from the southern Atlantic coastal plain northward to the pine barrens of the north-eastern United States. A differential diagnosis and adult illustrations of the two species are given.
Introduction
While curating Zale specimens collected on a field trip to south-eastern Georgia, I became aware of two similar but apparently different species collected on the same night. Further comparison to other specimens revealed that both species had traditionally been going under the name Zale lunifera (Hübner), but a second, apparently unnamed taxon had recently been flagged as one of conservation concern in the north-eastern United States (Wagner et al. 2003, NatureServe 2009). Comparisons of genitalic structure, phenotype, DNA barcodes of the putative species, and examination of the name-bearing types of the subjective synonyms of Z. lunifera show that names are available for both taxa. The purpose of this paper is to clarify the taxonomy and provide a diagnosis of these species.
Methods and materials
Adult genitalia were prepared following the methods detailed by Lafontaine (2004).
Molecular variation was assessed based on the 658 base-pair 'barcode' region of the first subunit of the cytochrome oxidase (cox1) gene (Hebert et al. 2003). DNA was extracted from one leg removed from a dried specimen, and processed at the Canadian Centre for DNA Barcoding, Guelph, Ontario. DNA extraction, amplification and sequencing protocols for the Barcode of Life initiative are given in Hebert et al. (2003). Barcode haplotypes were compared with phylograms constructed using the neighbour-joining method as implemented on the Barcoding of Life Data Systems (BOLD) website (http://barcodinglife.org; Ratnasingham and Hebert 2007). Phyletic distances were calculated using the Kimura-2-Parameter (K2P) distance model. Data for molecular voucher specimens, including trace files and photographs, are available at the BOLD website under the project "Lepidoptera of NA Phase II: Zale lunifera" (under the "Published Projects" tab).
been validated (most of them inadvertently), I have found no evidence of this for dealbata. McDunnough (1938) listed it as a "form" (of Z. calycanthata (Smith)), not as a subspecies or subjective synonym. This name is not included in Franclemont and Todd (1983) or Poole (1989), presumably because it was deemed to be unavailable.
Diagnosis. This species has long been confused with Z. lunifera, from which it differs by its larger size, more elongate forewing shape, the poorly defined or absent black orbicular spot, and the less sinuate black antemedial line on the forewing. Zale intenta also has a tendency to develop an overall striate pattern that is very poorly or not at all developed in Z. lunifera, particularly well developed in south-eastern populations (Fig. 4). Male genitalic differences are slight; the valves are more elongate and the aedeagus is longer with a slightly greater twist in Z. intenta than in Z. lunifera. In females, the distance between the ostium and the caudal margin of the antevaginal plate is equal to the diameter of the ostium; in Z. lunifera, this distance is 2.0-2.5 × the diameter of the ostium.
Abbreviations of collections
Redescription. Head - antenna ciliate in both sexes; palpi and head dark brown. Thorax - thoracic collar dark brown with a fine black basal line and light-grey distal border; middorsal area with a brown anterior and posterior tuft, scales prominently light grey distally bordered basad by fine black line; tegulae patterned similarly, but with a broad black basal patch; thorax fuscous grey brown ventrally. Abdomen - dorsum and ventrum brown grey; dorsum of segments four to seven with pale-tipped hair tufts; sexes similar. Forewing - length averaging 19.7 mm (n = 6) in males, 20.1 mm (n = 3) in females; ground colour greyish brown to dark chocolate brown, with a slight dark-purple tinge in fresh specimens; entire wing covered in fine, black striae (particularly developed in southern populations); basal area (basad of antemedial line) dark brown, contrasting with remainder of wing, with small paler brown patch at base of costa; antemedial line dark brown to black, sometimes paler brown medially; bordered distally by pale grey-brown shading; orbicular absent or small and black; reniform spot rust brown centrally with a fine black border and a broader pale-tan outer border; postmedial line fine, black and sinuate; subterminal area variously concolorous with postmedial area (usually) or paler grey-brown, particularly in south-eastern populations; ventrum even fuscous brown with slightly darker indistinct reniform and costal part of postmedial line; dark striae less distinct than on dorsum; sexes similar. Hindwing - ground colour greyish brown to dark chocolate brown, grading to lighter fuscous brown toward costal margin; entire wing covered in fine black striae; medial area with or without an indistinct double medial line; postmedial line absent or indistinct; ventrum even fuscous brown with slightly darker, indistinct, dark discal spot; dark striae less distinct than on dorsum; sexes similar. Male genitalia - valves symmetrical, apex (cucullus) distinctly lanceolate and curving about 90 degrees inward; saccular extension consisting of a low triangular process; saccular process an indistinct ridge; uncus long and cylindrical, approximately half length of base of valve, apex pointed and down curved; juxta slightly asymmetrical, with left caudal margin developed into a slight lobe; aedeagus curving dorsad and to right by approximately 90 degrees; aedeagus with a lobe-like process at distal margin; vesica roughly globose with numerous diverticula, very finely scobinate. Female genitalia - papillae anales bluntly triangular, lightly sclerotized; posterior apophysis 2.2 × length of papillae; anterior apophysis 1.0 × length of papillae; antevaginal plate deeply divided by a medial notch, forming a quadrate flange on each side; ostium originating near proximal margin of antevaginal plate, separated from caudal margin of plate by 2.0-2.5 × diameter of plate; ductus bursae short, 2.0-2.5 × as long as diameter of ostium; corpus bursae pear shaped, proximal, larger chamber with minute, internal spicules.
Distribution and biology. Distributed from Nova Scotia (Ferguson 1954) westward to Wisconsin (Forbes 1954) and Missouri and southward to Georgia. Likely also occurs in northern Florida, but literature records may apply to Z. lunifera. The southwestern range limit is not known. Larvae feed on Prunus species, including black cherry (Forbes 1954), beach plum and "cherry" (Wagner 2005). The flight period is from March to June depending on latitude and elevation.
Type material. Phaeocyma lunifera - Type locality: "Georgien" [USA: Georgia]; the type is apparently lost, but the illustration in the original description is most similar to the oak-feeding species, with a more brownish colouration, distinct orbicular spot, indistinct striations and even, slightly violaceous submedial forewing area. In contrast, specimens of Z. intenta from coastal Georgia tend to be heavily striate, greyish rather than brown, and with a contrastingly pale subterminal forewing area. To ensure the stability of the name, the following specimen is designated as neotype: "USA: GA [Georgia] Long Co., Ludowici, / 3 mi SW, Griffin Ridge / WMA.
Redescription. Markings, colouration and genitalic structure as for Z. intenta, but differing in the following characters. Forewing - length averaging 17.4 mm (n = 4) in males, 18.9 mm (n = 3) in females; ground colour greyish brown to dark chocolate brown with a slight violaceous tinge; entire wing covered in fine black striae, less developed and thinner than in Z. intenta; antemedial line with more pronounced medial angle than in Z. intenta; orbicular small and black, sharply contrasting; subterminal area concolourous with medial area, never contrastingly paler with strong striae. Hindwing - as for Z. intenta, but without variation toward more contrasting hindwing markings seen in pale specimens. Male genitalia - valves slightly more elongate compared to Z. intenta; aedeagus slightly shorter and less twisted than in Z. intenta. Female genitalia - ostium separated from caudal margin of antevaginal plate by diameter of ostium; proximal chamber of corpus bursae 1.9 × diameter of distal chamber.
Distribution and biology. Zale lunifera occurs primarily east and south of the Appalachian Mountains. Examined material and reliable records indicate a range from southern Maine (Wagner et al. 2003) south to Lee Co., Mississippi (D. Schweitzer, pers. comm.) and Florida. Not known from south-eastern Virginia or South Carolina, but the species may occur in these regions. Lack of suitable habitat in Maryland and Delaware makes occurrence in these states unlikely (D. Schweitzer, pers. comm.). Occurs inland to the mountains of Virginia and Lebanon County, Pennsylvania (NatureServe 2009).
In southeastern Georgia this species inhabits open, sandy pine-oak forest.Wagner et al. (2003) record it from sand plain pitch pine / scrub oak barrens in northeastern United States.Larvae feed on Bear Oak (Quercus ilicifolia Wangenh.)(Wagner et al. 2003), and other scrub oak species (NatureServe 2009).Additional life history data are given by NatureServe (2009).
Remarks. DNA analysis of seven Z. lunifera specimens (New York, North Carolina, Florida) exhibited two 'barcode' haplotypes differing by one base-pair. Minimum divergence from Z. intenta haplotypes (five specimens from Quebec and Tennessee) was 1.2%.
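These divergence values are Kimura-2-Parameter distances; for illustration only, a minimal sketch of how a K2P distance can be computed from two aligned barcode sequences (this is not the BOLD implementation, and the example sequences are made up):

```cpp
// Kimura 2-parameter (K2P) distance between two aligned DNA sequences.
// d = -0.5*ln(1 - 2P - Q) - 0.25*ln(1 - 2Q), with P = transition proportion
// and Q = transversion proportion. Sites with gaps/ambiguities are skipped.
#include <cmath>
#include <string>
#include <iostream>

bool isPurine(char b) { return b == 'A' || b == 'G'; }

double k2pDistance(const std::string& s1, const std::string& s2) {
    int compared = 0, transitions = 0, transversions = 0;
    for (std::size_t i = 0; i < s1.size() && i < s2.size(); ++i) {
        char a = s1[i], b = s2[i];
        if (std::string("ACGT").find(a) == std::string::npos ||
            std::string("ACGT").find(b) == std::string::npos) continue;
        ++compared;
        if (a == b) continue;
        if (isPurine(a) == isPurine(b)) ++transitions;   // A<->G or C<->T
        else                            ++transversions; // purine <-> pyrimidine
    }
    if (compared == 0) return 0.0;
    double P = double(transitions) / compared, Q = double(transversions) / compared;
    return -0.5 * std::log(1.0 - 2.0 * P - Q) - 0.25 * std::log(1.0 - 2.0 * Q);
}

int main() {
    std::cout << k2pDistance("ACGTACGTACGT", "ACGTACATACGC") << "\n";  // toy example
}
```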
Discussion
The taxonomy of Zale lunifera (in the broad sense) has not been clear. Forbes (1954) recognized one valid species, but correctly diagnosed "southern specimens of Z. cingulifera", i.e., Z. lunifera, as differing from Z. intenta in the more irregular forewing lines, stronger and more dentate subterminal line, and less striate pattern. All Z. lunifera group names were treated as synonyms of Z. lunifera in Franclemont and Todd (1983). Subsequently, Wagner et al. (2003) treated Z. lunifera as "Zale sp. 1 near lunifera," as it was thought that nominate Z. lunifera was the more common and widespread species. As discussed above, the name Z. intenta applies to the widespread species, whereas true Z. lunifera is the species with a more restricted occurrence east and south of the Appalachians.
The global conservation rank currently assigned to Z. lunifera is G3G4, or "Vulnerable" to "Apparently Secure" (NatureServe 2009). Additional surveys for this species should be carried out in the Appalachian Mountains (particularly the eastern portion), sand hills and coastal plain south of New Jersey, which would probably show this species to be more widespread than currently known.
referred to herein are as follows:
AMNH: American Museum of Natural History, New York, New York, USA
BMNH: The Natural History Museum (formerly British Museum [Natural History]), London, UK
CNC: Canadian National Collection of Insects, Arachnids, and Nematodes, Ottawa, Ontario, Canada
USNM: National Museum of Natural History (formerly United States National Museum), Washington, D.C., USA
Systematics Zale intenta (Walker), stat. rev.
Homoptera intenta - type locality St. Vincent [Florida?] acc. to type label; holotype in BMNH [photograph examined]. The wing pattern of the holotype is closest to that of southeastern United States populations of this species, which have a more greyish, contrasting pattern (particularly the hindwing) and more contrastingly pale subterminal forewing area than more northern specimens of this species and of Z.
"Biology",
"Environmental Science"
] |
Gold nanoparticle clusters in quasinematic layers of liquid-crystalline dispersion particles of double-stranded nucleic acids.
The interaction between gold nanoparticles and particles of cholesteric liquid-crystalline dispersions formed by double-stranded DNA and poly(I)×poly(C) molecules is considered. It is shown that small-sized (~ 2 nm) gold nanoparticles induce two different structural processes. First, they facilitate the reorganization of the spatial cholesteric structure of the particles into a nematic one. This process is accompanied by a fast decrease in the amplitude of an abnormal band in the CD spectrum. Second, they induce cluster formation in a “free space” between neighboring nucleic acid molecules fixed in the structure of the quasinematic layers of liquid-crystalline particles. This process is accompanied by slow development of the surface plasmon resonance band in the visible region of the absorption spectrum. Various factors influencing these processes are outlined. Some assumptions concerning the possible mechanism(s) of fixation of gold nanoparticles between the neighboring double-stranded nucleic acid molecules in quasinematic layers are formulated.
INTRODUCTION
Metal and metal oxide nanoparticles are known to be characterized by their inherent ability to exhibit specific properties depending on the nanoparticle's size. these properties of nanoparticles differ substantially from those typical of a "bulky" sample of the initial material. nano-sized gold (Au) nanoparticles that are used both for research and applied purposes [1] (in particular, for diagnosis and treatment of certain diseases [2,3]) are among the most vivid examples of the existence of such differences. Although the in vitro and in vivo cytotoxicity of Au nanoparticles has been investigated by several research teams, the data pertaining to the biological effects induced by Au nanoparticles are rather controversial [4,5]. It is quite possible that the reason for this is that different biological systems have been used to study the effect of nanoparticles; in this case, it is difficult to compare their action mechanisms. the data [3,6] provide a background to assume that the in vitro and in vivo action of Au nanoparticles on spatially arranged DnA structures is similar to that of molecules that possess mutagenic activity. Particles of DnA cholesteric liquid-crystalline dispersion (cLcD) are known to be among the structures that model certain spatial features of DnA within biological objects [7]. Indeed, the physicochemical features of DnA cLcD particles indicate some properties, which are characteristic of Protozoan chromosomes (e.g., chromosomes of Dinoflagellate, etc.) and DnA-containing bacteriophages [8][9][10].
Hence, DnA cLcD is a system of undoubted interest both in terms of nano-and biotechnologies.
When studying the effect of Au nanoparticles on various biological macromolecules and systems, several facts should be borne in mind. Au nanoparticles, especially the small-sized ones, tend to spontaneously aggregate in water-salt solutions [1,11,12] and to form various complexes and aggregates with the solution components and dissolved macromolecules [13][14][15][16]. this process, accompanied by the approaching of neighboring Au nanoparticles, results not only in the enhancement of the so-called surface plasmon resonance (SPr) band typical of individual Au nanoparticles, but also in excitation of the collective vibrations of the electronic system and interaction between the neighboring "plasmons." the latter effect, known as plasmon overlapping, is accompanied [1,17,18] by a shift of the SPr band toward the shorter or longer wavelengths of the absorption spectrum depending on a number of parameters (interparticle distance, size and shape of the resulting aggregates, dielectric permittivity of the medium [19,20], existence of "interlayers" between the neighboring Au nanoparticles [21,22], etc.). It is obvious that the complex formation (and possible aggregation of neighboring Au nanoparticles) is dependent on the concentration and charge of Au nanoparticles, their size, and the properties of the solvent components. this means that when studying the interaction between Au nanoparticles and biopolymer molecules, control experiments are to be carried out which would prove the absence of "parasitic" optical effects induced by the formation of nonspecific aggregates between Au nanoparticles and the solvent components under the conditions used.
Hence, this work was aimed not only at proving the fact that there are no nonspecific aggregates between Au nanoparticles and the solvent components, but also at analyzing the interaction between Au nanoparticles and the double-stranded DnA molecules fixed in the spatial structure of the cLcD particles formed by phase exclusion of DnA molecules from water-salt solutions.
MATERIALS AND METHODS
Colloid gold solutions (hydrosols) containing spherical nanoparticles of different sizes were used in this study. Au nanoparticles were synthesized according to the previously described procedures [23][24][25]. The first hydrosol was obtained using the procedure [23] and contained Au nanoparticles with a mean diameter of ~15 nm. The second hydrosol, containing Au nanoparticles 5.5 nm in diameter, was synthesized according to [24]. Finally, the third hydrosol, containing quasi-metallic Au nanoparticles 2-3 nm in diameter, was obtained according to the procedure described in [25]. The mean size of the Au nanoparticles in the initial solutions was determined via dynamic light scattering and electron microscopy. The numerical concentration of Au nanoparticles in the first, second, and third hydrosols was 10^12, 10^13, and 10^15 particles/cm^3, respectively. The Au nanoparticles were negatively charged; their ζ-potentials were as follows: for 2-3 nm particles, -18 ± 7 mV (immediately after synthesis), -25 ± 5 mV (2 days after the synthesis) and -38 ± 5 mV (9 months after the synthesis); for 5 nm particles, -32 ± 4 mV; for 15 nm particles, -44 ± 3 mV.
The original solutions of Au nanoparticles were stored at 4°C in light-impermeable containers and used 2.5 months after the synthesis.
The absorption spectra were taken with a Cary 100 Scan (Varian, USA) spectrophotometer. The circular dichroism (cD) spectra were recorded using an SKD-2 portable dichrometer. The cD spectra were represented as the dependence of the difference between the intensities of absorption of left- and right-handed polarized light (∆A = A_L − A_R) on the wavelength (λ).
cLcD of DnA in PeG-containing water-salt solutions were prepared according to the previously described procedure [7].
A series of control experiments was carried out to check the possible interaction between Au nanoparticles and biopolymer molecules (nucleic acids and proteins).
As has already been mentioned in the Introduction, a number of questions pertaining to the behavior of negatively charged small-sized Au nanoparticles under the conditions used were to be answered. Are these Au nanoparticles capable of: a) forming aggregates in solutions of low or high ionic strength; b) interacting (forming complexes) with the neutral polymer (PeG) used to form DnA cLcD particles; c) affecting single-stranded nucleic acid molecules in low- or high-ionic-strength solutions; and d) affecting double-stranded DnA molecules under conditions that prevent dispersion formation in a PeG-containing water-salt solution?
Absorption spectra
The absorption spectra of Au nanoparticles recorded at different times after PeG (c_PeG = 150 mg/ml) addition to the solution are compared in Fig. 1A. It is clear that the absorption spectrum is characterized by a poorly pronounced band (I) at λ ~ 500 nm and a broad band in the short-wave spectral region, which is caused by electron transitions both between the d orbitals and the sp hybridized orbitals of Au [26]. The amplitude constancy of the band at λ ~ 500 nm in the absorption spectrum and the absence of either red or blue shifts in this band unambiguously attest to the fact that negatively charged small-sized Au nanoparticles do not tend to aggregate near the surface of PeG molecules under the conditions used. Figure 1B shows the absorption spectra recorded at different time intervals after the addition of Au nanoparticles to the water-salt solution of the synthetic single-stranded polynucleotide poly(A).
Circular dichroism spectra
The cD spectra of water-salt solutions containing linear double-stranded DnA or poly(I)×poly(c) molecules attest to the fact that treatment of these molecules with Au nanoparticles causes no optical changes in them (spectra are not shown).
thus, the absence of any noticeable changes in the amplitude and position of the 500 nm band in the absorption spectra shown in Fig. 1A and in the cD spectra indicates that small-sized negatively charged Au nanoparticles neither undergo aggregation in aqueous solutions of low or high ionic strength nor form aggregates near PeG molecules under the selected conditions. Moreover, no changes in the amplitudes of the bands characterizing the optical properties of nitrogen bases or small-sized Au nanoparticles are observed under conditions when there is no phase separation of single-stranded polynucleotide molecules (Fig. 1C) or double-stranded DnA (Fig. 1D) and a biopolymer molecule dispersion is not formed [7].
the influence of small-sized Au nanoparticles on double-stranded DnA and the poly(I)×poly(c) molecules fixed in the spatial structure of cLcD particles has been investigated with allowance for the results of control experiments.
RESULTS AND DISCUSSION
Before analyzing the effect of Au nanoparticles on double-stranded DnA and the poly(I)×poly(c) molecules fixed in the spatial structure of cLcD particles, let's provide some illustrations of the structure of the initial liquid-crystalline dispersion particles. In physicochemical terms, each particle in the dispersion is a "droplet" of a concentrated DnA solution, whose structure and properties are determined by the osmotic pressure of the solution [7]. A "droplet" cannot be held in one's hands or immobilized on a substrate, since the "droplet" structure will change without the osmotic pressure of the solution, and DnA molecules will be converted from their condensed state into an isotropic state. Each cLcD particle consists of double-stranded nucleic acid molecules forming its neighboring (so-called quasinematic) layers [7]. Figure 2 illustrates certain features of the quasinematic layer consisting of ordered neighboring double-stranded molecules of nucleic acids (in particular, DnA). In the case of phase separation, the dispersion particles (hence, the quasinematic layer as well) do not contain molecules of the water-soluble polymer (PeG). There is "free space" both between the neighboring DnA molecules in the same layer and between the DnA molecules in the neighboring layers. The distance between two neighboring DnA molecules in a layer (d) can vary within the 2.5-5.0 nm range, depending on the osmotic pressure of the solution. Under the conditions used (c_PeG = 150 and 170 mg/ml), the distance between two DnA molecules determined via an X-ray diffraction analysis of the phases obtained by low-speed precipitation of the initial DnA cLcD particles [7] was 3.6 and 3.2 nm, respectively. DnA molecules ordered in layers retain almost all their diffusion degrees of freedom. Due to the anisotropic properties of DnA molecules, each subsequent quasinematic layer is rotated by a certain angle (approximately 0.1° [7]) with respect to the previous one. The rotation gives rise to the helical (cholesteric) structure of a liquid-crystalline dispersion particle. The emergence of this structure can be easily detected according to the abnormal optical activity manifested as a characteristic intense band in the cD spectrum in the region of absorption of DnA chromophores (nitrogen bases). High local concentration of DnA and the ordered arrangement of these macromolecules in a layer provide conditions for a rapid interaction of molecules of various low-molecular-mass compounds ("guests") with DnA molecules (intercalation between base pairs, fixation in the grooves on the molecule surface, etc.). The distortion of the secondary DnA structure accompanying this interaction affects not only the properties of all quasinematic layers, but also the character of the interaction between them (hence, the structural features of any cLcD particle and its properties as well). Since the properties of the quasinematic layer(s) are determined by the physicochemical properties of DnA cLcD particles, we will use this very term when reporting further results. Finally, complete separation of the chains of double-stranded DnA molecules in a quasinematic layer and their folding into individual random coils is infeasible for steric reasons [27,28].
These features of the quasinematic layer allow one to hypothesize about the possible mechanisms of fixation of Au nanoparticles ("guests") near the double-stranded DnA molecules of the quasinematic layer (Fig. 2).
First, Au nanoparticles of any size (Figs. 2A-C) can interact both with the "surface" DnA molecules and with the base pairs of the terminal groups of DnA molecules in the quasinematic layers, thus forming complexes (ensembles) with them [13,[29][30][31]. Second, it is quite possible that Au nanoparticles, whose size is comparable to the distance between the DnA molecules in the quasinematic layer, can diffuse inside the layers (Fig. 2D), interact with the neighboring DnA molecules within the same quasinematic layer or neighboring quasinematic layers, and form linear clusters.
One can assume that binding even of a small number of negatively charged Au nanoparticles to DnA molecules (in particular, to the terminal groups in these molecules) results in dipole formation (it should be mentioned there is no need for penetration of Au particles into the quasinematic layer). Dipoles from the neighboring (DnA-Au) complexes within a quasinematic layer, as well as the layers, will tend to be organized in parallel fashion, which can eventually induce a change in the helical twisting of the neighboring quasinematic layers made of DnA molecules. the twist angle between these layers (~ 0.1 о [7]) can fall to zero, which is equivalent to untwisting of the cholesteric helical structure, and this process will manifest itself as the attenuation of the abnormal band in the cD spectrum of liquid-crystalline dispersion particles.
It is obvious that although it has no significant effect on the forces (sterical, etc.) that determine the tendency of the neighboring DnA molecules to organize in a parallel fashion, even a small number of negatively charged Au nanoparticles can induce changes in the contributions (in particular, anisotropic contribution to the van der Waals interaction) that control the helical twisting of the neighboring quasinematic layers of DnA molecules. In this case, the helical twisting of the neighboring quasinematic layers will be disturbed and the twist angle between these layers (~ 0.1 о [7]) can be equal to zero, which is equivalent to untwisting of the cholesteric helical structure accompanied by attenuation of the abnormal band in the cD spectrum of liquidcrystalline dispersion particles. therefore, it can be expected that if negatively charged Au nanoparticles somehow interact with double-stranded DnA molecules in cLcD particles, this interaction will be accompanied by changes in the abnormal optical activity, which is characteristic for this dispersion.
It is also quite possible that when neighboring Au particles localize near DnA molecules in a certain fashion, interaction between these nanoparticles can result in the emergence of a surface plasmon resonance band in the absorption spectrum [1,13,19].
Changes in circular dichroism spectra caused by the treatment of DNA CLCD particles with Au nanoparticles
Treatment of DnA cLcD particles with Au nanoparticles results in a decrease in the amplitude of the abnormal negative band in the cD spectrum (Fig. 3). The fact that the band has a negative sign indicates that right-handed helical double-stranded DnA molecules give rise to a left-handed helical structure of the cLcD particles [7]. Due to the effect of Au nanoparticles, the amplitude of the abnormal band in the cD spectrum of DnA cLcD decreases within a rather short period of time. The decrease in the amplitude of the abnormal band in the cD spectrum of DnA cLcD particles becomes pronouncedly stronger as the concentration of Au nanoparticles in the solution increases. It should be mentioned that noticeable changes in the amplitude of the abnormal band in the cD spectrum of DnA cLcD start at a critical concentration of Au nanoparticles in solution of approximately 1,000 Au nanoparticles per DnA cLcD particle (Fig. 3, inset).
Similar data characterizing the decrease in the abnormal band in the cD spectrum of cLcD formed by synthetic double-stranded poly(I)×poly(c) molecules caused by treatment with Au nanoparticles were presented in [6]. It should be mentioned that the emergence of a positive band in the cD spectrum of this cLcD attests to the fact that the right-handed helices of double-stranded poly(I)×poly(c) molecules form cLcD particles with right-handed twisting of their spatial helical structure. the rapid decrease in the amplitude of the band in the cD spectrum of DnA cLcD depends on the size of Au nanoparticles. In particular, if Au nanoparticles are 2 nm in diameter, the amplitude of the abnormal band in the cD spectrum decreases by 75%, whereas when 15-nm diameter nanoparticles are used, it decreases by only 20% [32].
the decrease in the amplitude of the band in the cD spectrum of DnA cLcD is also dependent on the temperature of the solution where the dispersion particles are treated with Au nanoparticles [32].
In combination with the differences in the efficiency of the changes in the cD spectrum for nanoparticles of different sizes, the scheme shown in Fig. 2 allows to assume that there are two reasons for the decrease in the abnormal optical activity of DnA cLcD or poly(I)×poly(c) cLcD particles. First, individual Au nanoparticles of any size ( Figs. 2A-C) can interact with the "surface" DnA molecules to yield complexes or linear ensembles (clusters). In this case, small-sized Au nanoparticles can localize in the grooves of the "surface" DnA molecules [31,33] or form complexes with pairs of DnA nitrogen bases (in particular, with n7 atoms of purines [34][35][36][37]). Second, Au nanoparticles whose sizes are comparable to the distance between the DnA molecules in quasinematic layers can diffuse inside the layers to interact with DnA molecules. It is important to mention two aspects here. 1) It was found as early as in the first experiments [13,38,39] that Au nanoparticles can form ensembles near the surface of linear single-stranded DnA molecules. ensemble formation from Au nanoparticles was subsequently shown to be accompanied by the formation of planar suprastructures consisting of repeating double-stranded DnA molecules and Au nanoparticles. these results demonstrate unambiguously that, after they interact with Au nanoparticles, DnA molecules tend to form planar suprastructures [30,39,40], despite the fact that the original DnA molecules possess anisotropic properties [7]. 2) In case of cLcD particles of double-stranded DnA molecules, formation of an ensemble even of a small number of Au nanoparticles on "surface" DnA molecules or near DnA molecules in quasinematic layers will result in changes in the character of the interaction between neighboring quasinematic layers. this can result in the attenuation of the helical twisting of the neighboring layers; i.e., the spatial helical structure of cLcD particles will untwist.
With allowance for the formation of planar structures considered above, it can be stated that Au nanoparticles (in the case of cLcD particles) initiate a parallel (rather than helical) arrangement of the neighboring quasinematic layers of DnA molecules. Regardless of the aforementioned reasons, the combination of the control experiments (Fig. 1) and the results obtained (Fig. 3) allows one to suggest that the action of Au nanoparticles is directed towards the double-stranded DnA molecules fixed in the cLcD particles. Meanwhile, the rapid decrease in the abnormal band in the cD spectrum can be attributed to binding of an appreciably small number of Au nanoparticles to the DnA molecules in cLcD particles. This process is accompanied by the disturbance of the helical mode of ordering in the neighboring quasinematic layers; i.e., Au nanoparticles induce a transition similar to the known cholesteric → nematic transition [7]. Thus, the changes in the cD spectra of DnA cLcD (or poly(I)×poly(c) cLcD) indicate that Au nanoparticles of different sizes can interact with the double-stranded molecules of nucleic acids or synthetic polynucleotides within cLcD particles (the efficiency of the interaction may vary), although most of the details of the mechanism underlying the interaction remain unclear.
Changes in the absorption spectra caused by the treatment of DNA CLCD particles with Au nanoparticles
The analysis of the absorption spectra of Au nanoparticles permits an assessment of the size of the ensembles formed by these particles under various conditions [41][42][43][44].
noticeable changes both in the visible and in the uV spectral regions are observed after DnA cLcD particles are treated with small-sized Au nanoparticles (Fig. 4A). this treatment is primarily accompanied by changes in band (I) at 550 nm (SPr band) [41,42]. Figure 4B shows the data obtained by treating cLcD formed by poly(I)×poly(c) molecules (their particles are characterized by left-handed twisting of the spatial structure) with Au nanoparticles. It is clear that treatment with Au nanoparticles in this case is also accompanied by the development of the plasmon effect.
The emergence of the SPr band is responsible for the pink-violet color of the solution containing DnA cLcD or poly(I)×poly(c) cLcD and treated with Au nanoparticles. The control experiments (Fig. 1) have demonstrated that the band at ~505 nm is poorly pronounced in the absorption spectrum of Au nanoparticles and remains almost unchanged when solvent properties are varied. The intensity of the SPr band gradually increased over time; its maximum shifted from λ ~ 505 to ~ 550 nm. Meanwhile, the amplitude of band (II) in the uV region of the spectrum corresponding to the absorption of DnA chromophores decreases over time. It should be also mentioned that according to theoretical calculations [45], similar changes in bands (I) and (II) in the absorption spectrum are responsible for the increase in the volume fraction of Au nanoparticles in the ensemble formed by these particles. It is characteristic that the treatment of DnA cLcD particles with Au nanoparticles 5 and 15 nm in diameter does not result in any changes in the absorption spectra of these nanoparticles. This fact gives ground to hypothesize that there are noticeable differences in the mechanisms of action of small- and large-size Au nanoparticles on DnA cLcD particles. Indeed, it can be seen from the scheme shown in Fig. 2 that Au nanoparticles of any size (A-C) can localize near the "surface" DnA molecules of the quasinematic layer and form linear ensembles. Formation of these ensembles even from a small number of Au nanoparticles can be accompanied by the enhancement of the SPr band [1].
It is important to note that the emergence of the plasmon effect does not require direct contact between neighboring Au nanoparticles, and the plasmon effect can be observed as long as the distance between the neighboring nanoparticles is shorter than the wavelength of the incident light [1].
the absence of changes in the absorption spectrum of cLcD particles after they are treated with Au nanoparticles 5 and 15 nm in diameter, in combination with the scheme given in Fig. 2, allows one to assume that in addition to the known fact that Au nanoparticles are ordered near single-stranded or linear double-stranded DnA molecules [29-31, 39, 40], there is a different mechanism of arrangement of small-sized Au nanoparticles in DnA cLcD particles. the evolution of the SPr band during the treatment of DnA cLcD with Au nanoparticles lasts for ~100 min (Fig. 5); then, its saturation occurs. the direct proportional dependence between the amplitude of the SPr band (until the saturation point) and the t 0.5 value is retained. under the assumption that the amplitude of the SPr band is associated with the concentration of Au nanoparticles in the resulting ensemble, the dependence shown in the inset (Fig. 5) represents the diffusion of Au nanoparticles [46] into the quasinematic layers of cLcD particles. Figure 6 (inset) shows the dependence between the position of the SSr band maximum on the size of spherical Au nanoparticles, which was constructed by averaging the published data [40][41][42][43]. It was demon- Fig. 6. Position of the surface plasmon resonance (SPR) peak as a function of the size of the linear clusters of Au nanoparticles, which are formed in the spatial structure of DNA CLCD particles. Symbol (♦) shows the data for the linear cluster of Au nanoparticles formed within the spatial structure of poly(I)×poly(C) CLCD particles. Inset: dependence of the position of the SPR peak on the diameter of spherical Au nanoparticles (the average data are taken from [40][41][42][43]) strated by comparing the results shown in Fig. 4 with this dependence that the size of Au nanoparticles after their binding to DnA cLcD particles has the potential to increase from 2 to ~60 nm. Although this estimation is not consistent enough, since the dependence characterizes the properties of Au nanoparticles of spherical shape, it still can be used for comparative assessment of the size of Au nanoparticles formed under various conditions. the results presented in [6] and characterizing lowangle X-ray scattering from the phases formed by DnA cLcD particles treated with Au nanoparticles allow one to make a more accurate estimation of the particle size. these results indicate that linear clusters of Au nanoparticles with a maximum size of 40 nm are formed within the structure. the SPr band is characterized by a maximum at λ ~ 550 nm [6]. the dependence of the position of the SPr peak on the linear size of Au clusters (Fig. 6) can be constructed using these findings (i.e., it directly describes Au nanoparticle clusters formed upon interaction between Au nanoparticles and particles of cLcD of various nucleic acids). It is clear that the actual size of the resulting ensemble (the linear cluster of Au nanoparticles) for DnA increases from 2 to 40 nm. treatment of poly(I)×poly(c) cLcD with Au nanoparticles results in an increase in the size of Au nanoparticles up to 34 nm (these data are indicated by ♦ symbol on the X axis in Fig. 6).
It should also be mentioned that the size of the linear clusters of Au nanoparticles was never higher than 40 nm under the experimental conditions used (negatively charged Au particles, high ionic strength of solutions [47,48], etc.). The results presented in Figs. 5 and 6 enable one to analyze more thoroughly the diffusion mechanism of formation of Au nanoparticle clusters. Since the concentration of Au nanoparticles "outside" DnA cLcD particles is higher than that "inside" (i.e., between the quasinematic layers), the concentration gradient induces the emergence of a diffusion flow of Au nanoparticles. The flow stops when the concentrations "outside" and "inside" DnA cLcD particles become equal. If the characteristic time of attaining this equilibrium is t, the size of a cluster formed by the diffused Au nanoparticles increases as the square root of time (i.e., as t^0.5). One can expect this process to be hindered by the lower translational entropy of the Au nanoparticles concentrated inside a cluster (i.e., in the «free space» between the quasinematic layers) as compared to that of the Au nanoparticles which are freely distributed over the solution. Since the entropy factor is proportional to k_B T, the size of the Au nanoparticle clusters formed in nucleic acid cLcD particles will decrease with increasing temperature of the solution.
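The t^0.5 dependence invoked above is what one expects for simple diffusion-limited growth; as a rough sketch (with D an effective diffusion coefficient that is not measured in this work),
$$ L(t) \simeq \sqrt{2Dt}, $$
so that, under the assumption already made in the text that the SPr amplitude tracks the number of nanoparticles in a linear cluster, $A_{SPR}(t) \propto L(t) \propto t^{0.5}$ before saturation.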
thus, in our case the shift in the position of the SPr band is associated with the size of the linear Au nanoparticle clusters within nucleic acid cLcD particles the problem of estimating Au nanoparticles in a cluster based on the results of optical changes remains unsolved, since the position of the SPr peak depends on the number and distance between the Au nanoparticles in a cluster, the dielectric permittivity of the medium, and other parameters [19].
With allowance for these results and the hypothetical scheme (Fig. 2) showing all possible ways for Au nanoparticles to bind to the DnA molecules fixed in the structure of cLcD particles, as well as for the changes in the amplitudes of the bands localized in different regions of the absorption spectrum (Fig. 4), which were not observed in the control experiments with single-stranded polynucleotide or double-stranded DnA molecules under conditions impeding their condensation (Fig. 1), one can consider that small-sized Au nanoparticles (2 nm) can form linear clusters in cLcD particles.
Although capable of interacting with the "surface" DnA molecules (Figs. 2A,B) or terminal groups of DnA molecules (Fig. 2C) in quasinematic layers, Au nanoparticles 5 and 15 nm in diameter are too large to be incorporated between the DnA molecules in these layers. Figure 7 shows the curves that characterize the rate of changes in the amplitude of the abnormal band in the cD spectrum of DnA cLcD, of the SPr band, and of the band located in the uV region of the absorption spectra after the dispersion is treated with small-sized Au nanoparticles. It is clear that the treatment of DnA cLcD with Au nanoparticles is accompanied by two simultaneous processes: a fast decrease in the abnormal optical activity of DnA cLcD and a slower evolution of the SPr band. the process recorded on the basis of the changes in the abnormal band in the cD spectrum lasts 10-15 min, whereas the evolution of the SPr band requires approximately 60 min.
thus, in addition to the fast interaction between Au nanoparticles (of any size) and DnA cLcD particles (which is required to change their abnormal optical activity to a certain extent), incorporation of small-sized Au nanoparticles in the structure of cLcD particles yielding Au nanoparticle clusters is also possible.
Absorption and CD spectra obtained for CLCD particles with DNA molecules cross-linked by nanobridges treated with Au nanoparticles
An important issue is where the Au nanoparticle clusters localize. It can be assumed that small-sized Au nanoparticles diffuse into the "free space" between neighboring DnA molecules in the quasinematic layers of cLcD particles to cluster there. this process is accompanied by the emergence and evolution of the SPr band (Fig. 4).
In order to verify this assumption, the "free space" between the neighboring DnA molecules in cLcD particles was filled with appreciably strong nanobridges [49] consisting of alternating antibiotic molecules and copper ions (Fig. 8). this process resulted in the formation of a DnA nanoconstruction. In this case, the "free space" becomes inaccessible for diffusion and clustering of Au nanoparticles.
If the assumption about the localization of Au nanoparticle clusters is valid, treatment of the DnA nano-construction with Au nanoparticles will not result in any changes in the bands located both in the uV and visible regions of the absorption spectrum. Indeed, it is clearly seen in Fig. 9A that no significant changes in the absorption spectrum of the nanoconstruction obtained from cLcD particles due to the formation of nanobridges between DnA molecules are observed and that SPr band (I) does not evolve in this case. Meanwhile, band (II) in the uV region of the spectrum remains virtually intact. this means that small-sized Au nanoparticles cannot insert themselves between the neighboring DnA molecules in quasinematic layers, since the "free space" is occupied by nanobridges [49].
One can focus on the fact that the nanobridges increase the rigidity of the spatial structure of the nanoconstructions [49]. Hence, although "surface" DnA molecules in particles of nanoconstructions are available for interacting with Au nanoparticles, the untwisting process (in the case when a nanoconstruction is treated with Au nanoparticles) accompanied by a decrease in the abnormal band in the cD spectrum of the nanoconstructions will require a longer period of time and can be terminated even at a smaller "depth" of this process. the cD spectra of the original DnA cLcD (dashed curve 6), DnA nanoconstruction (i.e., cLcD with the neighboring DnA molecules cross-linked via nanobridges; curve 1), and the same nanoconstruct treated with Au nanoparticles (curves 2-5) are compared in Fig. 9B. It is clear that the formation of a DnA nanoconstruction from the original cLcD is accompanied by amplification of the band in the uV region and the emergence of an additional band in the visible region of the spectrum, which is caused by the formation of nanobridges containing chromophores absorbing within this wavelength range [49]. the amplification indicates that the twist angle of the neighboring quasinematic layers increases due to the formation of nanobridges [7]. After the nanoconstruct is treated with Au nanoparticles at a high concentration (С nano-Au = 0.82 × 10 14 particles/ml), the amplitude of the bands in the uV and visible regions of the spectrum decreases despite the fact that the absorption spectrum does not contain the SPr band. Figure 10 shows a comparison of the kinetic curves characterizing the changes in the abnormal optical activity caused by treatment of the original DnA cLcD and DnA nanoconstructions with Au nanoparticles. It is clear that the depth and rates of these processes are different for the original DnA cLcD and DnA nanoconstructions, which supports the thesis that the bridges play a stabilizing role.
the results shown in Fig. 9 additionally demonstrate that small-sized Au nanoparticles can interact with the "surface" molecules of double-stranded DnA, thus inducing the cholesteric → nematic transition, even if nanobridges form between the neighboring DnA molecules, but cannot diffuse between DnA molecules in the quasinematic layers, since the "free space" is filled with nanobridges. thus, the SPr band can emerge and evolve only if there is "free space" between DnA molecules in quasinematic layers. It is in this very space that Au nanoparticle clusters are formed.
We previously demonstrated that the interaction between Au nanoparticles and the "surface" DnA molecules in cLcD particles induces changes in the helical spatial distribution of neighboring quasinematic DnA layers (i.e., formation of the nematic structure). It is possible that the probability of one (or several) right-handed helical double-stranded DnA molecule rotating 180° with respect to its neighbor(s) due to rotational diffusion in the quasinematic layers located at nanodistances increases at this very moment. In this case, the reactive groups of a DnA molecule (1) localize in the "free space" facing the identical groups of its neighbor (2), which can be referred to as a type of face-to-face phasing of the reactive groups of DnA molecules. Therefore, clustering of negatively charged Au nanoparticles in the "free space" between DnA molecules (Fig. 2) may result from two processes. First, Au nanoparticles may diffuse into the "free space" between the neighboring "phased" DnA molecules (1 and 2) (in this case, it is a one-dimensional diffusion of Au nanoparticles between these DnA molecules). Second, the interaction between a DnA particle in the quasinematic layer and a negatively charged small-sized Au nanoparticle can be conditionally regarded as the equivalent interaction between a plane and a spherical particle [50]. In this case, the interaction of the Au nanoparticle can be determined by the so-called Casimir effect [51][52][53][54].
For either version of the processes discussed above (provided that the experimental conditions are fixed), one can assume that Au nanoparticles can form linear clusters between DnA molecules (direct contact between neighboring Au nanoparticles in clusters can be absent) [55]. the clustering of Au nanoparticles is accompanied by the evolution of the SPr band.
thus, different processes can determine "sliding" ("retraction") of Au nanoparticles into the "free space" between neighboring DnA molecules in quasinematic layers.
Thus, if one accepts the hypothesis of the ordering mechanism of negatively charged Au nanoparticles in quasinematic layers, it becomes clear why small-sized Au nanoparticles form clusters only in cLcD particles comprising double-stranded molecules of nucleic acids or synthetic polyribonucleotides (poly(I)×poly(c)).
CONCLUSIONS
These findings demonstrate that small-sized Au nanoparticles form clusters in the "free space" between the neighboring double-stranded DnA molecules fixed in the spatial structure of cLcD particles. This conclusion allows one to regard a DnA cLcD particle as a matrix that specifically adsorbs small-sized Au nanoparticles and provides conditions for the formation of linear clusters from these nanoparticles. The cytotoxicity of Au nanoparticles can presumably be attributed to their tendency to cluster.
"Chemistry",
"Materials Science"
] |
A NEW EXPLANATION OF DEFLECTION RESULTS OF CHARGED PARTICLES IN HIGH-VELOCITY MOTION IN MAGNETIC FIELD·CORRECTION OF LORENTZ FORCE
It is incorrect to apply only the mass change or only the time change in explaining the deflection of a charged particle in high-velocity motion in a magnetic field. A scientifically correct method is to change the mass and the time at the same time. However, the existing force formula cannot be reconciled with a simultaneous change of mass and time. This paper makes a correction of the Lorentz force formula based on an analysis of the acting force in the electric field, and arrives at a new understanding of the deflection of charged particles in high-velocity motion in a magnetic field according to the corrected Lorentz force formula.
INTRODUCTION
It is known to all in the field of physics that a charged particle moving through a magnetic field undergoes deflection. The general explanation of the phenomenon is that when a charged particle with a charge Q moves through a magnetic field at a velocity V along the direction of x, Q undergoes the Lorentz force F = BQV (B is the magnetic induction) along the direction normal to V. F makes the charged particle, with mass M and charge Q, generate an acceleration a = F/M along the direction normal to V (the direction of Y), so that Q undergoes a migration Y along the direction normal to V, namely the deflection Y. Suppose the time for Q to move through the magnetic field is t; then the expression for the deflection Y is
Y = at²/2 = Ft²/(2M).   (1)
Expression (1) is derived under the condition that the direction of F is unchanged; although the directions of the velocity V and of F (F is normal to V) change to a certain extent while the particle moves through the magnetic field, t is very short, so the changes in F and V are also minor; accordingly, we can consider that there is no change in the directions of F and V and that the error of Y may be ignored.
Expression (1) is practical on condition that the velocity V is low, but if V is very high there is an obvious deviation between the calculated result of (1) and the measured result, and a higher V brings about a larger deviation. Experiment shows that when V is very high the deflection distance is
Y = (Ft²/(2M))·√(1 − V²/c²),   (2)
that is, replacing M in expression (1) with M/√(1 − V²/c²) yields the practical expression (2).
Another method is to carry out the analysis based on the change of the particle momentum. Suppose the migration velocity of the particle along the direction of Y is u; the particle mass is M/√(1 − V²/c²), so the particle momentum along the direction of Y is p = Mu/√(1 − V²/c²), and the acting force on the particle along the direction of Y is F = dp/dt = (M/√(1 − V²/c²))·du/dt. It is observed that du/dt in this analysis method is in substance the constant acceleration a, so the deflection it determines is still Y = at²/2, and the key to this analysis method is again the mass change; there is no substantial difference between this method and the first method. Compared with the first method, it is obvious that this method lacks clear physical significance.
There is another method, specified in Berkeley Physics. This method defines τ as the particle's clock (proper) time and takes the mass to be the rest mass M. If Y is the displacement of the particle along the direction of Y during t, the momentum is p = M·dY/dτ. It is observed that the time for the particle to undergo the displacement along the direction of Y is τ; according to special relativity, τ = t·√(1 − V²/c²); therefore p = Mu/√(1 − V²/c²), and the momentum given in this result is identical with that obtained in the previous method. It is natural to come to a conclusion identical with that obtained by expression (2) according to the deduction steps given in the previous method.
It is observed that the first and second methods apply only the mass change; although they do not state that they adopt only the mass change, in fact the time is left unchanged, being a constant. The third method applies only the time change, specifying that the mass is unrelated to motion, being a constant. However, according to special relativity, the mass of a particle in high-velocity motion and the time change simultaneously; accordingly, it is improper to suppose that the mass is unchanged or that the time is unchanged, because this is in violation of special relativity and of the practical result.
In accordance with special relativity, the correct method for analyzing the deflection of the particle is to use both the mass change and the time change, namely M → M/√(1 − V²/C²) and t → t√(1 − V²/C²). Substituting these into formula (1) gives Y = BQVt²(1 − V²/C²)^(3/2)/(2M) (3). It is obvious that formula (3) is not in line with the result of (2). Why? It is certain that formula (1) is the basic formula of mechanics, and this must be affirmed; besides, it is beyond doubt that mass and time must be changed at the same time. Therefore, the only possibility is that formula (3) does not coincide with reality because the Lorentz Force F = BQV fails to express the actual force.
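To make the mismatch concrete, the following minimal numerical sketch (an addition, not part of the original paper) evaluates the reconstructed formulas (1), (2) and (3) for an electron-like particle; the specific values of B, Q, M, t and V are illustrative assumptions only.

```python
import math

# Illustrative, assumed values (not from the paper): an electron in a 0.01 T field.
B = 0.01            # magnetic induction, T
Q = 1.602e-19       # charge, C
M = 9.109e-31       # rest mass, kg
C = 2.998e8         # speed of light, m/s
t = 1.0e-9          # transit time through the field, s

for V in (0.1 * C, 0.5 * C, 0.9 * C):
    beta2 = (V / C) ** 2
    Y1 = B * Q * V * t**2 / (2 * M)          # formula (1): low-velocity expression
    Y2 = Y1 * math.sqrt(1 - beta2)           # formula (2): measured result (mass change only)
    Y3 = Y1 * (1 - beta2) ** 1.5             # formula (3): mass and time changed together
    print(f"V = {V/C:.1f}C  Y1 = {Y1:.3e}  Y2 = {Y2:.3e}  Y3 = {Y3:.3e}")
# For high V, Y3 falls well below Y2, which is the discrepancy the paper sets out to resolve.
```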
The Lorentz Force F = BQV is only an experimental conclusion about the force on charged particles moving at low velocity in a uniform magnetic field; it does not have a necessary theoretical explanation. To analyze whether the Lorentz Force is in line with reality or not, it is necessary to know the origin and the rules of change of the Lorentz Force. Therefore, we have to illustrate the root of the Lorentz Force from theory; only then can we analyze and solve the actual problem in formula (3).
The original analysis of Lorentz Force
Experiments show that an electric charge is acted on by a force in an electric field, and the electric field can be considered the only origin of the force on a charge. Therefore, the electric field force can be taken as the basic starting point in analyzing the Lorentz Force.
It can be known from the superposition principle of the static electric field that, whatever the distance between a positive and a negative charge in space, the total electric field of the positive and negative charges must be equal to the vector sum of the fields each would produce alone. It can thus be deduced that every free electron in a conductor (hereinafter called a negative charge) and every proton carrying an equal quantity of electricity (hereinafter called a positive charge) has its own electric field. Therefore, when there is a current in a conductor, namely when a macroscopic motion of the negative charges in the conductor exists, the electric field of the negative charges must move with them. The macroscopic effect is that a moving negative electric field exists around the conductor together with the static positive electric field. In a magnet, the electrons spinning in the same direction within a magnetic domain are similar to the macroscopically moving negative charges in a conductor; therefore, a moving negative electric field and a static positive electric field also exist in a magnetic domain.
First of all, we shall analyze the relationship between the magnetic field and the electric field. The Biot-Savart law shows that the magnetic induction produced by any current element I dl (a small segment of wire in the direction of the current) at a point at distance r from it is dB = (μ0/4π) I dl × r̂/r². Since the current I is actually formed by the free electrons in the wire moving at the speed V (in the direction opposite to the current I), taking λ as the linear charge density in the wire we have I dl = V dQ (where dQ = λ dl is the quantity of free charge in dl), and because dE = dQ r̂/(4πε0 r²) is the electric field intensity of dQ at that point and μ0ε0 = 1/C², the formula can be written as dB = (V × dE)/C² (4). The formula shows that the magnetic induction at some place is equal to the vector product of the kinematic velocity V and the electric field intensity, divided by C²; that is, the magnetic field is a moving electric field.
Picture 1 Origin analysis of Lorentz Force
A plane consisting of current-carrying wires is defined as a current surface. In Picture 1, A and B indicate two sections of "infinite" current surfaces; the current surfaces A and B are made up of ordinary current-carrying wires, and the currents in A and B are equal but opposite in direction. We now evaluate the magnetic induction between the two current surfaces A and B. It can be seen from formula (4) that B originates from a moving electric field: the motion of the negative charges in surface A produces a moving negative electric field E_A, and likewise for surface B, while the positive charges remain static; the total magnetic induction produced by the current surfaces A and B follows from formula (4). In Picture 1, Q is a positively charged particle moving at the speed V parallel to the two current surfaces, and the Lorentz Force borne by the particle of charge Q is F = BQV (5). When we analyze the source of the Lorentz Force, the natural starting point is the source of the force on Q; there is no doubt that the only source of the force on Q is the action of the electric field. For the moving Q, the negative electric field of surface A moves relative to Q at the velocity V + u (u being the drift speed of the electrons forming the currents), while that of surface B moves relative to Q at V − u. Since the field density is inversely proportional to the spacing of the field lines, the field moving faster relative to Q is contracted more strongly and its field intensity is correspondingly magnified. Under general conditions V and u are far less than C, so the magnified field intensities can be simplified to the lowest order. For the moving Q, the total field intensity between the current surfaces A and B is the difference of these two contributions, and the force it exerts on Q is exactly the Lorentz Force formula (5) given above. It can be concluded from the above analysis that when Q moves in a magnetic field at the velocity V, the two current surfaces present to Q two moving electric fields E_A and E_B whose velocities relative to Q are V + u and V − u respectively; the field moving at V + u contracts more and has the more intensive field intensity, while the field moving at V − u contracts less and has the less intensive field intensity; a resultant electric field is formed by their difference, and its force on Q is just the Lorentz Force.
Correction of Lorentz Force
It can be seen from the above derivation of the Lorentz Force formula that the derivation does not hold when the velocity of Q is very large (approaching the velocity of light C). There are two reasons: one is that when V is very large, the terms V⁴/C⁴, V⁶/C⁶, … can no longer be neglected, so the above derivation naturally does not hold; the other is that when V is very large, velocities cannot simply be added or subtracted, so the above derivation again does not hold. Now let us make a further analysis of the force on a charged particle in high-velocity motion (denoted by the charge Q) in accordance with the velocity-composition method given in Special Relativity.
The composition of velocities given in Special Relativity refers to the velocity components in the directions of X, Y and Z. For the condition given in Picture 1, the velocity in the direction of Z does not exist and the migration velocity in the direction of Y is very slow; compared with V they can be neglected. Therefore, it can be considered that there is no velocity component in the direction of Y either, and we only need to analyze the composed velocity in the direction of X, whose formula is u′_x = (u_x − v)/(1 − u_x v/C²), where v is the velocity of the other inertial system Z′ observed from the inertial system Z in which the subject M is at rest, u_x is the velocity of the subject N, and u′_x is the composed velocity of N relative to M. For Picture 1 we wish to obtain the force on Q (the subject M); Q belongs to the inertial system Z and the two current surfaces belong to the inertial system Z′. The drift velocity u of the negative electrons that form the moving electric fields of the two current surfaces A and B is far smaller than C (the velocity of the electrons in a magnetic domain is also small); therefore u²/C² is very small, the corresponding factors can be reckoned as 1, and the composed velocities can be simplified accordingly. The field intensity felt by Q then follows as formula (11). For the sake of obtaining the force exerted by the electric field, namely the Lorentz Force, when Q is moving at medium and low speed, the force F_V on Q exerted by the electric field can be concluded from formula (11) as formula (12). It is shown by the former formula (5) that the Lorentz Force on Q is F = BQV when it is moving at medium and low speed; putting this into formula (12) gives F_V = BQV/√(1 − V²/C²) (13). Formula (13) is the corrected formula of the Lorentz Force. It is obvious that formula (13) reduces to the ordinary Lorentz Force when Q is moving in the magnetic field at medium and low speed, so it is qualified at both high and low speed.
Interpretation of charged particles' deflection with the help of corrected formula for Lorentz Force
The corrected Lorentz Force F_V on Q obtained from the above analysis is just the electric field force; it is static relative to Q and moves together with Q at the same speed V. For the observer static relative to the magnetic field (the two current surfaces), the force F_V is a moving one. I have mentioned in "Set up Invariable Axiom of Force Equilibrium and Solve Problems about Transformation of Force and Gravitational Mass" (Reference 3) that the force on a moving subject also changes, just like its length, mass and time. The conversion formula for a force moving at the speed V is formula (14), in which θ is the included angle between F and V; for the present case, in which F_V is normal to V (θ = 90°), the conversion gives the force observed in the magnetic-field frame as F = F_V/√(1 − V²/C²).
This F is just the real force exerted on the charged particle that we wish to discuss. Putting the real force F = F_V/√(1 − V²/C²) = BQV/(1 − V²/C²) into formula (3) in place of the Lorentz Force gives Y = BQVt²√(1 − V²/C²)/(2M), so the deflection distance Y is just as determined in the laboratory, namely the true deflection-distance formula (2).
It is shown by the above analysis that the problem of the deflection of a charged particle moving at high speed can be solved reasonably, provided that its mass and time are changed at the same time, with the help of the corrected Lorentz Force in combination with the force transformation.
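As a cross-check of the reading adopted above, the short sketch below (an addition, not the author's code) substitutes the corrected force F_V = BQV/√(1 − V²/C²), transformed by the assumed perpendicular-force rule F = F_V/√(1 − V²/C²), into the structure of formula (3) and compares the result with formula (2); the symbols and the transformation rule are as reconstructed here, not quoted from the source.

```python
import math

def deflections(B, Q, M, C, t, V):
    beta2 = (V / C) ** 2
    g = math.sqrt(1 - beta2)
    Y2 = B * Q * V * t**2 * g / (2 * M)          # formula (2): measured deflection
    F_v = B * Q * V / g                          # reconstructed corrected Lorentz force (13)
    F_real = F_v / g                             # assumed transformation for F perpendicular to V
    Y_check = F_real * t**2 * g**3 / (2 * M)     # structure of formula (3) with the real force
    return Y2, Y_check

Y2, Y_check = deflections(B=0.01, Q=1.602e-19, M=9.109e-31, C=2.998e8, t=1e-9, V=0.9 * 2.998e8)
print(Y2, Y_check, math.isclose(Y2, Y_check))    # the two expressions coincide by construction
```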
Analysis of the particle's deflection from the inertial system at rest relative to the charged particle
According to the relativity principle given in Special Relativity, the observers of the inertial system Z (moving at the same speed as Q) and of the inertial system Z′ (at rest relative to the magnetic field, i.e., the static current surfaces) must of course obtain the same result for the deflection of Q in the direction of Y as it passes through the magnetic field.
The former analysis is the conclusion of the observer at rest relative to Z′; in that conclusion, the real force is obtained from F_V with the help of the corrected Lorentz Force and the force transformation, and the analysis result is in compliance with fact. Z is moving relative to Z′ at the speed V, so it can be reckoned that F_V is static relative to Z (with the slight movement of F_V in the direction of Y neglected), and there is no need to transform F_V any longer. Will this change of conditions have any influence on our analysis of the corresponding deflection? Let us make a detailed analysis of it.
For the observer Z, Q is static and the magnetic field (inertial system Z′) is passing over Q at the speed V; a downward force F_V is exerted on Q, and Q moves a distance Y downwards (the deflection). Because the speed of Q in the direction of Y is very slow, the mass of Q is still the rest mass, namely no change occurs to it; besides, the force on Q is a static one, namely the corrected Lorentz Force F_V, so the deflection is Y = F_V t_V²/(2M) (15). What is the relationship between t_V and the time t observed by Z′? First of all, we note that the speed V of Z relative to Z′ and the speed of Z′ relative to Z are the same. Z′ observes that t is the time for Q to pass through the magnetic field at the speed V, namely t = L/V; for Z, the magnetic field of width L is moving at the speed V, and according to Special Relativity the width of a magnetic field moving at the speed V is L√(1 − V²/C²); therefore the time taken by the magnetic field to pass over Q, as observed by Z, is t_V = L√(1 − V²/C²)/V = t√(1 − V²/C²). Putting this into formula (15) gives Y = F_V t_V²/(2M) = BQVt²√(1 − V²/C²)/(2M), so the relationship obtained is also that of the real deflection-distance formula (2).
From the above analysis, the deflection results for a charged particle moving at high speed obtained by Z and by Z′ are the same, which is in compliance with the Relativity Principle.
Application scope of Lorentz Force
I hereby point out that the force on Q obtained from the analysis in terms of the moving electric field and the force given by the Lorentz Force formula are the same only in a uniform magnetic field; under normal conditions, the two results derived from the two methods will not be the same in a non-uniform magnetic field. | 4,068.4 | 2015-08-12T00:00:00.000 | [ "Physics" ] |
Noncanonical Inflammasomes: Caspase-11 Activation and Effector Mechanisms
Inflammasomes are cytosolic, multiprotein complexes assembled by members of the NOD-like receptor (NLR) and PYHIN protein families in response to pathogen-associated molecular patterns (PAMPs) and danger signals, and serve as activation platforms for caspase-1. Recently, a new noncanonical inflammasome pathway has been described that activates caspase-11, an understudied pro-inflammatory caspase. Despite new insights into the signaling events that control caspase-11 activation, a number of unanswered questions remain...
Introduction
Inflammasomes are cytosolic, multiprotein complexes assembled by members of the NOD-like receptor (NLR) and PYHIN protein families in response to pathogen-associated molecular patterns (PAMPs) and danger signals, and serve as activation platforms for caspase-1. Recently, a new noncanonical inflammasome pathway has been described that activates caspase-11, an understudied pro-inflammatory caspase. Despite new insights into the signaling events that control caspase-11 activation, a number of unanswered questions remain.
What Are the Signals That Trigger Noncanonical Inflammasome Activation?
Activation of the noncanonical inflammasome pathway has been observed in response to a number of Gram-negative bacteria (Citrobacter rodentium, Escherichia coli, Vibrio cholerae, Salmonella typhimurium, and others), but not with Gram-positive bacteria [1][2][3]. This distinction suggests that lipopolysaccharide (LPS), a component of the outer membrane of Gram-negative bacteria, could be the activator of caspase-11. Nevertheless, although LPS plays an important part in the activation of the noncanonical inflammasome [2,3], LPS alone is not sufficient to activate this pathway (discussed in detail below), indicating that an additional bacteria-derived signal must exist.
In this regard, it is intriguing that cholera toxin B (CTB) together with LPS was shown to be another activator of caspase-11 [1]. Although membrane damage could be a signal for caspase-11 activation, it is unlikely since other pore-forming toxins such as Clostridium difficile toxin B, listeriolysin O, and adenylate cyclase (AC) toxin do not activate the noncanonical pathway [1]. Another possible activator would be bacterial RNA, which was recently proposed to trigger IL-1β maturation and cell death during E. coli or S. typhimurium infections. Interestingly, inflammasome activation by bacterial RNA required Trif (TIR-domain-containing adaptor inducing interferon-β) and NLRP3 [4], which are also involved in noncanonical inflammasome activation [1][2][3]. However, the observation that LPS alone is sufficient to induce caspase-11-dependent septic shock in vivo [1] would argue against a role for bacterial RNA. Thus, further experiments will be required to identify the bacterial signals that trigger the noncanonical inflammasome pathways and to understand the roles of LPS and bacterial RNA in this process.
How Does Canonical and Noncanonical Inflammasome Signaling Differ?
Although both caspase-1 and caspase-11 eventually initiate cell lysis and the release of processed cytokines and danger signals, the hallmarks of inflammasome signaling [5], their underlying mechanisms differ significantly (Figure 1A). Caspase-1 activation by canonical stimuli induces a pro-inflammatory, lytic cell death called pyroptosis. Although caspase-11 activation also induces lysis of the host cell, caspase-11-dependent cell death has features that distinguish it from pyroptosis. Pyroptosis is accompanied by the release of mature, processed cytokines (IL-1β and IL-18) that are secreted by a caspase-1-dependent mechanism called unconventional secretion [6]. In contrast to this, caspase-11 lacks the ability to cleave these cytokines by itself, since macrophages deficient in Nlrp3, ASC, or Casp1 still activate caspase-11 and initiate cell death but do not release mature IL-1β or IL-18. This suggests that caspase-11 acts in conjunction with the NLRP3 inflammasome to promote cytokine maturation [1]. The exact mechanism of this interaction is controversial, which in part could be accounted for by the different assays that have been used to monitor NLRP3 inflammasome assembly. Microscopic analysis of ASC speck formation suggests that caspase-11 acts upstream of NLRP3 [2], which is consistent with observations reported by the Yuan group [7], while biochemical enrichment of inflammasomes indicates that caspase-11 is downstream of NLRP3 [3]. In conclusion, since caspase-11-mediated cell death lacks associated cytokine maturation, it resembles a programmed lytic cell death more like necroptosis than pyroptosis.
Another difference between canonical and noncanonical inflammasomes is in the release of IL-1α and the danger signal high-mobility group box 1 (HMGB1). The release of IL-1α and HMGB1 by canonical inflammasomes requires active caspase-1-mediated secretion [6], while caspase-1 is not required for their release in response to CTB and E. coli [1], suggesting that lysis is the release mechanism for these factors following caspase-11 activation (Figure 1A). Whether caspase-11 also initiates novel caspase-1 effector mechanisms like the release of eicosanoids [8] and the secretion of growth factors [5] remains to be determined. Further work will be required to identify and characterize additional effector mechanisms of caspase-11 in vitro, such as phagosome-lysosome fusion (discussed separately below) [9], and to determine how these affect the role of caspase-11 in pathogenesis in vivo.
How Is Pro-Caspase-11 Expression Controlled?
Resting macrophages or dendritic cells (DCs) express very low levels of pro-caspase-11. The pro-caspase-11 promoter contains NF-κB and STAT binding sites, and expression is highly inducible by LPS, IFN-γ and TNF-α treatment [10]. Recently, we and others have linked caspase-11 expression and activation to signaling through Toll-like receptor 4 (TLR4) and Trif [2,3]. The induction of pro-caspase-11 mRNA and protein expression is significantly delayed in macrophages from TLR4−/− or Trif−/− mice following infection with S. typhimurium ([2] and unpublished results). Signaling via MyD88 is also involved, since MyD88-deficient macrophages show a slight delay in pro-caspase-11 induction. Nevertheless, pro-caspase-11 expression was not totally abolished in MyD88−/−/Trif−/− macrophages, suggesting that additional pathways contribute to the transcriptional induction of Casp-11 [2]. Similarly, Rathinam et al. show that LPS treatment or EHEC infections result in lower levels of pro-caspase-11 induction in Trif−/− macrophages [3]. Unexpectedly, their results do not show a contribution of MyD88 to caspase-11 induction, but their study did not directly compare Trif−/− to MyD88−/−/Trif−/− macrophages [3].
Trif-dependent induction of pro-caspase-11 expression could occur either by activating NF-κB or through IRF3-mediated production of type-I-interferon (type-I-IFN). Irf3, Ifnar1 (interferon-α/β receptor), or STAT-1 deficiency delays pro-caspase-11 induction in S. typhimurium-infected macrophages, suggesting that the type-I-IFN pathway contributes to pro-caspase-11 expression. However, pro-caspase-11 expression is not completely abolished in the absence of type-I-IFN signaling [2]. In contrast, Rathinam et al. report that Ifnar1 deficiency completely abolishes pro-caspase-11 expression in response to LPS, IFN-β, or EHEC infection [3]. In addition, they show that IFN-β treatment significantly increases pro-caspase-11 expression compared to mock-treated macrophages [3], which is consistent with the observation that IFN-β treatment slightly increases pro-caspase-11 expression in DCs [11]. Conversely, IFN-β treatment of macrophages does not enhance pro-caspase-11 levels during Salmonella infections [2]. In conclusion, different pathogens or stimuli seem to induce pro-caspase-11 expression via distinct signaling pathways. Since the pathways that lead to induction of pro-caspase-11 expression are likely important for the different models of caspase-11 activation (discussed below), further analysis of caspase-11 induction is required.
What Is the Mechanism of Caspase-11 Activation?
TLR4/Trif-mediated type-I-IFN production is essential for caspase-11 activity; macrophages deficient in Tlr4, Trif, Irf3, Ifnar, STAT-1, or Irf9 do not initiate cell death and the release of processed caspase-11 and cytokines in response to noncanonical inflammasome stimuli [2,3]. However, the mechanism through which IFN-β controls caspase-11 activation remains controversial, and two opposing models have been proposed (Figure 1B, C).
One model for caspase-11 activation suggests a receptor/scaffold-mediated activation mechanism. We observed that signaling via IFNaR and STAT-1 is crucial for caspase-11 activity in macrophages infected with S. typhimurium [2], yet this does not result from a lack of pro-caspase-11 expression, since significant levels of pro-caspase-11 are present in cells deficient for components of the type-I-IFN signaling cascade. Consistently, MyD88−/−/Trif−/− macrophages, which are deficient for type-I-IFN production, do express pro-caspase-11 when infected with S. typhimurium, but do not activate caspase-11. Restoring type-I-IFN signaling by adding exogenous IFN-β rescues this defect in Salmonella-infected MyD88−/−/Trif−/− macrophages, and this is independent of pro-caspase-11 induction [2]. Importantly, treatment with exogenous type-I-IFN in the absence of an infection does not result in caspase-11 activation in primary BMDMs, as exemplified by a lack of active caspase-11 p30 in the cell supernatant and absence of cell death [2]. These results indicate that in primary macrophages type-I-IFN is required for the expression of an interferon-inducible activator or receptor for caspase-11 (Figure 1B). They also raise the possibility that a yet-unknown bacterial signal is required for the activation of caspase-11.
In contrast to the receptor/scaffold-mediated activation mechanism, Rathinam et al. have suggested that pro-caspase-11 expression is both necessary and sufficient to induce pro-caspase-11 autoactivation (Figure 1C) [3]. This conclusion was based on the observation that treating macrophages with LPS, IFN-β, and IFN-γ for 16 h simultaneously induced pro-caspase-11 expression and activated caspase-11, as judged by the appearance of a processed caspase-11 p30 band in the lysates of these cells. However, the significance of this processed cytoplasmic caspase-11 is unclear, since the authors did not provide corresponding cell-death data for this timepoint and did not show the release of the caspase-11 p30 subunit into the cell supernatant [3]. To support the model, Rathinam et al. showed that IFN-β and IFN-γ treatment indeed induced cell death, but only at far later timepoints (40 h posttreatment) and in BMDMs immortalized with v-myc/v-raf-expressing J2 retrovirus [3]. However, they did not show if LPS treatment also induces cell death at 40 h in immortalized cells. It must be noted though that data obtained with immortalized macrophages have to be interpreted carefully, since these cells are known to produce replication-proficient viruses (unpublished observation).
In conclusion, since LPS pretreatment is routinely used to prestimulate macrophages and was reported not to induce cell death even as late as 24 h poststimulation [4], it is unlikely that LPS treatment alone is sufficient to activate caspase-11. Similarly, IFN-β treatment in the absence of infection does not induce cell death in primary cells [12].
Another argument brought forward in support of autoactivation is the observation that expression of pro-caspase-11 in 293T cells or cell-free systems results in autoprocessing of caspase-11. However, the usefulness of these systems for the study of caspase activation is limited, since even caspase-1 (which is activated in a receptor-mediated manner) autoactivates in the 293T cell expression system [13,14]. In addition, pro-caspase-11 expression has been successfully restored in casp1/casp11-deficient macrophages, and autoactivation has not been reported [9,15].
Figure 1. (A) Caspase-11 effector mechanisms: active caspase-11 cooperates with components of the NLRP3 inflammasome to induce caspase-1-dependent maturation of pro-IL-1β and pro-IL-18. It remains to be determined if caspase-11 activates NLRP3 directly or if additional signals are required. Active caspase-11 also induces cell lysis, resulting in the release of danger signals such as IL-1α and HMGB-1. Finally, during L. pneumophila infections, caspase-11 controls phagosome-lysosome fusion through the phosphorylation state of cofilin. (B, C) Two distinct models for caspase-11 activation. (B) Receptor/scaffold-mediated activation: detection of Gram-negative bacteria by TLR4 results in the activation of NF-κB and subsequent expression of pro-IL-1β and pro-caspase-11. Signaling through Trif and IRF3 induces expression of type-I-IFNs. Type-I-IFN signaling through IFNaR contributes to pro-caspase-11 expression and induces the expression of an uncharacterized receptor/activator of caspase-11. Activation of caspase-11 by this factor might require an additional undefined signal, stemming from the bacterial infection. (C) Autoactivation of pro-caspase-11: detection of Gram-negative bacteria by TLR4 results in the activation of NF-κB and subsequent expression of pro-IL-1β. Signaling through Trif and IRF3 induces expression of type-I-IFNs, which induces pro-caspase-11 expression. Pro-caspase-11 autoactivates, presumably once a concentration threshold is reached. doi:10.1371/journal.ppat.1003144.g001
Given the importance of caspase-11 in pathogenesis, a better understanding of the mechanism of caspase-11 activation is definitely required. The identification of a specific caspase-11 receptor and/or a bacterial ligand (other than LPS) required for caspase-11 activity would resolve this issue.
What Is the Role of Caspase-11 in Phagosome Maturation?
The Nlrc4/Naip5 inflammasome restricts the replication of intracellular Legionella pneumophila by activating caspase-1 and caspase-7. Caspase-11 was recently shown to also restrict the growth of Legionella in macrophages and in the lungs of infected mice [9]. Since this was also dependent on a functional dot/Icm system, NLRC4, and flagellin, the authors of that study have suggested that NLRC4 could activate caspase-11 [9], which is in contrast to data reported previously [1]. Finally, the authors showed that caspase-11 might promote the fusion of the L. pneumophila vacuole with lysosomes by modulating actin polymerization through cofilin ( Figure 1A). These results suggest that caspase-11 has other effector mechanisms besides cell death and NLRP3/caspase-1-dependent cytokine maturation. However, the modulation of phagosome-lysosome fusion might be specific for L. pneumophila infections, since caspase-1, -7, or -11-dependent growth restriction has so far not been reported for other bacteria activating the noncanonical inflammasome [2,3].
What Are the Functions of Caspase-11 In Vivo?
The noncanonical inflammasome pathway has been shown to be activated by a range of different Gram-negative bacteria but not by Gram-positive bacteria, suggesting that it is a conserved mechanism for the detection of these pathogens. But it is less evident how the activation of caspase-11 benefits the host in the context of an infection. Growth restriction has been shown to control bacterial numbers in the lungs of mice infected with L. pneumophila [9]; however, caspase-11-mediated cell death often results in detrimental effects for the host. For example, caspase-11-mediated cell lysis was responsible for the lethal effects of LPS in a mouse model of sepsis, which occurred independently of NLRP3, ASC, and caspase-1, thus excluding an involvement of cytokine production [1,7]. In addition, caspase-11-mediated cell death increased susceptibility of mice to S. typhimurium infections in the absence of caspase-1 [2]. This result was surprising, since caspase-1-induced pyroptosis is important for the clearance of intracellular bacterial pathogens [16]. Further analysis showed that in the absence of caspase-1-initiated innate immune defenses, caspase-11-dependent cell lysis promotes spread and extracellular replication of Salmonella, a facultative intracellular bacterial pathogen. It is conceivable that caspase-11 evolved to support caspase-1 by providing additional means of inducing cell lysis. But since caspase-11 has lost the ability to promote cytokine maturation, caspase-11 activation can be exploited by intracellular pathogens as a silent way to egress from infected host cells and to spread within the host. Future work will address how caspase-11 supports innate immune defenses against other Gram-negative bacterial pathogens or whether caspase-11 has detrimental effects for the host in other infectious disease models. | 3,287.2 | 2013-02-01T00:00:00.000 | [ "Biology", "Medicine" ] |
Fast physical repetitive patterns generation for masking in time-delay reservoir computing
Albeit the conceptual simplicity of hardware reservoir computing, the various implementation schemes that have been proposed so far still face versatile challenges. The conceptually simplest implementation uses a time delay approach, where one replaces the ensemble of nonlinear nodes with a unique nonlinear node connected to a delayed feedback loop. This simplification comes at a price in other parts of the implementation; repetitive temporal masking sequences are required to map the input information onto the diverse states of the time delay reservoir. These sequences are commonly introduced by arbitrary waveform generators which is an expensive approach when exploring ultra-fast processing speeds. Here we propose the physical generation of clock-free, sub-nanosecond repetitive patterns, with increased intra-pattern diversity and their use as masking sequences. To that end, we investigate numerically a semiconductor laser with a short optical feedback cavity, a well-studied dynamical system that provides a wide diversity of emitted signals. We focus on those operating conditions that lead to a periodic signal generation, with multiple harmonic frequency tones and sub-nanosecond limit cycle dynamics. By tuning the strength of the different frequency tones in the microwave domain, we access a variety of repetitive patterns and sample them in order to obtain the desired masking sequences. Eventually, we apply them in a time delay reservoir computing approach and test them in a nonlinear time-series prediction task. In a performance comparison with masking sequences that originate from random values, we find that only minor compromises are made while significantly reducing the instrumentation requirements of the time delay reservoir computing system.
Discussion on noise effects of the SL-OF system
The presence of laser noise, as well as other noise sources and instabilities that may be present in possible physical implementations of the SL-OF system, can make it impossible to observe some of the dynamical responses of this system. For example, period-4 (P4) or even period-2 (P2) dynamics that are found numerically to exist in this system might not be possible to observe experimentally. In this study we have considered a low-noise laser emission, which is introduced by a value of D = 3 ns⁻¹ in the Lang-Kobayashi model (Methods). At this laser noise level, we can clearly observe the gradual increase of the number of frequency tones (e.g. Figure 2, points (b), (c) and (d)). Moreover, the fully-deployed frequency tones always appear in a wide {rc,φc} parameter space. When increasing the laser noise parameter D by one order of magnitude (30 ns⁻¹), the gradual increase of the number of frequency tones (points (b), (c) and (d) in Figure 2) can hardly be observed. However, the parameter space {rc,φc} for which we observe the fully-deployed 11 frequency tones is still very wide. The region of conditions for which we observe the integer relation between the high-order and the first frequency tones (as shown in Figure 3b) is also significantly reduced. In this case, the feedback parameters of the SL-OF system must be defined with higher precision in order to obtain repetitive patterns without any periodicity drift.
Pattern repetitions for large reservoirs
In the presented analysis we select a feedback delay time of τ = 200ps and a given set of parameters for describing the laser operation. These result in the generation of repetitive patterns of τe = 226ps duration for the pre-selected feedback conditions. When sampling this pattern with 50 sample values, this physically defines a sampling distance of 4.52ps. Thus, for a reservoir with Vn = 50 virtual nodes, these 50 sampled values from the analogue patterns will serve as the masking values (Supplementary Figure 1). For reservoirs with larger number of virtual nodes, we use multiple patterns to reach the appropriate masking sequence length. For example, for Vn = 250, we use χ = 5 repetitions of the selected sampled pattern to obtain the desired 250 masking values.
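As a minimal illustration of this bookkeeping (an added sketch, not code from the study), the snippet below samples one repetitive pattern into Vn masking values and tiles it χ times for larger reservoirs; the waveform used here is a stand-in sine-based pattern, since the actual SL-OF time series is not reproduced.

```python
import numpy as np

tau_e = 226e-12          # duration of one repetitive pattern, s (from the text)
Vn = 50                  # virtual nodes per pattern
theta = tau_e / Vn       # sampling distance, here 4.52 ps

# Stand-in analogue pattern: any densely sampled repetitive waveform would do.
t_dense = np.linspace(0.0, tau_e, 10000, endpoint=False)
pattern = np.sin(2 * np.pi * t_dense / tau_e) + 0.3 * np.sin(6 * np.pi * t_dense / tau_e)

# Sample Vn equidistant values to obtain the masking values for one pattern.
mask_single = pattern[np.linspace(0, len(pattern), Vn, endpoint=False).astype(int)]

# For a larger reservoir, tile chi repetitions of the sampled pattern.
chi = 5
mask_large = np.tile(mask_single, chi)   # 250 masking values for Vn = 250
print(len(mask_single), len(mask_large), theta)
```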
Increment entropy for ordinal patterns with m>3
In permutation entropy calculations, longer ordinal pattern lengths consider more distant neighbouring samples. In this case, the generated multi-dimensional vectors are mapped into a larger number of unique permutations. In Figure 5 we presented the PEinc metric computation with m=3 and R=4. In this case, there are (2R+1)^m = 729 possible unique ordinal patterns that are considered. This number increases exponentially when increasing the value of m. Even though the highest value of the PEinc metric increases slightly with the ordinal pattern length, the qualitative interpretation of our results remains the same (Supplementary Figure 2). The conclusion that high PEinc values can be obtained from the repetitive patterns of the SL-OF system when at least one of the 2nd and 3rd amplification stages has significant gain is still valid for m>3.
Supplementary Figure 2. Increment entropy PEinc of the repetitive patterns obtained for different amplification gain conditions at the second and the third spectral regime, when considering an increased length of ordinal patterns, from m=4 to m=7.
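For readers who want to reproduce this scaling, the sketch below is a schematic, added implementation of an increment-entropy-style measure: increments are quantized to integers in [-R, R], grouped into words of length m, and the Shannon entropy of the word frequencies is computed. The exact quantization and normalization rules used in the study may differ, so this is an assumption for illustration only.

```python
import numpy as np
from collections import Counter

def increment_entropy(x, m=3, R=4):
    """Schematic increment-entropy-style estimate: quantize the increments of x
    to integers in [-R, R], form overlapping words of length m, and return the
    Shannon entropy (in bits) of the word distribution. At most (2R+1)**m
    distinct words can occur (729 for m=3, R=4)."""
    d = np.diff(np.asarray(x, dtype=float))
    scale = np.std(d)
    if scale == 0:
        scale = 1.0
    q = np.clip(np.rint(R * d / scale), -R, R).astype(int)   # assumed quantization rule
    words = [tuple(q[i:i + m]) for i in range(len(q) - m + 1)]
    counts = np.array(list(Counter(words).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
print(increment_entropy(rng.normal(size=5000), m=3, R=4))   # bounded above by log2(729) ≈ 9.5
```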
Masked input and time delay reservoir response
While the response of the time delay reservoir is computed for every time step t, as given by equation (8), we only use the specific samples which are allocated to the virtual nodes of the reservoir and create a matrix representation for implementing the input layer of the TDRC. The schematic of Supplementary Figure 3 shows how the masking sequence and the input are applied in the operation of the TDRC. The masking sequence M, obtained from the repetitive patterns of the SL-OF system, consists of Vn values with temporal distance θ and has a repetitive temporal duration of T. A change of the masking value is applied only after a temporal duration of θ, under a sample-and-hold operation. The input Y, with dimension n x 1, where n is the total number of samples, changes its value only after every temporal duration of T. From equation (8) we obtain the temporal nonlinear transformation of the masked input, from which we retain the reservoir's responses X to the input samples Y, in the form of a matrix with dimension n x Vn. Thus, in the representation shown in Supplementary Figure 3, the first input sample Y(1) is expanded, after the masking and the reservoir transformation, into the vector X(1) with Vn values. From the trained classifier and the defined weights, a prediction value Ỹ(2) is calculated from X(1) and compared to the initial input value Y(2). Eventually, the first n-1 responses of X are used to make the prediction Ỹ, shifted by one sample.
Supplementary Figure 3. Masking methodology and reservoir operation. The input sequence Y is masked by the repetitive pattern M and it goes through a nonlinear transformation by the time delay reservoir, resulting in the response X. These responses are used to calculate the classifier's one-step-ahead prediction value of Ỹ.
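The following compact sketch (added here; the leaky tanh node, the parameter names and the ridge readout are illustrative assumptions rather than the exact model of equation (8)) shows the bookkeeping described above: each input sample is held for T, multiplied by the Vn masking values, passed through a simple nonlinear node with delayed feedback, and the resulting n x Vn state matrix is used for one-step-ahead prediction.

```python
import numpy as np

def tdrc_predict(y, mask, eta=0.5, gamma=0.05, ridge=1e-6):
    """Toy time-delay reservoir: y is the scalar input series, mask the Vn masking
    values. Returns one-step-ahead predictions for y[1:] and the NMSE. The node
    model x_new = tanh(eta * x_delayed + gamma * masked_input) is an assumed,
    simplified stand-in for the reservoir of equation (8)."""
    n, Vn = len(y), len(mask)
    X = np.zeros((n, Vn))
    prev = np.zeros(Vn)                      # reservoir state one delay (T) ago
    for k in range(n):
        state = np.empty(Vn)
        for i in range(Vn):
            state[i] = np.tanh(eta * prev[i] + gamma * mask[i] * y[k])
        X[k] = state
        prev = state
    # Ridge-regression readout: predict y[k+1] from the reservoir response to y[k].
    A, b = X[:-1], y[1:]
    W = np.linalg.solve(A.T @ A + ridge * np.eye(Vn), A.T @ b)
    y_hat = A @ W
    nmse = np.mean((y_hat - b) ** 2) / np.var(b)
    return y_hat, nmse

rng = np.random.default_rng(1)
y = np.sin(np.linspace(0, 60, 1500)) + 0.05 * rng.normal(size=1500)
mask = rng.uniform(-1, 1, 50)
_, nmse = tdrc_predict(y, mask)
print(f"NMSE = {nmse:.3e}")
```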
NMSE performance versus PEinc
Another approach to visualize the NMSE performance of the computational task for the different masking sequences, as presented in Figure 7, is to plot it versus the increment entropy (PEinc) of the evaluated pattern. The connection between the patterns' profile and their corresponding PEinc value can easily be extracted from Figure 5. Here we associate each pattern ID with the calculated PEinc value, as extracted from Figure 5:
Pattern ID | PEinc value
Initial pattern from SL-OF system | 2
Pattern A | 6
Pattern B | 5.9
Pattern C | 5.8
Pattern D | 5.9
Pattern E | 4
In a new visualization of the results presented in Figure 7, we show in Supplementary Figure 4 the NMSE performance of the different patterns A-E, including also the initial pattern from the SL-OF system, versus their PEinc value. In order to keep a clear visualization, we only show here the NMSE performance for Vn=50 and Vn=400, for all the investigated NF conditions (Supplementary Figure 4, a-d) of the amplification stages.
Focusing first on the case where no amplification noise is considered (Supplementary Figure 4, a), the use of the initial pattern from the SL-OF system as the masking sequence, with a much lower PEinc = 2, results in a much higher NMSE for the computing task. Especially for large reservoirs (Vn=400), where lower error values are obtained, the masking patterns with high PEinc (≥5.8) always provide lower errors than the pattern with PEinc = 4. However, this relation of NMSE versus PEinc is not always monotonic when we zoom in on the region of high-entropy patterns (≥5.8); for example, there are cases, such as in Supplementary Figure 4, a and d, in which pattern C exhibits a lower NMSE error than pattern A. The discussion of the impact of amplification noise versus the reservoir size and the masking pattern properties is included in the manuscript.
Supplementary Figure 4. NMSE performance of the Santa-Fe timeseries one-step-ahead prediction benchmark task with TDRC versus the increment entropy PEinc of the different evaluated patterns, as they appear in Figure 5, after their sampling to masking sequences. The NMSE performance is shown here versus two reservoir sizes (Vn=50 and Vn=400) and under different amplification noise conditions: (a) NF=0dB, (b) NF=1dB, (c) NF=2dB and (d) NF=3dB. | 2,108.4 | 2021-03-23T00:00:00.000 | [
"Computer Science"
] |
Interactive facility management, design and planning
Societal developments show that future demands for visualization can be expected to grow. In many areas of organized human activity, organizations may turn away from textual and numerical flatlands and rely on convenient, multidimensional digital worlds. Virtual worlds for facility management, design, and planning are no exception; they have an enormous potential to help organizations find the right spaces that fit the human activities they perform. However, a major take-up of virtual worlds in this context, allowing a comparison between present and future, is yet to come. Perhaps such applications, interweaving virtual and real worlds in order to design better facilities, are at their beginning stages. One thing is clear: sophisticated applications may have remained absent until today, but they will come to us. Digital worlds start to normalize, and the design of organizational spaces can benefit from that development. In this article, the effects of the proposed integration of visualization with facilities were studied in a case study design. It was assessed whether the participants would actually change the design, without data on the organizational performance, and to what extent this affected staff satisfaction. This study, however, showed no design changes and no statistically significant changes in the affective responses of participants between pre-test and post-test stages. In the present case, however, the sample size may have been too small for generalization purposes. The connection of virtual worlds with organizational data, which was not applied in this case but was in fact applied in our earlier studies, may be vital for the efficacy of interactive facility management, design, and planning. It is concluded that data on organizational performance serve as a linking pin between facility management and virtual worlds. Interaction can thus be improved by using organizational data as 'subtitles' which stimulate a more active use of the visualization.
Introduction
For thousands of years humans have committed their thoughts to paper. These paper flatlands have successfully captured the ideas of our ancestors in texts and images, many of them beautifully crafted and alas only some of them well preserved. They are our collective memory. The invention of typography gave a significant impulse to the proliferation of thoughts that were committed to paper. It also allowed for efficient distribution of ideas. The dissemination of reason and emotion had become relatively cheap, allowing emancipation of thought for many. At the time, it was a revolution. Ideas and lines of argument could easily be dispersed and, in reaction to this exchange, agreed upon, disputed, or refuted. It became a true impulse for the advancement of art, science, and practice.
Today, many of us still rely on paper. It is a silent legacy of a historical innovation, spreading out its tentacles until today in our modern life. But the tide is turning. Over the last century the power of digital images has grown. It started with photography, the film industry, and television at the beginning of the century and in the last decades continued with the internet, multimedia, gaming, and other emerging forms of visualization. These developments have pushed texts, numbers, and paper more and more into the background. After the sudden and unexpected increase of paper use, soon after the introduction of the personal computer, the influence of images has grown and currently starts to successfully provide societies with a viable escape from paper. Our existence is more and more immersed in digital worlds. We inhabit a world in which such artificial worlds have become of the utmost importance. Digital worlds have become a normal, almost natural, human habitat.
In this context, the gaming of young children may not only be regarded as play [1]. It also allows them to communicate with their friends and to build new friendships. Moreover, social networking sites stimulate teenagers to communicate within their peer group, regardless of national or cultural background. It is also irrefutable that digital worlds have brought new forms of communication and relationship building to contemporary organizations. With the increasing virtualization of their operations and networks, organizations can sprout wings and serve a globalized world. The digital society starts to normalize. It has become a society with unleashed fun, business, and love, but also with unrestrained hate, abuse, and serious crime. It has grown beyond naivety. The borders between the digital and real worlds are increasingly blurred.
On the twenty-first of October 2008, a Dutch criminal Court convicted two teenagers for the theft of virtual goods from another teenager in the online role-playing game Runescape. The transfer of virtual goods was preceded by serious real-world assault and battery; an evident blurring of the two worlds. In her sentence the Court equated virtual goods with physical goods, based on three criteria. Virtual goods represent value. They are sold or bought for money on the internet or in the schoolyard. Moreover, the tenure can transfer from one person to the other. One can obtain or lose actual power by transferring virtual goods to another account. Finally, for criminal law goods do not need to be tangible, just like electricity and money of account [2].
With this sentence the Dutch Court has broken down the walls that seemingly divided virtual from real worlds. It is notable that the judgment was passed on teenagers. These children that are currently pampered with visualization will populate our future organizations. As digital illiterates, current generations will stay behind; ignorant and unwitting, they will opt out in the end. In contrast, the demands of future generations will grow. Our children will expect three-dimensional footage replacing boring reports, articles, and books on paper flatlands. In organizations their pull for visualization will soon start to emerge and grow seriously. It will appear in the farthest and most unexpected corners of modern societies and organizations. Somehow, these future needs will be fulfilled. There are, however, serious barriers to be taken.
Visualization has always had a great appeal to practice [3]. Organizations love it, but in science any visualization is subject to attitudes of caution and suspicion. Visualization is the Cinderella of science: hard to become generally accepted as a solid scientific approach. But this neglect is undeserved. Any visualization may be rooted in sound scientific evidence. By doing so, it can integrate positivist with interpretative scientific perspectives and open up new methods of inquiry for facility management. Moreover, it has the capabilities to improve the relation between science and practice.
In this paper, the above-supposed capabilities of visualization will be substantiated. First, we will elaborate on the need to escape flatlands and some related problems that can be expected. Next, the meeting of organization with facilities will be introduced as a domain where our imagination is exercised. The subject of organization and facilities will prove to be a rewarding area for the advancement of facility management in this respect. Third, the meaning of virtual facilities will be explored. Fourthly, the possible impact on participation and emancipation will be discussed. Finally, the supposed advantages of interactive facility management, design and planning are tested in practice.
A need to escape flatland
Edward Tufte was the first to describe paper as flatland, because it is static and flat (similar to this text on paper or screen). In contrast, our real world is complex, dynamic, and multidimensional. Unfortunately, paper cannot display these properties; paper is static. Escaping this flatland is the essential task of envisioning information [4].
However, history shows us that emotive and animated images were not always well received by science. The American ornithologist, hunter, and painter John James Audubon painted astonishingly colored, lively, and life-size series of prints of American birds, which were issued in his famous 'Birds of America'. With his approach he had not only brought the birds alive on paper, but also leading scientists in Philadelphia and New York. His work was challenged as not being objective by scientists from the Academy of Natural Sciences. It resulted in an aggressive campaign to discredit his work and scientific credentials. Our ancestral colleagues managed to put his work on a black list [5]. Only in London did he find the necessary cooperation for the production and publication of his work.
In practice, the world has changed much. The success of Al Gore's presentation 'An Inconvenient Truth', larding multimedia with facts and estimations about global warming and its expected consequences, shows a possible future that may lie ahead of us. Not only in terms of climate change, but also in terms of what kinds of knowledge transfer an audience expects from their leaders. He showed maps of countries and polar caps mixed with virtual flooding and melting of ice, and even an animation of a drowning polar bear. Moreover, a film about personal life experiences, facts from news items, and hard scientific graphs were cleverly interwoven. Altogether his presentation provided the public with a supposedly convincing visual scene of fact and value. In interactive design similar developments can be seen, such as organizations using virtual reality (VR) for container terminal simulation, creating a vivid terminal operating environment to review logistics [6], product design based on VR, gaming, and scenarios [1], and tools for collaborative environments, with real-time interaction of people with people from various geographical locations [7].
In contrast, science may not have changed that much. If we try to ignore the deceiving signs of high-tech progress like computers, phones and laptops, many of the then emerging scientific outlets are remarkably similar to the ones used by our great-great-grandparents. With most of us safe in our ivory towers, scientists are still remarkably unadventurous and inept in their alignment with modern graphics [8]. Until today, many scientists have largely disregarded the fact that visualization can convey a vivid impression of events and processes. We may expect scientists to still be highly text-bound, seldom considering other relevant artifacts, such as apparatus, buildings, furniture, graphics, or computer graphics [9].
So the yawning gap between the paper and the real world remains. On one side we have the world unfolding itself in our everyday lives with three spatial dimensions. On the other we have endless two-dimensional flatlands of paper. Should we stay where we are, representing the rich visual world of experience and measurement on mere flatland with text and numbers? Or should we apply new methods and techniques, ones that perhaps even delude our senses but, by doing so, create new multidimensional worlds? In this paper we choose to follow the latter line of thought. We concentrate on new visualization techniques that may change the way we study organizations and facilities in the near future.
Facilities
The facilities of organizations deal with the mutual influences of the physical environment and human behavior. In many cases the focus is on the effects that architectural structures have on social structures and behavior. Facilities may also be regarded as an interwoven system of organization, physical structures, services, and spatial experiences. Facility design and planning combine architectural design with organizational design [10]. Such interdisciplinary design is not limited to an architectural, technological, or structural perspective. Facilities produce meaning, and may be regarded as an expression of organization culture [11]. But such spaces also have different degrees of functionality. They may either hinder or facilitate organizational processes, and by doing so, they may affect performance [12].
Visualization of facilities
It is pre-supposed here that the design and planning of facilities is an important management task.However, general managers do not like to make decisions about real estate and facilities [13].It is therefore not surprising that currently experts from, for instance, architecture, facility management, and real estate development dominate the decision-making process with respect to a building's interior or exterior.VR has the potential to capture the attention of other important decision-makers for this interdisciplinary task.
It was mentioned earlier that societal developments are expected to pull technological developments of VR to a higher plane.To our children and the generations to come, VR will be a normal part of their lives.The scene in which Elijah Wood, the actor that plays Frodo Baggins in the award wining trilogy 'The Lord of the Rings', touches the digitalized Gollum, illustrates the blurring boundaries between the virtual and the real world.Agent technology is being increasingly absorbed into the tangible world; it is becoming part of what we all take for granted [14,15].As a result, a new generation of managers may emerge: people to whom VR is normal, and who are spoilt by the hyperrealism created by the animations in the gaming and filming industries.
It goes without saying that in the gaming and filming industries virtual worlds have become visually very realistic.Creative companies, including Massive Software, CAT, HIT Lab NZ, Media Design School and Right Hemisphere, display New Zealand's new thinking in computer graphics, interactive media, animation and visual effects.Classical data-graphical representations use series of still images to depict motion: to resolve discontinuity in spatial representations of continuous temporal activity on paper, viewers must interpolate between images, closing the gaps [16].An application of modern computer graphics, however, allows for motion and real time intervention.For instance, at crowd simulations of the Award winning trilogy 'The Lord of the Rings', animators were able to design characters with a set of reactions to what is going on around them.The reactions of the characters determined what they did and how they did it.Currently, their reactions can even simulate emotive qualities such as bravery, weariness, or joy.These experiences can be used to implant multi-agent behavior in a virtual built environment.Organization science may profit from this advancement by using these artificial life forms in a virtual world of organization and space for a new inspiring debate about a possible future [17][18][19].It should be investigated what hard algorithms are needed as plug-ins to feed the behavior of such agents and where agents can and should decide for themselves as intelligent life forms from which we can learn in reality.
However, currently when using VR, architects hardly ever include intelligent agents or avatars [20] into their VR models and only very rarely do they integrate factual information on the performance of an organization.Just a few exceptions can be found [21,22].In 2005 and 2006 the Virtual Concept conferences [23,24] confirmed that the integration of architectural models, business information, and multi-agent behavior remained limited [25,26].This is remarkable since, as it was mentioned earlier, the visual display of data and spatial situations has many advantages over classical forms of presentation, such as common 'flat' paper business reports and architectural drawings.Connecting visualization with expected human behavior does not only allow advancements in multi-agents (or avatars) behavior in the digital world, but also new forms of participatory design.Theories from different functional areas such as facility management, design, and planning, organization design, organization behavior, culture, marketing, and systems approaches can be combined with empirical evidence, perceived realities and experiences.
In this context, a relevant distinction of sir Geoffrey Vickers [27] between reality judgments, value judgments, and instrumental judgments may be helpful for our understanding.Reality judgments are concerned with what is or what is not the case, whereas value judgments are concerned with what ought or ought not to be.Instrumental judgments deal with the best means available to reduce the mismatch between is and ought.VR has outstanding possibilities to make coherent iterations between these three judgments.Advanced computer graphics, as representations of a possible future, could support organizations in assessing what facilities and services should be performed, added, changed or struck.Walls can be removed, human behavior can be simulated and changed, plants and trees can be added, colors can be refreshed, and furniture can be selected, changed, and compared; all presto.The right decision can be confirmed and wrong decisions can easily be reversed: it remains just a computer programme.Consequently, with these ingredients, a debate about the aura and atmosphere of facilities, organizational change and culture can also be included.Such visualization allows organizations to make appreciative judgments: a set of readinesses to distinguish some aspects of facilities rather than others and to classify and value these in this way rather than in that [27].
In contrast, effective layering of information on paper is often difficult; there are all sorts of unplanned and lushly cluttered interacting combinations turn up [4].Lifting a paper flap, lay over a spatial scene with a before/after presentation of a facility redesign, brings about a nearly simultaneous visual comparison of the old and new facilities exactly in position [16], but that is about it.In contrast, immersive VR in a multi-screen theatre allows for a comparison of organizational data and digital images of existing facilities with that of a new facility design and expected human behavior.This allows instant iterations to be made.In this way the viewers can refresh their minds occasionally, making it easier for them to perform a sharp analysis of the differences between old and new.Such use of VR can be easily combined with complex data, such as organizational data.An application of VR showing a possible future state of facilities and services is a surprisingly scarce phenomenon.Following the argument for visual data graphics of Tufte [8], perhaps the diversity of necessary skills is too broad.It would at least require computer-graphical, facility-managerial, empirical, strategic decision-making, and participatory design skills by a research team.But perhaps there is also a significant timelag between the developments in the gaming and filming industries and science.
Can science use the enthusiasm of the gaming generation to improve the realism of virtual worlds? Recently, the American computer scientist Luis von Ahn has shown that it can be done, when applied cleverly. For instance, search engines cannot distinguish between images of a frog and an iguana. Von Ahn invented games, such as the ESP game (http://www.espgame.org), in which people associate images with words, allowing search engines to improve their performance. The interesting thing is that if this knowledge transfer is packed in a game, the public is even prepared to pay money for it [28]. Perhaps the enthusiasm of the gaming generation could also be used to create realistic human behavior in virtual worlds by making online games of it, and, by doing so, to test the quality of the facility design and services and the underlying assumptions; for instance, by allowing participants to navigate through future spaces and actually use them as workers, customers, or the general public. Such an approach may indeed have the potential to improve the quality of facilities in the real world.
A major take-up of such developments in multi-agent behavior, allowing both digital agents and human participants to experience, use, and value (changes in) the physical environment in a virtual world and respond to it, is yet to come. Perhaps such scientific applications, interweaving digital and real worlds and using the vast potential of the gaming generations, are at their beginning stages. One thing is clear: sophisticated applications within facility management have remained absent until today, but they will come.
As such, virtual worlds can represent highly realistic facilities of the future without the need to be physically present [29]. In the words of Schutz, such worlds project 'the act, which is the goal of the action and which is brought into being by the action' [30]. The act, being the virtual facilities and representing an expected organizational future, will revert to present time and present action. It will allow organizations to assess the act and to use action to improve the effectiveness of the act.
The meaning of virtual facilities
I will use the 'future perfect tense' as developed by Schutz [30], worked out by Pitsis et al. [31], and criticized by Kreiner and Winch [32], to position virtual worlds for facility management, design, and planning in organization science. I will use these ideas to present a way in which the essence of the meaning of virtual worlds for organizations can be revealed and understood. Following the work of Schutz, we may regard virtual worlds as worlds in which "… the actor projects his action as if it were already over and done with and lying in the past." [30]. What organizations can see is a new building and its services. They can 'walk' through that building as if it were real. "Strangely enough, therefore, because it is pictured as completed, the planned act bears the temporal character of pastness." [30]. This temporary pastness is a mindset which organizations can use to reflect on their facility plans and to learn from them. Virtual worlds allow organizations to create a so-called 'future perfect tense' of facilities; according to Schutz this is a situation as if it were simultaneously past and future. I prefer the terms present and future, because in virtual worlds for facilities the supposed past is still present. This simultaneousness is an essential constituent, which allows organizations to iterate and learn.
From earlier studies [33,34] we have learned that the technical possibilities to make such mixes of present and future of facilities and services are available and can be used. Organizations can not only visualize new facilities, but also plug in digital images and facts from the 'old' current facilities. Imagine, for instance, a virtual theatre allowing a group of people from an organization to jointly assess the quality of a new workplace design. They may use visualizations projected on a cylindrical screen by means of three projectors creating one (fused) image of the future. In this case, images and presentation slides can also be imported into the virtual world and put on one of the screens, while the new workplace design is still projected on the remaining two screens. This approach makes it possible to iterate between the digital images of the present situation, enriched with an implant of other relevant organizational and spatial data and representations derived from the present, and the proposed new virtual facilities. Such an approach allows constant iterations to be made in the minds of the actors between present and future. In this way the actors can refresh their minds occasionally, making it easier to perform a sharp analysis of the differences between old and new, present and future, and to determine and express their desires to change the plans.
However, the virtual world of these intended facilities is still nothing more than a fantasy; a highly realistic and perhaps even useful one, but a fantasy: "… the phantasy is a real lived experience which in turn can be reflected upon in all its modifications." [30]. It is precisely these reflections and modifications that are essential for learning in organizations. As such, virtual worlds allow organizations to evaluate their facility plans and change them whenever necessary.
In this context it is relevant to note that Pitsis et al. [31] used Schutz' work to track the development of the future perfect in a large infrastructural construction project for the Olympic Games in Sydney 2000. They concluded that imagining a future already accomplished guides the actors to current action and stimulates them to take the necessary steps in getting there. They also argued that the completion of the construction work unfolded from a continuous re-scoping of the future perfect, with only some criteria on which the entire project would be judged. Without any reference to an original guiding design, this large construction project grew from only 28 pages, without design and without clauses. Pitsis et al. proposed to use the future perfect as an alternative to the traditional project management approach for construction projects. If such a complex project can be mastered with only a few criteria and an imagination of a possible future, then what could possibly hinder an application in virtual worlds for facility management?
Kreiner and Winch [32] warn us that Schutz' notion of human action was based only on trivial everyday activities, like visiting a friend or mailing a letter. Therefore it must be treated with caution when applied to large design or construction projects. The reflections on these simple situations may not translate one-to-one to facility design. In the view of Kreiner and Winch, the translation of projects into reality is never a trivial thing to do. Imaginations will probably always be incomplete and may also be less than honest: they can even serve as a tool for manipulating others. In their view, "what needs to be done, and which consequences that follow, will only transpire in the subsequent process of implementation." [32].
For an application of virtual worlds for facility management, however, this conclusion of Kreiner and Winch is in need of some moderation. In this specific context, I disagree with their argument for two reasons. Firstly, the technical possibilities have shown us that virtual worlds for facility management allow actors to experience the future they get, assess its quality, and agree on what needs to be changed, before implementation. Therefore the necessary work to be done and its consequences will not only transpire in the implementation. They can and will emerge before implementation, because virtual worlds for facility management are in an alleged twilight zone where present seems past and future seems present. What needs to be done, and which consequences can be expected from the intended facilities, unfolds in the complexity of the debate that emerges from organizations being immersed in virtual worlds. In the end it is simple: after a debate in the virtual worlds for facility management, there is or is not a desire to change the act with actions, and there is or is not a possibility to do so. Be that as it may, the work to be done does in any case not depend solely on implementation.
Secondly, Kreiner and Winch focus on project management "in the sense that the actor imagines the future state of affairs to have arisen already, enabling him or her to look back on the present situation and the steps connecting the present with the future." [32]. This deviates from an approach in virtual worlds for facility management, because for organizations it is not always necessary to look at the steps connecting the present with the future. In virtual worlds, organizations just compare present and future. From within this twilight zone of present and future, a spin-off with proposed actions may emerge, but these actions are worked out in steps by design and construction experts.
For organizations, the application of virtual worlds rather functions as a black box: a system which is viewed in terms of its input and output, but not necessarily with any knowledge of its internal workings. Ashby [35] argued that the way not to proceed in approaching an exceedingly complex system, like a facility design, is analysis. Instead of analysis, the input (in this case the 'old' present facilities) should be manipulated, and the output (in this case the 'new' future in virtual facilities) should be assessed; a process which can be repeated until a satisfying result is achieved. In this way actors may in the end discover the facilities that best meet their wishes, desires, and hopes, given the available knowledge at a given time. The work of Schutz has allowed us to come to the essence of virtual worlds for facility management: it invites organizations to make what I will call an 'intelligent exploitation' of the twilight zone between present and future. This intelligent exploitation, in which it is essential to learn from the facts and values of present and future, may ensure an improved fit between organization and facilities and, by doing so, can improve the effectiveness of organizations [12].
Participation and emancipation
In many cases people discuss what only some of them have invented before. In such discussions, seemingly dichotomous issues can be dealt with. In the context of virtual worlds for facility management, hard and soft are not dichotomous but dialectical, not conflicting but complementary. Just as the concept of holism is blurred with multiple layers, current states can be represented together with future states, fact with meaning, and organization with facilities. VR has the potential to do all this together. By combining these developments with participatory design, new knowledge nodes may emerge [33].
In this context it is argued that participatory design gives the people who are affected by a change in facilities a chance to influence the design [36]. It gives them a voice. What emerges is a design process that can be regarded as a co-operation between all sorts of stakeholders. This design approach can be fruitfully combined with VR [37]. Studies in architecture have also suggested that participatory design results in financial and qualitative advantages for all participants [38,39]. Generally, four advantages can be expected. The first is that the involvement of workers is a serious design test. The workers on the work floor have the expertise and the motivation to make a serious assessment. The changes planned will affect their future work, and not necessarily that of those in the higher organizational layers [36]. Secondly, worker involvement increases the organization's commitment to the design [36]. The participant becomes a 'faith holder', which creates high degrees of trust [40]. The participant also becomes an 'accessory' in the decision-making process: once the decision is made, it is harder for the participants to reconsider it. Thirdly, the participatory design approach leads to emancipation [41]. It allows non-experts to understand the facility design and imagine possible consequences. Finally, participatory design offers those who are affected by the design but initially not involved in it the possibility to discuss the consequences [42]; for example, the workforce and visitors of an organization, or the inhabitants of a city.
A possible pitfall of participatory design in the context of facility management is that the debate remains top-down: a diktat issued by the architect and/or the management. In order to make the session run smoothly, a facilitator is required [36]: a person who is in control of the quality of the debate. A debate should not only include the opinions of the powerful decision-makers. Space for the opinions of the less powerful can be ensured by creating a genuine group: a group in which there is trust, which discusses sincerely and openly, and in which people have opened their minds to mutual understanding and learning [43].
In this study the participatory design approach comprised a session in which experts (six designers) and neighbors (five inhabitants, seven business representatives) of the proposed design discussed and decided co-operatively on changes to be made. The approach included only VR, in this case immersive virtual reality presented in a virtual theatre at the University of Groningen in the Netherlands in 2009. In this current study, the organization of the meeting as such, the virtual theatre, the displayed physical objects, and the technical installations were exactly the same as in an earlier study [33]. However, this current study did not combine organizational facts, figures, and photos of the current state with images of a future state. It solely visualized different scenarios of a possible future of buildings and infrastructure, which were used as input for a free discussion on a possible future. The study conducted at the virtual theatre was action research: the researcher actively participated in the discussion with the participants.
All participants were invited to the virtual theatre in Groningen for three hours in the evening. The participatory design session consisted of a short introduction in a classroom, followed by a full immersion in the virtual world and a free, unstructured discussion. The programme concluded with an evaluation in a classroom later in the evening. The basic idea was to immerse the participants in the facilities. The researcher actively participated in the discussion in order to stimulate a fair debate and facilitate critical reflection on practical assumptions or conclusions.
Introduction in the classroom
At 7 p.m. all participants had gathered in the classroom. The researcher briefly explained the research and the evening's schedule. The participants started by completing a custom-made questionnaire about the quality of the design. The main underlying question was: "What is your opinion of the proposed design?" This questionnaire consisted of 34 items. A 10-point scale was used to give a report mark per item. At the end of this introduction, the participants were invited to go to the virtual theatre.
Immersion in the virtual room
From 7.30 to 9.30 p.m. the participants were immersed in the design and discussed it without restrictions or structures imposed on them; there was no discussion agenda set. Recall that, unlike in the earlier study, organizational data were absent. The main questions addressed during this immersion were: "What will the new design look like exactly?" and "Does it improve the current situation?" The participants made a full-screen virtual walkthrough. At the start, the neighbors were invited to 'just' listen to the ideas of the experts. Later, the participants discussed the supposed quality of the design in the virtual theatre.
Evaluation in the classroom
From 9.30 to 10.00 p.m. the participants gave their opinion about the usefulness of the virtual session. The main questions were: "Did VR change your opinion of the design?", "Do you want to change the design, and if yes, what is it you want to change?", and "Was it a useful session?". The questionnaire about the quality of the design, which was used during the introduction, was repeated in order to determine whether the participants had developed different opinions during the evening. In addition, the general satisfaction with the participatory design session was measured. A 10-point scale was used to judge this item with a report mark.
The role of virtual reality
The software with which the virtual model was created allowed the computer programmer in the cockpit to create different spaces in real time, enabling the participants to experience them instantly. The virtual model was displayed in the virtual theatre. The theatrical structure allowed the 18 participants to assess the design quality simultaneously. In this theatre, stereo images were projected on a cylindrical screen by means of three projectors. The Open Scene Graph Reken-Centrum (OSG-RC) software, developed in-house by the HPC/V, made it possible to visualize the proposed design interactively. It allowed scenarios for different routings to be changed interactively. The projection was located at a position above the audience, as in a cinema. The stereo effect was created with shutter glasses, which could be switched from fully transparent to opaque. When the right glass was transparent, the left was opaque, and vice versa: the right glass was transparent when the right-eye stereo image was projected, and the left glass was transparent when the left-eye stereo image was projected. The refresh rate of the projectors and the shutter glasses was 96 Hz. This frequency was sufficiently high to create the illusion of three dimensions.
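To make the timing concrete, the following is a minimal sketch, in hypothetical code rather than the actual OSG-RC implementation, of the frame-sequential stereo scheme just described; render_eye and set_shutter stand in for the projector and glasses drivers.

```python
# A minimal sketch (hypothetical code, not the OSG-RC software) of the
# frame-sequential stereo scheme described above. At a 96 Hz refresh
# rate, left- and right-eye images alternate, so each eye effectively
# sees 48 images per second, enough to sustain the illusion of depth.

REFRESH_HZ = 96  # refresh rate of projectors and shutter glasses

def render_stereo_frames(n_frames, render_eye, set_shutter):
    """Alternate eyes on successive frames.

    render_eye(eye): draws the scene from that eye's viewpoint (placeholder
    for the projector pipeline); set_shutter(eye): makes that eye's glass
    transparent and the other opaque (placeholder for the glasses driver).
    """
    for frame in range(n_frames):
        eye = "left" if frame % 2 == 0 else "right"
        set_shutter(eye)  # transparent for `eye`, opaque for the other
        render_eye(eye)   # project the matching stereo image
        # each eye is refreshed at REFRESH_HZ / 2 = 48 Hz
```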
Expected effects of absent organizational data
The approach described was set up to determine the effects of VR in participatory design for facility management, design, and planning. This current application of VR was a particular form of participatory design because the organizational data were absent. For decision-making, the participants could rely only on the visualization itself, the explanation of the experts, and the emerging discussion. It was expected that this approach would allow the participants to assess the supposed qualities of the virtual design less critically, and that it would also be less satisfactory than a situation in which organizational data were provided together with VR, as in an earlier study [33].
The aim of this study was to determine the effects of this particular use of VR. These effects would emerge in the design changes proposed, as well as in the evaluation of the participatory design session itself. It was surprising to see that this combination of participatory design, virtual worlds, and facilities, together with the absence of organizational data, had hardly any effect on the real world of the participants.
Changes and affective responses of participants
The design was debated thoroughly, but the participants made no proposals to change the design.
The questionnaire had a Cronbach's alpha of 0.95, indicating that the internal consistency of the questions was good and that they served as a relatively reliable source of information. The response rate was 100 % (n = 18). Table 1 summarizes and compares the results before and after the participatory design session.
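For readers who wish to reproduce this reliability check, a minimal sketch of the computation follows; the data are randomly generated stand-ins, not the study's actual responses.

```python
# A minimal sketch of the Cronbach's alpha computation reported above
# (34 items, n = 18 respondents); `scores` is hypothetical stand-in
# data, not the study's actual questionnaire responses.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: array of shape (n_respondents, n_items), one mark per cell."""
    n_items = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of the sum scores
    return (n_items / (n_items - 1)) * (1 - item_var.sum() / total_var)

rng = np.random.default_rng(0)
scores = rng.integers(1, 11, size=(18, 34)).astype(float)  # 10-point marks
print(f"alpha = {cronbach_alpha(scores):.2f}")
```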
Satisfaction with the facility design before the virtual session
The results of the questionnaire before the participatory design session show a report mark of 6.62 for the design. The standard deviation of the mean was 0.93. The design was most appreciated by the six design experts (6.97) and the seven neighboring business representatives (6.88), and least by the five neighboring inhabitants (6.15). Moreover, most respondents did not have problems assessing the design qualities in advance:
• Six of the eighteen respondents (33 %) could not estimate the quality of all 34 design items (100 %) in advance.
- One expert had problems assessing 1 of the 34 items (3 %).
- Four neighboring business representatives could not assess the quality of 22 items (65 %).
- One neighboring inhabitant could not assess the quality of 1 item (3 %).
• Twelve of the eighteen respondents (67 %) could assess the quality of all design items in advance.
Satisfaction with the facility design after the virtual session
The results of the questionnaire after the participatory design session show a lower report mark of 6.49 for the design; the standard deviation of the mean was relatively stable at 0.89. The design was most appreciated by the six design experts (6.78) and the seven neighboring business representatives (6.53), and least by the five neighboring inhabitants (6.21). Moreover, most respondents also had no problems assessing the design qualities after the virtual session:
• Seven of the eighteen respondents (39 %) could not estimate the quality of all 34 design items (100 %) after the virtual session.
- Three experts had problems assessing 3 of the 34 items (9 %).
- Four neighboring business representatives could not assess the quality of 7 items (21 %).
- None of the neighboring inhabitants had problems assessing the quality of design items (0 %).
• Eleven of the eighteen respondents (61 %) could assess the quality of all design items after the virtual session.
Comparison of satisfaction before and after the virtual session
The difference between the overall results of the questionnaire before and after the virtual session showed a small decrease in the mean report marks across all items and respondents. Apart from the neighboring inhabitants, a depreciation of the report mark for the design was observed for all participants. A similar decrease was observed in the number of participants that did not experience problems assessing the design qualities after the virtual session. Given the outcome of an earlier study [33], overall the results were unexpected and rather disappointing. However, none of the above differences were statistically significant (Wilcoxon matched-pairs test, two related samples, significance level 0.05). In addition, the standard deviation was subject to only minor changes and remained relatively stable. Therefore it can be concluded that the virtual design session did not influence the design satisfaction or the participants' ability to assess the design qualities. Neither did it seem to have influenced the internal agreement or disagreement within the group of participants.
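A minimal sketch of this significance test, again with stand-in data:

```python
# A minimal sketch of the test used above: a Wilcoxon matched-pairs test
# on per-respondent report marks before and after the session. The
# `before`/`after` arrays are hypothetical stand-ins for the real data.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)
before = rng.uniform(5.0, 8.5, size=18)         # marks before the session
after = before - rng.normal(0.1, 0.5, size=18)  # slightly lower afterwards

stat, p_value = wilcoxon(before, after)         # two related samples
print(f"W = {stat:.1f}, p = {p_value:.3f}")
if p_value >= 0.05:
    print("no statistically significant change at the 0.05 level")
```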
Satisfaction with the session
The evaluation showed that the participants were still rather satisfied with this particular form of participatory design.
• Sixteen out of eighteen respondents indicated that the travel to the theatre had been worthwhile.
• One respondent argued that the session was not worth the travel; another did not answer this question.
• The general satisfaction with the visit was 7.02.
The satisfaction with the virtual session showed a mean report mark of 7.02 across all participants. The standard deviation of the mean was 1.47. The session was most appreciated by the seven neighboring business representatives (7.65) and the five neighboring inhabitants (7.14), and least by the group of six experts (6.00). Even though the participants were relatively positive about the virtual session and did find it worth their travel, it cannot be concluded that the virtual session clearly provided a positive and useful basis for a discussion about facilities and for the confirmation or refutation of the design decisions made. In fact, it is doubtful whether it had any impact at all, since no changes were made.
Conclusion
Societal developments show that future demands for visualization can be expected to grow. Organizations may turn away from textual and numerical flatlands and turn to convenient, multidimensional digital worlds. This may prove to fit better with the needs of the current gaming generations (which will prove to be the future leading generations of our organizations), and provides more convenience for this spoilt future business audience. Moreover, the borders between the virtual and the real are increasingly blurred. The enthusiasm of the gaming generation may be mobilized to improve the realism of existing digital worlds. Virtual facilities can be packed into online games that may allow the quality of facilities in the real world to be improved. Public spaces as well as facilities of organizations can easily be tested and assessed in this way. It was also argued that visualization has significant potential to improve our future facilities. It has the possibility to support and professionalize the debate between stakeholders and to create commitment to a joint future: an intelligent exploitation of the twilight zone between present and future.
The results of this current study were, however, disappointing in this respect: the virtual session had hardly any impact. This was in strong contrast with earlier observations in similar cases, in which a complex system of business performance indicators was integrated with virtual reality [33]. In this current case the sample size may have been too small for generalization purposes. Even though the statistics indicate that the results were not obtained by chance, it seems fair to argue that this current study has its empirical limitations in terms of generalizability. At the same time, the results also suggest that not all virtual sessions may have the same positive impact on decision-making processes as illuminated in the earlier study mentioned above. Some ingredients may be vital for facility management purposes. The connection of virtual worlds with organizational data, which was not applied in this current case but was applied in our earlier studies, may be vital for the efficacy of interactive facility management, design, and planning. Moreover, this approach may also have positively influenced the satisfaction of participants with both the virtual session and the design as such, and their possibilities to improve their understanding of the design.
It is expected that a stronger connection between visualization and organizational data can help organizations to iterate between fact and value and, by doing so, create better discussions and better facilities in the real world. Future studies should reveal whether the supposed benefits of virtual worlds for facilities actually have a direct causal relationship with business data.
Open Access This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
Table 1
Observations at the virtual session
How to hide a clique?
In the well known planted clique problem, a clique (or alternatively, an independent set) of size $k$ is planted at random in an Erdos-Renyi random $G(n, p)$ graph, and the goal is to design an algorithm that finds the maximum clique (or independent set) in the resulting graph. We introduce a variation on this problem, where instead of planting the clique at random, the clique is planted by an adversary who attempts to make it difficult to find the maximum clique in the resulting graph. We show that for the standard setting of the parameters of the problem, namely, a clique of size $k = \sqrt{n}$ planted in a random $G(n, \frac{1}{2})$ graph, the known polynomial time algorithms can be extended (in a non-trivial way) to work also in the adversarial setting. In contrast, we show that for other natural settings of the parameters, such as planting an independent set of size $k=\frac{n}{2}$ in a $G(n, p)$ graph with $p = n^{-\frac{1}{2}}$, there is no polynomial time algorithm that finds an independent set of size $k$, unless NP has randomized polynomial time algorithms.
Introduction
The planted clique problem, also referred to as hidden clique, is a problem of central importance in the design of algorithms. We introduce a variation of this problem in which the clique is planted by an adversary instead of at random. Our main results are that in certain regimes of the parameters of the problem, the known polynomial time algorithms can be extended to work also in the adversarial setting, whereas in other regimes, the adversarial planting version becomes NP-hard. We find the results interesting for three reasons. One is that they concern an extensively studied problem (planted clique), but from a new direction, and the results lead to a better understanding of which aspects of the planted clique problem are made use of by the known algorithms. Another is that extending the known algorithms (based on semidefinite programming) to the adversarial planted setting involves some new techniques regarding how semidefinite programming can be used and analysed. Finally, the NP-hardness results are interesting as they are proven in a semi-random model in which most of the input instance is random, and the adversary controls only a relatively small aspect of the input instance. One may hope that this brings us closer to proving NP-hardness results for purely random models, a task whose achievement would be a breakthrough in complexity theory.
The random planted clique model
Our starting point is the Erdos-Renyi $G(n, p)$ random graph model, which generates graphs on $n$ vertices in which every two vertices are connected by an edge independently with probability $p$.* We start our discussion with the special case in which $p = \frac{1}{2}$; other values of $p$ will be considered later. Given a graph $G$, let $\omega(G)$ denote the size of the maximum clique in $G$, and let $\alpha(G)$ denote the size of the maximum independent set. Given a distribution $D$ over graphs, we use the notation $G \sim D$ to denote a graph sampled at random according to $D$. The (edge) complement of a graph $G \sim G(n, \frac{1}{2})$ is by itself a graph sampled from $G(n, \frac{1}{2})$, and the complement of a clique is an independent set; hence the discussion concerning cliques in $G(n, \frac{1}{2})$ extends without change to independent sets (and vice versa). It is well known (proved by computing the expectation and variance of the number of cliques of the appropriate size) that for $G \sim G(n, \frac{1}{2})$, w.h.p. $\omega(G) \simeq 2\log n$ (the logarithm is in base 2). However, there is no known polynomial time algorithm that can find cliques of size $2\log n$ in such graphs. A polynomial time greedy algorithm can find a clique of size $(1 + o(1))\log n$. The existence of $\rho > 1$ for which polynomial time algorithms can find cliques of size $\rho \log n$ is a longstanding open problem.

* Part of the work was done while the author was a visiting student in the Department of Computer Science and Applied Mathematics, Weizmann Institute of Science, and a full-time undergraduate student in the Faculty of Computer Science, Higher School of Economics, Moscow, Russia.
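For concreteness, the first-moment side of the expectation computation mentioned above (a standard argument, sketched in our notation): the expected number of cliques of size $k$ in $G \sim G(n, \frac{1}{2})$ is
\[
E[N_k] \;=\; \binom{n}{k}\, 2^{-\binom{k}{2}} \;\le\; \Big(n \cdot 2^{-(k-1)/2}\Big)^{k},
\]
which drops below 1 once $k > 2\log n + 1$; for $k$ slightly below $2\log n$, a variance (second moment) computation shows that such cliques exist w.h.p.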
In the classical planted clique problem, one starts with a graph $G' \sim G(n, \frac{1}{2})$ and a parameter $k$. In $G'$ one chooses at random a set $K$ of $k$ vertices, and makes this set into a clique by inserting all missing edges between pairs of vertices within $K$. We refer to $K$ as the planted clique, and say that the resulting graph $G$ is distributed according to $G(n, \frac{1}{2}, k)$. Given $G \sim G(n, \frac{1}{2}, k)$, the algorithmic goal can be one of the following three: find $K$, find a clique of maximum size, or find any clique of size at least $k$. It is not difficult to show that when $k$ is sufficiently large (say, $k > 3\log n$), then with high probability $K$ is the unique maximum size clique in $G \sim G(n, \frac{1}{2}, k)$, and hence all three goals coincide. Hence in the planted clique problem, the goal is simply to design polynomial time algorithms that (with high probability over the choice of $G \sim G(n, \frac{1}{2}, k)$) find the planted clique $K$. The question is how large $k$ should be (as a function of $n$) so as to make this task feasible.
For some sufficiently large constant $c > 0$ (throughout, we use $c$ to denote a sufficiently large constant), if $k > c\sqrt{n \log n}$, then with high probability the vertices of $K$ are simply the $k$ vertices of highest degree in $G$ (see [Kuc95]), and hence $K$ can easily be recovered. Alon, Krivelevich and Sudakov [AKS98] managed to shave the $\sqrt{\log n}$ factor, designing a spectral algorithm that recovers $K$ when $k > c\sqrt{n}$. They also showed that $c$ can be made an arbitrarily small constant, at the cost of increasing the running time by a factor of $n^{O(\log(\frac{1}{c}))}$ (this is done by "guessing" a set $K'$ of $O(\log(\frac{1}{c}))$ vertices of $K$, and finding the maximum clique in the subgraph induced on their common neighbors). Subsequently, additional algorithms were developed that find the planted clique when $k > c\sqrt{n}$. They include algorithms based on the Lovasz theta function, which is a form of semidefinite programming [FK00], algorithms based on a "reverse-greedy" principle [FR10, DGGP14], and message passing algorithms [DM15]. There have been many attempts to find polynomial time algorithms that succeed when $k = o(\sqrt{n})$, but so far all of them have failed (see for example [Jer92, FK03, MPW15]). It is a major open problem whether any such polynomial time algorithm exists. Planted clique with $p \neq \frac{1}{2}$ was not studied as extensively, but it is quite well understood how results from the $G(n, \frac{1}{2}, k)$ model transfer to the $G(n, p, k)$ model. For $p$ much smaller than $\frac{1}{2}$, say $p = n^{\delta-1}$ for some $0 < \delta < 1$ (hence average degree $n^{\delta}$), the problem changes completely. Even without planting, with high probability over the choice of $G \sim G(n, p)$ (with $p = n^{\delta-1}$) we have that $\omega(G) = O(\frac{1}{1-\delta})$, and the maximum clique can be found in polynomial time. This also extends to finding maximum cliques in the planted setting, regardless of the value of $k$. (We are not aware of such results being previously published, but they are not difficult. See Section 2.2.) For $p > \frac{1}{2}$, it is more convenient to instead look at the equivalent problem in which $p < \frac{1}{2}$, but with the goal of finding a planted independent set instead of a planted clique. We refer to this model as $\bar{G}(n, p, k)$. For $G \sim G(n, p)$ (with $p = n^{\delta-1}$) we have that with high probability $\alpha(G) = \Theta(n^{1-\delta} \log n)$. For $G \sim \bar{G}(n, p, k)$ the known algorithms extend to finding planted independent sets of size $k = cn^{1-\frac{\delta}{2}}$ in polynomial time. We remark that the approach of [AKS98] for making $c$ arbitrarily small does not work for such sparse graphs.
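As an illustration of the simplest of these algorithms, the following is a minimal sketch (in the spirit of [Kuc95], not its verbatim algorithm) of the degree heuristic for the regime $k > c\sqrt{n \log n}$:

```python
# A minimal sketch of the degree heuristic: when k > c*sqrt(n log n),
# the planted-clique vertices are w.h.p. exactly the k vertices of
# highest degree, so recovering K reduces to sorting by degree.
import networkx as nx

def top_degree_candidates(G: nx.Graph, k: int) -> set:
    """Return the k highest-degree vertices as the clique candidate."""
    by_degree = sorted(G.degree, key=lambda vd: vd[1], reverse=True)
    return {v for v, _ in by_degree[:k]}
```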
The adversarial planted clique model
In this paper we introduce a variation on the planted clique model (and planted independent set model) that we refer to as the adversarial planted clique model. As in the random planted clique model, we start with a graph $G' \sim G(n, p)$ and a parameter $k$. However, now a computationally unbounded adversary may inspect $G'$, select within it a subset $K$ of $k$ vertices of its choice, and make this set into a clique by inserting all missing edges between pairs of vertices within $K$. We refer to this model as $AG(n, p, k)$ (and to the corresponding model for planted independent sets as $A\bar{G}(n, p, k)$). As shorthand notation we shall use $G \sim AG(n, p, k)$ to denote a graph generated by this process. Let us clarify that $AG(n, p, k)$ is not a distribution over graphs, but rather a family of distributions, where each adversarial strategy (a strategy of the adversary being a mapping from $G'$ to a choice of $K$) gives rise to a different distribution.
In the adversarial planted model, it is no longer true that the planted clique is the maximum clique in the resulting graph $G$. Moreover, finding $K$ itself may be information-theoretically impossible, as $K$ might be statistically indistinguishable from some other clique of size $k$ (that differs from $K$ by a small number of vertices). The three goals, that of finding $K$, finding a clique of maximum size, or finding any clique of size at least $k$, are no longer equivalent. Consequently, for our algorithmic results we shall aim at the more demanding goal of finding a clique of maximum size, whereas for our hardness results, we shall want them to hold even for the less demanding goal of finding an arbitrary clique of size $k$.
Our results
Our results cover a wide range of values of $0 < p < 1$, where $p$ may be a function of $n$. For simplicity of the presentation and to convey the main insights of our results, we present here the results for three representative regimes: $p = \frac{1}{2}$, $p = n^{\delta-1}$ for $0 < \delta < 1$, and $p = 1 - n^{\delta-1}$. For the latter regime, it will be more convenient to replace it by the equivalent problem of finding adversarially planted independent sets when $p = n^{\delta-1}$.
Informally, our results show the following phenomenon. We consider only the case that $p \le \frac{1}{2}$, but consider both the planted clique and the planted independent set problems, and hence the results can be translated to $p > \frac{1}{2}$ as well. For clique, we show (Theorem 1.1 and Theorem 1.2) how to extend the algorithmic results known for the random planted clique setting to the adversarial planted clique setting. However, for independent set, we show that this is no longer possible. Specifically, when $p$ is sufficiently small, we prove (Theorem 1.3) that finding an independent set of size $k$ (any independent set, not necessarily the planted one) in the adversarial planted independent set setting is NP-hard. Moreover, the NP-hardness result holds even for large values of $k$ for which finding a random planted independent set is trivial.
Theorem 1.1. For every fixed $\varepsilon > 0$ and for every $k \ge \varepsilon\sqrt{n}$, there is an (explicitly described) algorithm, with running time polynomial in $n$ (where the degree of the polynomial depends on $\varepsilon$), which almost surely finds the maximum clique in a graph $G \sim AG(n, \frac{1}{2}, k)$. The statement holds for every adversarial planting strategy (choice of $k$ vertices as a function of $G' \sim G(n, \frac{1}{2})$), and the probability of success is taken over the choice of $G' \sim G(n, \frac{1}{2})$.

Theorem 1.2. Let $p = n^{\delta-1}$ for $0 < \delta < 1$. Then for every $k$, there is an (explicitly described) algorithm running in time $n^{O(\frac{1}{1-\delta})}$ which almost surely finds the maximum clique in a graph $G \sim AG(n, p, k)$. The statement holds for every adversarial planting strategy, and the probability of success is taken over the choice of $G' \sim G(n, p)$.
Theorem 1.3. For $p = n^{\delta-1}$ with $0 < \delta < 1$, $0 < \gamma < 1$, and $cn^{1-\delta}\log n \le k \le \frac{2}{3}n$ (where $c$ is a sufficiently large constant, and the constant $\frac{2}{3}$ was chosen for concreteness; any other constant smaller than 1 will work as well), the following holds. There is no polynomial time algorithm that has probability at least $\gamma$ of finding an independent set of size $k$ in $G \sim A\bar{G}(n, p, k)$, unless NP has randomized polynomial time algorithms (NP=RP). (The algorithm is required to succeed against every adversarial planting strategy, and the probability of success is taken over the choice of $G' \sim G(n, p)$.)
Related work
Some related work was already mentioned in Section 1.1. Our algorithm for Theorem 1.1 is based on an adaptation of the algorithm of [FK00] for the random planted clique setting. In turn, that algorithm is based on the theta function of Lovasz [Lov79].
A work that is closely related to ours, and that served as an inspiration both for the model that we study and for the techniques used in the proof of the NP-hardness result (Theorem 1.3), is the work of David and Feige [DF16] on adversarially planted 3-colorings. That work uncovers a phenomenon similar to the one displayed in the current work. Specifically, for the problem of 3-coloring (rather than clique or independent set), it shows that for certain values of $p$, algorithms that work in the random planted setting can be extended to the adversarial planted setting, and for other values of $p$, finding a 3-coloring in the adversarial planted setting becomes NP-hard. However, there are large gaps left open in the picture that emerges from the work of [DF16]. For large ranges of the values of $p$, specifically $n^{-1/2} < p < n^{-1/3}$ and $p < n^{-2/3}$, there are neither algorithmic results nor hardness results in the work of [DF16]. Unfortunately, the most interesting values of $p$ for the 3-coloring problem, which are $p \le \frac{c\log n}{n}$, lie within these gaps, and hence the results of [DF16] do not apply to them. Our work addresses a different problem (planted clique instead of planted 3-coloring), and for our problem, our analysis leaves almost no such gaps. We are able to determine for which values of $p$ the problem is polynomial time solvable, and for which values it is NP-hard. See Section 3 for more details. Our model is an example of a semi-random model, in which part of the input is determined at random and part is determined by an adversary. There are many other semi-random models, both for the clique problem and for other problems. Describing all these models is beyond the scope of this paper; the interested reader is referred to [Fei20] and references therein for additional information.
Overview of the proofs
In this section we provide an overview of the proofs of our three main theorems. Further details, as well as extensions of the results, appear in the appendix.
The term almost surely denotes a probability that tends to 1 as $n$ grows. The term extremely high probability denotes a probability of the form $1 - e^{-n^r}$ for some $r > 0$. By $\exp(x)$ for some expression $x$ we mean $e^x$.
Finding cliques using the theta function
In this section we provide an overview of the proof of Theorem 1.1. Our algorithm is an adaptation of the algorithm of [FK00] that finds the maximum clique in the random planted model. We shall first review that algorithm, then describe why it does not apply in our setting in which an adversary plants the clique, and finally explain how we modify that algorithm and its analysis so as to apply it in the adversarial planted setting.
The key ingredient in the algorithm of [FK00] is the theta function of Lovasz, denoted by $\vartheta$. Given a graph $G$, $\vartheta(G)$ can be computed in polynomial time (up to arbitrary precision, using semidefinite programming (SDP)), and satisfies $\vartheta(G) \ge \alpha(G)$. As we are interested here in cliques and not in independent sets, we shall consider $\bar{G}$, the edge complement of $G$, and then $\vartheta(\bar{G}) \ge \omega(G)$. The theta function has several equivalent definitions, and the one that we shall use here (referred to as $\vartheta_4$ in [Lov79]) is the following.
Given a graph $G = G(V, E)$, a collection of unit vectors $s_i \in \mathbb{R}^n$ (one vector for every vertex $i \in V$) is an orthonormal representation of $G$ if $s_i$ and $s_j$ are orthogonal ($s_i \cdot s_j = 0$) whenever $(i, j) \in E$. The theta function is the maximum value of the following expression, where maximization is over all orthonormal representations $\{s_i\}$ of $G$ and over all unit vectors $h$ ($h$ is referred to as the handle):
\[
\vartheta(G) \;=\; \max_{h, \{s_i\}} \sum_{i \in V} (h \cdot s_i)^2 \qquad (1)
\]
The optimal orthonormal representation and the associated handle that maximize the above formulation for $\vartheta$ can be found (up to arbitrary precision) in polynomial time by formulating the problem as an SDP (details omitted). Observe that for any independent set $S$, the following is a feasible solution for the SDP: choose $s_i = h$ for all $i \in S$, and choose all remaining vectors $s_j$ for $j \notin S$ to be orthogonal to $h$ and to each other. Consequently, $\vartheta(G) \ge \alpha(G)$, as claimed.
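As an illustration, the theta function can be computed with off-the-shelf SDP solvers. The sketch below uses a standard equivalent SDP formulation of $\vartheta$ (maximizing $\langle J, X \rangle$ over PSD matrices with unit trace that vanish on edges), rather than formulation (1); it is illustrative code, not the solver used in [FK00].

```python
# A minimal sketch of computing the theta function with an off-the-shelf
# SDP solver. It uses a standard equivalent formulation of theta
# (maximize <J, X> over PSD X with trace 1 and X_ij = 0 on edges of G),
# not the orthonormal-representation form (1) above.
import cvxpy as cp
import networkx as nx

def lovasz_theta(G: nx.Graph) -> float:
    nodes = list(G.nodes)
    idx = {v: i for i, v in enumerate(nodes)}
    n = len(nodes)
    X = cp.Variable((n, n), symmetric=True)
    constraints = [X >> 0, cp.trace(X) == 1]            # PSD, unit trace
    constraints += [X[idx[u], idx[v]] == 0 for u, v in G.edges]
    prob = cp.Problem(cp.Maximize(cp.sum(X)), constraints)
    prob.solve()
    return float(prob.value)  # theta(G), an upper bound on alpha(G)

# Sanity check: theta of the 5-cycle is sqrt(5), roughly 2.236.
print(lovasz_theta(nx.cycle_graph(5)))
```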
The main content of the algorithm of [FK00] is summarized in the following theorem. We phrase it in a way that addresses cliques rather than independent sets, implicitly using $\alpha(\bar{G}) = \omega(G)$. We also remind the reader that in the random planted model, the planted clique $K$ is almost surely the unique maximum clique.
Theorem 2.1 (Results of [FK00]). Consider $G \sim G(n, \frac{1}{2}, k)$, a graph selected in the random planted clique model, with $k \ge c\sqrt{n}$ for some sufficiently large constant $c$. Then with extremely high probability (over the choice of $G$) it holds that $\vartheta(\bar{G}) = \omega(G)$. Moreover, for every vertex $i$ that belongs to the planted clique $K$, the corresponding vector $s_i$ has inner product larger than $1 - \frac{1}{n}$ with the handle $h$, and for every other vertex, the corresponding inner product is at most $\frac{1}{n}$.
Given Theorem 2.1, the following algorithm finds the planted clique when $G \sim G(n, \frac{1}{2}, k)$ and $k \ge c\sqrt{n}$ for some sufficiently large constant $c$: solve the optimization problem (1) (on $\bar{G}$) to sufficiently high precision, and output all vertices whose corresponding inner product with $h$ is at least $\frac{1}{2}$. The algorithm above does not apply to $G \sim AG(n, \frac{1}{2}, k)$, a graph selected in the adversarial planted clique model, for the simple reason that Theorem 2.1 is incorrect in that model. An example (referred to below as Example 1) shows that the adversary can plant the clique so that the maximum clique of the resulting graph is larger than $k$; combined with known results on the theta function of random graphs (see [Juh82]), one would consequently expect the value of $\vartheta(\bar{G})$ to be roughly $k + \sqrt{\log n}$.
Summarizing, it is not difficult to come up with strategies for planting cliques of size $k$ that result in the maximum clique having size strictly larger than $k$, and the value of $\vartheta(\bar{G})$ being even larger. Consequently, the solution of the optimization problem (1) by itself is not expected to correspond to the maximum clique in $G$.
We now explain how we overcome the above difficulty. A relatively simple, yet important, observation is the following.

Proposition 2.1. Let $G \sim AG(n, p, k)$ with $p = \frac{1}{2}$ and $k > \sqrt{n}$, and let $K'$ be the maximum clique in $G$ (which may differ from the planted clique $K$). Then with extremely high probability over the choice of $G' \sim G(n, \frac{1}{2})$, for every possible choice of $k$ vertices by the adversary, $K'$ contains at least $k - O(\log n)$ vertices from $K$, and at most $O(\log n)$ additional vertices.
Proof. Standard probabilistic arguments show that with extremely high probability, the largest clique in $G'$ (prior to planting a clique of size $k$) has size at most $\frac{k}{2}$. When this holds, $K'$ contains at least $\frac{k}{2}$ vertices from $K$. Each of the remaining vertices of $K'$ needs to be connected to all vertices in $K' \cap K$. Consequently, with extremely high probability, $K'$ contains at most $2\log n$ vertices not from $K$. This is because a $G' \sim G(n, \frac{1}{2})$ graph, with extremely high probability, does not contain two sets of vertices $A$ and $B$, with $|A| = 2\log n$ and $|B| = \Omega(\sqrt{n})$, such that all pairs of vertices in $A \times B$ induce edges in $G'$.
As $|K'| \ge k$, we conclude that all but $O(\log n)$ vertices of $K$ must be members of $K'$.
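The probabilistic claim about the sets $A$ and $B$ follows from a union bound, sketched here for concreteness (our notation, with $a = |A|$ and $b = |B|$):
\[
\Pr\big[\exists\, A, B \text{ with all of } A \times B \text{ present}\big]
\;\le\; \binom{n}{a}\binom{n}{b}\, 2^{-ab}
\;\le\; 2^{\,a\log n + b\log n - ab}
\;=\; 2^{\,2\log^2 n - b\log n}
\;=\; 2^{-\Omega(\sqrt{n}\log n)},
\]
using $a = 2\log n$ and $b = \Omega(\sqrt{n})$ in the last two steps.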
A key theorem that we prove is the following.

Theorem 2.2. Let $G \sim AG(n, \frac{1}{2}, k)$ with $k \ge c\sqrt{n}$ for a sufficiently large constant $c$. Then with extremely high probability over the choice of $G' \sim G(n, \frac{1}{2})$, for every possible choice of $k$ vertices by the adversary, $k \le \vartheta(\bar{G}) \le k + O(\log n)$.
We now explain how Theorem 2.2 is proved. The bound $\vartheta(\bar{G}) \ge k$ was already explained above. Hence it remains to show that $\vartheta(\bar{G}) \le k + O(\log n)$. In general, to bound $\vartheta(G)$ from above for a graph $G(V, E)$, one considers the following dual formulation of $\vartheta$ as a minimization problem:
\[
\vartheta(G) \;=\; \min_{M} \lambda_1(M)
\]
Here $M$ ranges over all $n$ by $n$ symmetric matrices in which $M_{ij} = 1$ whenever $(i, j) \notin E$, and $\lambda_1(M)$ denotes the largest eigenvalue of $M$. (Observe that if $G$ has an independent set $S$ of size $k$, then $M$ contains a $k$ by $k$ block of 1 entries. A Rayleigh quotient argument then implies that $\lambda_1(M) \ge k$, thus verifying the inequality $\vartheta(G) \ge \alpha(G)$.) To prove Theorem 2.2 we exhibit a matrix $M$ as above (for the graph $\bar{G}$) for which we prove that $\lambda_1(M) \le k + O(\log n)$.

We first review how a matrix $M$ was chosen by [FK00] in the proof of Theorem 2.1. First, recall that we consider $\bar{G}$, and let $E$ be the set of edges of $\bar{G}$ (non-edges of $G$). We need to associate values with the entries $M_{ij}$ for $(i, j) \in E$ (as all other entries are 1). The matrix block corresponding to the planted clique $K$ (a planted independent set in $\bar{G}$) is all 1 (by necessity). For every $(i, j) \in E$ where both vertices are not in $K$, one sets $M_{ij} = -1$. For every other pair $(i, j) \in E$ (say, $i \notin K$ and $j \in K$) one sets $M_{ij} = -\frac{k - d_{i,K}}{d_{i,K}}$, where $d_{i,K}$ is the number of neighbors that vertex $i$ has in the set $K$. In order to show that $\lambda_1(M) = k$, one first observes that the vector $x_K$ (with value 1 at entries that correspond to vertices of $K$, and value 0 elsewhere) is an eigenvector of $M$ with eigenvalue $k$. Then one proves that $\lambda_2(M)$, the second largest eigenvalue of $M$, has value smaller than $k$. This is done by decomposing $M$ into a sum of several matrices, bounding the second largest eigenvalue of one of these matrices, and the largest eigenvalue of the other matrices. By Weyl's inequality, the sum of these eigenvalues is an upper bound on $\lambda_2(M)$. This upper bound is not tight, but it does show that $\lambda_2(M) < k$. It follows that the eigenvalue $k$ associated with $x_K$ is indeed $\lambda_1(M)$. Further details are omitted.
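Collecting the entries just described, the matrix $M$ of [FK00] can be summarized as follows (our rendering, using symmetry, with $i$ denoting the vertex outside $K$):
\[
M_{ij} \;=\;
\begin{cases}
1 & (i,j) \notin E \ \ (\text{in particular, for } i, j \in K),\\[2pt]
-1 & (i,j) \in E,\ i \notin K,\ j \notin K,\\[2pt]
-\dfrac{k - d_{i,K}}{d_{i,K}} & (i,j) \in E,\ i \notin K,\ j \in K \ (\text{and symmetrically}),
\end{cases}
\]
so that for every vertex $i \notin K$, the row entries indexed by $K$ sum to $(k - d_{i,K}) - d_{i,K}\cdot\frac{k - d_{i,K}}{d_{i,K}} = 0$; this is exactly what makes $x_K$ an eigenvector with eigenvalue $k$.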
We now explain how to choose a matrix $M$ so as to prove the bound $\vartheta(\bar{G}) \le k + O(\log n)$ of Theorem 2.2. Recall (see Example 1) that we might be in a situation in which $\vartheta(\bar{G}) > \alpha(\bar{G}) > k$ (with all inequalities being strict). In this case, let $K'$ denote the largest independent set in $\bar{G}$, and note that $K'$ is larger than $K$. In $M$, the matrix block corresponding to $K'$ is all 1. One may attempt to complete the construction of $M$ as described above for the random planting case, replacing $K$ by $K'$ everywhere in that construction. If one does so, the vector $x_{K'}$ (with value 1 at entries that correspond to vertices of $K'$, and value 0 elsewhere) is an eigenvector of $M$ with eigenvalue $\alpha(\bar{G}) > k$. However, $M$ would necessarily have another eigenvector with a larger eigenvalue, because $\vartheta(\bar{G}) > \alpha(\bar{G})$. Hence we are still left with the problem of bounding $\lambda_1(M)$, rather than bounding $\lambda_2(M)$. Having failed to identify an eigenvector for $\lambda_1(M)$, we may still obtain an upper bound on $\lambda_1(M)$ by using approaches based on Weyl's inequality (or other approaches). However, these upper bounds are not tight, and it seems difficult to limit the error that they introduce to be as small as $O(\log n)$, which is needed for proving the inequality $\vartheta(\bar{G}) \le k + O(\log n)$.

For the above reason, we choose $M$ differently. For some constant $\frac{1}{2} < \rho < 1$, we extend the clique $K$ to a possibly larger clique $Q$, by adding to it every vertex that has at least $\rho k$ neighbors in $K$. (In Example 1, the corresponding clique $Q$ will include all vertices of $K \cup T$. In contrast, if $K$ is planted at random and not adversarially, then we will simply have $Q = K$.) Importantly, we prove that if $k \ge c\sqrt{n}$, then with high probability $|Q| < k + O(\log n)$ (for every possible choice of planting a clique of size $k$ by the adversary). For the resulting graph $G_Q$, we choose the corresponding matrix $M$ in the same way as it was chosen for the random planting case. Now we do manage to show that the eigenvector $x_Q$ (with eigenvalue $|Q|$) associated with this $M$ indeed has the largest eigenvalue. This part is highly technical, and significantly more difficult than the corresponding proof for the random planting case. The reason for the added level of difficulty is that, unlike the random planting case in which we are dealing with only one random graph, here the adversary can plant the clique in any one of $\binom{n}{k}$ locations, and our analysis needs to hold simultaneously for all $\binom{n}{k}$ graphs that may result from such plantings. Further details can be found in Appendix A.
Having established that $\vartheta(\bar{G}_Q) = |Q| \le k + O(\log n)$, we use the monotonicity of the theta function to conclude that $\vartheta(\bar{G}) \le k + O(\log n)$. This concludes our overview of the proof of Theorem 2.2.
Given Theorem 2.2, let us now explain our algorithm for finding a maximum clique in $G \sim AG(n, \frac{1}{2}, k)$. Given such a graph $G$, the first step in our algorithm is to solve the optimization problem (1) on the complement graph $\bar{G}$. By Theorem 2.2, we will have $\vartheta(\bar{G}) \le k + c\log n$ for some constant $c > 0$. Let $\{s_i\}$ denote the orthonormal representation found by our solution, and let $h$ be the corresponding handle.
The second step of our algorithm is to extract from $G$ a set of vertices that we shall refer to as $H$, which contains all those vertices $i$ for which $(h \cdot s_i)^2 \ge \frac{3}{4}$.

Lemma 2.1. For $H$ as defined above, with extremely high probability, at least $k - O(\log n)$ vertices of $K$ are in $H$, and at most $O(\log n)$ vertices not from $K$ are in $H$.
Proof. Let $T$ denote the set of those vertices in $K$ for which $(h \cdot s_i)^2 < \frac{3}{4}$. Remove $T$ from $G$, thus obtaining the graph $G_T$. This graph can be thought of as a subgraph with $n - |T|$ vertices of the random graph $G' \sim G(n, \frac{1}{2})$, in which an adversary planted a clique of size $k - |T|$. Restricting the solution of (1) to $G_T$ shows that $\vartheta(\bar{G}_T) \ge \vartheta(\bar{G}) - \frac{3}{4}|T| \ge k - \frac{3}{4}|T|$, whereas the clique planted in $G_T$ has size only $k - |T|$. Hence if $|T|$ were larger than $O(\log n)$, this gap between the size of the planted clique and the value of the theta function would contradict Theorem 2.2 for the graph $G_T$. (Technical remark: this last argument uses the fact that Theorem 2.2 holds with extremely high probability, as we take a union bound over all choices of $T$.)

Having established that $T$ is small, let $R$ be the set of vertices not in $K$ for which $(h \cdot s_i)^2 \ge \frac{3}{4}$. We claim that every such vertex $i \in R$ is a neighbor of every vertex $j \in K \setminus T$. This is because in the orthonormal representation (for $\bar{G}$), if $i$ and $j$ are not neighbors in $G$ we have that $s_i \cdot s_j = 0$, and then the fact that $s_i$, $s_j$ and $h$ are unit vectors implies that $(h \cdot s_i)^2 + (h \cdot s_j)^2 \le 1$, which is impossible when both terms are at least $\frac{3}{4}$.

Having this claim, and using the fact that a $G(n, \frac{1}{2})$ graph, with extremely high probability, does not contain two sets of vertices $A$ and $B$, with $|A| = 2\log n$ and $|B| = \Omega(\sqrt{n})$, such that all pairs of vertices in $A \times B$ induce edges, we conclude that $|R| = O(\log n)$.

The third step of our algorithm constructs a set $F$ that contains all those vertices that have at least $\frac{3k}{4}$ neighbors in $H$.
Lemma 2.2. With extremely high probability, the set $F$ described above contains the maximum clique in $G$, and at most $O(\log n)$ additional vertices.
Proof. We may assume that $H$ satisfies the properties of Lemma 2.1. Proposition 2.1 then implies that with extremely high probability, every vertex of the maximum clique in $G$ has at least $\frac{3k}{4}$ neighbors in $H$, and hence is contained in $F$. A probabilistic argument (similar to the end of the proof of Lemma 2.1) establishes that $F$ has at most $O(\log n)$ vertices not from $K$.
As $K$ itself has at most $O(\log n)$ vertices not from the maximum clique (by Proposition 2.1), the total number of vertices in $F$ that are not members of the maximum clique is at most $O(\log n)$.

Finally, in the last step of our algorithm we find a maximum clique in $F$, and this is a maximum clique in $G$. This last step can be performed in polynomial time by a standard algorithm (used, for example, to show that vertex cover is fixed parameter tractable). For every non-edge in the subgraph induced on $F$, at least one of its end-vertices needs to be removed. Try both possibilities in parallel, and recurse on each subgraph that remains. The recursion terminates when the graph is a clique. The shortest branch of the recursion gives the maximum clique. As only $O(\log n)$ vertices need to be removed in order to obtain a clique, the depth of the recursion is at most $O(\log n)$, and consequently the running time (which is exponential in the depth) is polynomial in $n$.
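A minimal sketch of this last step (hypothetical code, with the deletion budget as a parameter):

```python
# A minimal sketch of the branching step described above: branch on a
# non-edge, deleting one endpoint per branch, until the remaining
# vertices form a clique. With a deletion budget of O(log n), the
# recursion depth, and hence the 2^depth running time, stays polynomial.
import itertools
import networkx as nx

def max_clique_by_branching(G: nx.Graph, budget: int):
    """Largest clique of G obtainable by deleting at most `budget` vertices."""
    non_edge = next(((u, v) for u, v in itertools.combinations(G.nodes, 2)
                     if not G.has_edge(u, v)), None)
    if non_edge is None:      # G is already a clique
        return set(G.nodes)
    if budget == 0:
        return None           # would exceed the allowed number of deletions
    best = None
    for v in non_edge:        # at least one endpoint of the non-edge must go
        H = G.copy()
        H.remove_node(v)
        clique = max_clique_by_branching(H, budget - 1)
        if clique is not None and (best is None or len(clique) > len(best)):
            best = clique
    return best
```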
This completes our overview of the algorithm for finding a maximum clique in $G \sim AG(n, \frac{1}{2}, k)$ when $k \ge c\sqrt{n}$ for a sufficiently large constant $c > 0$. To complete the proof of Theorem 1.1 we need to also address the case that $k > \varepsilon\sqrt{n}$ for an arbitrarily small constant $\varepsilon$. This we do (as in [AKS98]) by guessing $t \simeq 2\log\frac{c}{\varepsilon}$ vertices from $K$ (there are $n^t$ possibilities to try, and we try all of them), and considering the subgraph of $G$ induced on their common neighbors. This subgraph corresponds to an instance of the adversarial planted clique problem in which the planted clique is sufficiently large relative to the number of remaining vertices for the algorithm above to apply. The many details that were omitted from the above overview of the proof of Theorem 1.1 can be found in the appendix. Specifically, in Appendix A we present the proof of Theorem 2.2, generalized to values of $p$ other than $\frac{1}{2}$ and to $k \ge c\sqrt{np}$. (A technical lemma that is needed for this proof appears in Appendix D.) In Appendix B we present the proof of Theorem 1.1, first addressing the case that $c$ is sufficiently large, and then extending the results to the case that $c$ can be arbitrarily small.
Finding cliques by enumeration
In this section we prove Theorem 1.2. Let $p = n^{\delta-1}$ for $0 < \delta < 1$, and consider first $G' \sim G(n, p)$ (hence $G'$ has average degree roughly $n^{\delta}$). For every size $t \ge 1$, let $N_t$ denote the number of cliques of size $t$ in $G'$. The expectation (over the choice of $G' \sim G(n, p)$) satisfies:
\[
E[N_t] \;=\; \binom{n}{t}\, p^{\binom{t}{2}} \;\le\; n^t \cdot n^{-(1-\delta)\frac{t(t-1)}{2}} \;=\; n^{\,t - \frac{(1-\delta)t(t-1)}{2}}
\]
The exponent is maximized when $t = \frac{3-\delta}{2(1-\delta)}$. For the maximizing (not necessarily integer) $t$, the exponent equals $\frac{(3-\delta)^2}{8(1-\delta)}$. We denote this last expression by $e_{\delta}$, and note that $e_{\delta} = O(\frac{1}{1-\delta})$. The expected number of cliques of all sizes is then $O(n^{e_{\delta}})$, as the sum is dominated by the terms near the maximizing $t$ (this bound holds for sufficiently large $n$). By Markov's inequality, with probability at least $1 - \frac{1}{n}$, the actual number of cliques in $G'$ is at most $n^{e_{\delta}+1}$. (Stronger concentration results can be used here, but are not needed for the proof of Theorem 1.2.) Now, for arbitrary $1 \le k \le n$, let the adversary plant a clique $K$ of size $k$ in $G'$, thus creating the graph $G \sim AG(n, p, k)$. As every subset of $K$ spans a clique, the total number of cliques in $G$ is at least $2^k$, which might be exponential in $n$ (if $k$ is large). However, the number of maximal cliques in $G$ (a clique is maximal if it is not contained in any larger clique) is much smaller. Given a maximal clique $C$ in $G$, consider $C'$, the subgraph of $C$ not containing any vertex from $K$. $C'$ is a clique in $G'$ (which is nonempty, except for the one special case of $C = K$). $C'$ uniquely determines $C$, as the remaining vertices in $C$ are precisely the set of common neighbors of $C'$ in $K$ (this is because the clique $C$ is maximal). Consequently, the number of maximal cliques in $G$ is not larger than the number of cliques in $G'$.
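For completeness, the maximization of the exponent $f(t) = t - \frac{(1-\delta)t(t-1)}{2}$ can be verified by elementary calculus:
\[
f'(t) \;=\; 1 - \frac{(1-\delta)(2t-1)}{2} \;=\; 0
\;\Longrightarrow\;
t^{*} \;=\; \frac{3-\delta}{2(1-\delta)},
\qquad
f(t^{*}) \;=\; \frac{(3-\delta)^2}{8(1-\delta)} \;=\; e_{\delta}.
\]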
As all maximal cliques in a graph can be enumerated in time linear in their number times some polynomial in $n$ (see e.g. [MU04] and references therein), one can list all maximal cliques in $G$ in time $n^{e_{\delta}+O(1)}$ (this holds with probability at least $1 - \frac{1}{n}$ over the choice of $G'$, regardless of where the adversary plants the clique $K$), and output the largest one.
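A minimal sketch of the resulting algorithm, using networkx's Bron-Kerbosch-based maximal-clique enumerator as a stand-in for the enumeration procedure cited from [MU04]:

```python
# Enumerate all maximal cliques and output the largest one. On
# G ~ AG(n, p, k) with p = n^(delta - 1), w.h.p. there are at most
# n^(e_delta + 1) maximal cliques, so this runs in n^(O(1/(1-delta)))
# time even though the total number of cliques may be exponential.
import networkx as nx

def largest_clique_by_enumeration(G: nx.Graph) -> set:
    return max((set(c) for c in nx.find_cliques(G)), key=len)
```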
This completes the proof of Theorem 1.2.
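As an illustration of the enumeration approach (not the authors' code), the following sketch uses the networkx library, whose find_cliques routine enumerates maximal cliques with polynomial delay, in the spirit of the enumeration algorithms referenced above.

import networkx as nx

def largest_clique_by_enumeration(G: nx.Graph):
    # Iterate over all maximal cliques and keep the largest one. As argued
    # above, the number of maximal cliques in the planted instance is bounded
    # by the number of cliques of the underlying random graph G'.
    return max(nx.find_cliques(G), key=len)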
2.3 Proving NP-hardness results
In this section we provide an overview of the proof of Theorem 1.3. Our proof is an adaptation to our setting of a proof technique developed in [DF16].
Recall that we are considering a graph G ∼ AḠ(n, p, k) (adversarial planted independent set) with p = n^{δ−1} and 0 < δ < 1. Let us first explain why the algorithm described in Section 2.1 fails when k = cn^{1−δ/2} (whereas if the independent set is planted at random, algorithms based on the theta function are known to succeed). The problem is that the bound in Theorem 2.2 is no longer true, and instead one has the much weaker bound ϑ(G) ≤ k + n^{1−δ} log n. Following the steps of the algorithm of Section 2.1, in the final step we would need to remove a minimum vertex cover from F. However, now the upper bound on the size of this vertex cover is O(n^{1−δ} log n) rather than O(log n). Consequently, we do not know of a polynomial time algorithm that will do so. Note also that we do not a priori know that no such algorithm exists: after all, F is not an arbitrary worst-case instance for vertex cover, but rather an instance derived from a random graph. However, our NP-hardness result shows that this obstacle is indeed insurmountable, unless NP has randomized polynomial time algorithms. We remark that using an approximation algorithm for vertex cover in the last step of the algorithm of Section 2.1 does allow one to find in G an independent set of size k − O(n^{1−δ} log n) = (1 − o(1))k; the NP-hardness result applies only because we insist on finding an independent set of size at least k.
Let us proceed now with an overview of our NP-hardness proof. We do so for the case that k = n/3 (for which we could easily find the maximum independent set if the planted independent set were random). Assume for the sake of contradiction that ALG is a polynomial time algorithm that, with high probability over the choice of G′ ∼ G(n, p) and for every planted independent set of size k = n/3, finds in the resulting graph G an independent set of size k.
We now introduce a class H of graphs that, in anticipation of the proofs that will follow, is required to have the following three properties. (Two of the properties are stated below in a qualitative manner, but they have precise quantitative requirements in the proofs that follow.)

1. Solving maximum independent set on graphs from this class is NP-hard.
2. Graphs in this class are very sparse.
3. The number of vertices in each graph is small.

Given the above requirements, we choose 0 < ε < min[δ/2, 1 − δ], and let H be the class of balanced graphs on n^ε vertices and of average degree 2 + δ. (A graph H is balanced if no subgraph of H has average degree larger than the average degree of H.) Given a graph H ∈ H and a parameter k′, it is NP-hard to determine whether H has an independent set of size at least k′ or not (see Theorem C.1). We will reach a contradiction to the existence of ALG by showing how ALG could be used in order to find in H an independent set of size k′, if one exists. For this, we use the following randomized algorithm ALGRAND.
1. Generate a random graph G′ ∼ G(n, p).
2. Plant in G′ a random copy of H (that is, pick |H| random vertices in G′ and replace the subgraph induced on them by H). We refer to the resulting distribution as G_H(n, p), and to the graph sampled from this distribution as G_H. Observe that the number of vertices in G_H that have a neighbor in H is with high probability not larger than |H|n^δ ≤ n/2.
3. Within the non-neighbors of H, plant at random an independent set of size k − k′. We refer to the resulting distribution as G_H(n, p, k), and to the graph sampled from this distribution as G̃_H. Observe that with extremely high probability, α(G̃_H \ H) = k − k′. Hence we may assume that this indeed holds. If furthermore α(H) ≥ k′, then α(G̃_H) ≥ k.
4. Run ALG on G̃_H. We say that ALGRAND succeeds if ALG outputs an independent set IS of size k. Observe that then at least k′ vertices of H are in IS, and hence ALGRAND finds an independent set of size k′ in H.

If H does not have an independent set of size k′, ALGRAND surely fails to output such an independent set. But if H does have an independent set of size k′, why should ALGRAND succeed? This is because ALG (which is used in ALGRAND) is fooled to think that the graph G̃_H generated by ALGRAND was generated from AḠ(n, p, k), and on such graphs ALG does find independent sets of size k. And why is ALG fooled? This is because the distribution of graphs generated by ALGRAND is statistically close to a distribution that can be created by the adversary in the AḠ(n, p, k) model. Specifically, consider the following distribution that we refer to as A_H G(n, p, k).
1. Generate a random graph G′ ∼ G(n, p).
2. The computationally unbounded adversary finds within G′ all subsets of vertices of size |H| such that the subgraph induced on them is H. (If there is no such subset, fail.) Choose one such copy of H uniformly at random.
3. As H is assumed to have an independent set of size k′, plant an independent set K of size k as follows. k′ of the vertices of K are the vertices of an independent set in the selected copy of H. The remaining k − k′ vertices of K are chosen at random among the vertices of G′ that have no neighbor at all in the copy of H. (Observe that we expect there to be at least roughly n − |H|n^δ ≥ n/2 such vertices, and with extremely high probability the actual number will be almost as large.)

Theorem 2.3. The two distributions, G̃_H ∼ G_H(n, p, k) generated by ALGRAND and G ∼ A_H G(n, p, k) generated by the adversary, are statistically similar to each other.
The proof of Theorem 2.3 appears in Appendix C.4. Here we explain the main ideas of the proof. A minimum requirement for the theorem to hold is that G′ ∼ G(n, p) typically contains at least one copy of H (otherwise A_H G(n, p, k) fails to produce any output). But this by itself does not suffice. Intuitively, the condition we need is that G′ typically contains many copies of H. Then the fact that the G_H(n, p) step of ALGRAND adds another copy of H to G′ does not appear to make much of a difference, because G′ anyway has many copies of H. Hopefully, this will imply that G′ ∼ G(n, p) and G_H ∼ G_H(n, p) come from two distributions that are statistically close. This intuition is basically correct, though another ingredient (a concentration result) is also needed. Specifically, we need the following lemma (stated informally).
Lemma 2.3. For G′ ∼ G(n, p) (with p and H as above), the expected number of copies of H in G′ is very high (at least 2^{n^η} for some η > 0 that depends on δ and ε). Moreover, with high probability, the actual number of copies of H in G′ is very close to its expectation.
The proof of Lemma 2.3 is based on known techniques (first and second moment methods). It uses in an essential way the fact that the graph H is sparse (average degree barely above 2) and does not have many vertices (these properties hold by the definition of the class H). See more details in Appendix C.3. Armed with Lemma 2.3, we then prove the following lemma.
Lemma 2.4. The two distributions G(n, p) and G_H(n, p) are statistically similar to each other.

Lemma 2.4 is proved by considering graphs G′ ∼ G(n, p) that do contain a copy of H (Lemma 2.3 establishes that this is the typical case), and comparing, for each such graph, the probability of it being generated by G_H(n, p) with the probability of it being generated by G(n, p). Conveniently, the ratio between these probabilities is the same as the ratio between the actual number of copies of H in the given graph G′ and the expected number of copies of H in a random G′ ∼ G(n, p). By Lemma 2.3, for most graphs, this ratio is close to 1. For more details, see Appendix C.4.
Theorem 2.3 follows quite easily from Lemma 2.4. Consequently, ALG's performance on the distributions G_H(n, p, k) and A_H G(n, p, k) is similar. By our assumption, ALG finds (with high probability) an independent set of size k in G ∼ A_H G(n, p, k), which now implies that it also does so for G̃_H ∼ G_H(n, p, k). But as argued above, finding an independent set of size k in G̃_H ∼ G_H(n, p, k) implies that ALGRAND finds an independent set of size k′ in H ∈ H, thus solving an NP-hard problem. Hence the assumption that there is a polynomial time algorithm ALG that can find independent sets of size k in G ∼ AḠ(n, p, k) implies that NP has randomized polynomial time algorithms.
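A schematic rendering of one iteration of ALGRAND may help; the sketch below (in Python, using networkx) treats the assumed polynomial time algorithm ALG as a black-box parameter. All function and variable names are illustrative, not taken from the paper.

import random
import networkx as nx

def algrand_once(H, k_prime, n, p, k, alg):
    # Step 1: a random graph.
    G = nx.gnp_random_graph(n, p)
    # Step 2: plant a copy of H on randomly chosen host vertices.
    hosts = random.sample(sorted(G.nodes), H.number_of_nodes())
    to_h = dict(zip(hosts, H.nodes))  # G-vertex -> H-vertex
    for i, u in enumerate(hosts):
        for v in hosts[i + 1:]:
            if G.has_edge(u, v):
                G.remove_edge(u, v)
            if H.has_edge(to_h[u], to_h[v]):
                G.add_edge(u, v)
    # Step 3: plant the remaining k - k' vertices among non-neighbors of H
    # (w.h.p. there are enough such vertices for these parameters).
    banned = set(hosts) | {w for h in hosts for w in G.neighbors(h)}
    planted = random.sample([v for v in G.nodes if v not in banned], k - k_prime)
    for i, u in enumerate(planted):
        for v in planted[i + 1:]:
            if G.has_edge(u, v):
                G.remove_edge(u, v)
    # Step 4: run the black-box ALG; a size-k independent set must contain
    # at least k' vertices of the copy of H, which map back to an
    # independent set of size k' in H.
    found = alg(G)
    if found and len(found) >= k:
        return {to_h[v] for v in found if v in to_h}
    return None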
2.4 Additional results
In the main part of the paper we described only what we view as our main results. The appendix contains all missing proofs, and some additional results and extensions not described above. For example, one may ask for which value of p ≤ 1/2 the transition occurs from being able to find the maximum independent set in G ∼ AḠ(n, p, k) in polynomial time to the problem becoming NP-hard. Our results show a gradual transition. For constant p the problem remains polynomial time solvable, and then, as p continues to decrease, the running time of our algorithms becomes super-polynomial, and grows gradually towards exponential complexity. Establishing this type of behavior does not require new proof ideas, but rather only the substitution of different parameters in the existing proofs. Consequently, some theorems that were stated here only in special cases (e.g., Theorem 2.2, which was stated only for p = 1/2) are restated in the appendix in a more general way (e.g., replacing 1/2 by p), and a more general proof is provided.
Though this is not shown in the appendix, our hardness results (for finding adversarially planted independent sets) also imply a gradual transition, providing NP-hardness results when p = n^{δ−1}; as p grows (e.g., into the range p = 1/(log n)^c), the NP-hardness results are replaced by hardness results under stronger assumptions, such as (a randomized version of) the exponential time hypothesis. This is because for p = 1/(log n)^c we need to limit the size of the graphs H ∈ H to be only polylogarithmic in n, as for larger sizes the proofs in Section 2.3 fail.
An interesting range of parameters that remains open is that of p = d/n for some large constant d. The case of a random planted independent set of size (c/d)n (for some sufficiently large constant c > 0 independent of d) was addressed in [FO08]. In such sparse graphs, the planted independent set is unlikely to be the maximum independent set. The main result in [FO08] is a polynomial time algorithm that with high probability finds the maximum independent set in that range of parameters. It would be interesting to see whether the positive results extend to the case of an adversarially planted independent set. We remark that neither Theorem 1.1 nor Theorem 1.3 applies in this range of parameters.
A Bounding the theta function
In this section we will prove Theorem 2.2. Instead of proving exactly this theorem, we will prove a generalization to other values of p. Let c ∈ (0, 1) be an arbitrary constant.
This theorem has a very important corollary, which follows from the Lipschitz property of the Lovász theta function [Lov79].
Corollary A.1. Let p and k be as in Theorem A.2, and let K ⊂ V be the vertices belonging to the planted clique of G ∼ AG(n, p, k). Then, with probability at least 1 − exp(−2k log n), the bound of Theorem A.2 is robust in the following two senses: (i) a corresponding bound holds for G \ T, where G \ T denotes the graph G with the vertices of T deleted; and (ii) for every subset S ⊂ V \ K, a corresponding bound holds if we "add" S to the planted clique by drawing all edges between S and S ∪ K, for the resulting graph.

We now prove Theorem A.2. For G ∼ AG(n, p, k), its complement graph Ḡ contains a planted independent set of size k, so ϑ(Ḡ) ≥ α(Ḡ) ≥ k. It remains to prove the upper bound. We will use the formulation of the theta function as an eigenvalue minimization problem: ϑ(Ḡ) = min_M λ_1(M), where M ranges over all n × n symmetric matrices in which M_ij = 1 whenever (i, j) ∈ E, and λ_1(M) denotes the largest eigenvalue of M.
The following proposition will be used in the proof of Theorem A.2.

Proposition A.1. Let k and p be as in Theorem A.2, and let µ, ν ∈ (0, 1] be arbitrary constants. For any t ≥ 0, for every set Q ⊂ V of size (µ + o(1))k, there are at most g(n, p, µ, ν, t) := (6µ/((ν − pµ)²p)) log n + t vertices from V \ Q that have at least (ν − o(1))k neighbors in Q, with probability at least 1 − exp(−Ω((ν − pµ)²tk)).

Proof. For convenience, we will consider the size of Q to be exactly µk, and consider the set of vertices that have at least νk neighbors in Q, as the addition of an o(1) term does not affect anything in the proof. We shall also use g(n, p, t) as shorthand notation for g(n, p, µ, ν, t). Fix some set Q ⊂ V of size µk, and a set I ⊂ V \ Q of size m. Let T(I, Q) denote the event that every vertex in I has at least νk neighbors in Q. Consider a random bipartite graph with parts I and Q and edge probability p, and let e(I, Q) be the number of edges between I and Q. It is clear that E[e(I, Q)] = mµkp, and the event T(I, Q) implies the event {e(I, Q) ≥ mνk}, whose probability is exponentially small. There are at most (ne/m)^m ≤ exp(2m log n) possible vertex sets I, and at most exp(2k log n) possible subsets Q. Let T_m be the event that for at least one such choice of I and Q the event T(I, Q) holds. By a union bound, the probability of T_m is exponentially small as well.

We derive an upper bound on ϑ(Ḡ) by presenting a particular matrix M, for which ϑ(Ḡ) ≤ λ_1(M) ≤ k′ ≤ k + a(n, p). We use d(i, Q) to denote the number of edges between the vertex i and the set Q. The symmetric matrix M we choose is as follows.
• The upper left k′ × k′ block is the all-ones matrix of order k′.
• The lower left block, of size (n − k′) × k′, is a matrix B whose entries b_ij are defined using the quantities d(i, Q); observe that every row of B sums up to zero.
• The upper right block is the transpose of the lower left block B.

For (i, j) ∉ E, the entry b_ij can be rewritten so as to separate its mean-zero random part from correction terms.
The vector with 1 in its first k′ entries and 0 in the other n − k′ coordinates is an eigenvector of M with eigenvalue k′. To show that k′ is the largest eigenvalue, it suffices to prove that λ_2(M) < k′. We represent M as a sum of three symmetric matrices, M = U + V + W, and apply Weyl's theorem [HJ12]: λ_2(M) ≤ λ_1(U) + λ_2(V) + λ_1(W). The matrices U, V and W are as follows.
• The matrix U is derived from the adjacency matrix of the original graph G ′ ∼ G(n,p).
U_ii = 0 for all i, U_ij = 1 if (i, j) ∈ E (in G′), and U_ij = −p/(1 − p) for all other i ≠ j.
• The matrix V describes the modification that G′ undergoes by planting the clique K and extending it to Q. For i ≠ j with i, j ≤ k′, we have V_ij = 0 if (i, j) was an edge of G′, and V_ij = 1/(1 − p) if (i, j) was not an edge of G′; also V_ii = 1 for i ≤ k′. All other entries are 0.
• The matrix W is the correction matrix that makes the row sums of B equal to 0. Its lower left block carries the corresponding correction entries, its upper right block is the transpose of the lower left block, and all other entries are 0.

Claim A.1. With probability at least 1 − exp(−2k log n), for every possible choice of k vertices by the adversary, the required bound on these correction entries holds.

To bound the eigenvalues of U, V and W, we shall use upper bounds on the eigenvalues of random matrices, as they appear in [Vu07].
Theorem A.3. There are constants C′ and C″ such that the following holds. Let a_ij, i, j ∈ [n], be independent random variables, each of which has mean 0 and variance at most σ² and is bounded in absolute value by L, where σ ≥ C″L log²n/√n. Let A be the corresponding n × n matrix. Then with probability at least 1 − O(1/n³), λ_1(A) ≤ 2σ√n + C′(Lσ)^{1/2} n^{1/4} log n.
The bound holds regardless of what the diagonal elements of A are, since by subtracting the diagonal we may decrease the eigenvalues by at most L.
The matrix U is a random matrix, as it is generated from the graph G′ ∼ G(n, p). The entries of the matrix U have mean zero, |U_ij| = O(1) since p is bounded by the constant c < 1, and the variance is p/(1 − p). Denote by λ_U the bound on λ_1(U) obtained from Theorem A.3. To show that λ_1(U) does not exceed λ_U by too much with extremely high probability, it suffices to show that the probability of λ_1(U) deviating from its mean is exponentially small in k log n ≃ w(n)^{1/2} log n. The result of Alon, Krivelevich and Vu [AKV02] ensures that the eigenvalues of U are well concentrated around their means.
Theorem A.4 (Concentration of eigenvalues). For 1 ≤ i ≤ j ≤ n, let a_ij be independent real random variables with absolute value at most 1. Define a_ji = a_ij for all i, j, and let A be the n × n matrix with entries a_ij. Then, for every 1 ≤ s ≤ n and all t = ω(√s), Pr[|λ_s(A) − E[λ_s(A)]| > t] ≤ exp(−Ω(t²/s²)). The same estimate holds for λ_{n−s+1}(A).
Taking t = Θ(w(n)^{1/4} log n), from Theorem A.4 we get that λ_1(U) exceeds its expectation by at most O(w(n)^{1/4} log n), with probability at least 1 − exp(−Ω(k log²n)). Note that the bound holds for any choice of the adversary, as the matrix U does not depend on the vertices of the planted clique and is determined by the initial graph G(n, p) only.
As for the matrix V, we shift it so that all its entries have mean 0. Precisely, we consider the matrix V′ such that V′_ij = 0 for all i, j > k′, and V′_ij = V_ij − 1 for i < j ≤ k′, which is either −1 with probability p or p/(1 − p) with probability (1 − p). Basically, V′ is a copy of the matrix U of order k′, so from Theorem A.3 we can obtain a bound for λ_2(V) (of the same form as the bound for λ_1(U), with k′ in place of n); we denote this bound by Λ_V′. Similarly to λ_1(U), we have concentration of λ_2(V) around its mean. We would like these bounds to hold for any choice of the adversarial k-clique. There are at most n^k possible choices, so by setting t = Θ(w(n)^{1/4} log n) in the bound above and applying a union bound over all possible choices of the k-clique, we prove the bound for any choice of the adversary with probability at least 1 − exp(−Ω(k log²n)). It remains to bound λ_1(W). We will use the trace of W².
By the definition of the set Q, the quantities d(i, Q) are controlled for every vertex i outside Q. It turns out that we can always bound the sum above.
Theorem A.5. With probability at least 1 − exp(−2k log n), for every possible choice of k vertices by the adversary, the above sum is suitably bounded.

The proof is rather technical and is presented in Appendix D. From Theorem A.5 we get a bound on λ_1(W). Combining the bounds for λ_1(U), λ_2(V) and λ_1(W), we obtain an upper bound on λ_2(M). By choosing C ≥ 5/(1 − p) in k = Cw(n)^{1/2}, we guarantee that this expression is less than k′. Therefore, k′ is indeed the largest eigenvalue of the matrix M, and ϑ(Ḡ) ≤ k′ ≤ k + a(n, p) for every choice of the adversarial k-clique, with extremely high probability. This finishes the proof of Theorem A.2.
B Main algorithm
In this section we prove Theorem 1.1. The probability of success is taken over the choice of G′ ∼ G(n, 1/2), and the statement holds for every adversarial planting strategy (choice of the k planted vertices as a function of G′).
As with Theorem 2.2 and Theorem A.2, we will prove a more general version of the theorem, considering G ∼ AG(n, p, k) for a wide range of values of p, and not just p = 1/2. We first prove such a theorem when k ≥ C√(np) for a sufficiently large constant C. Afterwards, we shall extend the proof to the case that C can be an arbitrarily small constant.

Theorem B.2. Let c ∈ (0, 1) be an arbitrary constant. Consider an arbitrary function w(n), such that n^{2/3} ≪ w(n) ≤ cn. Let G ∼ AG(n, p, k), where p = w(n)/n and k ≥ (5/(1 − p))w(n)^{1/2}. There is an (explicitly described) algorithm running in time n^{O(1)} which almost surely finds the maximum clique in G, for every adversarial planting strategy.
Proof. As described in Section 2.1, we solve the optimization problem, finding the optimal orthonormal representation {s_i} and handle h, using the SDP formulation. Suppose that we solved ϑ(Ḡ) in (4) for G ∼ AG(n, p, k) (with p and k as in Theorem B.2). By Theorem A.2, k ≤ ϑ(Ḡ) ≤ k + a(n, p). Let G = (V, E), and let K denote the set of vertices chosen by the adversary.
As h and the s_i are unit vectors, we have that (h • s_i)² ≤ 1 for all i ∈ V, and if a vertex satisfies (h • s_i)² ≥ 3/4, it must be connected to the whole set K_{3/4}. The set K_{3/4} has size at least k − 4a(n, p), so by Corollary A.2 there are fewer than a(n, p) vertices i ∈ V \ K with (h • s_i)² ≥ 3/4. As a result, the set H of vertices i ∈ V with (h • s_i)² ≥ 3/4 contains all but O(a(n, p)) vertices of K, and at most a(n, p) vertices outside K. Let F ⊂ V be the set of all vertices that have at least 3k/4 neighbors in H. Similarly to Lemma 2.2, with extremely high probability F contains the maximum clique in G. Moreover, by Proposition A.1 there are at most O(a(n, p)) vertices from V \ H that have at least 3k/4 neighbors in H. It follows that the maximum clique of G[F], the subgraph of G induced on F, is the maximum clique of G. Moreover, K ⊆ F, so F contains a clique of size at least k, and |F| ≤ k + O(a(n, p)). The maximum clique in G[F] can be found in polynomial time by a standard algorithm (used, for example, to show that vertex cover is fixed parameter tractable). For every non-edge in the subgraph induced on F, at least one of its end-vertices needs to be removed, so we try both possibilities in parallel, and recurse on each subgraph that remains. Each branch of the recursion is terminated either when the graph is a clique, or when k vertices remain (whichever happens first). At least one of the branches of the recursion finds the maximum clique. The depth of the recursion is at most O(a(n, p)), and the running time (which is exponential in the depth) is polynomial if p is upper bounded by a constant smaller than 1. This finishes the description of the algorithm, proving Theorem B.2.
By the above claim and our choice of s we now have that k − s > (5/(1 − p))(|N(S)|p)^{1/2}, where k − s is the size of the clique planted in G′_{S,K}. Consequently, we are in a position to apply Theorem B.2 on G[N(S)], and conclude that the algorithm given in the proof of the theorem finds the maximum clique in G[N(S)]. This indeed holds almost surely for every particular choice of K ⊂ V and S ⊂ K, but we are not done yet, as we want this to hold for all choices of K and S in G′ ∼ G(n, p). To reach such a conclusion we need to analyse the failure probability of Theorem B.2 more closely, so as to be able to take a union bound over all choices of K and S. This union bound involves at most n^k · k^s ≃ exp(k log n) events (the term k^s is negligible compared to n^k, because s is a constant). Indeed, the failure probability of Theorem B.2 can withstand such a union bound. This is because the proof of Theorem B.2 is based on earlier claims whose failure probability is at most exp(−2k log n). This upper bound on the failure probability is stated explicitly in Theorem A.2 and Corollary A.2, and can be shown to also hold in claims that do not state it explicitly (such as Proposition 2.1, Lemma 2.1 and Lemma 2.2, and versions of them generalized to arbitrary p), using analysis similar to that of the proof of Proposition A.1.
C.1 Maximum Independent Set in balanced graphs
Definition C.1. Given a graph H, denote its average degree by α. A graph H is balanced if every induced subgraph of H has average degree at most α.
Theorem C.1. For any 0 < η ≤ 1, determining the size of the maximum independent set in a balanced graph with average degree 2 < α < 2 + η is NP-hard.
Proof. It is well known that given a parameter k and a 3-regular graph H, determining whether H has an independent set of size k is NP-hard. For simplicity of upcoming notation, let 2n denote the number of vertices in H. Given a positive integer parameter t, we describe a polynomial time reduction R such that given a 3-regular graph H it holds that:

• R(H) is a balanced graph with average degree 2 + 1/(3t + 1).
• R(H) has an independent set of size k + 3nt if and only if H has an independent set of size k.
By choosing t large enough (t > 1/(3η) suffices, so that 1/(3t + 1) < η), the theorem is proved. Let H be a 3-regular graph on 2n vertices. The graph R(H) is obtained from H by replacing every edge (u, v) of H by a path with 2t intermediate vertices that connects u and v. There are 3n edges in H, so by doing so we add 2t · 3n vertices of degree 2. The resulting graph R(H) has 2n + 6nt vertices and 3n(2t + 1) edges, so its average degree is 2 · 3n(2t + 1)/(2n + 6nt) = 2 + 1/(3t + 1). To see that the graph R(H) is balanced, consider a subset of vertices S* ⊆ R(H), and let α* > 2 denote the average degree of the induced subgraph R(H)[S*]. W.l.o.g., we can assume that R(H)[S*] has minimum degree at least 2 (because if R(H)[S*] has a vertex of degree at most 1, removing it would result in a subgraph of higher average degree). Let V_3 be the set of vertices of degree 3 in R(H)[S*]. All remaining vertices of R(H)[S*] have degree 2. As no two degree-3 vertices in R(H) are neighbors, R(H)[S*] is composed of degree-3 vertices and nonempty disjoint paths connecting them. As no path connecting two degree-3 vertices in R(H) has fewer than 2t vertices (it may have more than 2t vertices, if it goes through original vertices of H), the number of degree-2 vertices in R(H)[S*] is at least (3|V_3|/2) · 2t. Hence α* ≤ 2 + 1/(3t + 1), as desired. Every independent set I of size k in H gives rise to an independent set of size k + 3nt in R(H), because in R(H) we can take the vertices of I and t vertices from each of the 3n paths (at least one of the two end vertices of each path is not adjacent to a vertex in I). Likewise, every independent set of size k + 3nt in R(H) gives rise to an independent set of size k in H. Note that such an independent set I contains at most t vertices from any single path of R(H), and moreover can be assumed to contain exactly t vertices from any single path of R(H) (if I contains fewer than t vertices from the path connecting u and v, then by taking all even vertices of the path one gains a vertex, and this compensates for the at most one vertex that is lost from I due to the possible need to remove v from I). As I contains 3nt path vertices, its remaining k vertices are from H. Moreover, they form an independent set in H (no two vertices u and v adjacent in H can be in this set, because then the path connecting them in R(H) could not contribute t vertices to I).
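A sketch of the reduction R (with illustrative names, assuming integer vertex labels and the networkx library):

import networkx as nx

def reduction_R(H: nx.Graph, t: int) -> nx.Graph:
    # Replace every edge (u, v) of the 3-regular graph H by a path with
    # 2t intermediate vertices, as in the proof of Theorem C.1.
    R = nx.Graph()
    R.add_nodes_from(H.nodes)
    fresh = max(H.nodes) + 1  # next unused integer label
    for u, v in H.edges:
        path = [u] + list(range(fresh, fresh + 2 * t)) + [v]
        fresh += 2 * t
        R.add_edges_from(zip(path, path[1:]))
    return R

With 2n vertices and 3n edges in H, the graph R(H) produced by this sketch has 2n + 6nt vertices and 3n(2t + 1) edges, giving average degree 2 + 1/(3t + 1) as computed above.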
C.2 Notation to be used in the proof of Theorem 2.3
In the coming sections we prove Theorem 2.3. For simplicity of the presentation (and without affecting the implications towards the proof of Theorem 1.3), we describe the distributions G_H(n, p), G_H(n, p, k) and A_H G(n, p, k) in a way that differs from their description in Section 2.3. Based on these descriptions, we will present X_H(G), a key random variable associated with these distributions. This random variable is easier to work with than the random variable referred to in Lemma 2.3, and hence we shall later slightly change the formulation of Lemma 2.3 (without affecting the correctness of Theorem 2.3).
It will be convenient for us to think of G as an n-vertex graph with vertices numbered from 1 to n, and of H as an m-vertex graph with vertices numbered from 1 to m. For simplicity, we assume that m divides n (this assumption can easily be removed with only a negligible effect on the results). Given an n-vertex graph G, we partition the vertex set of G into m disjoint subsets of vertices, each of size n/m. Part i, for 1 ≤ i ≤ m, contains the vertices [(i − 1)(n/m) + 1, i(n/m)]. A vertex set S of size m that contains one vertex in each part is said to obey the partition.

Definition C.2. Let H be an arbitrary m-vertex graph, let n be such that m divides n, let k′ ≤ m be a parameter (specifying the conjectured size of the maximum independent set in H), and let k satisfy k′ ≤ k ≤ n − m. We say that G_H is distributed by G_H(n, p) (for p ∈ (0, 1)) and that G̃_H is distributed by G_H(n, p, k) if they are created by the following random process.
1. Generate a random graph G ′ ∼ G(n, p), with a partition of its vertex set into m parts.
2. Choose a random subset M of m vertices from G′ that obeys the partition.
3. Associate vertex i of H with the vertex of M in the ith part, and replace the induced subgraph of G′ on M by the graph H. This gives G_H ∼ G_H(n, p).
4. Within the non-neighbors of M, plant at random an independent set I′ of size k − k′, giving the graph G̃_H ∼ G_H(n, p, k). (If M has fewer than k − k′ non-neighbors in G_H, an event that will happen with negligible probability for our choice of parameters, then we say that this step fails, and instead we plant a random independent set of size k in G_H.)

Though the description is different, it is not difficult to show that the distributions G_H(n, p) and G_H(n, p, k) are identical to the corresponding distributions described in Section 2.3.
We also change the description of the distribution A_H G(n, p, k) from Section 2.3 in a way analogous to the above, by fixing a partition of the vertices of G′ ∼ G(n, p) and requiring the adversary to choose in G′ an induced copy of H that obeys the partition (vertex i of H must be in part i of the partition, for every 1 ≤ i ≤ m). As in Section 2.3, the adversary also plants a random independent set of size k − k′ among the non-neighbors of H. If either G′ does not have an induced copy of H that obeys the partition, or there are too few non-neighbors of H, we say that the adversary fails, and we revert to the default procedure of planting a random independent set of size k in G′.
We note that there is a (negligible) difference in the probability of failure in the above description of A_H G(n, p, k) compared to that of Section 2.3, because it might be that G′ has an induced copy of H, but no induced copy of H that obeys the partition.
For a graph G and a given partition, X_H(G) denotes the number of sets S of size m obeying the partition such that the subgraph of G induced on S is H (with vertex i of H in part i of the partition, for every 1 ≤ i ≤ m). For a graph G chosen at random from some distribution, X_H(G) is a random variable.
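For very small instances, the definition of X_H(G) can be evaluated by brute force; the following sketch (with illustrative names, adjacency given as dicts of neighbor sets) makes the partition-obeying convention concrete.

from itertools import product

def count_partition_obeying_copies(g_adj, n, h_adj, m):
    # X_H(G): choices of one vertex per part whose induced subgraph is H,
    # with vertex i of H placed in part i. Exponential in m; demo only.
    assert n % m == 0
    size = n // m
    parts = [range(i * size, (i + 1) * size) for i in range(m)]
    count = 0
    for s in product(*parts):
        if all((s[j] in g_adj[s[i]]) == (j in h_adj[i])
               for i in range(m) for j in range(i + 1, m)):
            count += 1
    return count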
C.3 Proof of Lemma 2.3
As noted in Appendix C.2, we slightly change Lemma 2.3. Instead of referring to all induced copies of H, we refer only to induced copies of H that obey the partition. The random variable X_H(G) denotes their number. The main technical content of this modified Lemma 2.3 is handled by the following lemma.
Lemma C.1. Let 0 < ε < 1/7 be a constant, and let G ∼ G(n, p) be a random graph with p ∈ (0, 1). Let H be a balanced graph on m vertices with average degree 2 < α < 3. If m ≤ min[√(ε/p), 2^{−1/4} p^{α/4}√(εn)] (or equivalently, ε ≥ m²p and ε² ≥ 2m⁴/(n²p^α)), then for every β ∈ [0, 1), Pr[X_H(G) ≤ β·E[X_H(G)]] ≤ 4ε/(1 − β)².

Proof. Let w(n) := np, so p = w(n)/n. Let Y_H(G) be a random variable counting the number of sets S obeying the partition that have H as an edge-induced subgraph of G, but may have additional internal edges. By definition, X_H(G) ≤ Y_H(G), and a set S counted by Y_H(G) is also counted by X_H(G) if it has no internal edges beyond those of H. We will now compute E[Y_H(G)²]. Given the occurrence of H, consider another potential occurrence H′ that differs from it by t vertices. Since H is a balanced graph, the copy H′ contains at least αt/2 edges that are not shared with the first copy. Hence, the probability that H′ is realized conditioned on H being realized is at most p^{αt/2}. The number of ways to choose t other vertices is at most the number of ways to choose t groups out of m in the partition times the number of ways to choose one vertex in each group, namely (n/m)^t per choice of groups. Summing the expected number of such occurrences over t, when w(n)^α ≥ 2m⁴n^{α−2} the term t = m − 1 dominates, and hence the sum is at most roughly twice that term. The last inequality holds since ε < 1/7. By Chebyshev's inequality we conclude that the stated bound holds.

Corollary C.1. Let 0 < δ < 1 and 2 < α < 2/(1 − δ), let ε > 0 be a small constant, and let 0 < ρ < (1 − δ)/2 be sufficiently small. Then the following holds for large enough n. Let G ∼ G(n, p) be a random graph with p = n^{δ−1}, and let H be a balanced graph on m = n^ρ vertices and with average degree α. Then E[X_H(G)] → +∞ as n → ∞, and for every β ∈ [0, 1) the conclusion of Lemma C.1 holds.

Proof. We first note that α < 2/(1 − δ) implies that 2 − α(1 − δ) > 0, and hence we can take ρ > 0 in the above corollary. The inequality ρ < (1 − δ)/2 implies (for large enough n) that m satisfies the requirements of Lemma C.1, and hence the conclusion follows from that lemma. To see that E[X_H(G)] → +∞ as n → ∞, recall the notation w(n) = np and the corresponding bound from the proof of Lemma C.1.
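For concreteness, the Chebyshev step at the end of the proof of Lemma C.1 can be written out as follows; the intermediate variance bound Var[X_H(G)] ≤ 4ε·(E[X_H(G)])² is an assumption inferred from the surrounding computation, chosen to match the 4ε/(1 − β)² bound used in Lemma C.2:

\[
\Pr\big[X_H(G) \le \beta\,\mathbb{E}[X_H(G)]\big]
\le \Pr\big[\,|X_H(G)-\mathbb{E}[X_H(G)]| \ge (1-\beta)\,\mathbb{E}[X_H(G)]\,\big]
\le \frac{\operatorname{Var}[X_H(G)]}{(1-\beta)^2\,\mathbb{E}[X_H(G)]^2}
\le \frac{4\varepsilon}{(1-\beta)^2}.
\]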
We now restate and prove Theorem 2.3. Recall that now G_H(n, p, k) and A_H G(n, p, k) refer to the distributions as defined in Appendix C.2, rather than those defined in Section 2.3.
Theorem C.2 (Theorem 2.3 restated). Let f be an arbitrary function that gets as input an n-vertex graph and outputs either 0 or 1. Let p_A denote the probability that f(G) = 1 when G ∼ A_H G(n, p, k), and let p_H denote the probability that f(G̃_H) = 1 when G̃_H ∼ G_H(n, p, k). Then, for every constant β ∈ [0, 1), p_H ≥ β(p_A − 4ε/(1 − β)²).

The following lemmas establish that with high probability the graph G ∼ G_H(n, p, k) has no independent set that has more than k − k′ vertices outside the induced copy of H. The notation used in these lemmas is as in Definition C.2.
Lemma C.4. Let p and k be as above (in particular, k ≥ 6n^{1−δ} log n). Then with probability at least 1 − 1/n, G′ ∼ G(n, p) contains no independent set of size (k − k′)/2.

Proof. By the first moment method, the probability that there exists an independent set of size t is at most n^t(1 − p)^{t(t−1)/2} ≤ exp(t log n − pt(t − 1)/2), which for t ≥ 3n^{1−δ} log n is smaller than 1/n.

Lemma C.5. With probability at least 1 − 2/n over the generation of G̃_H ∼ G_H(n, p, k), every nonempty subset Q ⊂ V \ (M ∪ I′) of size t ≤ (k − k′)/2 has more than t neighbors in I′.

Proof. To prove this, view the process of generating G̃_H in the following way. Initially, we have the graph H and n − m isolated vertices. Then, for every pair of vertices u, v where u ∈ M and v ∈ V \ M, draw an edge (u, v) with probability p. By doing so, we determine the set W ⊆ V \ M of vertices that have no neighbors in H. Select a random subset I′ ⊂ W of size k − k′. For every pair of vertices from V \ M, if at least one of them does not belong to I′, draw an edge with probability p. There are at most n^t possible choices for the set Q. There are at most k^t ≤ n^t possible choices for the set Y of at most t neighbors of Q within I′. The probability that Q has no neighbors in I′ \ Y is at most (1 − p)^{t(k−k′−t)}. By a union bound, the probability that some subset Q ⊂ V \ (M ∪ I′) of size t has at most t neighbors in I′ is at most n^{−t}. The probability of this happening for some value t ≤ (k − k′)/2 is at most 2/n, as desired.
Combining the above lemmas we have the following Corollary.
Corollary C.2. Let p, k and k′ be as above (in particular, k ≥ 6n^{1−δ} log n). Then with probability at least 1 − 4/n over the choice of the graph G ∼ G_H(n, p, k), every independent set of size k in G contains at least k′ vertices in the planted copy of H.
Proof. There are three events that might cause the corollary to fail.

• G_H(n, p, k) fails to produce an output. By Lemma C.3 and the upper bound on k, the probability of this event is smaller than 1/n.
• Even before planting I′, there is an independent set larger than (k − k′)/2. By Lemma C.4, the probability of this event is smaller than 1/n.
• After planting I′, one can obtain an independent set of size k with fewer than k′ vertices in H, by combining a set Q ⊂ V \ (M ∪ I′) with some of the vertices of I′. As we already assume that Lemma C.4 holds, Q can be of size at most (k − k′)/2. Lemma C.5 then implies that the probability of this event is at most 2/n.

The sum of the above three failure probabilities is at most 4/n.
Now we restate and prove Theorem 1.3.
Theorem C.3. For p = n^{δ−1} with 0 < δ < 1, 0 < γ < 1, and 6n^{1−δ} log n ≤ k ≤ (2/3)n, the following holds. There is no polynomial time algorithm that has probability at least γ of finding an independent set of size k in G ∼ AḠ(n, p, k), unless NP has randomized polynomial time algorithms (NP = RP).
Proof. Suppose for the sake of contradiction that the algorithm ALG has probability at least γ of finding an independent set of size k in the setting of the theorem. Choose α and ρ as in Corollary C.1. Let H be the class of balanced graphs of average degree α on m = n^ρ vertices. By Theorem C.1, given a graph H ∈ H and a parameter k′, it is NP-hard to determine whether H has an independent set of size k′. We now show how ALG can be leveraged to design a randomized polynomial time algorithm that solves this NP-hard problem with high probability.
Repeat the following procedure (10 log n)/γ times.
• Sample a graph G ∼ G_H(n, p, k) (as in Definition C.2).
• Run ALG on G. If ALG returns an independent set of size k that has at least k′ vertices in the planted copy of H, then answer yes (H has an independent set of size k′) and terminate.
If (10 log n)/γ iterations are completed without answering yes, then answer no (H probably does not have an independent set of size k′).
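The decision procedure can be sketched as a thin wrapper around the algrand_once sketch given earlier (again with illustrative names; gamma is the assumed success probability γ ∈ (0, 1)):

import math

def decide_independent_set(H, k_prime, n, p, k, alg, gamma):
    # Answer "yes" only when an iteration certifies an independent set of
    # size k' inside H; a "yes" answer is therefore always correct.
    iterations = math.ceil(10 * math.log(n) / gamma)
    for _ in range(iterations):
        witness = algrand_once(H, k_prime, n, p, k, alg)
        if witness is not None and len(witness) >= k_prime:
            return True, witness
    return False, None  # "no" may err, with probability roughly at most 1/n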
Clearly, the above algorithm runs in randomized polynomial time. Moreover, if it answers yes then its answer is correct, because it actually finds an independent set of size k′ in H. It remains to show that if H has an independent set of size k′, the probability of failing to give a yes answer is small.
We now lower bound the probability that a single run of ALG on G ∼ G_H(n, p, k) outputs yes. Recall that ALG succeeds (finds an independent set of size k) with probability at least γ over graphs with adversarially planted independent sets, and in particular, over the distribution A_H G(n, p, k).
In Corollary C.1, choose ε = γ/25 and β = 1/5. Our choice of m = n^ρ satisfies the conditions of Lemma C.1, and hence we can apply Theorem C.2. In Theorem C.2, use the function f that has value 1 if ALG succeeds on G. It follows from Theorem C.2 that ALG succeeds with probability at least β(γ − 4ε/(1 − β)²) = 3γ/20 over graphs G ∼ G_H(n, p, k). Corollary C.2 implies that there is probability at most 4/n that there is an independent set of size k in G that does not contain k′ vertices in the induced copy of H. Hence a single iteration returns yes with probability at least 3γ/20 − 4/n ≥ γ/10 (for sufficiently large n). Finally, as we have (10 log n)/γ iterations, the probability that none of the iterations finds an independent set of size k is at most (1 − γ/10)^{(10 log n)/γ} ≤ 1/n.
D Probabilistic bound
In this section we prove Theorem A.5. Let c ∈ (0, 1) and C > 0 be arbitrary constants. Let G ∼ G(n, p), G = (V, E), where p = w(n)/n for log⁴n ≪ w(n) < cn, and let k = Cw(n)^{1/2}. Let K ⊂ V be arbitrary, |K| = k. We number the vertices of G so that V = [n], K = [k] and V \ K = [n] \ [k]. For k + 1 ≤ i ≤ n let X_i be a random variable equal to the number of edges from i to vertices in K. It is clear that X_i ∼ Bin(k, p), so E[X_i] = kp and the variance V[X_i] = E[(X_i − kp)²] = kp(1 − p). In other words, the probability that there exists a choice of k-subset and an index 1 ≤ r ≤ R − 1 such that for the corresponding set of vertices M_r we have |M_r| = |M′_r| > 2^{r+1}w(n)^{1/4} tends to zero. Earlier we assumed that M_r = M′_r, but in general M_r = M′_r ⊔ M″_r, and m_r = m′_r + m″_r. The opposite case is M_r = M″_r; the analysis transfers without any changes, and |M″_r| ≤ 2^{r+1}w(n)^{1/4} with probability at least 1 − exp(−Ω(nw(n)^{−1/4})). Hence, with probability at least 1 − exp(−Ω(nw(n)^{−1/4})), for every choice of K and every 1 ≤ r ≤ R − 1 we have the corresponding bound on |M_r|. Since w(n) ≫ log⁴n, we have log n ≪ w(n)^{1/4}, and we set the number of groups to R̄ = log(Cn) − log(w(n)^{1/2} log n). By Lemma D.1, with probability at least 1 − exp(−Ω(nw(n)^{−1/4})), the contribution of the groups M_r with r < R̄ is bounded as required. Now we move to the first sum, for i ∈ M_R with R = R̄. We need to prove that with extremely high probability, for any choice of the k-subset, the sum of the U_i over i ∈ M_R̄ is at most (n − k)kp(1 − p) + o(nkp(1 − p)).
We will do this by applying the Bernstein inequality [Ber46].
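For reference, the classical form of Bernstein's inequality [Ber46], for independent random variables U_i with |U_i − E[U_i]| ≤ L, is:

\[
\Pr\Big[\sum_i \big(U_i - \mathbb{E}[U_i]\big) > t\Big]
\;\le\; \exp\!\left(-\frac{t^2/2}{\sum_i \operatorname{Var}[U_i] + Lt/3}\right).
\]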
The following example illustrates what might go wrong.

Example 1. Consider a graph G′ ∼ G(n, 1/2). In G′, first select a random vertex set T of size slightly smaller than (1/2) log n. Observe that the number of vertices in G′ that are in the common neighborhood of all vertices of T is roughly 2^{−|T|}n > √n. Plant a clique K of size k in the common neighborhood of T. In this construction, K is no longer the largest clique in G. This is because T (being a random graph) is expected to have a clique K′ of size 2 log |T| ≃ 2 log log n, and K′ ∪ K forms a clique of size roughly k + 2 log log n in G. Moreover, as T itself is a random graph with edge probability 1/2, the value of the theta function on T is roughly |T|.
C.4 Proofs of Lemma 2.4 and Theorem 2.3
Lemma C.2 (Lemma 2.4 restated). Let p(G) denote the probability of outputting G according to G(n, p), and let p_H(G) denote the probability of outputting G according to G_H(n, p). For every constant β ∈ [0, 1), with probability at least 1 − 4ε/(1 − β)² over the choice of the graph G ∼ G(n, p), it holds that p_H(G) ≥ βp(G).

Proof. Let e be the number of edges in G and consider p_H(G). Out of the (n/m)^m options for choosing a subset M in G_H(n, p), only X_H(G) options are such that the subgraph induced on M is H, so that the resulting graph could be G. Since H has average degree α, it has exactly αm/2 edges.
"Mathematics",
"Computer Science"
] |
Genetic characteristics, antimicrobial susceptibility, and virulence genes distribution of Campylobacter isolated from local dual-purpose chickens in central China
Food-borne antibiotic-resistant Campylobacter poses a serious threat to public health. To understand the prevalence and genetic characteristics of Campylobacter in Chinese local dual-purpose (meat and eggs) chickens, the genomes of 30 Campylobacter isolates, including 13 C. jejuni and 17 C. coli from Jianghan-chickens in central China, were sequenced and tested for antibiotic susceptibility. The results showed that CC-354 and CC-828 were the dominant clonal complexes of C. jejuni and C. coli, respectively, and a phylogenetic analysis showed that three unclassified multilocus sequence types of C. coli were more closely genetically related to C. jejuni than to other C. coli in this study. Of the six antibiotics tested, the highest resistance rates were to ciprofloxacin and tetracycline (100%), followed by lincomycin (63.3%), erythromycin (30.0%), amikacin (26.7%), and cefotaxime (20.0%). The antibiotic resistance rate of C. coli was higher than that of C. jejuni. The GyrA T86I mutation and 15 acquired resistance genes were detected with whole-genome sequencing (WGS). Among those, the GyrA T86I mutation and tet(O) were most prevalent (both 96.7%), followed by the blaOXA-type gene (90.0%), ant(6)-Ia (26.7%), aac(6')-aph(3'') (23.3%), erm(B) (13.3%), and other genes (3.3%). The ciprofloxacin and tetracycline resistance phenotypes correlated strongly with the GyrA T86I mutation and tet(O)/tet(L), respectively, but for the other antibiotics the correlation between genes and resistance phenotypes was weak, indicating that there may be resistance mechanisms other than the resistance genes detected in this study. Virulence gene analysis showed that several genes related to adhesion, colonization, and invasion (including cadF, porA, ciaB, and jlpA) and the cytolethal distending toxin genes (cdtABC) were only present in C. jejuni. Overall, this study extends our knowledge of the epidemiology and antibiotic resistance of Campylobacter in local Chinese dual-purpose chickens.
Introduction
According to the report of the World Health Organization (WHO), food-borne diseases, ranging from diarrhea to cancer, are a major cause of human morbidity and mortality and affect one in 10 people worldwide every year (WHO, 2022). Campylobacteriosis is one of the most frequently reported food-borne diseases throughout the world (EFSA BIOHAZ Panel [EFSA Panel on Biological Hazards] et al., 2020). The acute infectious diarrhea caused by Campylobacter is mainly treated with antibiotics, such as fluoroquinolones and macrolides (Pham et al., 2016). However, the use of antibiotics in both human treatment and animal breeding has caused antimicrobial resistance in Campylobacter to become an increasingly serious problem, which has posed a serious threat to public health over the past two decades (Luangtongkum et al., 2009). In 2017, fluoroquinolone-resistant Campylobacter was listed as one of the six high-priority antimicrobial-resistant pathogens by the WHO (Romanescu et al., 2023). In China, bacterial antibiotic resistance monitoring data show that Campylobacter has maintained a high level of resistance to ciprofloxacin (> 90%) in various regions (Li et al., 2016; Wang et al., 2016; Ju et al., 2018).
Poultry is the most important natural host of Campylobacter. In the European Union, the average prevalence of Campylobacter in birds and contaminated broiler carcasses is 71.2% and 75.8%, respectively (Soro et al., 2020), and more than 90% of commercial laying hens are colonized with Campylobacter (Jones et al., 2016). The breed of chicken is directly related to Campylobacter infection. Brena (2013) reported that, among chickens reared indoors under higher welfare standards with decreased stocking density, the prevalence of Campylobacter was lower in a slower-growing breed (Hubbard JA57) than in a standard fast-growing breed (Ross 308). However, Humphrey et al. (2014) demonstrated no intrinsic difference in the susceptibility of broiler breeds to C. jejuni under their experimental conditions.
China has many indigenous poultry resources, and many local chickens are dual-purpose (meat-egg) producers, with a longer growth cycle than broiler chickens. In general, traditional commercial broilers, such as the AA broiler and Ross 308, are slaughtered at about 42 days (Fortuoso et al., 2019). However, some Chinese local chickens, such as Jianghan-chickens, usually start laying eggs at 140-150 days and are then slaughtered for food at around 300 days. The life cycle of this type of production differs from that of commercial chickens, which may make the ecology of Campylobacter (including its antibiotic resistance) differ across the production cycle. Previous studies have reported that, under the same breeding conditions, the Huainan partridge chicken had a lower rate of Campylobacter infection than Heihua chickens or Ni-ke hon chickens, but a higher rate than AA+ chickens (Huang et al., 2009). Bai et al. (2021) found that the isolation rate of Campylobacter was lower in slaughterhouses processing yellow feather broilers (14.2%) than in those processing white feather broilers or turkeys (26.3 to 100%). However, there are still few data on the prevalence of Campylobacter in local chickens in China.
The prevalence of antibiotic-resistant Campylobacter in poultry also cannot be ignored. Bacteria usually acquire antimicrobial resistance (AMR) by two main pathways. One involves chromosomal mutations at the target sites of antibiotic action, such as the point mutation in the gyrA gene that causes resistance to fluoroquinolone antibiotics (Iovine, 2013). The second involves the horizontal transfer of mobile genetic elements that contain resistance genes (Aksomaitiene et al., 2021). In the past few years, antibiotic-resistant Campylobacter in chicken-house environments, eggshells, carcasses, and the poultry production and processing chain has been reported in many countries (Modirrousta et al., 2016; Tang et al., 2020b; Habib et al., 2023). Although several studies have detected antimicrobial-resistant Campylobacter in dual-purpose chickens (Foster-Nyarko et al., 2021; Metreveli et al., 2022; Rangaraju et al., 2022), the Jianghan-chicken is a unique breed distributed in central China. At present, research on Jianghan-chickens has mainly focused on the eradication of Salmonella pullorum and avian leukosis, and the overall resistance and virulence of Campylobacter in this breed are unclear. Notably, the prevalence of Campylobacter, the generation and spread of its antibiotic resistance, and the complexity of its pathogenesis are probably related to the diversity of the Campylobacter genome. Many virulence genes have undergone expansion or contraction in specific lineages, resulting in differences in virulence gene content and ultimately leading to specificity in pathogenicity (Zhong et al., 2022). Fortunately, DNA sequencing technologies provide efficient methods with which to understand the antibiotic-resistance and pathogenic mechanisms of Campylobacter.
In this study, we investigated the genetic diversity, antibiotic resistance, and distributions of the resistance and virulence genes of Campylobacter in local dual-purpose Jianghan-chickens in four regions of central China. We also used whole-genome sequencing (WGS) to evaluate the genetic diversity of Campylobacter and the phenotypic and genetic determinants associated with its intrinsic resistance. The data from this study extend our understanding of the prevalence and genomic characteristics of food-borne Campylobacter in local chickens in China.
Bacterial isolates and culture conditions
In this study, 30 Campylobacter isolates were isolated from 312 samples collected from eight chicken farms breeding local dual-purpose (meat-egg) chickens in four regions of central China in 2022 (Supplementary Table 1). Freshly collected cloacal swabs were stored in Cary-Blair Modified Transport Medium (Amresco, Englewood, USA) and transported to the laboratory at 4°C for Campylobacter isolation. The samples were pre-enriched in Bolton broth containing Campylobacter growth supplement (Oxoid, Basingstoke, UK) and Campylobacter Bolton broth selective supplement (Oxoid), and cultured at 42°C for 24 h under microaerobic conditions (5% O2, 10% CO2, and 85% N2). Subsequently, 100 µl of the cultures was inoculated on modified charcoal cefoperazone deoxycholate agar (mCCDA, Oxoid) plates containing Campylobacter CCDA selective supplement and incubated at 42°C under microaerobic conditions for 48 h. Suspected positive colonies were identified with Gram staining and 16S rDNA PCR (Linton et al., 1997). All isolates were identified with PCR targeting the C. jejuni-specific hipO gene and the C. coli-specific asp gene (Lawson et al., 1998).
Antimicrobial sensitivity testing
All isolates were tested for antimicrobial susceptibility to ciprofloxacin, tetracycline, cefotaxime, amikacin, erythromycin, and lincomycin with the disk diffusion method on Mueller-Hinton agar (Oxoid), according to the Clinical and Laboratory Standards Institute (CLSI) guidelines (Igwaran and Okoh, 2020). When the isolates were resistant to at least three different types of antibiotics, they were considered multidrug resistant (MDR). Escherichia coli ATCC 25922 was used as a quality control strain.
Whole-genome sequencing and analysis
The genomic DNA of the Campylobacter isolates was extracted with the TIANamp Bacteria DNA Kit (Tiangen, Beijing, China). The purity and concentration of the genomic DNA were determined with a NanoDrop One spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA). Genomic DNA (5 mg; OD260/280 = 1.8-2.0) was used for library construction. The Illumina NovaSeq 6000 sequencing platform (MajorBio Co., Shanghai, China) was used to sequence the libraries with a 2 × 150-bp read length. The raw reads obtained after sequencing were filtered with the fastp software (version 0.19.6) (Chen et al., 2018), and clean reads were obtained after the adapter sequences and low-quality sequences (Q < 20) were removed. The clean reads were then assembled with SOAPdenovo version 2.04 (Luo et al., 2012). The assembled contigs were uploaded to PubMLST (https://pubmlst.org/Campylobacter/) to determine their multilocus sequence types (STs) and clonal complexes (CCs). The phylogenetic tree and the SNP count matrix heat map based on SNP analysis were obtained with the online tool "multiple genome analysis" provided by BacWGSTdb 2.0 (http://bacdb.cn/BacWGSTdb/). RM1221_CP000025, whose genome has been studied extensively (Parker et al., 2006; Neal-McKinney et al., 2021; St. Charles et al., 2022), was selected as the reference genome in the BacWGSTdb tool, and the construction of the phylogenetic tree in this tool relies on the neighbor-joining (NJ) algorithm (Feng et al., 2021). The virulence genes were predicted based on the Virulence Factor Database (VFDB; http://www.mgc.ac.cn/VFs/). The tool ResFinder v.4.1 was used to detect acquired AMR genes and point mutations in specific genes conferring AMR; 90% minimum percentage identity and 60% minimum length coverage were used as the selection criteria. The sequence of the regulatory region of the cmeABC promoter (CmeR-Box), which is a 16-base inverted repeat sequence [TGTAATA (or T) TTTATTACA] (Cheng et al., 2020), and the amino acid sequence of CmeR were obtained by sequence alignment with BLAST (https://blast.ncbi.nlm.nih.gov/Blast.cgi). The RAST Server (Rapid Annotation using Subsystem Technology) was used for genome annotation of the assembled genomes of multidrug-resistant Campylobacter spp., with the ClassicRAST annotation scheme (http://rast.theseed.org/FIG/rast.cgi). Antibiotic resistance genes were also analyzed with MobileElementFinder (https://cge.food.dtu.dk/services/MobileElementFinder/), and SnapGene 2.3.2 was used to visualize gene arrangements.
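As a rough illustration of the read-processing steps described above (not the authors' exact commands; file names, the k-mer size, and the library configuration are assumptions), the quality filtering and assembly could be scripted as follows, provided fastp and SOAPdenovo2 are installed:

import subprocess

def clean_and_assemble(prefix):
    # Quality filtering and adapter trimming with fastp (bases with Q < 20 dropped).
    subprocess.run([
        "fastp",
        "-i", f"{prefix}_R1.fastq.gz", "-I", f"{prefix}_R2.fastq.gz",
        "-o", f"{prefix}_clean_R1.fastq.gz", "-O", f"{prefix}_clean_R2.fastq.gz",
        "-q", "20",
    ], check=True)
    # De novo assembly with SOAPdenovo2; the config file lists the clean reads.
    subprocess.run([
        "SOAPdenovo-127mer", "all",
        "-s", f"{prefix}.config",
        "-K", "63",  # an assumed k-mer size, not the authors' stated choice
        "-o", f"{prefix}_assembly",
    ], check=True)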
Correlation analysis of susceptibility phenotypes and genotypes
The possible link between the Campylobacter resistance phenotype and the genotype predicted with WGS was analyzed by manually comparing the susceptibility test results (resistance or susceptibility) with the presence of known corresponding resistance genes and/or specific mutations. The percentage correlation between the resistance phenotype and genotype was calculated as the sum of true positives and true negatives divided by the number of all isolates tested. The positive predictive value was calculated by dividing the true positives by the sum of the true positives and false positives, and the negative predictive value was calculated by dividing the true negatives by the sum of the true negatives and false negatives. Sensitivity was calculated by dividing the true positives by the sum of the true positives and false negatives, and specificity was calculated by dividing the true negatives by the sum of the true negatives and false positives (Hodges et al., 2021).
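Under the standard definitions above, the metrics can be computed from the four counts as in the following sketch (the function name is illustrative):

def concordance_metrics(tp, fp, tn, fn):
    # Agreement between WGS-predicted genotype and measured phenotype.
    total = tp + fp + tn + fn
    return {
        "percent_correlation": (tp + tn) / total,
        "positive_predictive_value": tp / (tp + fp),
        "negative_predictive_value": tn / (tn + fn),
        "sensitivity": tp / (tp + fn),  # resistant isolates carrying the marker
        "specificity": tn / (tn + fp),
    }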
An SNP analysis was further carried out, and we found a certain genetic diversity among these isolates; the pairwise SNP differences varied greatly, from a few to thousands (Supplementary Figure 1). All the isolates clustered into four main branches of the phylogenetic tree (Figure 1). Branches 1 and 2 contained the major clonal complex CC-828 of C. coli, and branch 3 contained the main clonal complex (CC-354) of C. jejuni. Interestingly, branch 4, which contained the three C. coli isolates with unassigned STs, clustered with the larger branch containing C. jejuni.
Antimicrobial susceptibility
All the isolates were tested for susceptibility to six antibiotics. As shown in Table 1, all showed resistance to ciprofloxacin and tetracycline (100% in both C. jejuni and C. coli). More than half the isolates were resistant to lincomycin (61.5% of C. jejuni and 64.7% of C. coli). The resistance rates of Campylobacter to erythromycin, amikacin, and cefotaxime were 30.0%, 26.7%, and 20.0%, respectively. The resistance rate of C. coli to erythromycin was 41.2%, which was more than twice that of C. jejuni (15.4%). The data showed similar trends for amikacin (35.3% in C. coli and 15.4% in C. jejuni). Among the 30 isolates, 22 were resistant to three or more classes of antimicrobial agents, and the most prevalent pattern of MDR was resistance to ciprofloxacin, tetracycline, and lincomycin (45.5%, 10/22) (Figure 1).
Antibiotic resistance genes and resistance mutations
In this study, a C257T chromosomal point mutation in the gyrA gene, which confers the Thr-86-Ile substitution, and 15 acquired resistance genes were identified by genome-wide analysis. Figure 1 and Supplementary Table 3 show the distributions of the genetic determinants of resistance detected in each isolate with WGS.
Of all the isolates tested, 96.7% (29/30) carried the gyrA gene point mutation (C257T) along with a ciprofloxacin resistance phenotype. The correlation analysis of resistance phenotype and genotype showed that the gyrA C257T mutation correlated strongly with ciprofloxacin resistance (100% in C. coli and 92.3% in C. jejuni) (Table 2).

Figure 1. Genetic relationships, antimicrobial-resistance phenotypes, and the distributions of resistance- and virulence-related genes determined in this study. The phylogenetic tree was constructed based on genomic single-nucleotide polymorphisms, with RM1221_CP000025 as the reference genome. The genetic determinants of antibiotic resistance are grouped according to their corresponding antibiotic categories and are color coded. The isolates were divided into four branches of the tree, distinguished by different colors: yellow (branch 1), gray (branch 2), green (branch 3), and red (branch 4). CIP, ciprofloxacin; TET, tetracycline; CTX, cefotaxime; AK, amikacin; ERY, erythromycin; LC, lincomycin. *Point mutation.

The blaOXA-type β-lactamase-encoding gene was identified in 27 strains (90.0%, 27/30), and 22.2% (6/27) of these isolates were resistant to cefotaxime.
The erythromycin and lincomycin resistance gene erm(B) was only identified in four C. coli isolates (13.3%, 4/30). The correlation between erm(B) and the erythromycin or lincomycin resistance phenotype was not strong (70.6% or 47.1%, respectively, in C. coli; and 84.6% or 38.5%, respectively, in C. jejuni). Further analysis of the isolates for point mutations in 23S rRNA revealed eight mutations in total (Supplementary Table 4), although neither the A2075G nor the A2074C/G mutation, which reportedly cause erythromycin resistance, was detected.
CmeR-Box polymorphisms
A CmeR-Box polymorphism analysis of all isolates (Table 3) detected six CmeR-Box variants in 28 isolates. Among these, point substitutions were most common (96.4%), involving 17 C. coli and 10 C. jejuni isolates, whereas only one C. jejuni isolate (3.6%) had a point deletion, and no point insertion was detected in the CmeR-Box.
Genetic environment analysis of antibiotic resistance gene clusters in an MDR C. jejuni isolate
We analyzed the genetic environments of the resistance genes in C. jejuni JZ02, which was resistant to all six antibiotics tested. Two antibiotic resistance gene clusters were detected (Figure 2). Gene cluster 1 contained the tet(O), tet(L), and cat(pC194) genes (Figure 2A). A transposase was encoded upstream from the tet(L) gene, and a 39-bp repeat and another transposase gene, which shared 100% identity with an IS1216 family transposase gene, were detected between tet(O) and cat(pC194). Moreover, a transposon encoding the protein TnpV was detected upstream from the tet(O) gene. Gene cluster 2 consisted of the ant(9), aph(3')-III, aph(2'')-If, and cat genes (Figure 2B). However, no mobile genetic elements or repetitive sequences were detected in this gene cluster, although a box element and several hypothetical proteins with sequences similar to those of some Gram-positive bacteria were found.
Virulence gene detection
Based on the VFDB, 126 virulence-related genes, involved in adhesion, invasion, motility, toxins, and the type IV secretion system, were identified (Supplementary Table 5). We observed more virulence-related genes in C. jejuni (83-116 per isolate) than in C. coli (56-61 per isolate); among them, isolates C. jejuni JZ05 (CC-21) and JS02 (CC-464) had the most virulence-related genes (Figure 1). Most of the genes detected only in C. jejuni were related to motility and adhesion, including cadF, htrB, pebA, ciaB, jlpA, and cheA, and the genes encoding cytolethal distending toxin (cdtABC) were also detected only in C. jejuni. Type IV secretion system genes, including virB11, virB10, virB9, virB8, virB4, and virD4, were detected in one C. jejuni isolate (3.3%), and wlaN was found in only two C. jejuni isolates (6.7%) (Table 4). Campylobacter isolates in most branches (branches 1, 2, and 4) of the phylogenetic tree had similar numbers of virulence genes, and the categories of these genes did not differ greatly. Interestingly, in branch 3, the type and abundance of virulence genes varied greatly among different STs, and most of the differing genes were related to capsular synthesis and immune regulation. Some isolates with more virulence-related genes were distributed in a sub-branch of branch 3.
Discussion
Campylobacter is the main bacterial pathogen causing human diarrhea worldwide, and its increasing prevalence and antibiotic resistance have caused great concern globally in recent years, in both human and veterinary medicine. Poultry is the main host of Campylobacter, but the prevalence of the pathogen varies across species and regions. For instance, an investigation in southeastern Italy showed that the prevalence of C. jejuni was higher in broilers than in laying hens (45.7% and 21.1%, respectively) (Parisi et al., 2007). In Europe, the prevalence of broiler flocks colonized with Campylobacter ranged from 18% to >90% across countries (Newell and Fearnley, 2003). Meat-egg dual-purpose local chickens may differ from commercial varieties because their breeding modes and breeding cycles differ, and they may pose a potential risk of Campylobacter transmission through both meat and eggs (Ahmed et al., 2021). Therefore, we investigated the phylogenetic relationships, virulence genes, antibiotic resistance, and genetic bases of the resistance phenotypes of Campylobacter isolates collected from local meat-egg dual-purpose chickens in China.
In this study, we identified two main prevalent Campylobacter species, C. jejuni and C. coli, and found strong genetic diversity in the Campylobacter strains transmitted in these chickens. The National Center for Biotechnology Information (NCBI) database indicates that CC-354 strains occur mainly in the United States and the United Kingdom, whereas they are quite dispersed in other countries (Yu et al., 2020). A previous study showed that CC-353 and CC-464 are the dominant CCs of C. jejuni in central China, whereas CC-354 was the dominant population of C. jejuni detected in the present study. CC-21 is also the most frequently reported C. jejuni genotype in diarrhea patients in China (Zhang et al., 2020b), and in Zhang et al.'s study (Zhang et al., 2020a), CC-21 was also the dominant Campylobacter CC in chickens in southeastern China. However, in the present study, only one strain belonging to CC-21 was isolated, suggesting that the diversity of C. jejuni may vary by region and sample source, and that the epidemic patterns of Campylobacter may differ in local meat-egg dual-purpose chickens. Three STs, ST1586, ST872, and ST828, were found among the C. coli isolates, all belonging to the same clonal complex, CC-828. This was expected because CC-828 is the dominant population of C. coli, and many past studies have reported its prevalence around the world (Zhang et al., 2016; Di Giannatale et al., 2019; Gomes et al., 2019). Based on principal component analysis (PCA) of the evolutionary distances of core gene families, Snipen et al. reported that Campylobacter genomes show a mixed evolutionary pattern (Snipen et al., 2012). It is noteworthy that the three C. coli strains with undefined STs detected in this study clustered with C. jejuni on the same large branch of a phylogenetic tree based on a genomic SNP analysis, suggesting a close genetic relationship. Previous studies have shown that a bidirectional increase in the rate of recombination between C. jejuni and C. coli has led to the gradual convergence of the two species (Sheppard et al., 2008).
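For illustration, a toy version of the SNP-distance clustering that underlies such trees follows. This is not the authors' pipeline (which called whole-genome SNPs against RM1221_CP000025); the isolate names and SNP strings below are invented.

```python
# Minimal sketch: pairwise SNP distances and average-linkage (UPGMA-style)
# clustering. Sequences are toy concatenations of alleles at shared SNP sites.
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

snps = {  # isolate -> alleles at shared core-genome SNP sites (hypothetical)
    "JZ02": "ACGTACGT",
    "JZ05": "ACGTACGA",
    "JS02": "TCGAACGA",
}
names = list(snps)
n = len(names)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        d = sum(a != b for a, b in zip(snps[names[i]], snps[names[j]]))
        dist[i, j] = dist[j, i] = float(d)

# cluster on the condensed distance matrix
tree = linkage(squareform(dist), method="average")
print(names)
print(tree)  # each row merges two clusters at a given SNP distance
```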
Antibiotic resistance has become one of the most important threats to public health globally (Mancuso et al., 2021). It is noteworthy that 73.3% of the Campylobacter isolates in the present study were multidrug resistant. The resistance rates of Campylobacter to ciprofloxacin and tetracycline in China are high, and studies have reported rates of 90%-100% in broilers (Ma et al., 2014; Li et al., 2016; Wang et al., 2021). In the present study, similarly high resistance rates to ciprofloxacin and tetracycline were observed in both C. jejuni and C. coli. Fluoroquinolones have been widely used in the food animal industry, especially in poultry production, and tetracyclines are also commonly used to treat and prevent bacterial diseases in poultry in China; this may explain the high resistance rates of Campylobacter to these two antibiotics. In the early 1980s, the development and introduction of the third-generation extended-spectrum cephalosporin cefotaxime provided a new treatment for patients infected with Gram-negative bacilli (Hawkey, 2008). Here, we detected a relatively low rate of cefotaxime resistance (20.0%). We detected a high rate of amikacin resistance in Campylobacter (26.7%), although this drug is not approved for use in food animals in China (Dai et al., 2008). The proportion of erythromycin-resistant isolates in our study did not differ greatly from that reported previously (30.0% vs 25.2%) (Cheng et al., 2020). However, the erythromycin resistance rate of C. jejuni was lower than previously reported (15.4% vs 30.1%), whereas the rate in C. coli was higher (41.2% vs 18.3%) (Cheng et al., 2020). We detected high rates of resistance to lincomycin in both C. jejuni and C. coli, which may be related to the antibiotics commonly used in the areas from which the isolates were collected. The resistance of C. coli to antibiotics other than tetracycline and ciprofloxacin was greater than that of C. jejuni. These findings are consistent with the results of Tang et al. (2020a), who reported that the prevalence of antibiotic resistance in chicken-derived C. coli was higher than in chicken-derived C. jejuni. In general, there is a worrying trend: although the addition of antibiotics to feed was banned in China in 2020, the ban has not reduced antibiotic resistance; on the contrary, some antibiotic resistance rates are still rising in some regions (Cheng et al., 2020).
Previous studies have shown a strong correlation between the presence of AMR determinants detected with WGS and phenotypic antibiotic resistance (Rokney et al., 2020; Habib et al., 2023). However, Campylobacter also has many resistance mechanisms beyond those mediated by acquired resistance genes, such as changes in membrane permeability and modification of antibiotic efflux pumps (Iovine, 2013). The determinants of drug resistance do not always confer resistance phenotypes, and a single resistance determinant may correlate weakly with resistance to certain antibiotics (Šoprek et al., 2022). In this study, we found that the overall correlation between the 16 antibiotic resistance determinants detected with ResFinder v.4.1 and phenotypic resistance was not strong, and that there were large differences among antibiotics. This suggests that current understanding of the resistance mechanisms of Campylobacter is still incomplete, and that simply assessing bacterial resistance from the antibiotic resistance determinants predicted with WGS does not provide an accurate picture.
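As a concrete illustration of such a genotype-phenotype comparison, here is a minimal sketch; the determinant-to-drug mapping is abbreviated and the two isolate records are hypothetical, so it shows only the shape of the calculation.

```python
# Minimal sketch: concordance between WGS-predicted determinants and
# phenotypic resistance. Mapping and isolate records are hypothetical.
DETERMINANT_FOR = {"CIP": {"gyrA_T86I"}, "TET": {"tet(O)", "tet(L)"},
                   "ERY": {"erm(B)"}}

isolates = [  # (resistance phenotype, detected determinants)
    ({"CIP", "TET"}, {"gyrA_T86I", "tet(O)"}),
    ({"CIP"},        {"gyrA_T86I", "erm(B)"}),  # erm(B) without ERY phenotype
]

def concordance(drug):
    """Fraction of isolates whose genotype prediction matches the phenotype."""
    ok = 0
    for pheno, geno in isolates:
        predicted = bool(DETERMINANT_FOR[drug] & geno)
        ok += (predicted == (drug in pheno))
    return ok / len(isolates)

for drug in DETERMINANT_FOR:
    print(drug, f"{concordance(drug):.0%}")
```

A determinant that is present but phenotypically silent, like erm(B) in the second record, lowers the concordance exactly as described above.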
In the present study, phenotypic resistance to ciprofloxacin and tetracycline correlated well with the presence of the gyrA point mutation (C257T) and the tet(O) or tet(L) gene, respectively, confirming that these are the main factors conferring resistance to the corresponding antimicrobial agents. CTX-M-type β-lactamases are usually responsible for the resistance of Gram-negative bacteria to cephalosporins such as cefotaxime, but no CTX-M gene was found in the resistant strains in this study. In our study, 90% of the isolates contained a blaOXA-type β-lactamase-encoding gene. Indeed, most Campylobacter strains contain a blaOXA gene encoding a β-lactamase that confers resistance to carbapenems, but not to cephalosporins (Hadiyan et al., 2022). Research has shown that different β-lactamases have different hydrolysis profiles (Poirel et al., 2011) and that the expression of a β-lactamase directly affects the resistance of strains to β-lactam antibiotics (Casagrande Proietti et al., 2020). This may also explain why strains containing the blaOXA gene but with a β-lactam-sensitive phenotype have been found in several other studies (Griggs et al., 2009; Zeng et al., 2014; Hadiyan et al., 2022). Our study further confirms that the presence of the blaOXA gene is not related to cephalosporin resistance.
The prevalence of aminoglycoside-resistance determinants was low in the isolates tested, but these determinants showed relatively high diversity. A previous study demonstrated that the combined action of the aph(3')-III, aac(6')-aph(2''), and ant(6)-Ia genes confers resistance to aminoglycoside antibiotics in Campylobacter (Zhang et al., 2022), which was confirmed in our study. However, even with the synergistic effect of ant(6)-Ia and aac(6')-aph(2''), the correlation between each gene and amikacin resistance was still low. Moreover, a C. coli isolate containing aadE-Cc showed a sensitive phenotype. This finding is consistent with a report by Painset et al. (2020), who also observed Campylobacter strains carrying the aadE-Cc gene that were not resistant to some aminoglycoside antibiotics. There may be unknown mechanisms that inactivate these genes in Campylobacter.
Erythromycin and lincomycin have similar resistance mechanisms (Zhao et al., 2016; Wang et al., 2022). In the present study, the correlation between erm(B) and resistance to these two antibiotics was not strong. We therefore analyzed the 23S rRNA sequences and found no A2075G mutation.
Since most of the strains in this study were multidrug resistant, we analyzed the genetic environment of the resistance genes of JZ02 (resistant to all six antibiotics) to determine how this strain acquired its antibiotic resistance genes. It is known that Campylobacter can acquire exogenous DNA through natural transformation (Wang and Taylor, 1990). The spread of antibiotic resistance genes among Campylobacter isolates from humans, animals, and the environment has previously been reported (Asuming-Bediako et al., 2019). The tetracycline resistance gene tet(O) is believed to have originated in Gram-positive cocci (Zilhao et al., 1988), and the tetracycline resistance mediated by this gene is mainly spread via the horizontal transfer of resistance genes on conjugative plasmids (Wardak et al., 2007). Although the resistance gene tet(O) was located on the chromosome of isolate JZ02, some transposase-encoding sequences were detected near tet(O). The presence of these transposases implies that the antibiotic resistance genes were co-transferred with mobile genetic elements into the genomes of related strains. Although no relevant mobile elements were found in cluster 2, several genes, such as ant(9) and aph(2'')-If, which encode aminoglycoside-modifying enzymes, are similar to those of some Gram-positive bacteria, indicating that they may have been derived from Gram-positive bacteria in the environment or in animal intestines (Fabre et al., 2018).
The ability of Campylobacter to cause human disease is considered multifactorial, and several genes are closely related to its virulence, including ciaB and cdtABC (Lopes et al., 2021). An analysis of the virulence-related genes of our isolates showed that C. jejuni carried more virulence-related genes than C. coli, consistent with the study of Lapierre et al. (2016), and most of these genes were involved in motility (flaA), adhesion (cadF, cheA, jlpA, etc.), and invasion (ciaB). It is noteworthy that in this study, CC-21 and CC-464 had the most virulence-related genes, and these two clonal complexes are also common among clinical isolates of Campylobacter (Zhang et al., 2020a; Zhang et al., 2020b; Zang et al., 2021). We found that the additional genes they carried were mainly involved in immune modulation, such as bacterial capsule biosynthesis, especially genes encoding sugar and aminotransferase enzymes (kfiD, glf, Cj1426c, Cj1432c, Cj1434c, Cj1435c, Cj1436c, Cj1437c), and these genes were not harbored in other complexes (Supplementary Table 5; Supplementary Figure 2). Although a high prevalence of virulence-associated genes (ciaB and flaA) has already been reported in Campylobacter strains infecting children with moderate to severe diarrhea (Quetz et al., 2012), these genes were detected only in C. jejuni in the present study. This may explain why C. jejuni colonizes its host more readily than C. coli and is responsible for more food-borne bacterial infection events (Moffatt et al., 2019; Callahan Sean et al., 2021; Schirone and Visciano, 2021). Virulence genes related to the type IV secretion system were found in only one strain of C. jejuni; these genes are less prevalent in Asia and Europe (Panzenhagen et al., 2021). We also detected, in two C. jejuni isolates, the wlaN gene, which is associated with Guillain-Barré syndrome (Guirado et al., 2020).
In conclusion, we have demonstrated the genetic diversity and antimicrobial susceptibility of Campylobacter isolated from local dual-purpose chickens in China and analyzed their resistance- and virulence-related genes. This study thus provides important data on the epidemiological characteristics of Campylobacter in this food source.
TABLE 1
Resistance rates of tested Campylobacter isolates to six antibiotics.
TABLE 2
Correlation analysis of antibiotic resistance phenotype and antibiotic resistance determinants. Most of the isolates (96.7%, 29/30) contained the tet(O) gene, and one C. jejuni strain carried tet(L) (3.3%, 1/30). All isolates showed tetracycline resistance. The correlation between the tetracycline resistance phenotype and the resistance gene tet(O) or tet(L) was 100% in C. coli and 92.3% in C. jejuni.
TABLE 3
CmeR-Box polymorphisms in C. jejuni and C. coli isolates. An underline indicates a point substitution; "-" indicates a point deletion.

FIGURE 2

Resistance gene clusters identified in Campylobacter jejuni strain JZ02. (A) Tetracycline resistance gene cluster; (B) aminoglycoside resistance gene cluster. This figure was made with SnapGene 2.3.2.
TABLE 4
Frequencies of parts of predicted virulence-related factors in the genomes of 30 Campylobacter isolates.
Lepton universality violation and right-handed currents in $b \to c \tau \nu$
We consider the recent LHCb result for $B_c\to J/\psi \tau \nu$ in conjunction with the existing anomalies in $R(D)$ and $R(D^\star)$ within the framework of a right-handed current with enhanced couplings to the third generation. The model predicts a linear relation between the observables and their SM values in terms of two combinations of parameters. The strong constraints from $b\to s \gamma$ on $W-W^\prime$ mixing effectively remove one of the combinations of parameters resulting in an approximate proportionality between all three observables and their SM values. To accommodate the current averages for $R(D)$ and $R(D^\star)$, the $W^\prime$ mass should be near 1 TeV, and possibly accessible to direct searches at the LHC. In this scenario we find that $R(J/\psi)$ is enhanced by about 20\% with respect to its SM value and about 1.5$\sigma$ below the central value of the LHCb measurement. The predicted $d\Gamma/dq^2$ distribution for $B\to D(D^\star) \tau \nu$ is in agreement with the measurement and the model satisfies the constraint from the $B_c$ lifetime.
Different models for the form factors produce a SM result in the range 0.25 to 0.28 [13-16], which is about 2σ lower. For definiteness, we will use as the SM value the most recent result [17], R(J/ψ) = 0.283 ± 0.048.
Not surprisingly, these anomalies have generated a large number of possible new physics explanations, including additional Higgs doublets, gauge bosons, and leptoquarks.
In Ref. [26] we have studied $R(D)$ and $R(D^*)$ in the context of a right-handed $W'$ with enhanced couplings to the third generation [48, 49]. Here, we revisit this possibility motivated by the new measurement of $R(J/\psi)$, and to address additional constraints from the $d\Gamma/dq^2$ distributions [34] and the $B_c^\pm$ lifetime [50].
To separate the symmetry breaking scales of $SU(2)_L$ and $SU(2)_R$, we introduce the two Higgs multiplets $H_L(1,2,1)(-1)$ and $H_R(1,1,2)(-1)$ with respective vevs $v_L$ and $v_R$. An additional bi-doublet scalar $\phi(1,2,2)(0)$ with vevs $v_{1,2}$ is needed to provide mass to the fermions. Since both $v_1$ and $v_2$ are required to be non-zero for fermion mass generation, the $W_L$ and $W_R$ gauge bosons of $SU(2)_L$ and $SU(2)_R$ will mix with each other; the mixing can be parametrized in terms of the mass eigenstates $W$ and $W'$. In the mass eigenstate basis, the quark-gauge-boson interactions are written in terms of the unitary matrices $V^u_{Rij}$ and $V^d_{Rij}$ which rotate the right-handed quarks $u_{Ri}$ and $d_{Ri}$ from the weak eigenstate basis to the mass eigenstate basis.
The model has a different neutrino spectrum from the SM: three left-handed neutrinos $\nu_{Li}$ and one right-handed neutrino $\nu_{R3}$. Additional scalars $\Delta_L(1,3,1)(2)$ and $\Delta_R(1,1,3)(2)$ with vevs $v^{L,R}_\Delta$ are needed to generate neutrino masses. In order for the possibly enhanced $SU(2)_R$ interaction with the third generation to explain the B decay anomalies, we need the right-handed neutrino to be light, which requires $v^{L,R}_\Delta$ to be small. In this model, the neutrinos receive Majorana masses from the vevs of $\Delta_{L,R}$ and Dirac masses from $\phi$. The mass eigenstates $(\nu^m_L, (\nu^m_{R3})^c)$ are related to the weak eigenstates by a unitary transformation. In our model $U_L = (U_{Lij})$, $U_{RL} = (U_{RLi3})$, $U_{LR} = (U_{LR3i})$, and $U_R = (U_{R33})$ are $3\times 3$, $3\times 1$, $1\times 3$, and $1\times 1$ matrices, respectively. Writing the rotation of the charged lepton weak eigenstates $\ell_{L,R}$ into mass eigenstates $\ell^m_{L,R}$ as $\ell_{L,R} = V^\ell_{L,R}\,\ell^m_{L,R}$, one obtains the lepton interactions with $W$ and $W'$, where $U$ is approximately the PMNS matrix. From Eqs. 6 and 8 we see that a large $g_R/g_L$ will enhance the third-generation interactions with $W'$. The final neutrino flavor is not identified in B meson decays, so it must be summed over. For processes involving left- and right-handed charged leptons, neglecting neutrino masses compared with the charged lepton masses, the final decay rates into a charged lepton $\ell_j$, summed over the different neutrino final states, are proportional to combinations that follow from the unitarity of $U$. The starting point for our calculations is the differential decay rate $d\Gamma/dq^2$, with $q^2 = (p_B - p_{D^{(*)}})^2$, for the SM. We use the notation, parameterization, and values of Ref. [2] for all the relevant form factors. This will be sufficient for a comparison to the experimentally determined shape of this distribution as well as the total decay rate. Of course, $d\Gamma/dq^2$ is obtained after integrating over angles and summing over polarizations. Other observables, such as angular correlations, could also be used to discriminate between the SM and new physics scenarios, but we will not consider that possibility in this paper as they have not been measured yet.
In the type of model we consider, in addition to the SM diagram, there is a $W'$-mediated diagram as well as interference between the two. When all the neutrino masses can be neglected, there is no interference between the left- and right-handed lepton currents, and this allows us to write simple formulas for both $d\Gamma/dq^2$ and the decay rate $\Gamma$ in terms of the corresponding SM results and two combinations of constants, $F^{bc}_{dir}$ and $F^{bc}_{mix}$. The first arises from the separate $W$ and $W'$ contributions, whereas the second is induced by $W-W'$ mixing. The superscript $bc$ denotes the $b \to c$ quark transition and is useful in order to generalize the notation to other cases. Our results for the different modes are then:

• Leptonic decay $B_c^\pm \to \ell^\pm \nu$. The hadronic transition in this case proceeds only through the axial-vector form factor.

• Semileptonic decay $B \to D \tau\nu$. In this case only the vector form factor contributes to the hadronic transition. This implies that the normalized distribution $(1/\Gamma)\,d\Gamma/dq^2$ for this mode is identical to the SM one.

• Semileptonic decay $B \to D^* \tau\nu$. This mode is more complicated in that both the vector and axial-vector form factors contribute, and they behave differently. Using the values of the form factors given in Ref. [2], the SM rate is dominated by the axial-vector hadronic form factor, as can be seen from Figure 1. A transition through a purely axial-vector form factor (blue curve) has a spectrum shape almost indistinguishable from the SM case (black curve). On the other hand, a transition through a purely vector hadronic form factor produces a spectrum shifted towards lower $q^2$, as shown by the red curve in the same figure. The vector and axial-vector form factors do not interfere in this distribution, and the vector form factor contributes only 5.6% of the SM rate. Integrating the differential rate, we find a result very close to the one obtained from a pure axial-vector transition.
• Semileptonic decay $B_c \to J/\psi\,\tau\nu$. This is also a pseudoscalar-to-vector transition, so it has the same behaviour as the previous mode.
In this case we use the values for the form factors given in Ref. [17] to display the SM differential distribution on the right panel of Figure 1 as the black curve. Once again the blue curve (almost indistinguishable from the black one) and the red curve illustrate transitions mediated by purely axial-vector or vector form factors, respectively, normalized to the total decay rate. We see that in this case the axial-vector hadronic form factor is even more dominant than in $B \to D^* \tau\nu$: the vector form factor contributes barely 1.5% of the SM rate. The fitted parameters result in a differential distribution $d\Gamma/dq^2$ for $B \to D^* \tau\nu$ that is dominated by the axial form factor and is thus very similar to the SM one, as shown in Figure 2. It should be clear from this result, in conjunction with the BaBar comparison of the SM vs observed distribution [2], that $d\Gamma/dq^2$ is in agreement with the results of our model. This is shown in the right panel of Figure 2, where the normalized distributions are compared with the BaBar data. The SM and our model are both a good fit to the data, and they are almost indistinguishable. This is due to the dominance of the $F^{bc}_{dir}$ term over the mixing contribution. Including the latest result, $R(J/\psi)$, does not change the fit significantly due to its large uncertainty. The prediction for this quantity, $R(J/\psi) = 0.34$, is thus on the low side of the central value by about 1.5 standard deviations. The differential distribution $d\Gamma/dq^2$ in this case is also very similar to the SM one, as seen in Figure 3, and would not serve to distinguish this model. The rate $\Gamma(B_c^- \to \tau^-\nu)$ is predicted from Eq. 12 to be about 20% larger than its SM value, which is well within the bound from the $B_c$ lifetime discussed in Ref. [50].
IV. DISCUSSION
We now examine the parameter values of Eq. 23 in the context of the model with RH currents, as shown in Eq. 11. The anomalies suggest that only the τ lepton is affected, as experiment sees no difference between muons and electrons. The model must then single out only the third family for enhancement, with $|V_{R3\tau}| \sim 1$. Now $V_{Rcb} \equiv V^u_{Rtc} V^d_{Rbb}$, with $V^d_{Rbb} \sim 1$ and $V^u_{Rtc}$ of the same order as $V_{cb}$, as discussed in our global analysis of the model in Ref. [51]. This requires $(g_R/g_L)(M_W/M_{W'}) \sim 0.7$ to reproduce the first result of Eq. 11.
Requiring the model to be perturbative implies that $g_R/g_L \lesssim 10$. If we take this ratio to be in the range 5-10, we find in turn that $M_{W'}$ is in the range 574-1150 GeV, well within the direct reach of the LHC.
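This range is simple arithmetic on the fit condition; a quick check, assuming the standard value $M_W \approx 80.4$ GeV (not quoted in the text):

```python
# Back-of-the-envelope check of the quoted W' mass window, from the fit
# condition (g_R/g_L) * (M_W/M_W') ~ 0.7 discussed above.
M_W = 80.4  # GeV (assumed standard value)

def m_wprime(ratio_gR_over_gL, fit=0.7):
    return ratio_gR_over_gL * M_W / fit

print(m_wprime(5.0))   # ~574 GeV
print(m_wprime(10.0))  # ~1149 GeV, i.e. the 574-1150 GeV range quoted above
```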
The second combination of parameters, $F^{bc}_{mix}$, requires that we re-examine bounds on $W-W'$ mixing, in particular on the combination $\xi_{eff} = \xi_W\, g_R/g_L$. This combination is constrained by $b \to s\gamma$ [48, 52, 53], and we applied this constraint to our model in Ref. [26]. We can update that result using the most recent HFLAV collaboration average, $B(b \to X_s\gamma) = (3.32 \pm 0.15) \times 10^{-4}$ [8], combined with the NNLL SM calculation $B(b \to X_s\gamma) = (3.15 \pm 0.23) \times 10^{-4}$ [54]. Assuming that the new physics interferes constructively with the SM, a narrow allowed range for $\xi_{eff}$ is obtained at the 3σ level.
This range severely restricts the possible size of $F^{bc}_{mix}$. For example, if we use $F^{bc}_{dir} = 1.28$ in Eq. 11 in combination with the above constraint on $\xi_{eff}$, we find that it is not possible to reach the value $F^{bc}_{mix} = 0.04$ in the fit, Eq. 11. Under these conditions the three observables $R(D)$, $R(D^*)$, and $R(J/\psi)$ become approximately proportional to their SM values. It is interesting to notice that this universal enhancement of the three observables reproduces what is found in models with an additional $SU(2)_L$ symmetry [35, 36]. We illustrate the situation in Figure 4, which takes these constraints into account. The very narrow width of the model prediction range is due to the tight constraint on mixing, and comparison with data implies that the model needs a $W'$ mass very close to 1 TeV to successfully explain these anomalies. The best direct limits on such a $W'$ come from 19.7 fb$^{-1}$ of CMS data at $\sqrt{s} = 8$ TeV [55]. The first result presented in that paper excludes an SSM $W'$ (see footnote 1) with mass below 2.7 TeV. Since the production couplings for $W'$ are model dependent, it is more useful to quantify the constraint as $\sigma \times B(W' \to \tau\nu) \lesssim 3$ fb. In the SSM model considered by CMS, $B(W' \to \tau\nu) \approx 8.5\%$ for $W'$ masses of order a TeV, where decay into top-bottom is allowed. In the non-universal model discussed here, this branching fraction approaches 25% when $g_R \gg g_L$ and the $W'$ couples almost exclusively to the third generation. At the same time, the production cross-section at the LHC for our $W'$ would be very suppressed due to its negligible couplings to the light fermions. Roughly, for the first term in the last bracket, corresponding to a direct coupling of the $W'$ to the light quarks, we have the constraint of Ref. [56], and fitting the existing body of FCNC constraints implies that $V^u_{Rtu} \sim 10^{-3}$ [51]. For the second term in the bracket, we already saw that $\xi_W$ is at most $10^{-3}$ in this scenario. We conclude that the corresponding $\sigma \times B(W' \to \tau\nu)$ in our model is more than 6 orders of magnitude smaller than that of an SSM $W'$, so the CMS data do not place any significant constraint.
The CMS paper also quantifies their result using a type of non-universal $W'$ that also singles out the third generation, dubbed 'NUGIM' [57, 58]. In this case the CMS data exclude a $W'$ with mass below 2.0-2.7 TeV. Comparing the relevant figure of merit, $\sigma \times B(W' \to \tau\nu)$, of this model to ours, we see that $B(W' \to \tau\nu)$ can be quite similar, but the production rates differ: whereas $V_{Rud}$ can be (and in fact is constrained to be) very small, the parameter $s_E/c_E$ of NUGIM is of order one (for this reason the $W-W'$ mixing in the NUGIM model is not important in $\sigma(pp \to W')$). The net result is that the CMS limits do not directly apply to our model. A separate study is needed for an accurate comparison of our model to LHC results, taking into account production from heavier quarks. As mentioned before, our model relies on the existence of an additional light neutrino to explain these anomalies, and this can have other observable consequences. In Ref. [26] we have already seen that there are no significant constraints from the invisible Z width. At the same time, the model can provide an enhancement to the rare $K \to \pi\nu\bar\nu$ modes [59], where new results are expected from NA62 and KOTO.
The existence of a light right-handed neutrino contributes to the effective neutrino number $\Delta N_{eff}$, which is also constrained by cosmological considerations, and this may affect the viability of our model. There is some uncertainty as to the value of this constraint, but a commonly used number is that of Ref. [60].

1. For SSM it is assumed that the $W'$ is a heavy copy of the SM $W$, with the same couplings to fermions.
As we saw above, our model requires the condition of Eq. 29 to explain the $R(D^{(*)})$ anomalies, and this is only slightly weaker than the usual weak interaction. At the same time, the exchange of a $W'$ can bring the new $\nu_R$ into thermal equilibrium with the SM particles through scattering of right-handed neutrinos with tauons, at a rate proportional to $(g_R M_W / g_L M_{W'})^4 |V_{R3\tau}|^4$ relative to the usual weak interaction. In fact, with $V_{Rcb}/V_{cb} \sim 1$ this would result in $\Delta N_{eff} \sim 1$, bringing into question the viability of our model. The mixing-induced interaction, proportional to $\xi_W$, is smaller and does not lead to large contributions to $\Delta N_{eff}$. However, the aforementioned scattering of right-handed neutrinos with tauons is only effective for temperatures $T_R$ above $T_\tau \sim m_\tau$. At the time of big bang nucleosynthesis (BBN), the temperature is about $T_{BBN} \sim 1$ MeV, implying that $\Delta N_{eff}$ is suppressed by a factor $r$, where $g_*(T)$ is the effective number of relativistic degrees of freedom at temperature $T$ and $g_*(T_{BBN}) = 10.75$. In addition, $g_*(T_R)$ is larger than $g_*(T_{QCD}) \sim 58$, since the QCD phase transition temperature $T_{QCD}$ is of order a few hundred MeV [61]. All this implies that the contribution to $\Delta N_{eff}$ from our additional neutrino is less than 0.1 and safely within the BBN constraint. Similarly, τ decay processes into $\nu_R$ plus other SM particles are also suppressed by the same factor $r$, but one might worry about additional processes without this suppression, for example $\nu_R$ scattering off an electron or a muon. However, these are proportional to the additional mixing parameters $|V_{R3e(\mu)}|^4$ and can be made sufficiently small by lowering $V_{R3e(\mu)}$.
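The quoted suppression can be checked numerically. The scaling below is an assumption: it uses the standard entropy-dilution estimate $\Delta N_{eff} \approx (g_*(T_{BBN})/g_*(T_R))^{4/3}$ for a species that decouples at $T_R$, which is presumably what the factor $r$ (not written out in this extract) encodes.

```python
# Rough check of the Delta N_eff suppression estimate, assuming the standard
# entropy-dilution scaling for a species decoupling at T_R:
#   Delta N_eff ~ (g_*(T_BBN) / g_*(T_R))**(4/3)
g_bbn = 10.75
g_R = 58.0      # lower bound quoted for g_*(T_R), above the QCD transition

delta_Neff = (g_bbn / g_R) ** (4.0 / 3.0)
print(delta_Neff)  # ~0.105, consistent with "less than 0.1" up to rounding
```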
Another potentially worrisome process is the exchange of a $Z'$ in the scattering of a $\nu_R$ off an electron or a SM neutrino $\nu_L$. In this case the interaction strength is proportional to $(g_Y^2/M_{Z'}^2)^2$ [49], and when compared to $Z$-exchange-induced $\nu_L$ scattering off an electron or $\nu_L$, it is suppressed by a factor of $(M_Z/M_{Z'})^4$. The constraint on $\Delta N_{eff}$ becomes in this case a lower bound on the $Z'$ mass, $M_{Z'} \gtrsim 200$ GeV.
In conclusion, we find that new right-handed currents can affect the semi-tauonic B decay anomalies in a way that is consistent with current bounds, including those on the effective number of neutrino species from BBN. A confirmation of a high value for $R(J/\psi)$ would exclude them as a viable explanation and would also exclude new left-handed currents. The most promising way to rule out this explanation of the anomalies is the exclusion of a $W'$ in the τ channel at the LHC in the mass range 1-1.4 TeV. The suppression of our $W'$ couplings to light fermions significantly complicates this comparison.
of PRC (Grant Nos. 11575111 and 11735010), and partially supported by a grant from Science and Technology Commission of Shanghai Municipality (Grants No. 16DZ2260200) and National Natural Science Foundation of China (Grants No. 11655002).
NAMES OF PLANTS IN THE RUSSIAN AND CHINESE LINGUISTIC PICTURE OF THE WORLD (BASED ON PROVERBS OF THE RUSSIAN AND CHINESE LANGUAGES)
The article presents the results of a comparative linguoculturological analysis of linguistic units representing the "plant" cluster in the Russian and Chinese linguistic world-image. The object of the research is proverbs with a phytonym component (the names of trees and plants and their fruits). The relevance of this study is justified by the fact that the study and comparison of the linguistic world-images of different peoples is a developing direction in linguistics, because the worldview and attitude of a people, their own version of the image of the world, are fixed in the semantics of words. The aim of the research is to analyze the proverbs with names of plants in the Russian and Chinese languages and to describe the features of the Russian and Chinese linguistic world-image through the prism of linguocultural studies. Since plants have been present in the life of every nation since antiquity and are an integral part of human life, it is not surprising that phytonyms are often found in phraseological units, in particular proverbs. Phytonym components have an extremely wide associative potential and vividly reflect the peculiarities of national consciousness and the specifics of the linguistic world-image of a people as a whole; they can be international or nationally specific. The research showed that in the Russian and Chinese languages, proverbs containing plant-name components can be divided into 3 groups: 1) proverbs with a tree component (fruit and non-fruit); 2) proverbs with a cultivated plant component; 3) proverbs with an uncultivated (wild) plant component. The analysis showed that in both languages the absolute majority are proverbs with a tree component, both fruit and non-fruit. The study also revealed that, from a semantic point of view, a significant number of proverbs with plant-name components describe a person's character, appearance and physical qualities, and moral qualities. Semantic analysis showed that proverbs with the names of plants can share a similar image and be used in the same meaning in the languages under consideration, or they can have completely different connotations, which are due to the national specifics of culture, lifestyle, and the imaginative perception of the external world by the two peoples.
INTRODUCTION
In modern linguistics, an actual direction of research is the study of language from the point of view of its relationship with material and mental reality, as a reflection of the specific way of perceiving the world and realities both by an individual and by a certain people. In this regard, the object of close attention of modern linguists is the description of the linguistic world-image.
At present, the concept "linguistic world-image" is at the stage of active study: linguists are making attempts to formulate its full and detailed definition and to differentiate it from other related concepts. In science, there is no single approach to the definition of this term. V. A. Maslova notes that along with the most frequently used notion "linguistic world-image", various linguists use such definitions as "linguistic intermediate world", "linguistic representation of the world", "linguistic model of the world", etc. (Maslova, 2001, p. 63). V. A. Maslova also gives her own definition: the linguistic world-image is a set of all human knowledge about the world, captured in linguistic form (Maslova, 2001, p. 63). N. Yu. Shvedova emphasizes that "a world-image is a picture of everything that exists as an integral and multifaceted world, in its structure and in the connections of its parts, which is comprehended by the language, developed by the centuries-old experience of the people and carried out by means of linguistic nominations: firstly, a person, his material and spiritual life activity and, secondly, everything that surrounds him: space and time, living and inanimate nature, the area of myths created by man, and society" (Shvedova, 1999, p. 14). Therefore, people's culture, way of life, beliefs and religious views are reflected in the linguistic world-image of the people.
The relevance of this study is justified by the fact that the study and comparison of the linguistic world-images of different peoples is a developing direction in linguistics, since "the worldview and attitude of the people, their own version of the image of the world, are fixed in the semantics of words" (Borisova, 2018, p. 66).
This article presents the results of a comparative linguocultural study of proverbs including the names of plants, which play an important role in the Russian and Chinese linguistic world-image. In Chinese, the Russian concept of the proverb corresponds to the word yanyu (谚语, 'proverb'), which is also distinguished by an instructive and didactic character and in which the experience and life wisdom of the Chinese people, their mentality and world-image, are reflected.
METHODOLOGY
The research methods are conditioned by the set goals and objectives. The main method is comparative-typological, which makes it possible to identify commonalities and differences in the use of plant names in the Russian and Chinese world-images. Descriptive and contextual methods have been used to interpret the meaning given by representatives of the two nationalities to proverbs containing a plant component. The method of continuous sampling was used when working with the phraseological funds of the Russian and Chinese languages.
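To illustrate the sampling step, here is a toy keyword-based tagger for the three groups introduced in the Results section below. The stem lists and example proverbs are illustrative only; a real study would work with lemmatized dictionary forms for Russian and with segmented Chinese text.

```python
# Toy sketch: tagging proverbs by phytonym group via simple stem matching.
# Stems and groups are illustrative, not a complete lexicon.
GROUPS = {
    "tree":       ["дерев", "дуб", "берез", "груш", "яблон", "树", "松", "梨"],
    "cultivated": ["пшениц", "капуст", "морков", "麦", "萝卜", "白菜"],
    "wild":       ["роз", "玫瑰", "莲"],
}

def classify(proverb):
    # return every group whose stems occur in the proverb text
    return sorted({g for g, stems in GROUPS.items()
                   if any(stem in proverb for stem in stems)})

print(classify("Яблоко от яблони недалеко падает"))  # ['tree']
print(classify("萝卜青菜，各有所爱"))                  # ['cultivated']
```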
RESULTS
Russian and Chinese proverbs often include the names of various trees, herbs and flowers, cultivated and wild plants, etc. In this regard, proverbs can be divided into 3 groups: 1) proverbs with a tree component (fruit and non-fruit); 2) proverbs with a cultivated plant component; 3) proverbs with an uncultivated (wild) plant component.
Proverbs with a Tree Component
Proverbs with a tree component (fruit and non-fruit) are the most common in the Russian and Chinese languages. The analysis of the material showed that in Russian proverbs such fruit trees as pear and apple are more common, in Chinese there are pear, peach, and plum; among barren trees there are oak, birch, aspen, willow and pine (in Russian proverbial expressions), willow and pine (in Chinese).
First of all, it should be noted that the volume of the meaning of the lexeme tree differs in the Russian and Chinese languages. The Explanatory Dictionary of S. I. Ozhegov gives the following definition: a tree is 'a perennial plant with a hard trunk and branches extending from it, forming a crown' (Ozhegov). The Xinhua Dictionary gives the following interpretation of the character 树 (tree): 1. The general name of trees. 2. 树大根深, 'a large tree with a deep root'; figuratively, somebody who has power and a solid foundation (Xinhua Dictionary). Moreover, in Chinese, shrubs can also be called a tree. Therefore, the important components of the meaning of the word tree are the trunk and branches in Russian, and the root and height in Chinese.
The following associations and ideas about trees are common in the linguistic world-image of the Russians and the Chinese: 1) A job whose results will not be visible immediately (Деревья скоро садят, да не скоро с них плоды едят (Trees are planted soon, but they will give fruits not soon) / 砍柴上山,捉鸟上树 (To cut brushwood you need to climb a mountain, to catch a bird you need to climb a tree)); 2) Parents and children (От доброго дерева добрый и плод (From a nice tree, you will have nice fruit) / 哪样树开哪样花 (A flower will look like its tree));
3) Cause-and-effect relationship (Не от добра дерево листья роняет (A tree drops leaves not because of good reason) / 树不坚硬虫来咬 (If the tree is not solid, worms like to eat it)); 4) Assessment of human qualities (Дерево познается по плодам, а человек по делам (A tree is known by its fruits, and a person by deeds) / 树正不怕月影斜 (A straight tree is not afraid of the curve of the moon's shadow)).
It should be noted that both in Russian proverbs and in Chinese, the root of a tree is especially prominent as the basis, an important component of both plants and the foundations of humanity: Дерево держится корнями, а человек -друзьями (A tree is kept by its roots, and a person is kept by friends) / 树从根上起 (The tree grows from its root), 树长根,人长心 (A tree grows a root, a person grows a soul).
Nevertheless, differences can be traced in the world-images of the Russians and the Chinese. Thus, in Russian, a tree also has associations with friendship (Человек без друзей, что дерево без корней (A man without friends is like a tree without roots)) and with the upbringing of the young generation (Гни дерево, пока гнётся, учи дитятко, пока слушается (Bend a tree while it is bending, teach a child while he is listening)), and in Chinese, with spiritual origins (树从根上起 (A tree grows from its root); 树要皮,人要脸 (A person needs a face, a tree needs bark) (this speaks of the importance of reputation: a person needs to preserve honor as much as a tree needs bark); 树长根,人长心 (A tree grows a root, a person grows a soul) (if a tree grows a root, it can become thick; if a person has a conscience, he has morality)).
The most revered barren trees in Russian culture are the oak, the birch, and the pine. Thus, the oak in proverbs acts as a symbol of spiritual and physical strength: Не срубишь дуба, не отдув губы (You cannot cut down an oak without puffing out your lips); Держись за дубок, дубок в землю глубок (Hold on to the oak, the oak is deep in the ground). In the Russian world-image, the birch is associated with girlish beauty and is also often used to characterize a person: Кривая береза не удержит снега, плохой человек не сдержит слова (A crooked birch will not hold snow, a bad person will not keep his word); Горбатую березу распаришь да поправишь, а дурного человека хоть парь, хоть май всё таким останется (You can steam and straighten a humpbacked birch, but a bad person, steam him as you may, will remain the same).
Vivid phytonyms reflecting the peculiarities of the Chinese national world-image are trees such as the pine (松树 sōngshù), which is associated with such human qualities as nobility, unyielding will, and longevity (不学杨柳随风摆,要学青松立山冈 (Don't learn from the poplar and willow that sway in the wind, but learn from the green pine tree on the hill)); and bamboo (竹子 zhúzi), associated with righteousness, honesty, integrity, unselfishness, decency, honor, will, and a strong spirit. However, bamboo also has a negative connotation: 竹子开花兆灾 (If bamboo blossoms, expect misfortune).
The acacia is often found in Chinese proverbs, where it has a negative connotation: 刺槐做的棒槌-扎手 (A rolling pin made of acacia wood pricks the hand) (the proverb is used when talking about difficult matters); 指桑树,骂槐树 (To point at the mulberry tree while scolding the acacia ('to speak in a roundabout way, in hints, not bluntly')).
Proverbs with the name of a fruit tree as a component are closely connected with the geographical characteristics of the areas where the peoples live, as well as with ideas about the useful properties of the tree and its fruit.
The pear tree among the Russians was endowed with signs of holiness and purity, and among the Chinese it was a symbol of longevity.
Both in Chinese and in Russian, a pear can be associated with a feeling of love (Любит, как душу, трясет, как грушу (He loves like a soul, shakes like a pear)). Nevertheless, in Russian this component most often occurs when talking about unreal events (Когда на сосне груши будут (When there are pears on a pine tree)) or about the value of certain phenomena or things (На грушу лезть -или грушу рвать, или платье драть (To climb a pear tree is either to pick pears or to tear your dress)); in Chinese, it concerns the importance of gaining one's own life experience (百闻不如一见,百见不如一干 (Go into the water yourself and you will find out whether it is deep or shallow; try a pear yourself and you will know whether it is sour or sweet)). The apple-tree component is very common in Russian proverbs, but it is practically absent from Chinese proverbs. In the Russian linguistic world-image, the apple tree is used to depict the continuity of generations and the similarity of parents and children (Какова яблонька, таковы и яблочки (As is the apple tree, so are the apples); Яблоко от яблони недалеко падает (An apple does not fall far from the apple tree)) and to trace causal relationships (От яблони яблоко родится, а от ёлки -шишка (An apple is born from an apple tree, and a cone from a fir tree)), etc. In the Chinese world-image, the peach tree is used with similar meanings, associated with longevity and spiritual purity, and the plum characterizes the best qualities of a person (千朵桃花一树儿生 (A thousand peach flowers are born from one tree)).
Proverbs with a Cultivated Plant Component
In Russian and Chinese proverbs, among the names of cultivated plants and their fruits the following are most often used: wheat, carrot, radish, pumpkin, peas, beans, etc.
Among the names of cereal plants in the Russian linguistic world-image, wheat is characterized by a pronounced cultural connotation (Borisova, 2018, p. 66). The wheat component in Russian proverbs is a symbol of prosperity, owing to the fact that bread has always remained the main food on the Slavs' table: В поле пшеница годом родится, а добрый человек всегда пригодится (In the field, wheat is born once a year, but a kind person will always be useful); Удобришь землицу -снимешь пшеницу (Fertilize the land, and you will harvest the wheat). Such proverbs are instructive in nature and are used to characterize a person. In contrast, in the Chinese language the phytonym wheat is found in proverbial expressions containing an assessment of a person's actions and life: 麦高于禾,风必吹之 (If the wheat grows higher than the other seedlings, the wind will shake it) (about the problems arising from a person's fame).
The phytonym cabbage is also used in Russian proverbs to mean wealth in the house: Ни один рот без капусты не живет (Not a single mouth can live without cabbage); Вырастишь капусту -в закромах не будет пусто (If you grow cabbage, the bins will not be empty). In Chinese, the phytonym cabbage is used to characterize a person: 萝卜青菜,各有所爱 (One loves radish, the other cabbage (meaning: tastes differ)).
In Russian and Chinese proverbs, the carrot can be used to characterize a person (Рожа -хоть репу сей, хоть морковь сажай (A face you could sow turnips on, or plant carrots); 一个萝卜一个坑 (Each carrot has its own hole: 1) one person, one place, and there are no extra places; 2) to each his own place; 3) about a person's fitness for his position or about a solid, conscientious style of work)). The carrot is also used to describe social phenomena (Лук с морковкой хоть и с одной грядки, да неодинаково сладки (Onions and carrots are unequally sweet, although from the same garden bed); 胡萝卜加大棒 (Carrot plus stick (about the simultaneous use of rewards and punishments for motivation))).
Consequently, the use of the cultivated plant component in the Russian and Chinese languages differs greatly, which is associated with differences in the cultural characteristics of the two peoples' ways of life.
Proverbs with an Uncultivated (Wild) Plant Component
Among the phytonyms of this subgroup of proverbs, the component rose is often found in both languages. Since ancient times, among many peoples, the rose has been something beautiful, associated with blooming life and youth. On the one hand, the plant embodies an unapproachable beauty and, on the other, a reward for labor, as evidenced, for example, by the Russian proverbs «Без шипов розы не бывает» (There is no rose without thorns) and «Чем красивее роза, тем длиннее у нее шипы» (The more beautiful the rose, the longer its thorns).
In Chinese proverbs, the rose acquires an additional connotation, "behind something beautiful there are flaws": 玫瑰花可爱,刺太扎手 (The rose flower is cute, but its thorns are extremely prickly) (meaning "everything attractive has its drawbacks").
However, the most common plant in Chinese proverbs is the lotus, which can have different connotations: 1) causal relationship (To pluck lotus flowers is to pull out the rhizome); 2) the relationship and interdependence of phenomena: 藕断丝连 (The rhizome of the lotus is broken, but the fibers still stretch); 3) a description of a person who has achieved success by his own work, despite a low social status: 莲花开在污泥中,人才出在贫寒家 (A lotus grows in the silt, talents come out of poor families) (The Great Chinese-Russian Dictionary).
CONCLUSIONS
Plant names play an important role in Russian and Chinese proverbs. They serve as an object for evaluating a person, his appearance, moral qualities, and behavior, and also characterize different aspects of a person's life. The vocabulary of the thematic group "plants" forms one of the significant fragments of the linguistic world-image. This allows us to speak about the importance of the plant code in culture (Borisova, 2014, p. 44). The study of proverbs with the names of plants in the Russian and Chinese linguistic world-image is interesting and important in the linguoculturological aspect, since in these names we find many associative meanings that allow us to better present the peculiarities of mastering the world by means of the national language.
The Secret Doctrine and the Gigantomachia: Interpreting Plato’s Theaetetus-Sophist
The Theaetetus’ ‘secret doctrine’ and the Sophist’s ‘battle between gods and giants’ have long fascinated Plato scholars. I show that the passages systematically parallel one another. Each presents two substantive positions that are advanced on behalf of two separate parties, related to one another by their comparative sophistication or refinement. Further, those parties and their respective positions are characterized in substantially similar terms. On the basis of these sustained parallels, I argue that the two passages should be read together, with each informing and constraining an interpretation of the other.
Plato, as is well known, presents the Sophist as a literary companion to the Theaetetus. Most conspicuously, the Sophist's first line-Theodorus: 'We've come at the proper time by yesterday's agreement, Socrates' (216a1)-directly answers the last lines of the Theaetetus-Socrates: 'let us meet here again in the morning, Theodorus' (210d3-4). 1 In this way and others, Plato rhetorically flags the Sophist as a continuation of the recorded conversation begun at Theaetetus 143d1.
The Sophist does not merely pick up where the Theaetetus leaves off, however. The two dialogues are more intimately connected. In what is perhaps the most famous example, the Sophist fills out the Theaetetus' discussion of false judgment. Rather than simply branching out in new directions, the Sophist, at least on occasion, is informed by, returns to, and supplements substantive discussions in the Theaetetus.
In what follows, I aim to highlight another such point of contact between the two dialogues. Specifically, I will present three comprehensively developed parallels between, on the one hand, the Theaetetus' discussion of the flux theorists and their 'secret doctrine' and, on the other hand, the Sophist's discussion of the giants in their fight against the 'friends of forms.' I will show that [1] both passages exhibit the same basic structure, in which two substantive positions are presented on behalf of two separate parties, related to one another by their comparative sophistication or refinement, and that [2] those parties and [3] their respective positions are characterized in remarkably similar terms (see Figure 1).
Elements of these parallels have been observed previously, but they are almost always mentioned only in passing, typically consigned to footnotes. 2 By focusing on them directly and considering them as a group, I aim to support a pair of related methodological theses. In particular, I submit, Plato's efforts to wed these sections of the Theaetetus and Sophist suggest that an interpretation of the relevant part of either dialogue both can inform and should complement an interpretation of the other. If correct, we will have a trove of fresh resources, from Plato himself no less, to guide our interpretations of two of the most notoriously challenging passages in the corpus.
Let me begin with the relevant section of the Sophist (246a-249d). The Eleatic Visitor there presents 'something like a battle between gods and giants […] over being ' (246a4-5). 3 The battleground is ontology. Each party aims to advance a 'detailed account […] of that which is' (245e6).
From this introduction, one might expect those on either side of the field to uniformly hold a single view. But this is not the case. At any rate, the giants, on whom I will focus, 4 are hardly a monolithic group. They split into two factions. At the outset of the battle, we meet the first-the 'crude giants,' as I will call them. They 'insist that only what offers tangible contact is, since they define being as the same as body' (246a10-b1). Their initial foray, then, consists in offering a view about both the intension and the extension of being. What it is to be, on the crude giants' account, is to be a body. Accordingly, all and only bodies-those things affording tangible contact-are. 5 That identification of being and body leaves the crude giants immediately vulnerable to attack and prefigures the introduction of a second faction to take up their standard. The trouble for the crude giants begins with the extensional component of their thesis. Some of the things that respectable Greeks would count among beings do not seem to be bodies. 6 Of special note are souls and the virtues.
Since the crude giants are said to be difficult-'perhaps just about impossible' (246d1)to talk to, we cannot be certain whether they would [i] admit souls and virtues as genuine exceptions and so challenges to their thesis, [ii] bite the bullet and preclude them from their ontology, or like the Stoics after them, 7 [iii] take both souls and virtues to be bodies and so unproblematic. The Visitor suggests that the crude giants, hardened in their ways, would deflect the question, stubbornly reasserting their thesis and failing to engage (247c4-5). When challenged, they just 'won't listen […] any more' (246b3).
In the crude giants' stead, the Visitor thus questions some imagined 'better people' (246d7 and e2), whom I will call 'refined giants.' The refined giants are partial to the view of their crude compatriots but ultimately concede defeat on that front. 'The soul seems to them to have a kind of body,' making it, if not a body, at least bodily and so providing a small measure of solace; 'but as far as [the virtues] are concerned, they're ashamed and don't dare either to agree that they are not beings or to insist that they are all bodies' (247b8-c2). Since the refined giants will neither dismiss souls and the virtues as nonbeings nor accept them as bodies, options [ii] and [iii] are off the table. This leaves only option [i] remaining. With the soul and the virtues in mind, the refined giants retreat from the crude position that everything is a body and, with it, from the position that being and body are the same.
To retrench, the Visitor claims, they have to reflect upon the various kinds of beings that they recognize-namely, bodies and now souls and the virtues as well-and determine what is common among them that might qualify them all as beings (247d2-4). 8 Since the refined giants are not present to speak for themselves, Theaetetus and the Visitor suggest a new, more fortified position on their behalf. The refined giants are thus agreed to advance the view that 'a thing really is if it has any capacity at all, either by nature to do something to something else or to have even the smallest thing done to it by even the most trivial thing;' they 'take it as a definition 9 that those which are amount to nothing other than capacity' (247d8-e4; cf. 248c4-5).
This new position, as one commentator puts it, 'is not a complete abandonment' of the crude giants but rather 'an attempt to articulate the spirit of their original position, in a way that accommodates the Visitor's counterargument.' 10 Its emphasis on capacity (dunamis) is distilled from the introduction of bodies as whatever is causally salient. For the crude giants, the noteworthy mark of a body was that it afforded tangible contact (246a10). 11 Bodies, that is, were first presented as having a particular kind of capacity for action or passion. The refined giants thus home in on the only feature of bodies that the crude giants had singled out for attention and present it in its pure, unadulterated form. That is, the refined giants are not merely possessed of a better, or more refined, character than their crude compatriots; their position refines that of their crude compatriots as well. And that latter refinement is of no small significance. It allows the refined giants to treat souls and the virtues alongside bodies, 12 thus disarming the Visitor's challenge.
With that survey of the giants' tours of duty complete, I turn now to the Theaetetus' fluxists and begin to draw parallels between the two. The fluxists make their entrance in connection with Theaetetus' proposal that 'knowledge is simply perception' (151e2-3). Socrates will disabuse him of that view, but the path forward is long and largely indirect. For the most part, Socrates' objections are leveled at one of two theses that Theaetetus' proposed definition of knowledge is purported to imply: 13 the familiar, Protagorean dictum that 'man is the measure of all things' (152a2-3) and a much less familiar, 'secret doctrine' (152c10) held by some 'fluent fellows' (181a4), who are commonly referred to in the secondary literature as 'flux theorists,' or 'fluxists.' 14 Initially, it seems as if the fluxists are most concerned to advance a theory of perception and some related theses in the philosophy of language. But as it turns out, these are derivative parts of their doctrine. At its core, Socrates claims, the secret doctrine presents an ontology. As he puts it, their various claims about perception and language 'begin from the principle [archê] that everything is really motion and there is nothing besides motion' (156a3-5).
I discuss that principle below. What is important to observe at the outset is that just as the Sophist presented two factions of giants, so, too, the Theaetetus presents two factions of fluxists. Before introducing the heart of the secret doctrine, Socrates issues a warning. We must take care, he says, that 'none of the uninitiated are listening' (155e3). What can one say about this latter group? To begin, they are obviously not party to the content of the secret doctrine, for otherwise there would be no reason to avoid expressing it in their presence. Nonetheless, they cannot simply be identified with those who have not yet come to know the secret doctrine, for otherwise Theaetetus would count among their ranks, and Socrates would not go on to present it to him. 15 Instead, they are broadly in league with those already initiated, but they stand, as of yet, separated off; much like fraternity pledges, they are candidates for being brought into the fold.
Further, the uninitiated are distinguished from their initiated compatriots by their comparative lack of refinement. They are said to be 'very crude people [amousoi]' (Theait. 156a2) relative to the 'much more refined [polu kompsoteroi]' initiates (156a2). 16 That is, the uninitiated are the crude counterparts to a faction of more refined fluxists, standing to them just as the crude giants stood in relation to their more civilized and sophisticated compatriots (Soph. 246c9, d7, and e2). 17 The crude fluxists are further characterized in two ways that even more powerfully liken them to the Sophist's crude giants. First, Plato describes them in corporeal terms, associating them with the earth especially. They are 'hard to the touch [sklêros]' and 'resistant [antitupous]' (Theait. 155e7-156a1), 18 making them firm examples of bodies, as the crude giants describe them, which 'offer tangible contact' (Soph. 246a10). 19 The nature of their development is even more telling. We learn that 'There are no pupils and teachers among these people. They just spring up on their own [automatoi anaphusontai]' (Theait. 180b9-c1). That final expression models their genesis on that of plants, growing of themselves from the earth. This finds an analogue in the Visitor's description of the crude giants as 'earth people [gêgeneis]' (Soph. 248c1), 'grown from seed [spartoi]' and 'sprung from the land itself [autochthones]' (247c5). 20 And second, much as the crude giants are 'just about impossible' to converse with (Soph. 246d1), one cannot have a philosophical discussion with a crude fluxist 'any more than [one] could with a maniac' (Theait. 179e5-6). The trouble, as in the case of the crude giants, who could not be compelled to 'answer less wildly' (Soph. 246d6), is that the crude fluxists are restless. As Theodorus puts it: 'As for abiding by what is said, or sticking to a question, or quietly answering and asking questions in turn, there is less than nothing 21 of that in their capacity' (Theait. 179e7-180a2). 22 The crude fluxists resemble the crude giants not only in description but also in doctrine. Their insistence that 'nothing exists but what they can grasp with both hands' (Theait. 155e4-5) very nicely tracks the crude giants' insistence 'that only what offers tangible contact is' (Soph. 246a10-b1). 23 Indeed, their view is even more forcefully recalled by a later summation of the crude giants' position: namely, that 'anything they can't squeeze in their hands is absolutely nothing' (Soph. 247c5-7). 24 These giants thus approach the battlefield 'clutching rocks and trees with their hands' (246a8-9)-that is, clinging to the tangible bodies on the ground, in contrast to the invisible beings that the friends of forms, their foes, champion from more ethereal climes.
These parallels cannot but be deliberate on Plato's part. The crude fluxists and the crude giants are presented as being one and the same, as are their positions. Since we have seen that each of these crude factions is compared to a more refined one, we should expect to find further parallels in Plato's presentations of their more refined compatriots and their more refined positions. The texts push, albeit less forcefully, precisely in that direction.
Apart from their relative refinement, which should be regarded as an initial parallel, neither the nobler fluxists nor giants are particularly well described. There is accordingly little to compare across Plato's characterizations of each. That absence of characterization, however, should itself be regarded as a further parallel in Plato's presentation, for its explanation is in each case the same: namely, both camps are ultimately presented as being fictional. 25 Because the Theaetetus' crude fluxists are not capable of conversation, Socrates, Theaetetus, and Theodorus agree to 'come to the rescue' (164e6) and 'take their doctrine out of their hands and consider it for ourselves' (180c5-6). Whether someone actually holds the doctrine in question is incidental to the discussion. As a result, the refined fluxists are not so much advocates of a position as they are placeholders for anyone who might (be tempted to) advance it. 26 Similarly, in the Sophist, Theaetetus and the Visitor agree to deal with the crude giants' intransigency 'by making them actually better than they are […] in words' (246d4-5). As a consequence, the focus must again be more on the position than on those who hold it. As the Visitor says, 'we're not concerned with the people; we're looking for what's true' (246d8-9).
What, then, can we say about the ways in which their respective doctrines are presented? At first glance, frankly, they would appear to be at odds. That of the refined giants is framed in terms of capacity (dunamis). Their central tenet, to recall, is that 'those which are amount to nothing other than capacity' (Soph. 247e3-4). That of the refined fluxists, by contrast, is framed in terms of motion (kinêsis). Their central tenet is that 'everything is really motion and there is nothing besides motion' (Theait. 156a3-5).
There are, nevertheless, two principal classes of parallels that serve to largely bridge that difference in framing and more broadly align the two doctrines. 27 The first class bears directly on their central claims. To begin, we may note, both are ontological. Both, further, appeal to a single criterion (capacity; motion). 28 And in each case, that criterion is similarly dichotomous. For the refined giants, there are two basic kinds of capacities, those for action and those for passion (Soph. 247e1, 248c5 and 7). Likewise, for the refined fluxists, 'there are two kinds of motion, […] the one having the capacity to act and the other the capacity to be acted upon' (Theait. 156a5-7, trans. after McDowell).
Their central claims are thus related not only in structure but also in content, as both use capacities for action and passion to ground their respective doctrines. 29 This parallel is strengthened by a common conception of actions and passions and, thus, of the capacities for them. First, both assume that actions and passions just are motions. In light of their treatment of the two kinds of motion, I take it that this is obvious for the refined fluxists. 30 But one also finds the assumption operative in the Sophist, where those in the grips of the refined giants' doctrine find it inevitable that, for example, if being known is a passion, 'then insofar as [a thing] is known, it's moved [kineisthai]' (248e3-4). 31 Accordingly, both parties assume that capacities, generally, are capacities for motion. 32 Second, both assume that actions and passions are systematically interrelated. In particular, for every action there is a distinct, complementary and reciprocal passion, and vice versa. The refined fluxists thus speak of their 'twin births' (Theait. 156b1), and the refined giants assume, for example, that 'if knowing is doing something, then necessarily [anagkaion] what is known has something done to it' (Soph. 248d10-e1). Accordingly, both assume that the capacities for those actions and passions are analogously paired. 33 A second, indirect class of parallels obtains between the corollaries drawn, in each dialogue, from those central ontological claims. Before addressing them, however, a preliminary point is in order. In the Sophist, those corollaries are revealed in the Visitor's treatment of the giants' opponents, the friends of forms, who initially accept a qualified version of the refined giants' capacity doctrine. At this point in the exchange, the friends of forms alter the capacity doctrine only by qualifying its scope. The doctrine applies in full, they allege, to everything that the refined giants recognize in the ontology (that is, as the friends of forms would put it, to the entire domain of coming-to-be); yet there is also, on their view, a more exalted domain of imperceptible, non-bodily forms to which the capacity doctrine does not apply (248c1-9). The friends of forms are thus a valuable source for the refined giants' capacity doctrine.
Two points in that discussion are especially striking. First, insofar as they hold the capacity doctrine, the friends of forms are said to 'break [bodies] up into little bits and call each a process of coming-to-be instead of being' (Soph. 246b9-c2). This cannot but recall the refined fluxists' claim that each body is an 'aggregate [hathroismati]' of 'becomings' that resist description in terms of 'the verb "to be"' (Theait. 157b1-c3; cf. 152d7-e1). Second, and even more notably, the friends of forms take perception to be the analogue of knowledge in the domain of coming-to-be (Soph. 248a10-11). Since, again, this is the only domain that the refined giants admit, the implication is that knowledge is no mere analogue of perception for the refined giants; it just is perception. That is to say, the refined giants are presented as being committed to the single most dialectically significant corollary of the refined fluxists' position-namely, the claim that 'knowledge is simply perception' (Theait. 151e2-3). 34 All told, I submit, we thus have considerable evidence for strongly associating the refined fluxists with the refined giants and for strongly associating their respective positions.
If, as these parallels suggest, the Theaetetus' fluxists and the Sophist's giants are, at the very least, philosophical kin, then our interpretative approach to these dialogues should be dramatically altered. On the one hand, we are licensed to draw upon, and would do well to consult, the relevant section of one dialogue to inform and advance an interpretation of that of the other. On the other hand, we are at the same time constrained, in that an interpretation of the one should not, on the whole, fail to broadly complement an interpretation of the other. In each respect, standard interpretations of the Theaetetus-Sophist will require revision and supplementation. My hope is that we are now better poised to determine the form that those emendations should take. 35 Centrone 2008, n. 106 and 107, Cornford 1935, 48 n. 2, Karfík 2011, 124 and 131, Klein 1977, 89, Notomi 1999, 217 n. 21, Polansky 1992, 96, Ross 1953, 102-103, Sedley 2004, 46 n. 9, Seeck 2011, 74 n. 62, Špinka 2011, 232, Teisserenc 2012, 74, 76, and 78, Waterfield 1987, 38 n. 2, and Wiehl 1967, n. 74 and n. 78. Many prominent commentaries-e.g., Bluck 1975, Bostock 1988, Burnyeat 1990, Chappell 2005, Cooper 1990, Duerlinger 2005, Heidegger 1997, Migliori 2007, Rosen 1983, Rijk 1986, and Seligman 1974-fail to address these parallels in any way. Campbell 1861 and 1867 and Gonzalez 2011 are exceptions to prove the rule, both in consistently comparing the two passages and in doing so more than merely in passing. 3 Combat metaphors run throughout and frame this section of the Sophist. Notomi 1999, 217 n. 22 presents an extensive catalogue.
The Theaetetus' fluxists are similarly engaged in battle: 'There is no small fight going on about [their conception of being], anyway-and no shortage of fighting men' (179d4-5). Indeed, as a group, they form 'an army led by Homer' (153a1-2) and wage a 'most vigorous campaign' to advance their theory (179d8). For extended discussion, see Nercam 2013. 4 The giants' side of the fight grounds most of the parallels that I will draw, below, to the Theaetetus. 5 On the connection between body and tangible contact, see note 11 below. 6 Note, for example, Theaetetus' emphatic responses at Soph. 246e6-247a4. 7 On the Stoics' engagement with this passage in the Sophist, see the excellent study by Brunschwig 1994. Sellars 2010, offering a different assessment, is suspicious of a connection. 8 The assumption that all beings will have some one thing in common in virtue of which they are beings is accepted by all parties. It is hardly innocent, however. Aristotle's focal analysis of being is a clear, ancient alternative. Wittgensteinian family resemblance is a modern one. 9 There is a large body of literature on whether 'horos' should be translated as 'definition' or, less strongly, 'mark.' For a recent overview of and engaging contribution to the debate, see Leigh 2010. While I am inclined to think 'definition' the better option, nothing below will depend on how one decides the question. Since I am arguing for a pair of methodological claims about how one should approach interpreting the Theaetetus and the Sophist, I aim to keep substantive interpretive claims to a minimum. 10 Beere 2009, 7. His development of the point comes in three stages. Mine, in the remainder of the paragraph, overlaps with the first of them. 11 The refined giants will later associate body with visibility as well (247b3-5). Body is similarly marked by tangibility and visibility in the Timaeus (31b4 especially). Contrast both Platonic passages with Aristotle's view, on which the primary mark of a body is instead to be extended in all three dimensions (e.g., DC I.1, 268 a 6-8). For Aristotle, tangibility is a mark of bodies not as such but only insofar as they are perceptible (GC II.2, 329 b 6-7). 12 How, precisely, the refined giants' position accommodates the earlier problem cases is not specified in the text. Crivelli suggests, plausibly to my mind, that a soul or a virtue might count as a being for the refined giants since each 'causes people to act in ways in which they would not in its absence' and thus 'may be described as having the power of affecting things in [… having] the quasi-causal power of making them be in certain ways' (2012, 87). But what matters is simply that the refined giants' position does, somehow or other, allow them to admit souls and the virtues as beings. 13 While my argument will not depend upon the point, I agree with Burnyeat 1982, esp. pp. 5-6, with n. 2 that the basic argumentative structure of this section of the Theaetetus is a reductio: knowledge is not perception since various implications of that view are absurd. Chappell 2005, 51 shows that this conception of the argument's structure is compatible with both unitarian and revisionist readings of the text (i.e., with both the A reading and the B reading [on which, see Burnyeat 1990, 7-10]), and I intend to remain neutral with respect to those options here.
14 Socrates attributes the 'secret doctrine' to Protagoras, but the very fact that it is presented as a 'secret' raises a question about the grounds for pinning it to his historical namesake (on which, see Brancacci 2011). And indeed, as soon as Socrates raises the doctrine, he rebrands it as a kind of ancient wisdom, something with respect to which 'all the wise men of the past […] stand together' (152e2-3). In the lines that follow, Heraclitus, Empedocles, Epicharmus, and Homer are all placed in Protagoras' company. Parmenides is notable for being explicitly excepted. Melissus is later said to be in league with him (180e2 and 183e3). 15 Nor, conversely, does someone count among the initiates simply for being familiar with the theory. Socrates presents neither himself nor the fluxists' primary opponents as having been initiated. 16 I have substituted Levett and Burnyeat's translation (in Cooper 1997) of 'kompsoteroi' with an alternative from LSJ. 17 There are at least two senses of 'refined' operative in each passage. First, the refined fluxists and the refined giants are both comparatively 'gentle' in character. This sense of 'refinement' is particularly evident, I submit, in their relative willingness to engage in discussion (I comment further on this feature of Plato's presentation below). Second, the refined fluxists and the refined giants are both comparatively 'clever.' The crude fluxists are uneducated, as 'amousos' implies, as are, by their own admission, the crude giants since education, culture, intelligence, and the like are neither tangible nor visible.
In both passages, assessments of comparative refinement can be made not only synchronically (e.g., Theait. 156a2 and Soph. 246d7-8) but also diachronically. This is because the interlocutors recognize a process of refinement in each passage-namely, initiation in the Theaetetus and betterment in the Sophist (246d4-5).
The treatment of that process is perhaps significant. In the Theaetetus, nothing explicit is said about the way in which the crude and refined fluxists' respective positions are related. Yet, the initiation metaphor is suggestive. As Plato presents it elsewhere, an initiation culminates in the initiate changing her mind (Meno 76e6-9), in the face of dialectical puzzles (Euthydemus 277d-e), but by refining her positions rather than simply jettisoning them (Phaedo 69b-c). Notably, in Socrates' own initiation, that movement leads away from the particular bodies, and even body generally, that the uninitiated are presented as focusing upon (Symposium 210a-b). Admittedly, though, this reconstruction is too speculative to count as compelling, let alone decisive, evidence of a parallel. taton].' On the other hand, he denies that the difficulty in talking with the crude fluxists is the same as that in talking with the crude giants (1867, 120); I take Soph. 246d6, quoted above, to meet the worry he raises. 22 An anonymous referee rightly notes that Theaetetus 179e-180c, on which I have drawn both in this paragraph and the one prior, does not unambiguously refer to the crude fluxists. An important indicator that this is, indeed, the way to take the reference comes in Theodorus' description of those in question as those 'who profess to be adepts [prospoiountai empeiroi]' (179e4-5). Since 'prospoieô' connotes pretending (LSJ points to Gorgias 519c3 for this coloring of the verb), the description can be paraphrased as 'those who profess to be but are not in fact initiates.' 23 The crude fluxists go on to deny 'that actions and processes and the invisible world in general have any place in reality' (155e5-6). The last component of that denial reveals a conception of body as tangible and visible, in that order. See note 11, above. The first two components further liken them to the crude giants, who also do not admit capacities, unlike their refined compatriots. On this point, I disagree with Benardete, who takes the crude giants to 'deny […] the changeable' (1984, I.108). They deny changes (or, at least, deny that changes are fundamental), not the bodies capable of change. 24 Campbell 1861, 50 n. 6; 1867, 123 n. 1 is particularly sensitive to resemblances among Plato's formulations of the crude fluxists' and giants' positions. 25 Sedley 2004, 46 n. 9 notes the parallel; Diès 1992, 109 n. 3 links the passages I use to support it. Whether this is merely a matter of presentation is a separate question that I will not here address.
It is notable for Plato to develop a position on behalf of no one in particular. Indeed, Brown 1998, 182 observes that, in the early and middle dialogues, Plato is unlikely to develop a position even on behalf of a determinate proponent who is neither participating in the conversation nor present for it. 26 Protagoras and others are no doubt regularly associated with the doctrine, but at critical junctures it is explicitly wrested from them and developed independently. Presumably in relation, the refined fluxists' central tenet is called a 'veiled truth' (155d10) hidden within what was already said to be a 'secret doctrine' (152c10). 27 Ross 1953, 102-103, Benardete 1984, II.41 n. 65, Sedley 2004, 46 n. 9, Centrone 2008, n. 107, and Karfík 2011 are among those who liken the refined fluxists' and refined giants' respective doctrines. While the parallels that I will present are perhaps insufficient to completely bridge the gap between capacity and motion, and so to simply identify the two doctrines (on this point, see Gonzalez 2011, 69-70), they reveal deep and pervasive agreements between those doctrines that are, I submit, sufficient to motivate the pair of methodological theses that I ultimately have in view. 28 On the significance of a commitment to a single criterion, see note 8, above. 29 Gonzalez notes the parallel, observing that, in both passages, 'all things are identified with a dunamis of either ποιεῖν or παθεῖν' (2011, 70). 30 While, so far as I can see, there is no cause to doubt that, for the refined fluxists, all actions and passions are motions, the status of the converse claim-that all motions are actions and passions-is less certain. Though I suspect the refined fluxists would accept it as well, I am not relying on the latter claim for the parallel in the body of the paper. 31 Similarly, in a related context, the Visitor glosses 'action and passion' as 'motion' and 'that which acts or is acted upon' as 'that which moves' (Soph. 249b2). The inference in the body of the paper, it bears noting, is not presented directly on behalf of the refined giants. Rather, Theaetetus and the Visitor treat it as an implication of their doctrine when demarcating them from the friends of forms, who accept a qualified form of the doctrine. I discuss the evidential import of the passage below. 32 Leigh 2010, 76 emphasizes this point in her discussion of the Sophist. It is not trivial that capacities should be conceived of exclusively as capacities for motion. Focusing on the Sophist as well, Beere 2009, 12-13 proposes that the difficulties arising in relation to this position prompt Aristotle to introduce both activities that are not also motions and, with them, capacities that are not also capacities for motion.
This link between capacity and motion may also help to explain why, in the Sophist, the giants' opponents, the friends of forms, might present their own position as denying that being has any share of motion, rather than as denying a claim about capacities directly. Just as the fluxists' opponents proclaim that being is 'unmoving' and 'stands still' (Theait. 180b2, 180e1-3, and 183d1), the friends of forms maintain that 'being always stays the same and in the same state' (Soph. 248a12). 33 The systematic coupling of capacities for action and passion, though not uncommon in the corpus (see, e.g., Rep. VI, 507e6 ff. and Leg. X, 903b4-9), is similarly nontrivial. To draw a comparison with the Charmides, it would preclude a capacity, like knowledge, from acting upon itself (cf. Barnes 2001, 79). 34 Seeck 2011, 78 n. 70 draws a related parallel to Theait. 184b7-185a7. 35 I owe the impetus for this paper to Charles Kahn and Susan Meyer, who encouraged me to develop and support its central thesis, a version of which I had rather flatly asserted in a footnote to my doctoral dissertation. I am also grateful to Francisco Gonzalez, two anonymous referees, and audiences at Portland State University and an SAGP meeting at Fordham University for comments on drafts.
Discriminating Bacterial Infection from Other Causes of Fever Using Body Temperature Entropy Analysis
Body temperature is usually employed in clinical practice through strict binary thresholding, aiming to classify patients as having fever or not. In recent years, other approaches based on the continuous analysis of body temperature time series have emerged. These are not only based on absolute thresholds but also on patterns and temporal dynamics of these time series, thus providing promising tools for early diagnosis. The present study applies three time series entropy calculation methods (Slope Entropy, Approximate Entropy, and Sample Entropy) to body temperature records of patients with bacterial infections and other causes of fever in search of possible differences that could be exploited for automatic classification. In the comparative analysis, Slope Entropy proved to be a stable and robust method that could bring higher sensitivity to the realm of entropy tools applied in this context of clinical thermometry. This method was able to find statistically significant differences between the two classes analyzed in all experiments, with sensitivity and specificity above 70% in most cases.
Introduction
Body temperature is a key clinical parameter. It is usually assessed once per shift in hospital wards and has always been considered a hallmark of infectious diseases. However, the values obtained with the standard measurements are interpreted dichotomously: either the patient has a fever or is afebrile.
Body temperature assessment is also highly dependent on the method of measurement. Central methods are accurate and reliable (pulmonary artery catheter, urinary bladder, esophagus) but they are not suitable in most clinical scenarios. Tympanic temperature is often used as a replacement for central temperature because values are close, and it is more convenient and less invasive [1]. Peripheral temperature can be assessed in different anatomical locations (mouth, armpit). Despite not being as accurate [2], peripheral measurements are the standard procedure in clinical practice.
Furthermore, the definition of fever is arguably flawed, as it depends on many factors such as age, gender, circadian rhythms, or underlying conditions [3][4][5]. As a matter of fact, there is no universal threshold for fever, as a wide range of temperatures has been shown in individuals considered healthy [6,7]. Some efforts to standardize the normal body temperature range have been carried out in the past [7] but they have not been transferred into clinical practice.
Traditionally, attempts have been made to find clinical differences in the patterns of fever caused by infectious diseases (malaria, tuberculosis, typhoid fever) [8]. Nevertheless, none of these approaches are sufficient to make clinical decisions [8,9]. Furthermore, a wide spectrum of noninfectious conditions can also induce the synthesis and release of pyrogenic cytokines and eventually cause fever [10].
Since body temperature regulation is a dynamical process, by obtaining just two or three measurements per day, a wealth of information is lost. However, some devices are available to obtain high-frequency measurements of body temperature. Body temperature monitoring has been proven useful in certain clinical scenarios when more frequent measurements (associated or not to alarm settings) entail earlier recognition of fever [11][12][13][14]. Moreover, temperature monitoring devices enable the registry of temperature time series. This allows its use as a continuous variable, instead of a series of isolated values [15].
Similar to many other biological systems, thermoregulation can be considered a complex process, and might therefore be analyzed under the scope of nonlinear dynamics. Complexity metrics have previously been applied to other biological variables [16]. It has been widely demonstrated that changes in complexity of biological signals are associated with damage to or degradation of the system [17][18][19][20][21][22][23].
In this context, entropy statistics could be of clear interest to unveil certain characteristics of the thermoregulation process and, perhaps, the underlying cause of fever. In previous works, we have already demonstrated the feasibility of this approach. For example, in [24], we described a method based on the entropy statistic Slope Entropy (SlpEn) [25] to distinguish between body temperature time series from malaria and dengue patients. The method achieved up to 90% correctly classified records using a single numerical feature computed for each record. In another study [26], a different entropy method, Sample Entropy (SampEn) [27], was used with the same purpose of distinguishing among body temperature time series from infectious diseases: tuberculosis, nontuberculosis, and dengue fever patients. The global accuracy achieved was close to 70%.
Other works have used a combination of features; this is the case for the work described in [28], which used temperature temporal patterns to detect tuberculosis. In [29], the authors used a more sophisticated approach using the Fourier transform, entropy, energy, power, and a set of additional coefficients to train a quadratic support vector machine to carry out the classification of tuberculosis, intracellular bacterial infections, dengue, and inflammatory and neoplastic diseases temperature time series.
The goal of this study is to assess if patients with bacterial infections have significant changes in the entropy of their body temperature compared with patients with other infections or other causes of fever. As the entropy statistic for the analysis, we chose SlpEn for its good performance in previous studies [24,30,31]. For comparative purposes, we included more widely used methods such as Approximate Entropy (ApEn) [32] and SampEn, which have been successfully used in a myriad of similar biosignal classification works [33][34][35][36][37][38][39].
SlpEn
The recently proposed time series entropy measure termed Slope Entropy (SlpEn) [25] can achieve high classification accuracy using a diverse set of records [24,25,30]. Despite its recent introduction, it has already been implemented in scientific software tools such as EntropyHub (https://github.com/MattWillFlood/EntropyHub.jl, accessed on 15 February 2022) and CEPS (Complexity and Entropy in Physiological Signals) [40].
The first step of SlpEn computation is the extraction from an input time series x = {x_0, x_1, . . . , x_{N−1}} of a set of consecutive overlapping subsequences of length m, commencing at sample i, x_i = {x_i, x_{i+1}, . . . , x_{i+m−1}}, 0 ≤ i < N − m + 1 (m being the embedded dimension and N the total length of the time series, with m << N). Each of the N − m + 1 extracted subsequences, x_i, can then be transformed into a new one of length m − 1 by computing and storing the differences between each pair of consecutive samples in the subsequence, namely d_k = x_{i+k+1} − x_{i+k}, 0 ≤ k < m − 1. Using, in its basic configuration [25], 5 different symbols from an alphabet (for example +2, +1, 0, −1, −2), the differences obtained are represented by these symbols instead, according to two input thresholds, δ and γ, and the expressions described in [25]. Further details of SlpEn implementation and examples can be found in [24,25]. A software library using this method is also described in [40].
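To make the symbolization step concrete, the following is a minimal Python sketch of SlpEn, assuming the five-symbol alphabet and thresholds described above; the function name and parameter defaults are illustrative, and the exact pattern-probability normalization of the reference implementation in [25] may differ slightly.

```python
import numpy as np
from collections import Counter

def slope_entropy(x, m=3, gamma=0.2, delta=0.001):
    """Minimal SlpEn sketch: symbolize consecutive slopes, then compute
    a Shannon-type entropy over the frequencies of the symbolic patterns."""
    x = np.asarray(x, dtype=float)
    patterns = []
    for i in range(len(x) - m + 1):
        diffs = np.diff(x[i:i + m])          # the m - 1 consecutive differences
        symbols = []
        for d in diffs:
            if d > gamma:
                symbols.append(+2)           # steep rise
            elif d > delta:
                symbols.append(+1)           # mild rise
            elif d >= -delta:
                symbols.append(0)            # approximately flat
            elif d >= -gamma:
                symbols.append(-1)           # mild fall
            else:
                symbols.append(-2)           # steep fall
        patterns.append(tuple(symbols))
    freqs = Counter(patterns)
    p = np.array(list(freqs.values()), dtype=float) / len(patterns)
    return -np.sum(p * np.log(p))
```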
In addition to SlpEn, two other entropy methods-ApEn and SampEn-were applied to the time series in order to assess the hypothesized improvement in classification accuracy that SlpEn could bring to the analysis. Although these methods have been used extensively, and they are characterized and described in great detail in a number of publications [41][42][43][44][45][46], they are depicted for completeness in the next two subsections.
Approximate Entropy
ApEn [32] is also based on extracting subsequences of length m from the input time series, x_i = {x_i, x_{i+1}, . . . , x_{i+m−1}}, as for SlpEn. Then, a distance is computed between every subsequence x_j and a fixed reference subsequence x_i, taken as the maximum absolute difference between their paired samples, d_ij = max_{0 ≤ k < m} |x_{i+k} − x_{j+k}|. If the number of comparisons falling below a predefined threshold r (termed matches, d_ij < r) is computed for two consecutive embedded dimensions (m and m + 1), a counter can be defined for each reference subsequence as C_i^m(r) = (number of j such that d_ij < r) / (N − m + 1). Computing the average of the logarithms of these counters, φ^m(r) = (N − m + 1)^{−1} Σ_i log C_i^m(r), the result of ApEn can finally be obtained as ApEn(m, r, N) = φ^m(r) − φ^{m+1}(r).
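A didactic Python sketch of the φ-based formulation above follows; it is not the optimized implementation used in the study. Self-matches are counted, as in the original definition, which keeps every counter strictly positive.

```python
import numpy as np

def apen(x, m=3, r=0.2):
    """Approximate Entropy sketch: ApEn = phi(m) - phi(m + 1)."""
    x = np.asarray(x, dtype=float)
    N = len(x)

    def phi(mm):
        # all overlapping subsequences of length mm
        subs = np.array([x[i:i + mm] for i in range(N - mm + 1)])
        logC = []
        for i in range(len(subs)):
            # Chebyshev distance to every subsequence (self included)
            d = np.max(np.abs(subs - subs[i]), axis=1)
            logC.append(np.log(np.mean(d < r)))
        return np.mean(logC)

    return phi(m) - phi(m + 1)
```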
Sample Entropy
The first steps of the SampEn algorithm [35] are the same as for ApEn. However, when counting the matches, subsequences are not compared with themselves, formally 0 ≤ j < N − m + 1, with j ≠ i. Averaging the resulting match counts at the two consecutive dimensions m and m + 1 yields two quantities, B^m(r) and A^m(r) respectively, from which SampEn is computed as SampEn(m, r, N) = −log(A^m(r) / B^m(r)).
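A corresponding sketch for SampEn is given below. Self-matches are excluded, as just described; for simplicity this sketch counts matches over all available subsequences at each dimension, whereas the canonical definition restricts both counts to the same N − m templates, so results may differ marginally.

```python
import numpy as np

def sampen(x, m=3, r=0.2):
    """Sample Entropy sketch: SampEn = -log(A / B), no self-matches."""
    x = np.asarray(x, dtype=float)
    N = len(x)

    def matches(mm):
        subs = np.array([x[i:i + mm] for i in range(N - mm + 1)])
        total = 0
        for i in range(len(subs) - 1):
            # compare subsequence i only with j > i (so j != i by construction)
            d = np.max(np.abs(subs[i + 1:] - subs[i]), axis=1)
            total += int(np.sum(d < r))
        return total

    B = matches(m)        # matches at dimension m
    A = matches(m + 1)    # matches at dimension m + 1
    return -np.log(A / B) if A > 0 and B > 0 else float("inf")
```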
Experimental Dataset
The study was conducted at Hospital Universitario de Móstoles (Madrid, Spain). Patients older than 18 years old admitted to the general Internal Medicine ward presenting with fever at admission and/or suspected infectious disease were considered suitable for inclusion. Pregnancy and inability to cooperate with the monitoring process were considered exclusion criteria.
Temperature values were obtained through a probe (Truer Medical, Inc., Orange, CA, USA) placed in the external auditory canal (EAC), after otoscopy to check the integrity of the tympanic membrane. Data from the EAC were used as surrogates of central temperature [1]. The probe was wired to a Holter device (TherCom, Innovatec) that registered one measurement per minute. When feasible, the monitoring process was performed in real-time. Otherwise, data were stored in the device and downloaded later for analysis. The aim was to perform 24-h recordings, but in some cases, the process was stopped earlier due to poor compliance of the patient, displacement of the probes for long periods (preventing the proper recording of data), or abnormally low temperatures, suggesting that measurements were clearly inaccurate.
Patients were classified into two categories concerning diagnosis: bacterial infection (confirmed or suspected) or others. The latter included patients with nonbacterial infections (viral, fungal, etc.) or with fever caused by inflammatory diseases, cancer, or fever of unknown cause (when bacterial infection was deemed to be excluded).
Temperature time series were processed by visual inspection. In some cases, the beginning and/or the end of the recording were trimmed to ensure the stability of the signal. Disconnections of at most 5 measurements were repaired through linear interpolation. For longer disconnections, the segment was removed, provided the remaining interval was clean.
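As an illustration of this gap-repair rule, the sketch below linearly interpolates short disconnections, assuming missing samples are encoded as NaN; the function name and the NaN encoding are illustrative rather than taken from the study's actual pipeline.

```python
import numpy as np

def repair_gaps(temps, max_gap=5):
    """Fill interior NaN runs of at most max_gap samples by linear
    interpolation; longer runs are left as NaN for manual removal."""
    temps = np.asarray(temps, dtype=float).copy()
    n = len(temps)
    i = 0
    while i < n:
        if np.isnan(temps[i]):
            j = i
            while j < n and np.isnan(temps[j]):
                j += 1
            # repair only interior gaps short enough to interpolate
            if j - i <= max_gap and i > 0 and j < n:
                temps[i:j] = np.interp(np.arange(i, j),
                                       [i - 1, j],
                                       [temps[i - 1], temps[j]])
            i = j
        else:
            i += 1
    return temps
```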
The experimental dataset contained 10 body temperature time series of patients with confirmed bacterial infection and 13 from patients with other causes of fever. The lengths of the time series are shown in Table 1. In all cases, different fixed time series lengths were used to assess the classification accuracy of each metric and ensure length equality: 500, 600, 700, 800, 900, 1000, and 1100. Those time series below the cut-off length were discarded in that specific experiment.
Experiments and Results
All the experimental time series were processed using the three entropy calculation methods described previously: SlpEn, ApEn, and SampEn. They were first cut to the lengths stated above, and the central part of each record was used for the analysis, as it would theoretically be the most stable segment (thermal equilibrium reached, probe still in place). The entropy result was used as the classification feature, with performance quantified by Sensitivity (Se) and Specificity (Sp) [47] at a threshold obtained from the corresponding ROC curve (closest point to (0,1); an example is shown in Figure 2) [48][49][50]. The statistical significance was assessed using the Wilcoxon-Mann-Whitney test [51], with α = 0.05. Input parameters were varied in the ranges m ∈ [3,9] and γ, r ∈ [0.10, 0.90]. For SlpEn, δ was kept constant at δ = 0.001. Time series were normalized to zero mean and unit standard deviation. The stationarity of the input time series was assessed by computing the standard deviation of each consecutive 50-sample window, which yielded fairly similar values.
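The threshold selection and significance testing just described can be sketched as follows. The decision direction (lower entropy flagged as bacterial, consistent with the loss of entropy discussed later) and the function name are assumptions made for illustration.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def evaluate_feature(ent_bacterial, ent_other, alpha=0.05):
    """Single-feature classification: pick the entropy threshold whose ROC
    point is closest to (0, 1), then report Se, Sp, and the
    Wilcoxon-Mann-Whitney p-value."""
    values = np.concatenate([ent_bacterial, ent_other])
    labels = np.concatenate([np.ones(len(ent_bacterial)),
                             np.zeros(len(ent_other))])
    best = None
    for thr in np.unique(values):
        pred = values <= thr                    # predicted bacterial (assumed lower entropy)
        tp = np.sum(pred & (labels == 1))
        fn = np.sum(~pred & (labels == 1))
        tn = np.sum(~pred & (labels == 0))
        fp = np.sum(pred & (labels == 0))
        se, sp = tp / (tp + fn), tn / (tn + fp)
        dist = np.hypot(1 - sp, 1 - se)         # distance of (FPR, TPR) to (0, 1)
        if best is None or dist < best[0]:
            best = (dist, thr, se, sp)
    _, thr, se, sp = best
    p = mannwhitneyu(ent_bacterial, ent_other, alternative="two-sided").pvalue
    return {"threshold": thr, "Se": se, "Sp": sp,
            "p": p, "significant": p < alpha}
```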
Table 2 shows the results for lengths N = 500, 600, and 700. For N = 500, SlpEn achieved good classification accuracy for m = 3 and in the γ region of 0.15-0.30. Neither ApEn nor SampEn reached significance for any combination of their input parameters m and r, after a grid search for m between 3 and 9, and r between 0.10 and 0.90 in 0.05 steps. It is important to note that these methods are very sensitive to length, and N = 500 is arguably too short for them.

For N = 600 and N = 700, the results were similar. ApEn and SampEn did not discriminate; SlpEn remained stable in the same region of m = 3 and γ = 0.15-0.25, but other parameter configurations in the same region of γ, with values of m such as 4, 6, and 7, also reached discriminatory power.

Table 2. Experiment results for lengths N = 500, 600, 700 using SlpEn, Approximate Entropy (ApEn), and Sample Entropy (SampEn). Parameter grid search for m, between 3 and 9, and r and γ, between 0.10 and 0.90 in 0.05 steps. The values of the input parameters are included as (m, r) or (m, γ) for cases when p < 0.05 after the grid search. Otherwise, no combination provided significant results, represented by −−. Statistical significance was reached only by SlpEn.

Table 3 displays the results for lengths N = 800 and 900. For N = 800, the trend is the same as in Table 2. SlpEn is able to find differences in the vicinity of m = 3 and γ = 0.20, but ApEn and SampEn are unable to yield any statistically significant classification. For N = 900, the number of parameter combinations for SlpEn increases, with γ fairly stable in the same region around 0.20, for almost any m value except 5. Additionally, ApEn is also significant in the region around m = 3 and r = 0.20.

Table 3. Experiment results for lengths N = 800, 900 using SlpEn, ApEn, and SampEn. Parameter grid search for m, between 3 and 9, and r and γ, between 0.10 and 0.90 in 0.05 steps. The values of the input parameters are included as (m, r) or (m, γ) for cases when p < 0.05 after the grid search. Otherwise, no combination provided significant results, represented by −−. Statistical significance was reached by SlpEn and ApEn.
Finally, the results for N = 1000 and N = 1100 are shown in Table 4. No more lengths were tested, since not enough time series would have been available for N > 1100. In both cases, the number of significant parameter combinations increased markedly, with SlpEn stable in the same regions as for other N values, and even ApEn and SampEn reaching significance for N = 1100. Figure 3 shows a plot of the results for N = 1100, γ = r = 0.20, and m = 3, for the three methods tested.

Table 4. Experiment results for lengths N = 1000, 1100 using SlpEn, ApEn, and SampEn. Parameter grid search for m, between 3 and 9, and r and γ, between 0.10 and 0.90 in 0.05 steps. The values of the input parameters are included as (m, r) or (m, γ) for cases when p < 0.05 after the grid search. Otherwise, no combination provided significant results, represented by −−. Statistical significance was reached by all methods in some cases.
Discussion
The experiments explored the capability of SlpEn, ApEn and SampEn to distinguish between two classes of body temperature records: time series from patients with bacterial infection and time series from patients also with fever but due to other causes. The experimental set available enabled a study using lengths from 500 up to 1100 samples.
For all these lengths, SlpEn was able to find significant differences when the input parameters were m = 3 and γ = 0.15-0.25, with additional m values available depending on N values. This illustrates the fact that SlpEn is fairly stable and robust, as also demonstrated in other studies based on this recent method [30,31,52].
The results obtained using ApEn were significant only for N = 900 (Table 3) and N = 1100 (Table 4). This is in accordance with the reported high sensitivity of ApEn to the length of the input time series [44]. One of the most popular guidelines for this minimum length using ApEn is N ≥ 10^m [53], which for m = 3 translates to N ≥ 1000, in agreement with the results in this study. To illustrate how the ApEn statistics were computed for the most unfavorable case, N = 500, the percentage of estimated probabilities based on at least 10 matches (the weak criterion) was 80.17 ± 10.37, and based on at least 100 matches (the strong criterion), 6.52 ± 13.01 [54].
SampEn only achieved significance for N = 1100. Although SampEn is known to be sensitive to input time series length, it is usually claimed to be more robust in this regard than ApEn. However, in other cases, we have also found that ApEn performed better than SampEn in classification tasks such as in [38].
There is also an association between time series length and classification performance in terms of specificity and sensitivity [47]. For shorter time series, there are some cases where these metrics achieve values below 0.7. As the length increases, the performance improves, with more values in the vicinity of 0.85, arguably very high for a biomedical signal classification application.
Therefore, for lengths shorter than 1000 samples, imposed by the operational difficulties linked to obtaining long-term body temperature records, SlpEn seems a good choice to find differences between the record classes present in the experimental dataset. If longer series were available, other methods such as ApEn and SampEn could also be applied, which remains to be further studied.
From a clinical perspective, the results of this work suggest that patients admitted to the hospital with a diagnosis of bacterial infection had a misregulation of their body temperature, measured with entropy statistics. This is in accordance with findings of a previous study by our group [55]. The complexity of biosignal time series seems to be an indicator of the integrity and performance of biological systems, and disease usually exhibits low levels of entropy metrics [16,20]. Bacterial infection may entail the development of sepsis, a clinical entity with a very high risk of complications and death, which is unusual with other infections or other causes of fever. In our opinion, the loss of entropy that we have observed in the temperature curves of patients with bacterial infections may be another facet of homeostasis disturbance.
Remarkably, these results were irrespective of the confirmation of fever by the staff nurse (standard measurements), or the maximum temperature obtained. These results suggest that body temperature may supply relevant information, over and above attaining a certain pre-established febrile threshold.
In fact, body temperature may provide clues in many clinical aspects, as long as enough information is obtained through continuous monitoring. It has already proved to be useful to assess the prognosis of critically ill patients in the Intensive Care Unit [19,56], to forecast fever peaks [13], and to classify patients according to the cause of fever [26,29].
We are aware of the many limitations of this work. On the one hand, the monitoring system has some issues: the tympanic probe is prone to be displaced, wired probes can be bothersome for the patient, the Holter device needs a wireless connection to a computer within Bluetooth range, etc. For these reasons, over half of the patients in the study were excluded from this analysis because the recordings were lost or defective. Several adjustments were carried out during the study to solve or reduce the impact of these issues, such as real-time monitoring through a wireless network and periodic backups of data to keep a copy of the recording in case there was a disconnection. In any case, we acknowledge that the final sample was small and, although significant differences have been found between the two groups, reliability might be limited for this reason.
On the other hand, as already discussed, entropy metrics need clean time series and are in general more informative the longer the data. This has been a limitation in this work and is a common problem for the analysis of biological time series recorded in real life, as many factors may cause artifacts and make data unsuitable for evaluation.
In our opinion, future research should focus on two issues. Firstly, acquisition of long and clean time series. For this purpose, wireless and ergonomic probes that fit properly at the external auditory canal could improve the quality of the recordings and minimize the loss of information. Secondly, obtaining temperature recordings in a wide range of clinical settings-including healthy individuals for comparison-may provide details about physiological processes and may broaden the utility of clinical thermometry to subtler issues than just the identification of fever peaks.

Funding: This research was funded by Instituto de Salud Carlos III, grant number PI17/00856 (Co-funded by European Regional Development Fund, "A way to make Europe"). The APC was funded by Instituto de Salud Carlos III.
Institutional Review Board Statement:
The study was conducted according to the guidelines of the Declaration of Helsinki, and approved by the Ethics Committee of Hospital Universitario de Móstoles with protocol code 2017/033 on 26 October 2017.
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. The data are not publicly available due to confidentiality reasons.
Conflicts of Interest:
The authors declare no conflict of interest.
Osteopontin: a leading candidate adhesion molecule for implantation in pigs and sheep
Osteopontin (OPN; also known as Secreted Phosphoprotein 1, SPP1) is a secreted extracellular matrix (ECM) protein that binds to a variety of cell surface integrins to stimulate cell-cell and cell-ECM adhesion and communication. It is generally accepted that OPN interacts with apically expressed integrin receptors on the uterine luminal epithelium (LE) and conceptus trophectoderm to attach the conceptus to the uterus for implantation. Research conducted with pigs and sheep has significantly advanced understanding of the role(s) of OPN during implantation through exploitation of the prolonged peri-implantation period of pregnancy, when elongating conceptuses are free within the uterine lumen, requiring extensive paracrine signaling between conceptus and endometrium. This is followed by a protracted and incremental attachment cascade of trophectoderm to uterine LE during implantation, and development of a true epitheliochorial or synepitheliochorial placenta exhibited by pigs and sheep, respectively. In pigs, implanting conceptuses secrete estrogens, which induce the synthesis and secretion of OPN in adjacent uterine LE. OPN then binds to αvβ6 integrin receptors on trophectoderm, and the αvβ3 integrin receptors on uterine LE, to bridge conceptus attachment to uterine LE for implantation. In sheep, implanting conceptuses secrete interferon tau that prolongs the lifespan of CL. Progesterone released by CL then induces OPN synthesis and secretion from the endometrial GE into the uterine lumen where OPN binds integrins expressed on trophectoderm (αvβ3) and uterine LE (identity of specific integrins unknown) to adhere the conceptus to the uterus for implantation. OPN binding to the αvβ3 integrin receptor on ovine trophectoderm cells induces in vitro focal adhesion assembly, a prerequisite for adhesion and migration of trophectoderm, through activation of: 1) P70S6K via crosstalk between FRAP1/MTOR and MAPK pathways; 2) MTOR, PI3K, MAPK3/MAPK1 (Erk1/2) and MAPK14 (p38) signaling to stimulate trophectoderm cell migration; and 3) focal adhesion assembly and myosin II motor activity to induce migration of trophectoderm cells. Further, large in vivo focal adhesions assemble at the uterine-placental interface of both pigs and sheep and indicate the involvement of sizable mechanical forces at this interface during discrete periods of trophoblast migration, attachment and placentation in both species.
Electronic supplementary material: The online version of this article (doi:10.1186/2049-1891-5-56) contains supplementary material, which is available to authorized users.
Introduction
Domestic animal models for research are generally underappreciated [1]; however, pigs and sheep offer unique characteristics of pregnancy, as compared to rodent or primate models, and studies of pigs and sheep have provided significant insights into the physiology of implantation, including: 1) elongation of the blastocyst into a filamentous conceptus; 2) the protracted peri-implantation period of pregnancy when the conceptus is free within the uterine lumen, requiring extensive paracrine signaling between conceptus and endometrium, as well as nutritional support provided by uterine secretions; 3) a protracted and incremental attachment cascade of trophectoderm to endometrial epithelium during implantation; and 4) development of a true epitheliochorial or synepitheliochorial placenta, respectively, that utilizes extensive uterine and placental vasculatures for hematotrophic nutrition, and placental areolae for histotrophic support of the developing fetuses. Our understanding of the complex mechanistic events that underlie successful implantation and placentation across species has been and will likely continue to be advanced by studies of pigs and sheep as biomedical research models and to increase reproductive success in animal agriculture enterprises providing high quality protein for humans.
Overview of the biology of osteopontin (OPN)
OPN is a secreted extracellular matrix (ECM) protein that binds to a variety of cell surface integrins and several CD44 variants [2][3][4][5][6]. Integrins are transmembrane glycoprotein receptors composed of non-covalently bound α and β subunits that promote cell-cell and cell-ECM adhesion, cause cytoskeletal reorganization to stabilize adhesion, and transduce signals through numerous signaling intermediates [7,8]. Integrin-mediated adhesion is focused within a primary mechanotransduction unit of dynamic structure and composition known as a focal adhesion whose size, composition, cell signaling activity and adhesion strength are force-dependent [2,9]. The intrinsic properties of the ECM in different niches and tissue-level compartments affect the composition and size of focal adhesions that, in turn, modulate cell behavior including gene expression, protein synthesis, secretion, adhesion, migration, proliferation, viability and/or apoptosis [10].
Integrins are dominant glycoproteins in many cell adhesion cascades, including well defined roles in leukocyte adhesion to the apical surface of polarized endothelium for extravasation of leukocytes from the vasculature into tissues [11]. A similar adhesion cascade involving interactions between the ECM and apically expressed integrin receptors on the uterine luminal epithelium (LE) and conceptus (embryo and placental membranes) trophectoderm is proposed as a mechanism for attachment of the conceptus to the uterus for implantation; the initial step for the extensive tissue remodeling that occurs during placentation [12]. OPN is a leading candidate adhesion molecule for implantation in pigs and sheep [13].
Timeline of key advancements in understanding the role of OPN as an attachment factor for implantation
OPN was first observed in endometrial tissue when, in 1988, Nomura et al. [43] performed in situ hybridization to localize OPN in mouse embryos, the endometrium from the gravid and non-gravid uterine horns of pregnant mice, and the endometrium from mice exposed to intrauterine injection of oil to induce a deciduoma. High levels of OPN mRNA were detected in the LE, but not GE, of the gravid uterine horns. Interestingly, epithelial expression of OPN appeared to be specific to pregnancy because little to no OPN mRNA was observed in the uterine LE of non-gravid or pseudopregnant mice [43]. In addition to the LE, high levels of OPN mRNA were localized to the granulated metrial gland (GMC) cells of decidual and deciduoma tissues, with lower numbers of OPN-positive cells in the deciduoma of uteri [43]. It is noteworthy that these investigators were the first to argue that OPN plays a wider role than had previously been assumed, and that its functions are not confined to bone development. The decidual cells that express OPN have since been confirmed to be uterine natural killer (uNK) cells [44,45]. Similar to expression in mice, immunocytochemical studies performed by Young and colleagues in 1990 [25] localized OPN protein to the decidua of women; however, in contrast to mice, OPN was also expressed by the secretory phase endometrial GE. It was suggested that the absence of OPN in GE during the proliferative phase of the menstrual cycle indicated that changes in expression in GE of normal cycling endometrium were the result of hormonal regulation and that the function(s) of OPN in the endometrium might be associated with its ability to enhance cell attachment [25]. A significant conceptual advance regarding the function(s) of epithelia-derived OPN was made by Brown and coworkers [26] in 1992, when OPN mRNA and protein were localized to epithelial cells of a variety of organs, including the hypersecretory endometrial GE associated with pregnancy in women. In the secretory epithelia of all organs examined, OPN protein was associated with the apical domain of the cells, and when the luminal contents were preserved in tissue sections, proteins secreted into the lumen were positive for OPN staining. It was hypothesized that OPN secreted by epithelia, including uterine epithelia, binds integrins on luminal surfaces to effect communication between the surface epithelium and the external environment [26]. Between 1992 and 1996, Lessey and coworkers established that transient uterine expression of αvβ3 and α4β1 integrins defines the window of implantation in women [46][47][48] and that altered expression of these integrins correlates with human infertility [49,50]. Noting that the αvβ3 and α4β1 integrin heterodimers present during the implantation window bind OPN, these investigators suggested involvement of OPN and integrins in trophoblast-endometrial interactions during the initial attachment phase of implantation [46].
Comprehensive examination of the temporal and spatial expression and hormonal regulation of uterine OPN mRNA and protein and integrin subunit proteins in the uteri and placentae of sheep (discussed in detail later in this review), performed from 1999 through 2002, provided the first strong evidence that OPN is a progesterone-induced secretory product of endometrial glands (histotroph) that binds integrins on apical surfaces of endometrial LE and conceptus trophectoderm to mediate attachment of uterus to trophectoderm for implantation [18,29,51,52]. Indeed, pregnant Day 14 ewes, which lack uterine glands (uterine gland knockout, UGKO phenotype), exhibit an absence of OPN in uterine flushings compared with normal ewes, and do not maintain pregnancy through the peri-implantation period [53]. Similarly, functional intrauterine blockade of αv and β3 integrin subunits, that combine to form a major receptor for OPN, reduces the number of implantation sites in mice and rabbits [54,55]. Further evidence for regulation of uterine OPN by sex steroids was provided by results from studies using human and rabbit models. Progesterone treatment increased OPN expression by human endometrial adenocarcinoma Ishikawa cells (in vitro findings, 2001) as well as endometrium of rabbits (in vivo findings, 2003) [56,57]. In contrast, i.m. injection of estrogen induced expression of OPN in the uterine LE of cyclic pigs (in vivo, 2005) [58]. Results from pigs were the first to suggest that conceptuses can directly regulate the regional expression of OPN in the endometrium at specific sites of implantation through secretion of estrogens [58,59]. Microarray studies from 2002 and 2005 strongly support a role for OPN during implantation [60][61][62]. Two reports confirmed that OPN is the most highly up-regulated ECM-adhesion molecule in the human uterus as it becomes receptive to implantation [60][61][62].
Research regarding OPN has begun to focus on its interactions with integrin receptors in the female reproductive tract. In 2009, Burghardt and colleagues [63] reported the in vivo assembly of large focal adhesions containing aggregates of αv, α4, α5, β1, β5, α-actinin and focal adhesion kinase (FAK) at the uterine-placental interface of sheep, which expand as pregnancy progresses. It is noteworthy that OPN was present along the surfaces of both uterine LE and trophectoderm, although it was not determined whether it co-localized to the focal adhesions [63]. Similar focal adhesions form during implantation in pigs [64,65]. Affinity chromatography and immunoprecipitation experiments revealed direct in vitro binding of porcine trophectoderm αvβ6 and uterine epithelial cell αvβ3 integrins, and ovine trophectoderm αvβ3 integrins, to OPN [64,66]. These were the first functional demonstrations that OPN directly binds specific integrins to promote trophectoderm cell migration and attachment to uterine LE that may be critical to conceptus elongation and implantation. Recently (2014), Aplin and co-workers [67] employed three in vitro models of early implantation with Ishikawa cells to demonstrate that OPN potentially interacts with the αvβ3 integrin receptor during implantation in humans.
Key events during the peri-implantation period of pigs and sheep
Communication and reciprocal responses between the conceptus and uterus are essential for conceptus survival during the peri-implantation period of pregnancy. These interactions also lay the critical physiological and anatomical groundwork for subsequent development of the functional uterine LE, GE, stroma and placentae required to maintain growth and development of the conceptus throughout pregnancy. In a progesterone-dominated uterine environment, establishment and maintenance of pregnancy in pigs and sheep requires: (i) secretion of estrogens or interferon tau, respectively, from the conceptus to signal pregnancy recognition [68][69][70][71]; (ii) secretions from uterine LE and GE, i.e., histotroph, to support attachment, development and growth of the conceptus [72][73][74]; and (iii) cellular remodeling at the uterine LE-conceptus trophectoderm interface to allow for attachment during implantation [8,75,76]. These events are orchestrated through endocrine, paracrine, autocrine and juxtacrine communication between the conceptus and uterus, and the complexity of these events likely underlies the high rates of conceptus mortality during the peri-implantation period of pregnancy [77,78].
Implantation and placentation are critical events in pregnancy. Implantation failure during the first three weeks of pregnancy is a major cause of infertility in all mammals [77][78][79][80]. The process of implantation is highly synchronized, requiring reciprocal secretory and physical interactions between a developmentally competent conceptus and the uterine endometrium during a restricted period of the uterine cycle termed the "window of receptivity". These initial interactions between the apical surfaces of uterine LE and conceptus trophectoderm begin with sequential phases, i.e., non-adhesive (pre-contact), apposition, and adhesion, and conclude with formation of a placenta that supports fetal-placental development throughout pregnancy [81][82][83]. Conceptus attachment first requires loss of anti-adhesive molecules in the glycocalyx of uterine LE, comprised largely of mucins that sterically inhibit attachment [52,84,85]. This results in "unmasking" of molecules, including selectins and galectins, which contribute to initial attachment of conceptus trophectoderm to uterine LE [86][87][88]. These low-affinity contacts are then replaced by a repertoire of adhesive interactions between integrins and maternal ECM, which appear to be the dominant contributors to stable adhesion at implantation [1,8,52,89-91]. OPN is expressed abundantly within the conceptus-maternal environment in numerous species, including pigs and sheep [17,29,57,59,62,92,93].
Osteopontin is structurally and functionally suited to support implantation of pig and sheep conceptuses
Depending on cell context and species, OPN expression can be regulated by multiple hormones and cytokines, including the sex steroids progesterone and estrogen [28,51,56-58,94-98]. OPN mediates multiple cellular processes, such as cell-mediated immune responses, inflammation, angiogenesis, cell survival, and tumor metastasis, primarily through integrin signaling [3,5,17,99,100]. Integrins are transmembrane glycoprotein receptors composed of non-covalently bound α and β subunits that participate in cell-cell and cell-ECM adhesion, cause cytoskeletal reorganization to stabilize adhesion, and transduce signals through numerous signaling intermediates [7,8]. OPN has an expansive integrin receptor repertoire that includes RGD-mediated binding to αvβ3 [101,102], αvβ1 [103], αvβ5 [103], and α8β1 [104], as well as alternative binding sequence-mediated interactions with α4β1 [105] and α9β1 [106]. OPN binding to these various receptors results in diverse effects including: (1) leukocyte, smooth muscle cell and endothelial cell chemotaxis; (2) endothelial and epithelial cell survival; and (3) fibroblast, macrophage and tumor cell migration [64,66,103,104,107]. Clearly, the ability to bind multiple integrin receptors to produce different cellular outcomes greatly increases OPN's potential role(s) during conceptus development and implantation. Importantly, OPN contains a serine protease cleavage site that, when cleaved, generates bioactive OPN fragments [23,108], and two glutamine residues that support multimerization of the protein [22]. It is notable that OPN is flexible in solution, allowing simultaneous binding to more than one integrin receptor [16,109]. Further, OPN can exist in a polymerized form cross-linked by transglutaminase, and homotypic OPN bonds have high tensile strength, suggesting that self-assembly is involved in cell-cell and cell-matrix interactions [22]. These multimeric complexes may present multiple RGD sequences for simultaneous binding to integrins on multiple surfaces [22,110]. Therefore, OPN has the potential to bind multiple proteins and to participate in the assembly of multi-protein complexes that bridge and form the interface between conceptus and uterus during implantation.
OPN expression, regulation and function in the uterus and placenta of gilts
A hallmark of pregnancy in pigs is the protracted peri-implantation period when conceptuses are free within the uterine lumen and elongate from spherical blastocysts to conceptuses with a filamentous morphology (reviewed in [111]). Pig embryos move from the oviduct into the uterus about 60 to 72 h after the onset of estrus, reach the blastocyst stage by Day 5, then shed the zona pellucida and expand to 2-6 mm in diameter by Day 10. At this stage, development of pig embryos diverges from that of rodents or primates. Within a few hours, the presumptive placental membranes (trophectoderm and extra-embryonic endoderm) elongate at a rate of 30-45 mm/h from a 10 mm blastocyst to a 150-200 mm long filamentous form, after which further elongation occurs until conceptuses are 800-1,000 mm in length by Day 16 of pregnancy [111]. During this period of rapid elongation, porcine conceptuses secrete estrogens beginning on Days 11 and 12 to signal initiation of pregnancy to the uterus, and by Day 13 begin an extended period of incremental attachment to the uterine LE [17,69]. The attached trophectoderm/chorion-endometrial epithelial bilayer develops microscopic folds beginning about Day 35 of gestation, and these folds increase the surface area of contact between maternal and fetal capillaries to maximize maternal-to-fetal exchange of nutrients and gases [112].
In pigs, OPN is an excellent candidate for influencing this complex environment of pregnancy because the OPN gene is located on chromosome 8 under a quantitative trait loci (QTL) peak for prenatal survival and litter size [113]. The temporal and spatial expression of OPN in the porcine uterus and placenta is complex, with independent and overlapping expression by multiple cell types. Between Days 5 and 9 of the estrous cycle and pregnancy, OPN transcripts are detectable in a small percentage of cells in the sub-epithelial stratum compactum of the endometrial stroma [59]. The morphology and distribution of OPN mRNA- and protein-positive cells in the stratum compactum of the stroma on Day 9 of the estrous cycle and pregnancy suggest that these are immune cells. Certainly, Eta-1/OPN is an established component of the immune system that is secreted by activated T lymphocytes [15]. It is reasonable to speculate that, because insemination in pigs is intrauterine, OPN-expressing immune cells may protect against pathogens introduced during mating. A similar pattern of distribution of OPN-producing cells is also evident in the allantois of the placenta beginning between Days 20 and 25 of pregnancy, and the number of these cells increases as gestation progresses [58]. The identity of these cells remains to be determined.
OPN expression in uterine LE increases markedly during the peri-implantation period of pigs, but is never observed in uterine LE during the estrous cycle [59]. OPN mRNA is initially induced by conceptus estrogens in discrete regions of the LE juxtaposed to the conceptus just prior to implantation on Day 13, then expands to the entire LE by Day 20 when firm adhesion of conceptus trophectoderm to uterine LE occurs [58]. However, OPN mRNA is not present in pig conceptuses [58,59]. In contrast to mRNA, OPN protein is abundant along the apical surfaces of LE and trophectoderm/chorion, but only in areas of direct contact between the uterus and conceptus [58,59]. Remarkably, OPN mRNA and protein are not present in uterine LE and chorion of areolae where the chorion does not attach to LE, but rather forms a "pocket" of columnar epithelial cells that take up and transport secretions of uterine GE into the placental vasculature by fluid phase pinocytosis [114] (Figure 1). OPN levels remain high at this interface throughout pregnancy [59], as do multiple integrin subunits that potentially form heterodimeric receptors that bind OPN [8,84,90].
All experimental and surgical procedures were in compliance with the Guide for Care and Use of Agricultural Animals in Teaching and Research and approved by the Institutional Animal Care and Use Committee of Texas A&M University.

Figure 1 (caption fragment) OPN expression at the uterine-placental interface of pigs ("...Day 35, and then in both cell types to term"). Note that OPN is not detectable in uterine LE associated with areolae, where there is no direct attachment of uterine LE to placental trophectoderm/chorion. This precise spatial distribution of OPN expression strongly suggests that it plays a role in attaching uterus to placenta during epitheliochorial placentation.
Affinity chromatography and immunoprecipitation experiments were performed to test whether the integrin subunits αv, α4, α5, β1, β3, β5, and β6, expressed by porcine trophectoderm (pTr2) and porcine uterine epithelial (pUE) cells, directly bind OPN. Detergent extracts of surface-biotinylated pTr2 and pUE cells were incubated with OPN-Sepharose, and the proteins that bound to OPN were eluted with EDTA to chelate cations and release the bound integrins. To identify these integrins, immunoprecipitation assays were performed using antibodies that successfully immunoprecipitated integrin subunits from pTr2 or pUE cell lysates. OPN directly bound the αvβ6 integrin heterodimer on pTr2 cells and αvβ3 on pUE cells [64]. OPN binding promoted dose- and integrin-dependent attachment of pTr2 and pUE cells, and stimulated haptotactic pTr2 cell migration, meaning that cells migrated directionally along a physical gradient of non-soluble OPN [64]. Further, immunofluorescence staining revealed that both OPN and the αv integrin subunit localized to the apical surfaces of cells at the interface between uterine LE and conceptus trophectoderm at Day 20 of pregnancy. The αv integrin subunit staining pattern revealed large aggregates at the junction between trophectoderm and uterine LE, suggesting the formation of OPN-induced in vivo focal adhesions at the apical surfaces of both conceptus trophectoderm and uterine LE that facilitate conceptus attachment to the uterus for implantation. The β3 subunit appeared in aggregates on the apical surface of LE cells, but not trophectoderm cells, fitting with affinity chromatography data indicating direct binding of αvβ3 on pUE cells to OPN [64]. Finally, OPN-coated microspheres revealed co-localization of the αv integrin subunit and talin to focal adhesions at the apical domain of pTr2 cells in vitro [64]. Collectively, these results support the conclusion that OPN binds integrins to stimulate integrin-mediated focal adhesion assembly, attachment, and cytoskeletal force-driven migration of pTr2 cells to promote conceptus implantation in pigs (Figure 2).
In addition to expression in LE during the peri-implantation period, total uterine OPN mRNA increases 20-fold between Days 25 and 85 of gestation due to induction of OPN expression in uterine GE [59]. The initial significant increase in GE is delayed until between Days 30 and 35, when placental growth and placentation are key events in pregnancy in pigs [5]. OPN expression in GE during the later stages of pregnancy is also observed in sheep [115], and a microarray study in rats showed that OPN expression increased 60-fold between Day 0 of the estrous cycle and Day 20 of pregnancy, likely within the decidua [116]. Indeed, OPN is expressed by uterine natural killer (uNK) cells of the mouse decidua [44,45]. Secretions of GE in livestock, and the secretions of decidua in rodents and primates, are critical to support implantation, placentation, and fetal growth and development [117,118]. OPN is also expressed in the uterine GE of Day 90 pseudopregnant pigs, suggesting that maintenance of progesterone secretion by the CL is responsible for expression of OPN in GE [58]. Progesterone also regulates OPN expression in the GE of sheep and rabbits [51,54], as well as OPN synthesis by human Ishikawa cells [56].

Figure 2 Expression, regulation and proposed function of OPN produced by the uterine LE of pregnant pigs. A) As porcine conceptuses (trophoblast) elongate, they secrete estrogens for pregnancy recognition. These estrogens also induce the synthesis and secretion of OPN (osteopontin) from the uterine LE (luminal epithelium) directly adjacent to the conceptus undergoing implantation [58]. The implantation cascade is initiated when progesterone from the CL down-regulates Muc 1 on the surface of uterine LE [84]. This exposes integrins on the LE and trophoblast surfaces [84] for interaction with OPN, and likely other ECM proteins, to mediate adhesion of trophoblast to LE for implantation [58,59,64]. B) In vitro experiments have identified the αvβ6 integrin receptor on trophoblast and the αvβ3 integrin receptor on LE as binding partners for OPN [64]. OPN may bind both receptors simultaneously, acting as a bridging ligand between them. Alternatively, OPN may serve as a bridging ligand between one of these receptors and an as yet unidentified integrin receptor expressed on the opposing tissue.
However, the involvement of progesterone in the regulation of OPN in uterine GE is complex, as indicated by a recent analysis of the effect of long-term progesterone treatment on the expression of OPN in pigs in the absence of ovarian or conceptus factors. In addition to OPN, other established progesterone targets were examined: progesterone receptor (PGR), as an index of progesterone's ability to negatively regulate GE gene expression [119]; acid phosphatase 5, tartrate resistant (ACP5, commonly referred to as uteroferrin), as an index of progesterone's ability to positively regulate early pregnancy GE gene expression [120]; and fibroblast growth factor 7 (FGF7, commonly referred to as keratinocyte growth factor), as an index of progesterone's ability to positively regulate gene expression in uterine GE beyond the peri-implantation period [121]. Pigs were ovariectomized on Day 12 of the estrous cycle, when progesterone secretion from the CL is high, and treated daily with intramuscular injections of progesterone or vehicle for 28 days [122,123]. As anticipated, PGR mRNA decreased, uteroferrin mRNA increased, and FGF7 mRNA increased in the uterine GE of pigs injected with progesterone [123]. Surprisingly, long-term progesterone, in the absence of ovarian and/or conceptus factors, did not induce OPN expression in uterine GE [123]. It is currently hypothesized that the hormonal milieu necessary for the production of individual components of histotroph varies, and may require specific servomechanisms, similar to those for sheep and rabbits, which involve sequential exposure of the pregnant uterus to ovarian, conceptus, and/or uterine factors that include progesterone, estrogens and IFNs [124][125][126]. Recently, OPN expression was compared in placental and uterine tissues supplying a normally sized fetus and the smallest fetus carried by hyperprolific Large White and Meishan gilts. Not only were levels of OPN strikingly different between the two breeds of pigs, but OPN was higher in the LE and GE of uteri surrounding smaller fetuses, suggesting that OPN may be associated with placental efficiency [127].
OPN expression, regulation and function in the uterus and placenta of ewes
Similar to pigs, the conceptuses of sheep remain free-floating within the uterine lumen as they elongate from spherical blastocysts to conceptuses with a filamentous morphology (reviewed in [88]). Sheep embryos enter the uterus on Day 3, develop to spherical blastocysts and then, after hatching from the zona pellucida, transform from spherical to tubular and filamentous conceptuses between Days 12 and 15 of pregnancy, with extra-embryonic membranes extending into the contralateral uterine horn between Days 16 and 20. During this period of rapid elongation, the mononuclear trophoblast cells of ovine conceptuses secrete interferon tau between Days 10 and 21 of pregnancy, and implantation begins on Day 16 as trophectoderm attaches to the uterine LE [70,88]. The ovine placenta eventually organizes into discrete regions called placentomes, which are composed of highly branched placental chorioallantoic villi, termed cotyledons, that grow rapidly and interdigitate with maternal aglandular endometrial crypts termed caruncles. Approximately 90% of the blood from the uterine artery flows into the placentomes for nutrient transfer from the maternal uterine circulation to the fetus and exchange of gases between these tissue compartments [128].
The temporal and spatial expression of OPN in the uteri and placentae of sheep is similar to that previously described for the pig, except that: 1) unlike in the pig, OPN is not expressed by uterine LE; 2) induction of OPN in uterine GE occurs earlier during the peri-implantation period than in the pig, and expression in the GE is regulated by progesterone; 3) OPN is a prominent component of the stratum compactum stroma; and 4) although large focal adhesions assemble during the peri-implantation period in pigs, they are not observed at the uterine-placental interface until the later stages of pregnancy in sheep.
OPN mRNA and protein are present in a small population of cells scattered within the stratum compactum stroma immediately beneath the endometrial LE during the early stages of the estrous cycle and pregnancy in sheep [18]. OPN-producing cells are also present in the allantois of the ovine placenta beginning between Days 20 and 25 of pregnancy and increase in number as gestation progresses [17]. As hypothesized for pigs, these are presumed to be immune cells because Eta-1/OPN is a prominent player in the immune system [15]. In contrast to pigs, in which the OPN-expressing endometrial cells are readily evident in the stratum compactum stroma throughout pregnancy, these cells are difficult to discern in sheep due to an increase in expression of OPN by stromal cells between Days 20 and 25 of gestation [129]. In pregnant mice and primates, OPN in decidualized stroma is considered a gene marker for decidualization [130,131]. Decidualization involves transformation of spindle-like fibroblasts into polygonal epithelial-like cells that are hypothesized to limit conceptus trophoblast invasion through the uterine wall during invasive implantation [118]. Although Mossman [132] and Kellas [133] described decidual cell characteristics in the placentomal crypts of sheep and antelope, their reports were largely ignored, and decidualization was not thought to occur in species with the central and noninvasive implantation characteristic of domestic animals. However, endometrial stromal cells do increase in size and become polyhedral in shape in pregnant ewes following conceptus attachment, and the classical decidualization markers desmin and α-smooth muscle actin are expressed in these cells, suggesting that OPN expression in this stromal compartment is part of a uterine decidualization-like response to the conceptus during ovine pregnancy [129]. In contrast, no morphological changes in uterine stroma, nor induction of OPN mRNA and protein, or desmin protein, were detected during porcine pregnancy [129]. One of the primary roles of decidua in invasively implanting species is to restrain conceptus trophoblast invasion to a circumscribed region of the endometrium. Both pigs and sheep have noninvasive implantation, but the extent of conceptus invasion into the endometrium differs between these two species. Pig conceptuses undergo true epitheliochorial placentation, in which the uterine LE remains morphologically intact throughout pregnancy and the conceptus trophectoderm simply attaches to the apical surface of the uterine LE without contacting uterine stromal cells [134]. Synepitheliochorial placentation in sheep involves extensive erosion of the LE due to formation of syncytia with binucleate cells of the trophectoderm. After Day 19 of pregnancy, conceptus tissue is apposed to, but does not penetrate, ovine uterine stroma [135]. Although speculative, differences in stromal expression of OPN between these species suggest that the extent of decidualization correlates positively with the degree of conceptus invasiveness.
In contrast to pigs, OPN is not synthesized by sheep uterine LE, but it is nonetheless a component of histotroph secreted from the endometrial GE into the uterine lumen of pregnant ewes as early as Day 13; it is not secreted by the uterine GE of cyclic ewes [18,29]. OPN mRNA is detected in some uterine glands by Day 13 and is present in all glands by Day 19 of gestation [18]. Progesterone induces expression of OPN in the endometrial GE, and induction is associated with a loss of PGR in uterine GE. Analysis of uterine flushings from pregnant ewes has identified a 45 kDa fragment of OPN with greater binding affinity for the αvβ3 integrin receptor than the native 70 kDa protein [29,51,52,108]. Comparison of the spatial distribution of OPN mRNA and protein by in situ hybridization and immunofluorescence analyses of cyclic and pregnant ovine uterine sections has provided significant insight into the physiology of uterine OPN during pregnancy. OPN mRNA increases in the endometrial GE during the peri-implantation period; however, it is not present in LE or conceptus trophectoderm [18]. In contrast, immunoreactive OPN protein is present at the apical surfaces of endometrial LE and GE, and on trophectoderm, where the integrin subunits αv, α4, α5, β1, β3, and β5 are expressed constitutively on the apical surfaces of trophectoderm and endometrial LE and could potentially assemble into several heterodimers that could serve as receptors for OPN, including αvβ3, αvβ1, αvβ5, α4β1, and α5β1 [29,52]. These results strongly suggest that OPN is a component of histotroph secreted from GE into the uterine lumen of pregnant ewes in response to progesterone, and that OPN binds integrin receptors expressed on endometrial LE and conceptus trophectoderm.
Affinity chromatography and immunoprecipitation experiments, similar to those described previously for pigs, determined whether αv, α4, α5, β1, β3, β5, and β6 integrins expressed by ovine trophectoderm cells (oTr1) directly bind OPN. Successful immunoprecipitation of labeled oTr1 integrins occurred with antibodies to αv and β3 integrin subunits, as well as an antibody to the integrin αvβ3 heterodimer. Antibody to the αv integrin subunit also precipitated a β chain, presumed to be the β3 integrin subunit, as an antibody to the β3 integrin subunit precipitated an α chain at the same relative size as the bands precipitated by an antibody to the αvβ3 heterodimer. Thus, the αvβ3 integrin on oTr1 cells binds OPN [66]. OPN binding to the αvβ3 integrin receptor induced in vitro focal adhesion assembly (see Figure 3), a prerequisite for adhesion and migration of trophectoderm, through activation of: 1) P70S6K via crosstalk between FRAP1/MTOR and MAPK pathways; 2) MTOR, PI3K, MAPK3/MAPK1 (Erk1/2) and MAPK14 (p38) signaling to stimulate trophectoderm cell migration; and 3) focal adhesion assembly and myosin II motor activity to induce migration of trophectoderm cells [66]. Collectively, results indicate that OPN binds αvβ3 integrin receptor to activate cell signaling pathways that act in concert to mediate adhesion, migration and cytoskeletal remodeling of trophectoderm cells essential for expansion and elongation of conceptuses and their attachment to uterine LE for implantation ( Figure 4).
Focal adhesions, the hallmark of activated integrins, are prominent structures of cells grown in culture; however, they are rarely observed in vivo. It is noteworthy that large aggregations of focal adhesion-associated proteins, interpreted to be three-dimensional focal adhesions, are present at the uterine-placental interface of sheep [63]. By Day 40 of pregnancy in sheep, the punctate apical surface staining of integrin receptor subunits identified in peri-implantation uterine LE and conceptus trophectoderm [52] is replaced by scattered large aggregates of αv, α4, β1, and β5 subunits in interplacentomal LE and trophectoderm/chorion cells. Integrin aggregates are observed only in the gravid uterine horns of unilaterally pregnant sheep, demonstrating a requirement for trophectoderm attachment to LE, and the aggregates increase in number and size through Day 120 of pregnancy [63]. Interestingly, no accumulation of β3 was observed, even though ITGB3 is a prominent component of the uterine-placental interface during the peri-implantation period in sheep [52]. In some regions of the interplacentomal interface, greater subunit aggregation was seen on the uterine side; in other regions it was predominant on the placental side; and in still others, both uterine and placental epithelia exhibited prominent focal adhesions. However, by Day 120 of pregnancy, extensive focal adhesions were seen along most of the uterine-placental interface [63]. The placentomes, which provide hematotrophic support to the fetus and placenta, exhibited diffuse immunoreactivity for these integrins compared with interplacentomal regions, perhaps due to extensive folding at this interface [63]. These results suggest that focal adhesion assembly at the uterine-placental interface reflects dynamic adaptation to increasing forces caused by the growing conceptus. Cooperative binding of multiple integrins to OPN deposited at the uterine-placental interface may form an adhesive mosaic that maintains a tight connection, increased tensile strength, and signaling activity between uterine and placental surfaces along regions of epitheliochorial placentation in sheep.
Steady-state levels of OPN mRNA in total ovine endometrium remain constant between Days 20 and 40, increase 40-fold between Days 40 and 100, and remain maximal thereafter [18]. The major source of this OPN is uterine GE, which undergoes hyperplasia through Day 50 followed by hypertrophy and maximal production of histotroph after Day 60 [115]. Additionally, immunofluorescence microscopy demonstrated that the secreted 45-kDa OPN cleavage fragment is exclusively, continuously, and abundantly present along the apical surface of uterine LE, on trophectoderm, and along the entire uterine-placental interface of both interplacentomal and placentomal regions through Day 120 of the 147-day ovine pregnancy [115]. These findings definitively localize OPN as a secretory product of the GE to regions of intimate contact between conceptus and uterus, where OPN may influence fetal/placental development and growth, and mediate communication between placental and uterine tissues to support pregnancy to term.

Figure 3 OPN stimulates in vitro activation of integrin receptors to form focal adhesions at the apical surface of oTr1 cells. A) Cartoon illustrating a polystyrene bead coated with recombinant rat OPN containing an intact RGD integrin binding sequence and allowed to settle onto a cultured oTr1 cell. Note the illustrated representation of aggregated integrins, indicative of focal adhesion assembly, at the interface between the surface of the bead and the apical membrane of the cell [52,64,66]. B) Immunofluorescence co-localization (left panels) to detect the aggregation of the αv integrin subunit (right panels) and talin (middle panels), an intracellular component of focal adhesions, around beads coated with recombinant rat OPN containing an intact RGD integrin binding sequence (RGD) or coated with recombinant OPN containing a mutated RAD sequence that does not bind integrins [66]. Optical slice images from the apical plasma membrane of oTr1 cells are shown. Note the apical focal adhesions, represented by immunofluorescence co-localization (yellow color) of the integrin αv subunit with talin, that result from integrin activation in response to binding of intact OPN on the surface of the bead. No apical focal adhesions were induced by beads coated with mutated OPN, as evidenced by the lack of integrin αv and talin aggregation around the bead.
Increases in OPN from GE are likely influenced by uterine exposure to progesterone, interferon tau, and placental lactogen, which constitute a "servomechanism" that activates and maintains endometrial remodeling, secretory function and uterine growth during gestation. Sequential treatment of ovariectomized ewes with progesterone, interferon tau, placental lactogen, and growth hormone results in GE development similar to that observed during normal pregnancy [126]. Administration of progesterone alone in these experiments induced expression of OPN in GE, and intrauterine infusion of interferon tau and placental lactogen into progesterone-treated ovariectomized ewes increased OPN mRNA levels above those of ewes treated with progesterone alone [126]. An attractive hypothesis for OPN expression in GE is that progesterone interacts with its receptor in GE to down-regulate the progesterone receptor. This removes a progesterone "block" to OPN synthesis, and subsequent increases in OPN expression by GE are augmented by the stimulatory effects of placental lactogen. Current studies focus on defining the role(s) of OPN secreted from the uterine GE during the later stages of pregnancy.
Conclusions
Research conducted with pigs and sheep has significantly advanced understanding of the role(s) of OPN during implantation through exploitation of 1) the prolonged peri-implantation period of pregnancy, when elongating conceptuses are free within the uterine lumen and require extensive paracrine signaling between conceptus and endometrium, and 2) the protracted and incremental attachment cascade of trophectoderm to uterine LE during implantation. Although OPN is synthesized in different cell types (LE in pigs, GE in sheep) and is regulated by different hormones (conceptus estrogens in pigs, progesterone in sheep), OPN protein nonetheless localizes to the interface between the uterus and trophectoderm, where it is well placed to serve as a bifunctional bridging ligand between integrins expressed by uterine LE and conceptus trophectoderm to mediate attachment for implantation. It is noteworthy that OPN has been reported to be a prominent component of the uterine-placental environment of other species, including primates and rodents; therefore, knowledge gained about the physiology of OPN in sheep and pigs may have significant relevance to human pregnancy. Our understanding of the events that underlie successful implantation and placentation across species has been, and will likely continue to be, advanced by studies of pigs and sheep as biomedical research models.

Figure 4 (caption fragment) [51]: The implantation cascade is initiated with down-regulation of Muc 1 (the regulatory mechanism remains to be identified) on the LE surface to expose integrins on the LE and trophoblast surfaces for interaction with OPN to mediate adhesion of trophoblast to LE for implantation [29,51,52,66]. B) In vitro experiments have identified the αvβ3 integrin receptor on trophoblast as a binding partner for OPN [66]. OPN then likely acts as a bridging ligand between αvβ3 on trophoblast and as yet unidentified integrin receptor(s) expressed on the opposing uterine LE. Note that the α5 integrin subunit was immunoprecipitated from membrane extracts of biotinylated oTr1 cells eluted from an OPN-Sepharose column, but the β1 integrin subunit, the only known binding partner for α5, could not be immunoprecipitated. Therefore, while we cannot definitively state that OPN binds the α5β1 integrin on oTr1 cells, we are reluctant to exclude this possibility.
"Biology",
"Medicine"
] |
Ergonomic Mechanical Design and Assessment of a Waist Assist Exoskeleton for Reducing Lumbar Loads During Lifting Task
The purpose of this study was to develop a wearable waist exoskeleton to provide back support for industrial workers during repetitive lifting tasks and to assess the resulting reductions in back muscle activity. The ergonomic mechanical structure is convenient to employ in different applications. The exoskeleton attaches to the wearer's body with 4 straps, takes only 30 s to put on without additional help, weighs just 5 kg and is easy to carry. The mechanical clutch allows the exoskeleton to assist the wearer as needed, and an inertial measurement unit (IMU) was used to detect the wearer's motion intention. Ten subjects participated in the trial. Lower back muscle integrated electromyography (IEMG) of the left and right lumbar erector spinae (LES), thoracic erector spinae (TES) and latissimus dorsi (LD) was compared during symmetrical lifting of six different loads (0, 5, 10, 15, 20, 25 kg) under two conditions: with and without the exoskeleton. The exoskeleton significantly reduced back muscle activity during the repetitive lifting tasks; the average IEMG reductions were 34.0%, 33.9% and 24.1% for the LES, TES and LD, respectively. The exoskeleton can reduce the burden on the lumbar muscles and the incidence of lumbar muscle strain during long-term lifting work.
Introduction
Despite the widespread use of robots and work-related automation equipment in modern factories to lift loads that are heavy and beyond human ability, many tasks still require manual operation, especially in the logistics, manufacturing, and medical industries. According to a National Bureau of Statistics of China survey, an enormous and growing number of workers perform manual handling every year [1]. Long-term heavy lifting can significantly increase the risk of lumbar spine and waist injuries. According to the survey, low back pain is the main problem suffered by heavy-lifting workers: 84% of workers in logistics suffer from back pain, 75% of construction workers suffer from lumbar spine compression, and the proportion in the medical care industry is 67%. In addition, about 31% to 45% of workers doing heavy physical labor retire early due to muscle and tendon injuries [2]. The resulting economic compensation for work-related diseases and individual lumbar spine injuries not only affects workers' quality of life and increases the economic costs of enterprises, but is also a major societal problem [3]. With the increasing number of patients with lumbar muscle strain, there is an urgent need to develop a portable and lightweight lumbar exoskeleton that provides assistive torque to users who perform heavy or repetitive lifting tasks.
Nowadays, many universities and research institutes are developing powered assist devices to protect the lower back of workers who perform frequent manual handling and lifting tasks, whether in manufacturing, construction, meatpacking or nursing care. A variety of lower back assist exoskeletons have been designed to provide assistance between the torso and thighs to reduce lumbar spinal loading. These exoskeletons can be divided into active and passive types according to their actuation.
Various passive exoskeletons have been proposed, such as the Personal Lifting Assistive Device (PLAD) [4,5], the 'Wearable moment restoring device' [6], the Bending non-demand return (BNDR) device [7], the Happyback [8] and the Bendezy [8]. The PLAD uses elastic straps to provide mechanical assistance for lifting and lowering objects. The Wearable moment restoring device uses a spring to store energy when wearers bend forward; the stored energy then helps wearers hold that posture or erect the trunk. The Happyback is also a passive back support device, which uses bungee cords to provide assistance when wearers do stooped work. Erector spinae activity was reduced by 23% during static holding of 0, 4 and 9 kg loads with the Happyback [8]. The BNDR likewise uses springs to provide mechanical assistance during stooped work. For the BNDR, a reduction in back muscle force was also found in lifting tasks, but the reductions in waist muscle activity were due to the exoskeleton's ability to limit torso flexion [8]. A recent study [9] proposed a powerless waist assist device using a torsion spring. It stores gravitational potential energy in the mechanical spring when wearers bend forward, and the stored energy is released when carrying or lifting an object. The significant advantage of this device is that it is lightweight and portable, but the force it provides is inadequate.
Various commercial active exoskeletons have also been designed, such as the ATOUN Model A, ATOUN Model AS and ATOUN Model Y [10], the HAL Lumbar [11], the Muscle Suit [10] and the WSAD [12]. The ATOUN Model Y was designed to provide lower back support during lifting and holding tasks. It automatically estimates the user's movement intention and erects the user's torso while pushing on the thighs; the physical load on the lower back was reduced by 15 kg. The Muscle Suit uses an artificial muscle as the actuator to assist wearers in lifting and statically holding loads up to 35 kg [10]. Sasaki et al. [13] proposed a wearable lumbar assist device with many more degrees of freedom that can accommodate various activities of the human body; it is mainly intended to assist medical personnel. Other studies [14] have introduced a wearable stoop assist device that can decrease waist erector spinae activity during stooped work. This type of equipment is mainly controlled by two tension bands, which extend from the chest strap to corresponding pulleys fixed to the respective output shafts of a servo motor. Muramatsu et al. [15] developed an exoskeleton that can reduce lumbar load and back muscle activity; however, it must be worn over the entire body, takes a long time to put on, and is neither portable nor inexpensive. A wearable powered assist exoskeleton for lower back support has been proposed [16] that is convenient for wearers to use in different environments, such as a small room or on stairs; the total weight of the system is 6.5 kg including a battery, a control board and frames. A lower-back exoskeleton for back support has also been developed [17], which provides assistance to workers when lifting or lowering heavy objects. A series-elastic actuator was employed in the hip joint modules, and an admittance control strategy was adopted. The weight of the whole device is 11.2 kg, and the ankle and knee joints are passive; this exoskeleton is limited by the weight of the system.
To address the above issues, we present a lightweight, wearable powered lumbar exoskeleton designed to assist the human body during lifting tasks, thereby reducing lumbar spine compression and lowering the risk of Work-Related Musculoskeletal Disorders (WMSDs) of the user's back. We optimized the exoskeleton's mechanical structure to make the device convenient to use in many applications. Optimal dimensions were implemented in the mechanical design to imitate the human lumbar vertebral joints. The weight of the device needed to be light, since workers will wear the equipment for extended durations; a heavy exoskeleton becomes a burden when users have to wear it for a long time [18]. The total exoskeleton system weighs just 5 kg and is easy to carry. To provide effective assistance, we used an actuator with a continuous torque of 64 N·m on each hip joint to actively assist hip flexion/extension. The exoskeleton attaches to the wearer's body with 4 straps, and users require only 30 s to put it on without additional help. The mechanical clutch plays an important role in the drive module of the exoskeleton: with it, the exoskeleton can assist as needed, and the motors are used only when the wearer requires assistance. This keeps power consumption relatively low and dramatically extends the battery run time. We collected electromyography (EMG) signals from the lower back muscles to evaluate the effectiveness of the assistance provided by the device, and the wearer's movement intention was estimated with an inertial measurement unit (IMU). A comparison of identified industrial exoskeletons with the SIAT (Shenzhen Institutes of Advanced Technology) waist exoskeleton (the exoskeleton presented in this paper) is shown in Table 1. We describe the lumbar structure and lumbar load in Section 2. The mechanical structure design and hardware of the exoskeleton are presented in Section 3. The experimental procedure is described in Section 4, the experimental results and analysis are presented in Section 5, and conclusions and future work are discussed in Section 6.
Lumbar Structure
The spine is located in the middle of the posterior wall of the torso and consists of 33 vertebrae [19]. The spine functions to support weight, provide motion and protect internal organs. Vertebrae are classified as cervical, thoracic, lumbar, sacral, and coccygeal. The cervical spine consists of seven vertebrae, C1-C7. The thoracic spine has twelve vertebrae, T1-T12, and the lumbar spine is made up of five vertebrae, L1-L5, as shown in Figure 1. The sacrum is composed of five vertebrae, S1-S5. Viewed from the front, the vertebral bodies gradually widen from top to bottom, and the second sacral vertebra is the widest. Viewed from the side, the spine has an S shape, comprising cervical lordosis, thoracic kyphosis, lumbar lordosis and sacral kyphosis [19].
The upper limbs are connected to the spine by means of the humerus, clavicle and sternum, and the lower limbs are connected to the spine by the pelvis. The various activities of the upper and lower limbs are coordinated through the spine to maintain body balance [19]. In addition to support and protection, the spine also allows flexible movement. Although the range of motion between two adjacent vertebrae is small, the accumulated motion of multiple vertebrae permits a wide range of movement, such as flexion and extension, lateral flexion and rotation. However, lumbar flexion and extension are often associated with disc pressure, which can lead to varying degrees of low back pain [19].
Lumbar Load
A simplified model of the human upper body lifting an object is used to analyze the force exerted by the waist muscles, as shown in Figure 2c. $F_1$ is the tensile force of the waist muscles, and $F_2$ is the support force of the abdomen and hip joints; both follow from the static equilibrium conditions. Assuming a body mass $m$, prior research [10] gives $m_3 = 0.044m$, $m_2 = 0.386m$, and an abdominal and hip support force $F_2 = 0.12mg$. The distance from the center of the skull to the center of the waist is denoted $L$; then $L_4 = L\cos\alpha$, where $\alpha$ is the bending angle, and substituting into the moment balance (Equation (3)) gives the waist muscle force. The value of the erector moment arm $L_6$ is 3-5 cm [20]. When the weight of the object is 20 kg, the mass of the human body is 60 kg, the bending angle is 25° and $L$ is about 0.6 m, the tensile force of the waist muscles is 2839-4732 N. As the mass of the lifted object increases, these two forces continue to increase, and the back muscles withstand a large pulling force, which can damage them and readily cause back pain in workers over the long term. Applying an external assistive force can effectively alleviate the tensile force on the back muscles.
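To make the scale of these forces concrete, the short Python sketch below evaluates a simplified moment balance about the lumbosacral joint using the body-segment coefficients quoted above. The lever arms assumed here (trunk weight acting at L/2; head and load acting at L) are illustrative assumptions rather than the paper's exact Equation (3), so the printed forces bracket, rather than reproduce, the 2839-4732 N range reported above.

```python
import math

# Illustrative moment balance about the lumbosacral joint.
# The lever arms (trunk at L/2; head and load at L) are assumptions,
# not the paper's exact Equation (3).
g = 9.8                     # gravitational acceleration, m/s^2
m = 60.0                    # body mass, kg
m2 = 0.386 * m              # trunk mass (coefficient from the paper)
m3 = 0.044 * m              # head mass (coefficient from the paper)
m_load = 20.0               # mass of the lifted object, kg
L = 0.6                     # skull-to-waist distance, m
alpha = math.radians(25.0)  # bending angle

# Gravitational moment of trunk, head and load about the joint.
moment = (m2 * g * L / 2.0 + (m3 + m_load) * g * L) * math.cos(alpha)

# Waist muscle tensile force F1 for the 3-5 cm erector moment arm L6.
for L6 in (0.03, 0.05):
    print(f"L6 = {L6 * 100:.0f} cm -> F1 = {moment / L6:.0f} N")
```

Even under these rough assumptions, the small erector moment arm forces F1 into the thousands of newtons, which is the point the analysis makes: an external assistive torque directly offloads this force.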
Design Theory Analysis
The exoskeleton provides assistance in the sagittal plane, so we evaluated only the lumbar vertebral joint changes in the sagittal plane. The lumbar vertebrae comprise five vertebrae, L1 to L5. There is a certain degree of deviation during bending and rising due to the different sizes of the exoskeleton and lumbar joints, which affects the comfort of the waist assist exoskeleton [21]. Therefore, it is necessary to optimize the model to obtain the appropriate size of the lumbar assist device. When adults stand upright, the bending radius of the lumbar joint curve is approximately 19 to 24 cm [9]. The sagittal-plane geometry of the lumbar joint from L1 to L5 is shown in Figure 3. The intersection point of lumbar vertebra L5 and the sacrum is selected as the origin of a rectangular coordinate system. The rotation angles of lumbar vertebrae L1 to L5 served as design parameters [5], from which the lumbar joint motion curve in extreme flexion and complete extension can be determined. In this coordinate system, the coordinates of lumbar vertebra L1 are (81.75 mm, 154.6 mm) in extreme flexion, (0 mm, 172.4 mm) in the upright position, and (-121.14 mm, 80.11 mm) in complete extension. The effective radius of rotation of the overall lumbar joint is calculated by the least-squares method: writing the fitted circle as $x^2 + y^2 + Ax + By + C = 0$, the fitting error over the sampled vertebral positions $(x_i, y_i)$ is $E(A, B, C) = \sum_i (x_i^2 + y_i^2 + Ax_i + By_i + C)^2$. Setting the partial derivatives $\partial E/\partial A$, $\partial E/\partial B$ and $\partial E/\partial C$ to zero yields a linear system in $A$, $B$ and $C$; the circle center is then $(-A/2, -B/2)$ and the radius is $\sqrt{A^2 + B^2 - 4C}/2$. From these equations, the coordinates of the circle center are (16.7 mm, 17.65 mm) and the circle radius is 144.6 mm. The approximate length $r$ from the center of rotation of the hip joint to the end of lumbar vertebra L1, obtained from the projection of the center on the Y-axis and the length to L5, is about 254 mm.
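As an illustration of this fitting procedure, the sketch below implements the algebraic least-squares circle fit (the Kåsa method, one standard way to solve the stated minimization) and applies it to the three L1 coordinates listed above. The paper evidently fit additional vertebral landmarks that are not listed here, so the fitted center and radius will not exactly reproduce the reported (16.7 mm, 17.65 mm) and 144.6 mm.

```python
import numpy as np

def kasa_circle_fit(points):
    """Least-squares fit of the circle x^2 + y^2 + A*x + B*y + C = 0.

    Setting dE/dA = dE/dB = dE/dC = 0 for the squared algebraic error
    reduces to the linear least-squares problem solved below.
    """
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    M = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x**2 + y**2)
    (A, B, C), *_ = np.linalg.lstsq(M, rhs, rcond=None)
    center = (-A / 2.0, -B / 2.0)
    radius = np.sqrt(A**2 + B**2 - 4.0 * C) / 2.0
    return center, radius

# L1 positions (mm) in extreme flexion, upright posture and full extension.
l1_positions = [(81.75, 154.6), (0.0, 172.4), (-121.14, 80.11)]
(cx, cy), r = kasa_circle_fit(l1_positions)
print(f"center = ({cx:.1f}, {cy:.1f}) mm, radius = {r:.1f} mm")
```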
Overall Mechanical Structure
The overall mechanical structure of the exoskeleton is shown in Figure 4; it includes a waist support module, hip joint modules and thigh connecting rods. The waist support structure has a concave shape, so its curved surface fits the wearer's lower back better. To optimize the mechanical layout of the device, we placed the battery system, the electrical hardware system and the motor driver in the waist support module. The weight of the device should be light because workers will wear the equipment for extended periods; a heavy exoskeleton is a burden when worn for a long time. The wearable waist assist exoskeleton weighs 5 kg and is comfortable for users over a wide range of applications. To provide effective assist torque, we use an actuator module with a continuous torque of 64 N·m on each hip joint to assist hip flexion/extension. The thigh link can rotate relative to the hip joint module, backward by 25° and forward by 130°, which meets the typical walking requirements of the human body. The hip joint can also accommodate abduction/adduction motion. An adjustable mechanism is designed for altering the waistline size to fit different wearers. A leg bandage and a thigh baffle on the thigh attach the exoskeleton to the human leg; the thigh baffle increases the force-bearing area to reduce injury to the thigh when the wearer bends. The exoskeleton involves only 4 straps, and users require 30 s to put it on without assistance.
Waist Support Module

The thigh link and the waist support board are directly connected to the motors. The moment arm at the end of the exoskeleton is long when the motors are working, so bending deformation could occur in these areas. Finite Element Analysis (FEA) was applied to these parts to ensure safety: a load of 64 N·m was applied to one end of the thigh link while the other end was fixed, and the results are shown in Figure 5a,b. The maximum stress of the thigh link is 126.53 MPa, and the maximum stress of the waist support board is 143.47 MPa.
Hip Joint Module
The wearable waist assist exoskeleton designed in this paper provides active flexion/extension at both hip joints, as shown in Figure 6, driven by a flat brushless motor (EC90, Maxon Motor) and a harmonic drive gear (Harmonic Drive, Japan), giving each hip module a maximum torque of 64 N·m. The mechanical clutches are not powered. The clutches lock the joints when the wearer rises to upright from a stooped posture while the motors are working, and unlock the joints when assistance is not required, freeing the wearer's movement. The motors remain dormant until the next operation begins. With this approach, the power consumption is low, and the clutch greatly increases the battery-powered run time. FEA was used to optimize the structure and ensure that the clutch remains in its elastic zone. By analyzing Formula (4), we can see that the force applied to the human body is greatest when the user initiates a lift. When the weight of the object is 30 kg, the mass of the human body is 60 kg, the bending angle is 0°, L is about 0.6 m and the assist ratio is 0.5, the force provided by the exoskeleton is approximately 4014 N. The mechanical clutch is made of aluminum 6061, and the result of the FEA is shown in Figure 7: the maximum deformation of the mechanical clutch is 3.77 × 10⁻⁵ mm and the maximum stress is 0.229987 MPa, both below the allowable values for aluminum. Figure 8 shows a wearer wearing the waist assist exoskeleton in different postures of normal daily activities: a front view in Figure 8a, a side view in Figure 8b, hip abduction in Figure 8c, and a back view while ascending stairs in Figure 8d. The SIAT waist exoskeleton can reduce the strain on the erector spinae and the latissimus dorsi when lifting objects, which benefits workers who lift heavy loads.
Embedded Electronics
The electrical hardware system of the device is shown in Figure 9; it consists of the motor units, a motor control board, a system control board and an extended device module. Each joint has one motor with a voltage of 24 V and power of 90 W, and an incremental encoder (6400 pulses per revolution, Maxon Motor). The motor control board carries a motor driver (Accelnet APM-090-30) and three sensors, which record the motor angle, angular velocity and current in real time. An OLED interactive module is used for the power management module and is integrated into the system control board of the exoskeleton. The battery management module is the fundamental unit of the operating system. To satisfy the requirements of the usage scenarios and working hours, the battery pack consists of 18650 lithium cells placed in the waist support structure [17].
Dynamical Model Analysis
To obtain the dynamic characteristics of the exoskeleton system and thereby accurately control its motion, a human-exoskeleton coupled dynamics model is established in this paper using the Euler-Lagrange method [22]. As shown in Figure 10, $m_1$ is the mass of the waist module of the exoskeleton and $l_1$ is the length of the exoskeleton; $m_2$ is the mass of the trunk below the neck together with the neck and head, and $l_2$ is the length of the trunk. The human upper limbs, arms and hands are simplified: $m_3$ is the mass of the arm and hand and $l_3$ is the length of the upper limb; $m_4$ is the mass of the lifted object; $m_5$ and $l_5$ are the mass and length of the thigh; $m_6$ and $l_6$ are the mass and length of the shank; and $m_7$ and $l_7$ are the mass and length of the foot. Point O is selected as the origin of an x-y Cartesian coordinate system. A homogeneous transformation matrix relates coordinate system $i$ to coordinate system $i-1$, and chaining these matrices gives the transformation from coordinate system $N$ to coordinate system 0 [17]; from these transformations, the centroid position, centroid velocity and centroid angular velocity vectors of the exoskeleton, the human torso, the human upper limb and the lifted object are obtained. Denoting the centroid velocity of member $i$ ($i = 1, 2, 3, 4$) by $v_i$ and its moment of inertia about the centroid by $J_i$, and taking the plane through point O as the zero surface for gravitational potential energy, the total kinetic energy $E$ and potential energy $V$ of the entire mechanism follow, with the elasticity of the components and motion pairs neglected (Equation (8)) [20]. Evaluating Equation (8) for $\theta_1$, $\theta_2$ and their time derivatives [20], substituting Formula (10) into the Lagrange equation, and writing the two resulting formulas in matrix form yields the joint torque equation (Formula (12)). During the lifting of heavy objects, the object always hangs vertically downward, so the geometry gives $\theta_2 = \frac{\pi}{2} + \theta_1$. Substituting this into Formula (12):

$T = \left[ m_1 r_1^2 + I_1 + m_2 r_2^2 + I_2 + m_3 l_2^2 + m_4 l_2^2 + 4(I_3 + I_4) \right] \ddot{\theta}_1 + \left( m_1 g r_1 + m_2 g r_2 + m_3 g l_2 + m_4 g l_2 \right) \cos\theta_1$
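The final expression can be evaluated directly, as in the sketch below. All mass, centroid-distance and inertia values are placeholder assumptions chosen only to show the structure of the computation; none of them are parameters reported by the paper.

```python
import math

def hip_torque(theta1, theta1_ddot, p, g=9.8):
    """Hip drive torque from the reconstructed sagittal-plane equation:
    T = [m1*r1^2 + I1 + m2*r2^2 + I2 + (m3 + m4)*l2^2 + 4*(I3 + I4)] * theta1_ddot
        + (m1*r1 + m2*r2 + (m3 + m4)*l2) * g * cos(theta1)
    """
    inertia = (p["m1"] * p["r1"]**2 + p["I1"]
               + p["m2"] * p["r2"]**2 + p["I2"]
               + (p["m3"] + p["m4"]) * p["l2"]**2
               + 4.0 * (p["I3"] + p["I4"]))
    gravity = (p["m1"] * p["r1"] + p["m2"] * p["r2"]
               + (p["m3"] + p["m4"]) * p["l2"]) * g
    return inertia * theta1_ddot + gravity * math.cos(theta1)

# Placeholder parameters (kg, m, kg*m^2) -- illustrative only.
params = dict(m1=5.0, r1=0.15, I1=0.05, m2=35.0, r2=0.25, I2=1.2,
              m3=7.0, m4=20.0, l2=0.55, I3=0.10, I4=0.05)
print(f"T = {hip_torque(math.radians(30.0), 0.5, params):.1f} N*m")
```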
Control Strategy
The handling operation means that the user lifts a heavy object from the stooped position to the upright position with the assistance of the exoskeleton [23]. Because the sizes of lifted objects and the heights of users differ, each person has a different starting posture when lifting a heavy object; the initial angle is obtained from the angle sensor, and the end position is the upright state. The motion trajectory of the exoskeleton from the initial pose to the final pose must be smooth, and to avoid impact, the velocity and acceleration of the exoskeleton system at the start and end times should both be zero. The motion trajectory used in this paper is a 5th-degree polynomial, $\theta(t) = a_0 + a_1 t + a_2 t^2 + a_3 t^3 + a_4 t^4 + a_5 t^5$, with boundary conditions $\theta(t_0) = \theta_0$, $\theta(t_f) = 90°$, $\dot{\theta}(t_0) = \dot{\theta}(t_f) = 0$ and $\ddot{\theta}(t_0) = \ddot{\theta}(t_f) = 0$, where $\theta_0$ is acquired from the angle sensor in real time. The planned trajectory is shown in Figure 11: Figure 11a is the angle curve, with the initial position set to 30°, and Figure 11b,c are the angular velocity and angular acceleration curves, respectively. The velocity and acceleration at the initial and final times are both zero, which ensures a smooth transition at the start and end of the lifting task, avoiding rigid impact and reducing damage to the motor. Without considering friction and other disturbances, the dynamic equation of the exoskeleton follows from the dynamics above as $H(q)\ddot{q} + C(q, \dot{q})\dot{q} + G(q) = T$, where $q$ is the vector of joint coordinates, $H(q)$ is the robot inertia matrix, $C(q, \dot{q})\dot{q}$ collects the centripetal and Coriolis forces, $G(q)$ is the gravity torque, and $T$ is the torque or force applied at each joint. The desired value of $q$ is denoted $q_d$, the planned trajectory of the robot.
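For the rest-to-rest boundary conditions above, the quintic has the well-known closed form θ(t) = θ0 + (θf − θ0)(10s³ − 15s⁴ + 6s⁵) with s = t/tf, which automatically zeroes velocity and acceleration at both ends. The sketch below generates such a trajectory; the 2 s duration is an assumed value, since the paper does not state tf here.

```python
import numpy as np

def quintic_trajectory(theta0, thetaf, tf, n=201):
    """Rest-to-rest quintic: zero velocity and acceleration at t=0 and t=tf."""
    t = np.linspace(0.0, tf, n)
    s = t / tf
    d = thetaf - theta0
    theta = theta0 + d * (10 * s**3 - 15 * s**4 + 6 * s**5)
    dtheta = d * (30 * s**2 - 60 * s**3 + 30 * s**4) / tf
    ddtheta = d * (60 * s - 180 * s**2 + 120 * s**3) / tf**2
    return t, theta, dtheta, ddtheta

# From a sensed 30 deg stooped posture to the 90 deg upright target
# over an assumed 2 s duration.
t, th, dth, ddth = quintic_trajectory(30.0, 90.0, 2.0)
print(th[0], th[-1])      # 30.0 90.0
print(dth[0], dth[-1])    # 0.0 0.0
print(ddth[0], ddth[-1])  # 0.0 0.0
```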
In the assist process of the waist exoskeleton, the adaptive controller provides an n × 1 vector of assist torques $T$ at the joints, determined by the model-based adaptive control algorithm of [24]. The trajectory tracking error is defined as $\tilde{q} = q - q_d$, where $q_d$ is the exoskeleton's planned trajectory, and $K_D$ and $K_P$ are time-varying positive definite matrices. If the motion is confined to the sliding surface $\dot{\tilde{q}} + \Lambda\tilde{q} = 0$, an unnecessary steady-state position error component is eliminated; $\Lambda$ is a constant matrix whose eigenvalues lie in the right half of the complex plane, with $\Lambda = K_D \cdot K_P^{-1}$. A new quantity $q_r$ is introduced as a virtual reference trajectory, and the control strategy can then be rewritten in terms of the regression matrix $Y$ and the parameter estimate $\hat{a}$ as a feed-forward model term $Y\hat{a}$ plus PD feedback on the tracking error.

The system estimation error vector is defined as $\tilde{a} = \hat{a} - a$, and $\Gamma$ is the n × n positive definite gain matrix that determines the controller's adaptation rate based on the overall error. In the literature [25], the regression matrix $Y$ is constructed using Gaussian radial basis functions (RBF) so that the controller adapts to different people. With the parameter vector chosen in this way, $Y\hat{a}$ becomes a feed-forward term that determines the robot drive torque $T$. A quantity $-\frac{1}{\tau} Y^T (YY^T)^{-1} Y\hat{a}$, implementing "assist-as-needed", is introduced into the standard adaptive strategy [25] to limit the rate of change of the parameter estimate and prevent $\hat{a}$ from having a large influence on the robot output torque within the time constant $\tau$. The adaptive update law can therefore be written as

$\dot{\hat{a}} = -\Gamma Y^T s - \frac{1}{\tau} Y^T (YY^T)^{-1} Y\hat{a}$

where $s = \dot{\tilde{q}} + \Lambda\tilde{q}$ is the sliding variable. The global stability proof of the algorithm is derived in detail in [25]. The block diagram of the controller is shown in Figure 12.
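The sketch below strings the reconstructed pieces together for a single joint: sliding variable, feed-forward plus PD control torque, and the assist-as-needed parameter update. The RBF centers, gains and time constant are illustrative assumptions, so this shows only the structure of the loop, not the authors' implementation.

```python
import numpy as np

def rbf_row(q, centers, width=0.2):
    """Gaussian RBF regressor row Y(q), one feature per center (assumed form)."""
    return np.exp(-((q - centers) ** 2) / (2.0 * width**2)).reshape(1, -1)

def adaptive_step(q, dq, qd, dqd, a_hat, centers, dt,
                  Lam=5.0, Kp=80.0, Kd=16.0, Gam=2.0, tau=1.0):
    """One control and adaptation step for a single joint (n = 1)."""
    e, de = q - qd, dq - dqd                 # tracking error and its rate
    s = de + Lam * e                         # sliding variable
    Y = rbf_row(q, centers)
    Ya = Y @ a_hat.reshape(-1, 1)            # feed-forward model term, (1,1)
    T = Ya.item() - Kp * e - Kd * de         # feed-forward + PD feedback
    # Assist-as-needed update with the forgetting term (reconstructed form):
    # a_hat_dot = -Gam * Y^T s - (1/tau) * Y^T (Y Y^T)^-1 Y a_hat
    decay = (Y.T @ np.linalg.inv(Y @ Y.T) @ Ya) / tau
    a_hat = a_hat + dt * (-Gam * Y.flatten() * s - decay.flatten())
    return T, a_hat

centers = np.linspace(0.0, np.pi / 2, 8)  # assumed RBF centers over hip range
a_hat = np.zeros(8)
T, a_hat = adaptive_step(q=0.50, dq=0.0, qd=0.60, dqd=0.10,
                         a_hat=a_hat, centers=centers, dt=0.001)
print(f"assist torque = {T:.2f} N*m")
```

The forgetting term bleeds the parameter estimate back toward zero whenever the wearer tracks the trajectory without help, which is what makes the assistance "as needed".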
Experiment
More than 29 muscles of the human back are involved in lifting [26]. The erector spinae is one of the most important muscle groups keeping the spine erect; it lies on both sides of the spine, connects the head to the tailbone, and consists mainly of the spinalis and the longissimus. Clinical studies have found that most low back pain is caused by vertebral muscle strain [27]. The erector spinae plays a major role in bending and erecting, and its activity is reflected in the EMG amplitude; it was therefore selected as the assessment target. The recorded muscles comprise the lumbar erector spinae (LES), the thoracic erector spinae (TES), and the latissimus dorsi (LD), as shown in Figure 13a.
Participants
Ten subjects with no history of lumbar or spinal disease volunteered to participate in the study (average age 26 years, weight 70 kg, and height 174 cm). Subjects were asked to read the experimental protocol and sign a consent form prior to the trial. The test was performed according to the Research Ethics Procedures of the Shenzhen Institutes of Advanced Technology.
Testing Equipment and Surface Electromyography
Electromyographic signals were recorded with a Biometrics Ltd. system (sampling rate: 1000 Hz) with 8 channels and one reference electrode. A small wireless adapter connects to the USB port of a host PC, and data are transmitted over a Wi-Fi link. A standard storage box (46 mm × 56 mm × 30 mm) with hand-holes was placed in front of the subject. Six loads (0, 5, 10, 15, 20, and 25 kg) were used in this trial, reflecting the low-to-high range of loads in industrial tasks [17].
EMG signals of three muscles on the left and right sides of the back were recorded. The recording sites were the lumbar erector spinae (LES) at the level of the L3 vertebra with a 4 cm inter-electrode spacing, the thoracic erector spinae (TES) at the level of the T9 vertebra with a 4 cm spacing, and the latissimus dorsi (LD) [28]. The reference electrode was placed on the elbow joint. Before the EMG electrodes were fixed at these positions, the skin was cleaned with an alcohol-soaked cotton ball. Figure 13c shows the electrode positions on a subject's back.
Testing Procedures
Before each test, subjects were informed of the detailed testing procedure and the setup of the exoskeleton; we then adjusted the leg length of the exoskeleton to the subject's body size and verified the connections of the data acquisition system. Subjects practiced the testing procedure first, and testing began once they were proficient and comfortable with the lifting tasks. Each subject started in a standing posture, slowly bent forward, and lifted the box from a stooped posture. After lifting the box, the subject stood upright with straight legs and held the weight for 8-10 s, then slowly lowered the box to the ground and resumed the upright posture. The lifting process is shown in Figure 14. Participants performed five cycles for each load, with at least a 1 min rest period between cycles.
Results
Ten subjects participated in the trial. Lower-back EMG signals of the left and right LES, TES, and LD were recorded during symmetrical lifting of six loads (0, 5, 10, 15, 20, and 25 kg) under two experimental conditions, with and without the waist assist exoskeleton. After band-pass filtering (10-500 Hz) [29] and further processing of the EMG signals [30], their integrated values (IEMG) over a lifting cycle were calculated, and the assistance efficiency of the exoskeleton was defined as η = (I − I_E)/I × 100%, where I_E is the IEMG when the subjects lift a load with the exoskeleton and I is the IEMG when they lift the same load without it. Initial statistical analyses showed no significant difference between the EMG signals of the left and right muscles, so the left and right data were pooled and averaged across the ten participants. Figure 15 shows the average IEMG of the LES, TES, and LD for the 0 kg to 25 kg loads under the two conditions, and Table 2 displays the statistical test results. The waist assist exoskeleton significantly reduced the activity of the LES, TES, and LD. When wearing the exoskeleton, the IEMG reduction ranged from 23.5 ± 5.5% to 44.5 ± 6.5% for the LES and from 20.5 ± 4.5% to 42.8 ± 6.6% for the TES across the 0 kg to 25 kg lifts; for the LD, the reduction ranged from 13.5 ± 2.8% to 32.8 ± 5.8%. The average IEMG reductions were 34.0%, 33.9%, and 24.1% for the TES, LES, and LD, respectively. As Table 2 shows, the LES and TES decreased at larger rates than the LD when lifting the same objects with the SIAT waist exoskeleton. As the weight of the lifted object increased, the muscle activity of the LES, TES, and LD also gradually increased; at 25 kg, the reduction rates of the LES, TES, and LD reached 44.5 ± 6.5%, 42.8 ± 6.6%, and 32.8 ± 5.8%, respectively. These results demonstrate that the SIAT waist exoskeleton can reduce muscle fatigue during lifting tasks and can help reduce the burden, back pain, and the incidence of lumbar muscle strain during long-term lifting work. Figure 16a compares the average real angle with the planned angle: the exoskeleton generates smooth trajectories from the angles obtained at different users' initial postures, and the motors complete the movements under the proposed control algorithm. Figure 16b,c compare the real angular velocity and real angular acceleration with the planned values, respectively; both are zero at the beginning and end of the curve, very close to the theoretical values. These comparisons demonstrate that the motor trajectories meet the theoretical requirements under the adaptive control algorithm, avoiding impact during the motion process.
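As a rough sketch of the EMG processing chain described above (band-pass filtering, rectification, integration over a lifting cycle, and the percent-reduction ratio), the following hypothetical Python code uses SciPy; the upper band edge is capped at 450 Hz here because the paper's 10-500 Hz band sits at the Nyquist limit of the 1 kHz sampling rate.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000  # Hz, sampling rate of the EMG system

def iemg(raw_emg, fs=FS, band=(10.0, 450.0)):
    """Band-pass filter, rectify, and integrate one lifting cycle of EMG."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, raw_emg)
    return np.trapz(np.abs(filtered), dx=1.0 / fs)

def assist_efficiency(iemg_with, iemg_without):
    """Percent IEMG reduction attributable to the exoskeleton."""
    return 100.0 * (iemg_without - iemg_with) / iemg_without

# Example on synthetic signals (2 s cycle):
t = np.arange(0, 2, 1.0 / FS)
without_exo = np.random.default_rng(1).normal(0.0, 1.0, t.size)
with_exo = 0.7 * without_exo
print(assist_efficiency(iemg(with_exo), iemg(without_exo)))  # ~30% reduction
```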
Conclusions and Discussion
The purpose of this paper was to describe the design and control algorithm of a powered waist assist exoskeleton developed specifically for industrial material handling, together with an IEMG-based assessment of its potential to reduce physical loading on the lower back. The exoskeleton has active hip joints that provide external assistance for flexion/extension during lifting and reduce the risk of developing work-related musculoskeletal disorders (WMSDs).
Compared with other active waist exoskeletons designed to provide assistance [10][11][12][13], the proposed exoskeleton has several structural advantages. First, we optimized the exoskeleton's mechanical structure to make it convenient for many applications, with an ergonomic design that improves the fit at the human lumbar vertebral joints. The device is easy to put on and requires no additional accessories, and the total system weighs only 5 kg, making it easily portable. Furthermore, to prevent unnecessary power consumption, we installed a clutch on the exoskeleton. Finally, the SIAT waist exoskeleton automatically detects the user's intended motion when lifting objects using an IMU.
The exoskeleton generates smooth trajectories from the angles obtained in real time by the angle sensors at different users' initial postures, and the motors then complete the movements under an adaptive control algorithm. When wearing the SIAT waist exoskeleton, the measured myoelectric signals of the lumbar erector spinae (LES), thoracic erector spinae (TES), and latissimus dorsi (LD) of the user's back decreased by 44.5 ± 6.5%, 42.8 ± 6.6%, and 32.8 ± 5.8%, respectively, when lifting 25 kg. The experimental results indicate that wearing the device can reduce muscle fatigue of the erector spinae and latissimus dorsi when lifting heavy objects.
There are some limitations to the exoskeleton proposed in this study. First, the device cannot provide assistance when lowering heavy objects, so additional structural designs are needed to provide mechanical support during lowering. Second, the SIAT waist exoskeleton is controlled by tracking planned trajectories without fully considering the interaction between the user and the exoskeleton; novel control strategies should be considered to make the exoskeleton more adaptive in the future. Third, our experiments were limited to the laboratory, and additional tests should be conducted with workers engaged in material handling in actual industrial applications. Further work on mechanical design, modeling, and motor control is needed to make the SIAT waist exoskeleton more applicable for workers lifting and holding heavy objects in industry, thereby improving quality of life.
Author Contributions: The authors' individual contributions are as follows: conceptualization and methodology, X.Y.; investigation and writing-original draft preparation, Z.Y.; software, Cao.W.; writing-review and editing, N.L. and Can.W.; funding acquisition and project administration, X.W.
"Engineering"
] |
Cotranscriptional and Posttranscriptional Features of the Transcriptome in Soybean Shoot Apex and Leaf
Transcription is the first step of the central dogma, in which the genetic information stored in DNA is copied into RNA. In addition to mature RNA sequencing (RNA-seq), high-throughput nascent RNA assays have been established and applied to provide detailed transcriptional information. Here, we present the profiling of nascent RNA from trifoliate leaves and shoot apices of soybean. By combining nascent RNA sequencing (chromatin-bound RNA, CB RNA) with RNA-seq, we found that introns are largely spliced cotranscriptionally. Although alternative splicing (AS) was mainly determined during nascent RNA biogenesis, differential AS between the leaf and shoot apex at the mature RNA level did not correlate well with cotranscriptional differential AS. Overall, RNA abundance was moderately correlated between nascent RNA and mature RNA within each tissue, but the fold changes between the leaf and shoot apex were highly correlated. Thousands of novel transcripts (mainly non-coding RNA) were detected by CB RNA-seq, including natural antisense RNAs overlapping two important genes controlling soybean reproductive development, FT2a and Dt1. Taken together, we demonstrate the adoption of CB RNA-seq in soybean, which may shed light on the regulation of gene expression underlying important agronomic traits in leguminous crops.
INTRODUCTION
Transcription, the first step of gene expression, is accomplished by the multisubunit protein complex RNA polymerase. In eukaryotic cells, RNA polymerase II (RNA Pol II) transcribes protein-coding genes and some non-coding genes. Before maturation, messenger RNA precursors (pre-mRNAs) are subjected to multiple processing steps, including 5′ capping, splicing of introns, 3′ cleavage and polyadenylation, and editing (Bentley, 2014). These steps are traditionally known as posttranscriptional processing; however, increasing evidence suggests that most of these processes are cotranscriptional. For example, introns can be either co- or posttranscriptionally spliced, as supported by the splicing loops of nascent RNA observed by electron microscopy in Drosophila melanogaster and Chironomus tentans (Beyer and Osheim, 1988; Baurén and Wieslander, 1994). In addition, high-throughput sequencing of nascent RNA has revealed genome-wide cotranscriptional splicing (Khodor et al., 2011; Nojima et al., 2015; Drexler et al., 2020). Studies in budding yeast, flies, and mammals indicate that cotranscriptional splicing frequencies are similarly high, ranging from 75 to 85% (Neugebauer, 2019).
Since Core et al. (2008) published a method in which nuclear run-on RNA was affinity purified and then subjected to high-throughput sequencing, nascent RNA sequencing (RNA-seq) technologies have significantly improved our ability to analyze each step of transcription across the genome. Rather than steady-state mRNA, nascent RNA-seq detects pre-mRNAs, divergent transcripts, enhancer-derived RNA (eRNA), etc., which are usually unstable and not polyadenylated. Recently, we and another laboratory reported cotranscriptional splicing in the model plant Arabidopsis using the genome-wide nascent RNA-seq approaches plant native elongating transcript sequencing (pNET-seq) and plaNET-seq (Zhu et al., 2018; Kindgren et al., 2020). pNET-seq and plaNET-seq detect nascent RNA through enrichment of transcriptionally engaged RNA Pol II complexes, and splicing intermediates can also be observed when spliceosomes are copurified with Pol II complexes (Zhu et al., 2018). Moreover, recent publications directly sequenced the chromatin-bound RNA (CB RNA) of Arabidopsis and found genome-wide cotranscriptional splicing (Jia et al., 2020; Zhu et al., 2020). However, the Arabidopsis genome, the first plant genome to be sequenced, is compact (approximately 140 megabases per haploid genome), with an average gene length of 2,000 bp and an average intron length of 180 bp (Arabidopsis Genome Initiative, 2000). While harboring thousands to tens of thousands of genes, plant genome sizes range from approximately 0.1 to 100 gigabases (Pellicer and Leitch, 2020). Therefore, knowledge of transcription obtained from Arabidopsis may not be applicable to other plant genomes, especially complicated crop genomes.
As one of the most important crops, soybean provides protein and oil for humans and livestock. During the past decades, great progress has been made in soybean genome research (Shen et al., 2018; Xie et al., 2019; Liu Y. et al., 2020). Furthermore, many important genes involved in agronomic traits have been characterized via genetic, cell biological, and biochemical approaches (Kasai et al., 2007; Lu et al., 2017, 2020). For example, Dt1, which controls soybean growth habit, has been cloned as a TFL1 homolog encoding a 173-amino-acid peptide (Tian et al., 2010). FT2a and FT5a, two distant homologous genes of Dt1 within the same family, have been shown to play a conserved role in controlling flowering time (Wu et al., 2017; Takeshima et al., 2019).
Soybean [Glycine max (L.) Merr.] is a paleopolyploid derived from two whole-genome duplication events approximately 59 and 13 million years ago. It has a relatively complicated and large genome of approximately 1.1 gigabases (Schmutz et al., 2010). The average gene length is approximately 4,000 bp and the average intron length approximately 539 bp in soybean (Shen et al., 2014), both longer than in Arabidopsis. Despite considerable transcriptomic analyses of various soybean tissues using mature RNA-seq (Libault et al., 2010; Severin et al., 2010; Shen et al., 2014; Wang et al., 2014; Gazara et al., 2019), a genome-wide analysis of nascent RNA from soybean has not yet been reported. In addition to capturing cotranscriptional features, nascent RNA is very sensitive for detecting unstable regulatory RNAs, such as long non-coding RNAs (ncRNAs). The investigation of nascent RNA in soybean would therefore provide a comprehensive description of cotranscriptional characteristics in leguminous crops. Here, we report for the first time the analysis of nascent RNA from the shoot apex and leaf tissues of the soybean cv. Williams 82.
Nascent RNA Profiling of Soybean by CB RNA-Seq
The spatial and temporal expression of genes in the shoot apex largely determines the architecture of crop plants, including the numbers of branches, flowers, and nodes, which ultimately affect the yield per plant. Specifically, Dt1 mRNA was detected in the shoot apex at 15 days after emergence under long-day conditions; we therefore set out to investigate the transcriptome of shoot apices from 10- to 15-day-old plants (Figure 1A; see section "Materials and Methods"). To gain insight into shoot apex-specific genes, we chose the first trifoliolate leaves from 15-day-old plants as a control. For nascent RNA, CB RNA was isolated, and the rRNA and polyA RNA it contained were depleted prior to library construction and high-throughput sequencing as described by Zhu et al. (2020). To further resolve cotranscriptional and posttranscriptional processes, we also conducted parallel mature polyA RNA-seq by enriching polyA RNA from total RNA and constructing libraries from it. Three biological replicates were sequenced and analyzed for each tissue. Principal component analysis (PCA) and Pearson correlation analysis of gene expression indicated high reproducibility across biological replicates (Figure 1B and Supplementary Figure 1). In addition, the first two principal components explained more than 90% of the variation, indicating that tissue (apex vs. leaf, 61.81% of variance) and method (CB RNA-seq vs. polyA RNA-seq, 28.46% of variance) were the dominant factors distinguishing samples (Figure 1B).
As expected, the read distribution of nascent RNA showed two characteristics relative to polyA RNA. First, CB RNA-seq detected more intron signal than polyA RNA-seq because more unspliced reads were sequenced at the nascent RNA level: approximately 25% of uniquely mapped reads were located in intron regions with CB RNA-seq, versus less than 4% with polyA RNA-seq (Supplementary Figure 2). In addition, the read density ratio of introns to exons in CB RNA was significantly higher than that in polyA RNA (Figure 1C). Second, the read density along genes decreased gradually from the 5′ end to the 3′ end in CB RNA, whereas no such gradient was present in polyA RNA (Figures 1D,E). For example, the read signal of the gene Glyma.02G231800 declined from 5′ to 3′ in CB RNA-seq but not in polyA RNA-seq, and an intron signal was evident in CB RNA but absent from polyA RNA (Figure 1D). These characteristics are consistent with previous studies and confirm that the CB RNAs obtained here were bona fide nascent RNAs undergoing transcriptional processing (Zhu et al., 2020).
Multiple Factors Regulate Cotranscriptional Splicing Efficiency
Cotranscriptional splicing has been widely found in eukaryotic cells, and we wondered whether splicing coupled to transcription is widespread in the soybean genome. The intron retention ratio is an indicator of intron splicing efficiency; we therefore adopted the percent intron retention (PIR) index to measure the extent of cotranscriptional splicing (Braunschweig et al., 2014). In short, the PIR of an intron was calculated as the ratio of unspliced exon-intron junction reads to the total junction reads (unspliced exon-intron plus spliced exon-exon). Since each unspliced exon-intron read from one RNA molecule has the chance to be sequenced twice in high-throughput sequencing, the average count of exon-intron reads at the 5′ splice site (EI5) and at the 3′ splice site (EI3) was taken as an intron's unspliced exon-intron read count (Figure 2A). Introns with lower PIR values are spliced more efficiently. Constitutive introns of active genes (TPM > 1) were scored for PIR in both CB RNA and polyA RNA. As expected, intron retention levels in CB RNA were significantly higher than those in polyA RNA, both in the apex and in the leaf (Figure 2B); in Figure 2C-G, the Wilcoxon test was used to test the difference in PIR between adjacent groups, and all tests were highly significant (p < 0.001) unless otherwise indicated (*p < 0.05; NS, p > 0.05). Most introns in polyA RNA had a very low PIR, usually below 0.1, whereas the median PIR in CB RNA was close to 0.25 (in the apex) or above 0.25 (in the leaf). These results are similar to those of a previous study in Arabidopsis. The PIR of most introns in CB RNA was lower than 0.5 (PIR = 1 means completely unspliced), indicating genome-wide cotranscriptional splicing in soybean.
Although most introns undergo cotranscriptional splicing, the extent of intron retention is highly variable. Studies in Drosophila and Arabidopsis have indicated that multiple factors, such as intron characteristics, gene expression level, and number of introns, are related to cotranscriptional splicing efficiency (Khodor et al., 2011; Zhu et al., 2020). To examine how these factors affect splicing efficiency in soybean, we first divided introns into five groups by length and found that intron retention became more prominent as intron length increased (Figure 2C).
In addition to intron length, intron position is also expected to influence splicing efficiency. According to the "first come, first served" model, introns transcribed first may have more opportunity to be spliced (Aebi et al., 1986). Introns were therefore divided into five groups based on their distance to the transcription end site (TES), and PIR was compared among groups. Introns more distant from the TES are transcribed early and thus are more likely to be spliced first. As expected, the PIR index gradually declined as the intron's distance to the TES increased (Figure 2D).
In addition, cotranscriptional splicing efficiency was positively correlated with exon number (Figure 2E) and gene length (Figure 2F). These patterns were consistent between the apex and leaf tissues. However, a weak positive correlation between cotranscriptional splicing and gene expression was detected in the apex but not in the leaf (Figure 2G).
Cotranscriptional Splicing Efficiency Is Correlated With Certain Histone Modifications
Specific histone modifications have been shown to regulate cotranscriptional splicing by either directly recruiting spliceosomes or indirectly influencing transcriptional elongation (Luco et al., 2010; Hu et al., 2020). To test whether cotranscriptional splicing is associated with certain histone modifications in soybean, we used ChIP-seq data for several histone marks (H3K27me3, H3K4me1, H3K4me3, H3K36me3, H3K56ac, and H2A.Z) in leaf tissue collected in a previous study (Supplementary Table 1; Lu et al., 2019). We then quantified the level of each modification around introns grouped by retention rate (Figure 3). PIR is positively correlated with the levels of H3K27me3, H3K4me3, H3K56ac, and H2A.Z-marked histone, meaning that introns with higher cotranscriptional splicing efficiency have lower levels of these histone modifications, and PIR is negatively correlated with the level of H3K4me1-marked histones. Notably, H3K27me3, H3K4me3, H3K56ac, H3K36me3, and H2A.Z showed higher modification levels at the upstream exon than at the downstream exon, while H3K4me1 showed higher levels at the downstream exon, most likely because H3K27me3, H3K4me3, H3K56ac, H3K36me3, and H2A.Z preferentially locate at the 5′ end of genes, whereas H3K4me1 does not (Supplementary Figure 3).
Alternative Splicing Events Are Likely Determined Cotranscriptionally
In higher eukaryotes, alternative splicing (AS) is an important regulatory step of gene expression and plays a critical role in development and the stress response (Baralle and Giudice, 2017; Laloum et al., 2018). Previous studies in mammalian cells and Arabidopsis showed that AS events occur co- or posttranscriptionally (Jia et al., 2020). We therefore asked to what extent AS is determined cotranscriptionally. We adopted the percent spliced-in (PSI) metric (Wang et al., 2008) to describe the relative abundance of splicing events and focused on four AS event types: alternative 3′ splice sites (A3SS), alternative 5′ splice sites (A5SS), exon skipping (ES), and retained introns (RI) (Figure 4A). The PSI values of AS events from CB RNA and polyA RNA were significantly correlated for all AS types, suggesting that AS events are largely determined cotranscriptionally (Figure 4B), in both shoot apex and leaf tissues (Figure 4 and Supplementary Figure 4). However, the overall PSI values were higher in CB RNA (Figure 4B, insets). For AS events with a higher PSI in CB RNA than in polyA RNA, there are two possible explanations. First, some highly abundant CB RNA transcripts carrying AS events may be rapidly degraded; for example, coupling of AS and nonsense-mediated mRNA decay (NMD) has been reported to fine-tune gene expression (McGlincy and Smith, 2008). Second, posttranscriptional splicing may lead to a higher PSI in CB RNA, especially for RI events.
Differential Alternative Splicing Between Leaf and Shoot Apex Tissues Is Not Determined Merely by Cotranscriptional Splicing
Given that most AS events are determined cotranscriptionally, we then asked whether the differences in AS between the shoot apex and leaf tissues detected by CB RNA-seq and polyA RNA-seq are consistent. We therefore compared the AS differences of both CB RNA and polyA RNA between the 15-day apex and leaf tissues. Differential splicing events were analyzed with the program SUPPA2 (Trincado et al., 2018). A splicing event was considered differential when the absolute PSI difference (ΔPSI) between tissues was >0.1 and the p-value was <0.05. Only a small number of the differential splicing events between the leaf and shoot apex were detected by both CB RNA and polyA RNA (Figure 5A), and ΔPSI values from mRNA and CB RNA were only weakly correlated (Spearman correlation 0.22-0.35) (Figure 5B). Furthermore, the genes with differential splicing events detected by CB RNA and polyA RNA were not concordant (Supplementary Figure 5A). Although overall AS events are highly correlated between the cotranscriptional and posttranscriptional levels within the same tissue, tissue-specific mRNA processing, such as degradation and posttranscriptional splicing, may produce differential AS events that are detected by polyA RNA but not by CB RNA. Conversely, differential AS events detected by CB RNA but not by polyA RNA probably reflect differences in cotranscriptional splicing efficiency between the shoot apex and leaf that are corrected at the posttranscriptional splicing step, exemplified by the first intron of Glyma.07G206100 (Supplementary Figure 6).
Genes associated with intertissue differential splicing events detected by CB RNA and polyA RNA also differed (Supplementary Figure 5A). To explore the biological functions of genes with differential AS events, we conducted Gene Ontology (GO) enrichment analysis. Interestingly, genes with differential splicing events between the 15-day apex and leaf tissues were significantly enriched in mRNA splicing and RNA processing, which may partly explain the differential splicing efficiency between the shoot apex and leaf tissues (Supplementary Figure 5B).
The Level of Steady-State mRNA Is Moderately Correlated With the Biogenesis of Nascent RNA
Chromatin-bound RNA-seq detects newly transcribed RNAs, which are subject to multiple steps of cotranscriptional and posttranscriptional processing prior to maturation. Thus, there might be discordance between abundance at the nascent RNA and mRNA levels. To test this hypothesis, we compared the TPM values of nascent RNA and mature RNA. Overall, the levels of nascent RNA and mature RNA were moderately correlated (Spearman correlation = 0.71-0.73) (Figure 6A and Supplementary Figures 7A-C). There are two types of discordant genes. One type is highly transcribed but has a low level of mature RNA, which might result from high mRNA turnover; these transcripts are designated unstable RNAs. The other type shows relatively low transcriptional activity but a high level of mature RNA, which might be due to high RNA stability; these are called stable RNAs.
To select unstable and stable RNA transcripts, we first established a linear regression model between the log2 TPM values of genes obtained with CB RNA-seq and polyA RNA-seq. The predicted polyA RNA TPM values of genes were then calculated from this model. If the actual TPM of a gene was threefold higher (or lower) than the predicted TPM, the gene was considered stable (or unstable) (Figure 6A and Supplementary Figures 7B,C). To investigate whether RNA stability is associated with specific biological functions, we performed GO enrichment analysis. Unstable RNAs were most enriched for defense response, protein phosphorylation, and signal transduction, whereas stable RNAs were mainly associated with translation, photorespiration, ribosome biogenesis, and glycolytic processes (Figure 6B and Supplementary Figures 7C,E).
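A minimal sketch of the regression-and-threshold classification described above is given below; the pseudocount and the exact fitting choices are assumptions of this illustration, not the authors' code.

```python
import numpy as np

def classify_rna_stability(cb_tpm, polya_tpm, fold=3.0):
    """Fit log2(polyA TPM) ~ log2(CB TPM), then call a gene 'stable' if its
    observed polyA TPM exceeds the prediction by more than `fold`, and
    'unstable' if it falls short by more than `fold` (the +1 pseudocount
    is an assumption of this sketch)."""
    x, y = np.log2(cb_tpm + 1.0), np.log2(polya_tpm + 1.0)
    slope, intercept = np.polyfit(x, y, 1)    # simple linear model
    residual = y - (slope * x + intercept)    # observed minus predicted
    return residual > np.log2(fold), residual < -np.log2(fold)

cb = np.array([10.0, 100.0, 5.0, 50.0])
polya = np.array([200.0, 90.0, 1.0, 45.0])
stable, unstable = classify_rna_stability(cb, polya)
```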
Differentially Expressed Genes Are Consistent at the Nascent and Mature RNA Levels
We then identified differentially expressed genes (DEGs) between 15-day apex and 15-day leaf tissues at both nascent and mature RNA levels. More than 10,000 genes were expressed more in the apex than in the leaf, and vice versa (Supplementary Figure 8A and Supplementary Table 2). Most of these DEGs detected by CB RNA-seq and polyA RNA-seq overlapped (Figure 6C and Supplementary Figure 8B). Furthermore, fold changes at the CB RNA level and polyA RNA level were highly correlated (Spearman correlation = 0.93) (Figure 6D).
Gene Ontology enrichment analysis was performed to determine the biological functions of the DEGs. Genes with higher expression in the apex were mainly associated with RNA methylation, histone methylation, translation, DNA replication, and meristem initiation and maintenance. Genes with higher expression levels in the leaves were mainly related to photosynthesis and plastid organization (Supplementary Figure 8C).
In addition, only a small number of genes were called DEGs between the 15-day apex and 10-day apex (Supplementary Figure 9A), and they had concordant changes at the nascent RNA and mRNA levels (Supplementary Figure 9B). GO enrichment indicated that genes highly expressed in the 10-day apex were involved in the response to stress, circadian rhythm, etc., and genes highly expressed in the 15-day apex were involved in long-day photoperiodism flowering, response to hormones, and circadian rhythm (Supplementary Figure 9C).
More Non-coding RNAs Were Identified by CB RNA-Seq Than PolyA RNA-Seq
Considering that unstable transcripts are readily detected at the nascent RNA level, we calculated the expression levels of ncRNAs as defined in a previous study. As expected, more active ncRNA genes were detected by CB RNA-seq than by polyA RNA-seq (Figure 7A). Furthermore, we determined the antisense transcription of annotated mRNAs by counting reads mapped to the opposite strand, and there were more active antisense transcriptional signals at the nascent RNA level (Figure 7B, left). These results indicate that some non-coding transcripts are unstable or not polyadenylated. For example, a transcript encoded from the antisense strand of FT2a, the essential gene involved in flowering timing, was identified in 15-day leaves by CB RNA-seq, and Dt1, the key gene controlling growth habit, overlapped with another strong antisense transcript at the nascent RNA level in the apex (Figure 7B, right). To identify novel transcripts, we assembled transcripts from the nascent RNA and polyA RNA of each tissue separately; all transcripts were then merged and compared against the reference annotation (see section "Materials and Methods"). Only intergenic transcripts were included for further analysis. In total, there were 5,927 and 1,515 active intergenic transcripts from CB RNA-seq and polyA RNA-seq, respectively, with 1,326 transcripts overlapping (Figure 7C, upper panel; Supplementary Table 3). These transcripts were encoded from 4,835 loci, of which 1,142 were shared by CB RNA and polyA RNA (Figure 7C, bottom panel).
We then applied two tools, CNCI and FEELnc, to evaluate the protein-coding potential of these new transcripts. In total, 4,001 and 974 active new transcripts of CB RNA and polyA RNA were considered non-coding transcripts by both methods, respectively ( Figure 7D), and more ncRNAs were observed in the leaves at the nascent RNA level (Figure 7E).
Non-coding RNAs detected only at the nascent RNA level might be unstable or unpolyadenylated, whereas ncRNAs detected only at the polyA RNA level might be very stable, accumulating even under slow transcription. Different types of ncRNAs may be regulated differently at the transcriptional level. To gain insight into the effects of histone modifications on ncRNA expression, we compared the metaprofiles of histone modifications for three groups of ncRNAs from the leaf tissue (group I: detected only by CB RNA; group II: detected by both; group III: detected only by polyA RNA) (Figure 7F). Group II and III ncRNA genes were associated with H3K56ac, H3K4me3, and the histone variant H2A.Z (Figure 7G).
DISCUSSION
Although nascent RNA-seq has been extensively used to detect cotranscriptional regulation in yeast, fly, and mammalian cells, its application in plants is still lagging behind. Recently, several methods have been developed to detect nascent RNA and reveal plant-specific transcriptional features (Hetzel et al., 2016;Zhu et al., 2018). However, with the exception of one maize publication using GRO-seq (Erhard et al., 2015), all studies have focused on the model plant Arabidopsis. Here, we describe the soybean transcriptome using CB RNA-seq. As expected, CB RNA isolation greatly enriched the nascent RNA by removing the abundant cytosolic mRNAs and nucleoplasmic RNAs. We demonstrated that CB RNA-seq successfully detected nascent RNA biogenesis and cotranscriptional processing of pre-mRNA from the leaves and growing apex tissues. This method can be applied to other tissues at various developmental stages and/or under different environmental conditions, which may further shed light on the transcriptional regulation of the soybean genome.
We found genome-wide cotranscriptional splicing in soybean. Cotranscriptional splicing efficiency is related to intron length, distance from the TES, intron number, and gene length. These characteristics are similar to those previously observed in yeast, fly, mammalian, and Arabidopsis cells, indicating a conserved mechanism controlling cotranscriptional splicing in eukaryotic cells (Khodor et al., 2011; Kindgren et al., 2020). Interestingly, we found that both active (H3K4me3 and H3K56ac) and inactive (H3K27me3) histone marks are negatively related to cotranscriptional splicing efficiency. The elongation rate of RNA Pol II can affect splicing efficiency by fine-tuning the time available for the spliceosome to search for splice sites, as the spliceosome is physically recruited by the carboxyl-terminal domain of the largest subunit of RNA Pol II (Nojima et al., 2018).
The inverse correlation between elongation speed and splicing efficiency was proven in yeast in vivo (Carrillo Oesterreich et al., 2016;Aslanzadeh et al., 2018). Moreover, the RNA Pol II elongation rate is regulated by transcription elongation factors and chromatin structural barriers such as nucleosomes. Thus, factors that affect transcription elongation also affect splicing efficiency. Active histone markers are thought to be related to a higher transcription elongation rate. Therefore, it is reasonable that introns with higher H3K4me3 or H3K56ac contents are less efficiently spliced. In addition, the pattern described in this study and a previous study on Arabidopsis revealed that the retained introns are derived from genes with low H3K4me1 and high H3K27me3 signatures (Mahrez et al., 2016). However, further studies of mutants with impaired histone modification are needed to verify their function in cotranscriptional splicing. Actually, these effects are not unidirectional. Cotranscriptional splicing can in turn influence the elongation rate and establishment of histone modifications (Kim et al., 2011).
Alternative splicing is an important part of gene regulation. In our study, a highly correlated relative AS event (PSI) was observed between CB RNA and polyA RNA, suggesting that most AS events are determined cotranscriptionally. This agrees with a previous study in Arabidopsis . However, when comparing intertissue AS events, differential AS events detected at the cotranscriptional and posttranscriptional levels only partially overlapped. Thus, differential AS events cannot be predicted at the nascent RNA level, indicating the complexity of AS regulation. These regulations may be attributed to different degradation rates and/or posttranscriptional splicing among various tissues.
Gene expression is regulated at multiple levels, including transcription, post-transcription, and translation. Steady-state mRNA is the net output of transcriptional activity and RNA degradation, so there may be some discordance between gene activity detected by nascent RNA-seq and polyA RNA-seq. As expected, we found that gene activity at these two levels was only moderately correlated. However, when comparing different tissues, the changes in gene activity at both levels were highly consistent, indicating that tissue-specific gene expression is mainly driven by transcription. The stability of RNA likely contributes to the discordance between gene activity at the nascent and mature RNA levels, and it is fitting that stable mRNAs tend to encode housekeeping functions. Moreover, under normal conditions, keeping regulatory genes at low mRNA levels but relatively high transcription, through fast mRNA turnover, is an effective way to ensure rapid responses to potential stimuli. As we previously reported in Arabidopsis, genes that are strongly and quickly induced by short-term heat shock usually exhibit basal transcription at normal temperature (Liu M. et al., 2020).
Since some ncRNAs are unstable or unpolyadenylated, such as enhancer RNAs and antisense RNAs, more transcripts are expected to be detected by CB RNA-seq. However, this does not rule out the possibility that some transcripts detected only in CB RNA are not nascent RNA but rather chromatin-bound transcripts. To further elucidate the biological significance of these ncRNAs, approaches such as RNA interference and gene editing are needed. It will be interesting to apply CB RNA-seq to various tissues and build a transcriptional regulatory network at the nascent RNA level in the future.
Plant Materials and Growth Conditions
Soybean Wm82 plants were grown under long-day conditions (16 h light, 8 h dark) at a constant temperature of 25 °C in a growth chamber. Shoot apices from 10- to 15-day-old seedlings were collected in three biological replicates, with each replicate pooled from approximately 20 plants. For the leaves, the first trifoliolate leaves of two 15-day-old plants were collected as one biological replicate. All samples were frozen in liquid nitrogen immediately after collection.
After degrading genomic DNA by TURBO DNase (Life Technologies), CB RNA was subjected to rRNA depletion using a riboPOOL kit (siTOOLs Biotech, PanPlant-10 nmol) and polyA RNA removal by oligo(dT) beads (NEB, S1419). Poly(A) RNA was enriched from total RNA by oligo(dT) beads. Both CB RNA and polyA RNA were transformed into cDNA libraries using the NEBNext Ultra II Directional RNA Library Prep Kit for Illumina (NEB #E7765) and sequenced on an Illumina NovaSeq platform.
CB RNA and mRNA Data Processing
Raw reads of CB RNA and polyA RNA were first evaluated with FastQC (http://www.bioinformatics.babraham.ac.uk/projects/fastqc/), and Cutadapt was then used to remove adapters and low-quality reads (Martin, 2011). Clean reads were subsequently aligned to the genome Wm82.a2.v1 with STAR (Dobin et al., 2013), and only uniquely mapped reads were retained for the following analysis. Read distribution across genomic features was evaluated with the RSeQC subcommand "read_distribution.py" (Wang et al., 2012). To calculate the intron/exon read ratio of each gene, featureCounts was used to quantify read counts on introns and exons separately (Liao et al., 2014), and read density was normalized by the lengths of introns and exons.
Calculating the Percent of Intron Retention
The proportion of intron-retained reads across an intron is commonly used to evaluate the splicing efficiency of that intron. To quantitatively evaluate genome-wide cotranscriptional splicing efficiency in soybean, we calculated the PIR value for constitutive introns as described previously (Braunschweig et al., 2014). Briefly, three types of reads over an intron were counted: (1) exon-intron junction reads across the 5′ SS (EI5), (2) exon-intron junction reads across the 3′ SS (EI3), and (3) spliced exon-exon junction reads (EE) (Figure 2A). The PIR of an intron was calculated by dividing the intron-retained reads by the sum of intron-retained and intron-skipping reads (Figure 2A). Constitutive introns from the Wm82.a2.v1 annotation were subjected to PIR calculation.
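As a small illustration of the PIR definition above, the following sketch computes PIR for a single intron from its junction read counts; the function name and the example counts are hypothetical.

```python
def percent_intron_retention(ei5, ei3, ee):
    """PIR as defined above: the mean of the unspliced exon-intron junction
    reads at the 5' and 3' splice sites, divided by that mean plus the
    spliced exon-exon junction reads (EE)."""
    unspliced = (ei5 + ei3) / 2.0
    total = unspliced + ee
    return unspliced / total if total > 0 else float("nan")

# Example: 8 EI5 reads, 12 EI3 reads, 30 EE reads -> PIR = 10/40 = 0.25
print(percent_intron_retention(8, 12, 30))
```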
Alternative Splicing Analysis
Mapped reads were assembled into putative transcripts based on a reference-guided assembly strategy using the single-sample transcript assembler StringTie v2.1.2 (Pertea et al., 2015). Putative transcripts from multiple samples were merged into a unified set using the meta-assembly tool TACO v0.7.3, which has been reported to outperform Cuffmerge and StringTie merge (Niknafs et al., 2017). The merged transcripts were then compared with the reference gene GTF file using GffCompare v0.11.2 (Pertea and Pertea, 2020). Since CB RNA is nascent RNA that is not fully spliced, AS analysis was based on transcripts merged from the polyA RNA data. AS events were quantified by PSI in the program SUPPA2 (Trincado et al., 2018). Because SUPPA2 estimates PSI from transcript abundance, we first used salmon for alignment-free transcript abundance estimation (Patro et al., 2017). Transcripts with TPM > 1 in at least three samples were used for analysis. For detection of differential splicing between two samples, we chose |ΔPSI| > 0.1 and p-value < 0.05 as cutoffs.
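The differential-splicing cutoffs can be applied to a SUPPA2-style results table with a few lines of code; the sketch below is a hypothetical illustration, and the column names ("dPSI", "pval") are assumptions rather than SUPPA2's actual output headers.

```python
import pandas as pd

def differential_as_events(events: pd.DataFrame, dpsi_cut=0.1, p_cut=0.05):
    """Keep events passing the study's cutoffs: |dPSI| > 0.1 and p < 0.05.
    Column names 'dPSI' and 'pval' are assumed for this sketch."""
    hits = events[(events["dPSI"].abs() > dpsi_cut) & (events["pval"] < p_cut)]
    return hits.sort_values("dPSI", key=lambda s: s.abs(), ascending=False)

demo = pd.DataFrame({"event_id": ["e1", "e2", "e3"],
                     "dPSI": [0.25, -0.05, -0.30],
                     "pval": [0.01, 0.20, 0.03]})
print(differential_as_events(demo))  # keeps e1 and e3
```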
Detection of Differentially Expressed Genes
To detect differentially expressed genes, mapped reads in each gene were quantified using featureCounts, and differential expression was then evaluated with the R package DESeq2 (Love et al., 2014). DEGs were defined as genes showing more than twofold up- or downregulation with a false discovery rate (FDR)-adjusted q-value (calculated by DESeq2) of less than 0.05. The read density for each gene was calculated by normalizing the read count to the library size and mappable gene length (TPM).
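For concreteness, a minimal TPM computation matching the description above (normalize by mappable length, then by library size) might look like the following sketch; the toy counts and lengths are made up.

```python
import numpy as np

def tpm(counts, lengths_kb):
    """TPM: reads-per-kilobase, rescaled so each sample sums to 1e6.
    counts: genes x samples array; lengths_kb: mappable length per gene (kb)."""
    rpk = counts / lengths_kb[:, None]   # reads per kilobase
    return rpk / rpk.sum(axis=0) * 1e6   # scale each library to one million

counts = np.array([[100.0, 200.0], [300.0, 50.0]])  # 2 genes x 2 samples
lengths_kb = np.array([2.0, 1.0])
print(tpm(counts, lengths_kb))  # each column sums to 1,000,000
```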
Gene Ontology Enrichment Analysis
Gene Ontology annotations of genes were extracted from the annotation file for Wm82.a2.v1. A hypergeometric test was used for the statistical test, and the Benjamini and Hochberg (1995) method was used to adjust p-values to control the FDR. All analyses were performed in R.
Detection of New Non-coding RNA Genes
To detect new ncRNA genes at the nascent RNA and polyA RNA levels, transcripts were assembled in CB RNA and polyA RNA data separately and merged by TACO as described above in the AS event analysis. Then, annotation GTF files of transcripts were compared with reference annotation GTF files using GffCompare (with the -r option). For each putative transcript, its relationship to the closest reference transcript was described by a "class code" value. For example, the code "=" indicates that the introns of a transcript completely match the introns of the reference transcript. We chose only unknown, intergenic transcripts that were assigned the code "u" and estimated their protein-coding potential by two software programs, CNCI and FEELnc (Sun et al., 2013;Wucher et al., 2017).
Reanalysis of ChIP-Seq Data of Histone Modifications
ChIP-seq raw data of histone modifications were downloaded from NCBI (Supplementary Table 1). The raw data were first processed with adapter removal by Cutadapt and mapping to the genome by STAR. Then, the average distribution of different histone modifications on genomic features was plotted using deepTools by normalization to histone 3 (Ramírez et al., 2016).
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are publicly available. These data can be found at the NCBI website under BioProject accession PRJNA689321.
"Biology",
"Environmental Science"
] |
Protective Effects of BDNF against C-Reactive Protein-Induced Inflammation in Women
Background. Since high-sensitivity C-reactive protein (hsCRP) is predictive of cardiovascular events, it is important to examine the relationship between hsCRP and other inflammatory and oxidative stress markers linked to cardiovascular disease (CVD) etiology. Previously, we reported that hsCRP induces the oxidative stress adduct 8-oxo-7,8-dihydro-2′-deoxyguanosine (8-oxodG) and that these markers are significantly associated in women. Recent data indicate that brain-derived neurotrophic factor (BDNF) may have a role in CVD. Methods and Results. We examined BDNF levels in 3 groups of women that were age- and race-matched with low (<3 mg/L), mid (>3-20 mg/L), and high (>20 mg/L) hsCRP (n = 39 per group) and found a significant association between hsCRP, BDNF, and 8-oxodG. In African American women with high hsCRP, increases in BDNF were associated with decreased serum 8-oxodG; this was not the case in white women, in whom high hsCRP was associated with high levels of both BDNF and 8-oxodG. BDNF treatment of cells reduced CRP levels and inhibited CRP-induced DNA damage. Conclusion. We discovered an important relationship between hsCRP, 8-oxodG, and BDNF in women at hsCRP levels >3 mg/L. These data suggest that BDNF may have a protective role in counteracting the inflammatory effects of hsCRP.
Introduction
Continued efforts to enhance our understanding of cardiovascular disease (CVD) in women have strengthened risk assessment, diagnosis, and treatment of this disease; however, CVD remains the primary cause of death among women over the age of 25 [1]. Furthermore, well-known racial disparities exist for CVD in the United States, as African American (AA) women have higher rates of CVD and hypertension than white women [2]. Recent evidence suggests that levels of the CVD risk factor high-sensitivity C-reactive protein (hsCRP) differ between AAs and whites and are elevated in females versus males [1,3]. High serum levels of hsCRP pose a significant risk for future CVD events; thus, continued assessment of this inflammatory marker is an important step towards the prevention and diagnosis of CVD [4]. It is also important to analyze the relationship of hsCRP to other CVD risk factors and inflammatory markers.
Brain-derived neurotrophic factor (BDNF) is a well-established neurotrophic factor, and studies using mouse models have demonstrated that BDNF and its receptor tropomyosin-related kinase B (TrkB) also have critical roles in the development of the cardiovascular system [5,6]. BDNF-deficient mice develop hypocontractility of the heart that leads to early postnatal death [7]. BDNF has been shown to promote neoangiogenesis and enhance blood flow following ischemic injury in mice, and it is upregulated by the central nervous system to protect against cardiac remodeling after myocardial infarction [8,9]. These data suggest that BDNF plays a role not only in mouse cardiovascular development but also in CVD and related pathologies.
The role of BDNF in human CVD is less well understood. Low levels of BDNF have been reported in patients with acute coronary syndrome and in association with increased incidence of coronary events in Chinese patients with angina pectoris [10,11]. In elderly participants in the Baltimore Longitudinal Study of Aging, BDNF was positively associated with several CVD risk factors including body mass index, diastolic blood pressure, and metabolic syndrome, and these associations varied by sex [12]. In one study, BDNF staining was observed in atherosclerotic coronary arteries but not in nonatherosclerotic coronary arteries [13]. BDNF may lead to plaque instability in atherosclerotic plaques through its ability to induce oxidative stress and promote the generation of superoxide radicals [13,14]. BDNF has also been shown to induce oxidative stress by activating the NAD(P)H oxidase system in the coronary vasculature [13]. However, because of the role of BDNF in the survival and development of endothelial cells, BDNF may have dual roles in the cardiovascular system [13]. These studies suggest a potential association of BDNF with human CVD; however, how BDNF may contribute to CVD is not well understood, and the relationship between BDNF and the CVD risk factor hsCRP has not been investigated. Furthermore, little is known about how BDNF levels are influenced by race.
Previously, we reported a significant relationship between the oxidative stress marker 8-oxodG and hsCRP in a subcohort of women from the Healthy Aging in Neighborhoods of Diversity across the Life Span (HANDLS) study, a longitudinal, epidemiological study of health disparities based in Baltimore, MD [15]. We found that serum levels of the DNA base lesion 8-oxodG were independently associated with systolic pressure and pulse pressure, both markers of vascular health. CRP induces reactive oxygen species and DNA base lesions, suggesting that CRP may contribute to CVD by increasing oxidative stress [15]. The aim of this study was to examine the possible interrelationship between BDNF and hsCRP in the setting of CVD risk in women of different races.
Study Design.
Previously, we performed a nested cohort study of women with low (<3 mg/L), mid (>3-20 mg/L), and high (>20 mg/L) hsCRP levels [15]. Each group (n = 39) was matched on age and race, and the participants are part of the Healthy Aging in Neighborhoods of Diversity across the Life Span (HANDLS) study of the National Institute on Aging Intramural Research Program (NIA IRP) [16]. Clinical characteristics of the cohort are included in Table 1. The study is approved by the Institutional Review Board of the National Institute of Environmental Health Sciences, NIH, and the study protocol conforms to the Ethical Guidelines of the 1975 Declaration of Helsinki. HANDLS is an interdisciplinary, epidemiologic study of health disparities and aging in a cohort of urban adults (ages 30-64) in Baltimore city. Women were chosen for this subcohort if they gave written consent to store serum, had serum available for examination, and had completed the HANDLS baseline assessment.
We matched three groups of women (39 per group) on age and race, grouped by the hsCRP levels defined in our previous cohort study [15]: low (<3 mg/L), mid (>3-20 mg/L), or high (>20 mg/L). Race and sex were both self-reported by participants. Eighty-six women in the total HANDLS study cohort had hsCRP values >20 mg/L. The cohort contains premenopausal (n = 13 in the low group, n = 10 in the mid group, and n = 11 in the high group) and postmenopausal (n = 24 in the low group, n = 28 in the mid group, and n = 24 in the high group) women. Group sizes were determined by a power analysis, which showed that 37 women per group provided sufficient power to detect differences at least as large as one-third of a standard deviation at p < 0.05.
Physical Measurements, Laboratory, and 8-oxodG Assays.
Blood pressure was measured in both arms after a five-minute seated rest, and the readings from the two arms were averaged. Body mass index (weight [kg]/height [m]²) was computed from measured height and weight. Clinical conditions were recorded based on a structured medical history interview and a physical examination. Fasting blood samples were obtained, and the serum was assayed by Quest Diagnostics (Nichols Institute, Chantilly, VA) or stored at −80 °C. Fasting glucose, insulin, cholesterol, triglycerides, LDL, HDL, creatinine, LDH, and hsCRP were measured at Quest Diagnostics. BDNF and other cytokine and inflammatory markers were measured in serum using SearchLight protein arrays from Aushon Biosystems (Billerica, MA) [15]. Serum 8-oxodG ELISA assays were performed blindly, as previously described [15], according to the manufacturer's instructions (Genox, Inc., Gaithersburg, MD).
Quantification of mRNA and Protein Levels.
HepG2 cells were incubated in serum-free media, and HUVECs were incubated in a 1:10 dilution of growth media in serum-free media, overnight with or without 1 or 10 ng/mL BDNF; the next day, cells were scraped and the cell pellet was split to examine both protein and mRNA levels from the same sample. Total RNA was isolated using TRIzol according to the manufacturer's instructions. RNA was quantified using a NanoDrop ND-1000 spectrophotometer, and equal amounts were reverse-transcribed using random hexamers and SSII reverse transcriptase (Invitrogen). Real-time RT-PCR was performed using gene-specific primer pairs and SYBR Green PCR master mix (Applied Biosystems) on an Applied Biosystems 7500 Real-Time PCR machine. The primers used were CRP forward 5′-AGACATGTCGAGGAAGGCTTTT and reverse 5′-TCGAGGACAGTTCCGTGTAGAA, and GAPDH forward 5′-TGCACCACCAACTGCTTAGC and reverse 5′-GGCATGGACTGTGGTCATGAG. For protein analysis, cells were lysed directly in 2X Laemmli sample buffer, boiled, and analyzed by SDS-PAGE. Immunoblots were probed with anti-CRP antibodies (Millipore), anti-TrkB antibodies (Cell Signaling), anti-BDNF antibodies (Abcam), anti-APE1 antibodies (Novus Biologicals), and, as a protein loading control, anti-actin antibodies (Santa Cruz Biotechnologies).
8-oxoG Staining.
HMVEC-Cs were untreated or pretreated for 18 h with 10 ng/mL BDNF in growth media and then treated with either 25 µM menadione (Sigma-Aldrich), 25 µg/mL CRP, 10 ng/mL BDNF, or BDNF plus CRP for 30 min in serum-free EBM-2 media. 8-oxoG staining was performed as previously described [17]. Fluorescent images of 8-oxoG and 4′,6-diamidino-2-phenylindole (DAPI) staining were taken on a Zeiss Observer D1 microscope with an AxioCam1Cc1 camera at a set exposure time, and the fluorescence intensity of 8-oxoG-stained nuclei was quantified from duplicate coverslips using AxioVision Rel. 4.7 software. The histogram represents the normalized average of four experiments.
Statistical Analyses.
We used mixed-model regressions to examine effects on 8-oxodG after adjusting for covariates. We included interactions in regression analyses only when they remained significant after a backward elimination procedure, a standard statistical method used to select variables and optimize regression models by sequentially excluding nonsignificant effects [18]. We applied a log10 transform to hsCRP because its distribution was skewed. We used R [19] to perform all analyses and to draw graphs. Statistical significance was set at p < 0.05.
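To make the modeling approach concrete, the sketch below fits an interaction model on synthetic stand-in data. It is only an illustration: the variable names are invented, ordinary least squares stands in for the study's mixed-model regressions, and no HANDLS data are reproduced.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data (n = 117 women, as in the three matched groups).
rng = np.random.default_rng(0)
n = 117
df = pd.DataFrame({
    "oxodG": rng.normal(10, 2, n),       # serum 8-oxodG (arbitrary units)
    "BDNF": rng.lognormal(2, 1, n),      # serum BDNF
    "race": rng.choice(["AA", "white"], n),
    "hsCRP": rng.lognormal(1, 1.2, n),   # skewed, hence log10 in the formula
})
fit = smf.ols("oxodG ~ BDNF * race + np.log10(hsCRP)", data=df).fit()
print(fit.summary().tables[1])  # inspect the BDNF:race interaction term
```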
Association of 8-oxodG, BDNF, and hsCRP Differs by Race.
Here we examined serum BDNF levels in a subcohort of women from the HANDLS study with low (<3 mg/L), mid (>3-20 mg/L), or high (>20 mg/L) hsCRP levels. Each group contained 39 women, and the groups were matched on both age (mean age, 49.7 ± 8.1 years) and race (19 whites and 20 African Americans). Clinical characteristics of the cohort are described in Table 1. We wanted to examine the relationship of CVD risk factors with oxidative stress and inflammatory markers over the full range of hsCRP levels, as data from the Women's Health Study have shown that women with high hsCRP (≥10 mg/L) are at high risk, and women with very high hsCRP (>20 mg/L) at the very highest risk, of future cardiovascular events [20][21][22]. The mean BDNF levels were 8.37, 8.57, and 38.04 ng/mL for the low, mid, and high hsCRP groups, respectively (Table 1). Compared to the low hsCRP group, BDNF levels were significantly higher in the mid hsCRP group (p ≤ 0.0001) but not significantly different in the high hsCRP group.
We examined the association of BDNF with serum 8-oxodG adjusting for race and poverty status and found a significant, though small, interaction between BDNF and race. We found no significant associations with poverty status and retained the nonsignificant main effects for BDNF and race in the presence of their significant two-way interaction (BDNF × race: estimate = 6.1 × 10⁻⁷; 95% CI = 1.1 × 10⁻⁷ to 1.1 × 10⁻⁶; p = 0.017).
In an attempt to account for this interaction, we examined the inflammatory markers hsCRP, IL-18, TNF-α, and RAGE to determine whether they moderated the association of BDNF and race with 8-oxodG. Although there were no significant main effects, there were significant interactions involving BDNF × race. With increasing levels of hsCRP, the relationship between 8-oxodG and BDNF diverges by race: in whites, 8-oxodG increases as BDNF increases, whereas in AAs, 8-oxodG decreases as BDNF increases, suggesting that in AA women with elevated levels of hsCRP, BDNF may protect against CRP-induced DNA damage more efficiently than in white women.
Relationship between 8-oxodG and BDNF with Pulse Pressure Differs by Race.
As a follow-up to our previous finding of an association between pulse pressure and 8-oxodG, we also examined the association of BDNF, race, and pulse pressure with 8-oxodG and found a significant association (Figure 2). The overall level of 8-oxodG increases as pulse pressure increases [15]. We tested a variety of moderators in an attempt to explain these associations. However, cigarette smoking, illicit substance use, diabetes mellitus, symptoms of depression, and putative genetic markers for BDNF (rs6265 [Val66Met], rs1519480) and CRP (rs3093080, rs3093066, rs3093062, rs3093059, rs3093058, and rs3091244) were not significantly associated with 8-oxodG. None of these measures eliminated the significant associations of BDNF and pulse pressure with 8-oxodG. Given that our cohort is composed of women and that estrogen has been reported to be associated with BDNF [23], we also included menopause in the models. Menopause was not significantly associated with 8-oxodG and did not change the relationship of BDNF and pulse pressure with 8-oxodG, or the relationship between BDNF, race, and CRP.
BDNF Reduces CRP Levels and Inhibits CRP-Induced DNA Base Lesions.
Given the significant relationship between BDNF, hsCRP, and 8-oxodG in women, we investigated whether BDNF counteracts or potentiates oxidative stress induced by CRP. Initially, we examined several cell lines for expression of the BDNF receptor, TrkB. TrkB is expressed in human umbilical vein endothelial cells (HUVECs), human cardiac microvascular endothelial cells (HMVEC-Cs), and the hepatocarcinoma cell line HepG2 (Figure 3(a)). Consistent with the vascular expression of TrkB in the developing heart and the adult mouse, the highest expression of TrkB was observed in the HMVEC-C cells [6]. CRP is expressed in these cell lines as well, with the highest expression in HUVECs and HepG2 cells, which is consistent with hepatocytes being a major source of circulating CRP (Figure 3(a)). Therefore, we examined whether BDNF influenced levels of CRP in these cells. Treatment with BDNF at two different doses significantly decreased the mRNA and protein levels of endogenous CRP in both cell lines (Figures 3(b)-3(d)). In neurons, BDNF was recently reported to upregulate protein levels of the DNA repair protein APE1 [24]. However, BDNF did not affect levels of APE1 in endothelial or hepatocellular cells (Figures 4(a) and 4(b)). In addition, we found that CRP treatment did not significantly affect BDNF levels in endothelial cells (Figures 4(c) and 4(d)).
Previously, we reported that CRP induces ROS and 8-oxoG lesions in endothelial cells [15]. In order to examine whether BDNF can affect CRP-induced oxidative DNA damage, we treated HMVEC-C cells with highly purified recombinant CRP in the presence of recombinant BDNF. We chose HMVEC-Cs since these cells had the highest levels of TrkB expression (Figure 3(a)). As previously reported for HUVECs, CRP increased levels of 8-oxoG in HMVEC-Cs [15] (Figure 5). In addition, menadione (MEN) was used as a positive control since this agent is well known to induce 8-oxoG lesions (Figure 5). Similar levels of 8-oxoG lesions were observed between untreated and BDNF-treated cells, indicating that BDNF does not induce 8-oxoG (Figure 5; 4th bar in histogram). However, BDNF inhibited CRP-induced 8-oxoG lesions (Figure 5; compare 3rd and 5th bars in histogram). Since BDNF decreases CRP expression and blocks CRP-induced oxidative DNA damage, these data suggest that BDNF plays a protective role in inhibiting the oxidative damage effects of CRP on vascular cells.
Discussion
Here, we have investigated the relationship of BDNF with hsCRP and a marker of oxidative stress. In women at high risk for cardiovascular events, we found a significant relationship between BDNF and 8-oxodG that was different in AA women than in white women. In the high hsCRP group (hsCRP ≥ 20 mg/L), AA women had serum levels of BDNF that increased with decreasing levels of serum 8-oxodG. White women in the high hsCRP group (hsCRP ≥ 20 mg/L) had serum levels of BDNF that increased with increasing levels of serum 8-oxodG. We further investigated the relationship of BDNF, hsCRP, and 8-oxodG in vitro using cell culture models. We found that BDNF can reduce CRP mRNA and protein levels. BDNF also inhibited CRP-induced oxidative DNA damage. These data suggest that BDNF may have a protective role in the cardiovascular system. The idea that BDNF may have a cardioprotective role is consistent with other reports. For example, in mice, exercise increases BDNF levels and exercise is well known to be beneficial to both the cardiovascular system and the brain [25].
Other modulators of cellular stress such as dietary energy restriction, shear stress, and hypoxia also upregulate levels of BDNF [25][26][27][28]. Therefore, BDNF is postulated to be part of an adaptive response to stress that helps protect the brain and potentially the heart [25][26][27][28]. Here we found an association between the oxidative stress marker 8-oxodG, BDNF, and hsCRP. It is interesting to speculate that BDNF may be upregulated in response to the increasing levels of circulating hsCRP in these women. Consistent with this idea, high levels of BDNF were associated with cardiopulmonary fitness in patients with coronary artery disease [29].
Although these reports suggest that BDNF may have a protective role in the cardiovascular system, the role of BDNF in CVD is still very complex and not completely understood. For instance, low plasma levels of BDNF were found to be an independent predictor of a major coronary event in a Chinese cohort of patients with angina pectoris [10]. In a Danish study, low levels of plasma BDNF were associated with higher mortality [30]. Interestingly, this relationship was only observed in elderly women and not men. However, the opposite association has also been observed. In one study, plasma BDNF levels were positively correlated with several CVD risk factors and metabolic syndrome in white elderly subjects (mean age ∼70 yrs) [12]. Furthermore, in mice, BDNF has been shown to have opposing roles following myocardial infarction [9,31]. In one report, BDNF had a protective role in promoting cardiac remodeling after myocardial infarction [9]. In another report, BDNF negatively affected survival after myocardial infarction [31]. Therefore, these data suggest that we have a lot to learn about the precise role of BDNF with relation to CVD.
We found that at incrementally higher levels of hsCRP the relationship between 8-oxodG and BDNF differs by race. It is possible that hsCRP confers different relative risk in each group. In AA women with high levels of hsCRP, BDNF may help moderate oxidative stress induced by increased hsCRP [32]. It is possible that in white women with high levels of hsCRP, BDNF may not work as efficiently to help decrease oxidative stress levels or to counteract the effects of hsCRP. Why there are different racial effects remains to be determined. This subcohort was not designed to evaluate the role of poverty or other important social determinants of health (behavior, societal psychogenic stresses such as perceived discrimination, or education). There may be an underlying and unmeasured process that enhances or modulates biological processes, resulting in adverse physiologic outcomes. We found that BDNF inhibits CRP-induced 8-oxoG lesions, and it has also been reported that BDNF enhances DNA repair [24]. It may be that BDNF affects DNA repair capacity or DNA damage induction levels differently between whites and AAs with high hsCRP. Additionally, the role of BDNF in hsCRP-induced inflammation, such as the prothrombotic activity of hsCRP, needs to be investigated [32].
Previously it has been shown that the BDNF Val66Met polymorphism was associated with unstable angina in a Chinese cohort [33]. Therefore, we examined genotyping data from our cohort. Polymorphisms in BDNF (including Val66Met) or CRP, however, did not explain the relationships between BDNF, hsCRP, and 8-oxodG. This may be due to the smaller sample size of our study; larger sample sizes are required to detect variations in BDNF/hsCRP levels explained by different genotypes. Moreover, the BDNF Val66Met minor allele frequency is smaller in AAs (0.01) and whites (0.21) than in the Chinese population (0.49).
Very little has been reported about circulating levels of BDNF in AAs, regardless of sex. A recent report examining BDNF and cognitive decline from the Health ABC study suggests that in older women (mean age 74.9 years) BDNF levels differ by race [34]. Therefore, it will be important in future studies to further examine, in both whites and AAs, the role that BDNF plays in metabolism and cardiovascular health. As AA females have disproportionately high CVD incidence and mortality rates, it is all the more important to understand the differences between markers in whites and AAs and what they mean. This may prove important for evaluating the predictive value of risk and diagnostic factors for CVD in AAs versus whites and for determining the therapeutic efficacy of treatments for CVD.
Conclusions
Our findings suggest that there is an important and perhaps clinically relevant relationship between BDNF and CRP and that this relationship may modulate the amount of oxidative DNA damage and/or oxidative stress present in the vasculature. BDNF may attenuate the inflammatory or oxidative stress associated with high levels of CRP, which are directly associated with cardiovascular risk. This may be particularly important in women who have higher hsCRP levels, especially African American women. | 4,853.8 | 2015-05-26T00:00:00.000 | [
"Biology"
] |
What is an integrable quench?
Inspired by classical results in integrable boundary quantum field theory, we propose a definition of integrable initial states for quantum quenches in lattice models. They are defined as the states which are annihilated by all local conserved charges that are odd under space reflection. We show that this class includes the states which can be related to integrable boundary conditions in an appropriate rotated channel, in loose analogy with the picture in quantum field theory. Furthermore, we provide an efficient method to test integrability of given initial states. We revisit the recent literature of global quenches in several models and show that, in all of the cases where closed-form analytical results could be obtained, the initial state is integrable according to our definition. In the prototypical example of the XXZ spin-s chains we show that integrable states include two-site product states but also larger families of matrix product states with arbitrary bond dimension. We argue that our results could be practically useful for the study of quantum quenches in generic integrable models.
Introduction
The notion of solvability associated with integrable models usually refers to the possibility of diagonalizing the corresponding Hamiltonian analytically [1]. Although this is a remarkable property, comparison with experiments often requires going beyond the simple computation of the system's eigenspectrum and providing predictions for more general physical observables. Indeed, a tremendous effort has been made in the past decade to compute several nontrivial ground-state and thermal properties in different models, significantly improving our understanding of the underlying mathematical structures [2–16].
More recently, integrable models have offered new challenges to the community, as increasing attention has been devoted to the study of quantum quenches [17]. In this framework, one is interested in the unitary time evolution from an assigned initial state. It is evident that this problem is generally more complicated than those arising at thermal equilibrium, mainly because the initial state itself might be chosen arbitrarily, with an exponentially large number of degrees of freedom. This freedom hardly combines with the intrinsic rigidity of integrability, and one might legitimately wonder whether quench problems are within the reach of integrability-based techniques at all [18]. On the other hand, nearly ideal, out-of-equilibrium integrable models can now be realized experimentally in cold-atomic laboratories [19–22], elevating the relevance of these questions beyond purely academic curiosity (see also [23] for a volume of reviews on integrable models out of equilibrium).
Among the most elementary quantities appearing in the study of quantum quenches are the overlaps between the initial state and the eigenstates of the model. These are needed as intermediate building blocks for several calculations, both at finite and infinite times. Ever since the early stages of the literature on quantum quenches, several works have tackled the problem of their computation, with the first studies dating back almost ten years [24,25]. However, until recently the problem appeared too hard to attack, strongly limiting the possibility of analytical calculations.
Paralleling these studies, analytical results for the post-quench time evolution were also obtained in cases where the overlaps are not known. In the works [59,60] an exact computation of the Loschmidt echo was performed in the XXZ chain, starting from arbitrary families of two-site product states. These results could be derived by means of a construction relating the initial state to an integrable boundary condition in an appropriate rotated channel, where the space and time directions are exchanged. In fact, the connection between quantum quenches in one spatial dimension and boundary integrable quantum field theory (QFT) in the rotated channel has been known and exploited for a long time, both in the conformal [17,61–64] and massive cases [48,65–69].
This connection was in particular beautifully illustrated in the classical work of Ghoshal and Zamolodchikov [65]. Here, integrability of the boundary field theory was defined by the existence of an infinite number of conserved operators (or charges) which persist after the addition of a boundary term to the bulk Hamiltonian. Remarkably, the conditions on this boundary term to preserve integrability were explicitly translated into a constraint for the boundary state in the corresponding rotated picture: the latter has to be annihilated by an appropriately chosen (infinite) subset of the bulk conserved charges.
Inspired by these classical works, we propose a definition of integrable states for quantum quenches in lattice integrable systems. We identify integrable states as those which are annihilated by all local conserved charges of the Hamiltonian that are odd under space reflection. This definition naturally extends to those models defined on the continuum that are scaling limits of lattice ones. We introduce an efficient method to test the integrability of given initial states. By means of the latter, we show that in all of the recently considered cases where closed-form analytical results could be obtained, the initial state is integrable according to our definition.
Going further, we prove that integrable states include the ones which can be related to integrable boundaries in the rotated channel, where the space and time directions are exchanged [60]. This result, which completes the analogy with the picture in QFT, is highly non-trivial, as the lattice model does not display Lorentz invariance. We devote a detailed analysis to the prototypical case of XXZ spin-s chains and show that integrable states include two-site product states but also larger families of matrix product states (MPSs) with arbitrary bond dimension. A particular case of the latter is given by the states studied in [35,39,40].
It is important to stress that our findings should not be interpreted as a no-go theorem for obtaining exact results for non-integrable initial states. For example, an exact overlap formula was computed in [25] for the special case of the domain-wall state, which strictly speaking is not integrable. Further analytical results were obtained recently for quenches from the same state, for example regarding the computation of the return probability [70], and in the context of spin transport [71]. However, these problems are inherently inhomogeneous in space, and in the present work we focus on the homogeneous global quenches.
Regarding global problems, relevant examples concern the computation of the post-quench rapidity distribution functions of the quasi-particles in XXZ spin-s chains. Indeed, the latter can be computed exactly in many (non-integrable) cases [72]. This has been a recent important achievement, based on a mature understanding of the physical relevance of quasi-local charges in integrable models [73–82] and mainly motivated by studies regarding the validity of the so-called Generalized Gibbs Ensemble [83–97]. However, even in this problem further simplifications occur for integrable states, allowing one to reach closed-form analytical formulas via the so-called Y-system [72,94]. In any case, our work provides a unified point of view on the many exact results that have appeared in the past years. Furthermore, it also provides a useful starting point for the study of models where quench problems have not been investigated: if exact results are to be derived, one should first look at the integrable initial states, for which they are expected.
The organization of this article is as follows. In Sec. 2 we review the classical work of Ghoshal and Zamolodchikov and present the general setting. Integrable states are defined in Sec. 3, where we also discuss some of their properties. In Sec. 4 we show that states related to integrable boundaries in the rotated channel are integrable according to our definition, while in Sec. 5 we present a general construction of integrable matrix product states with arbitrary bond dimension. Sec. 6 is devoted to a critical review of several models considered in the recent literature. Finally, our conclusions are reported in Sec. 7.
Boundary states in integrable quantum field theory
A source of inspiration for the present paper is the classical work of Ghoshal and Zamolodchikov [65], where integrable QFTs in the presence of boundaries are studied. It is natural to start our discussion by briefly reviewing the aspects of that work that are directly relevant for us. Although our focus will be on lattice integrable models, this will allow us to introduce some constructions by analogy with the picture in boundary QFT.
We start with a Euclidean field theory defined on a semi-infinite plane x ∈ (−∞, 0), y ∈ (−∞, +∞). In the so-called Lagrangian approach it can be defined by introducing the corresponding action, which generally reads

$\mathcal{A} = \int_{-\infty}^{+\infty} dy \int_{-\infty}^{0} dx\, a(\varphi, \partial_{\mu}\varphi) + \int_{-\infty}^{+\infty} dy\, b(\varphi_B, d\varphi_B/dy) \,. \qquad\qquad (1)$

Here ϕ(x, y) represents a set of local bulk fields, while ϕ_B(y) = ϕ_B(x, y)|_{x=0} are the boundary fields. Analogously, a(ϕ, ∂_μϕ) and b(ϕ_B, dϕ_B/dy) are, respectively, the bulk and boundary action densities. We recall that from the action (1) one can introduce a Hamiltonian picture in Euclidean time, which is closely related to a 1+1 Lorentzian QFT through the Wick rotation. This procedure of analytic continuation between real and imaginary times is routinely performed in order to apply QFT results to equilibrium statistical mechanics [98]. As pointed out in [65], there are two natural ways to introduce the Hamiltonian picture. This is pictorially illustrated in Fig. 1.

Figure 1. Pictorial representation of a 2D Euclidean field theory in the presence of boundaries. There are two natural ways to introduce the Hamiltonian picture. In the first one, displayed in sub-figure (a), the Euclidean time direction is chosen to be parallel to the boundary. The physical Hilbert space is associated with a semi-infinite spatial line at fixed time (dashed black line). In the second way, displayed in sub-figure (b), the time direction is chosen to be orthogonal to the boundary. The latter plays the role of an initial condition, and is identified with a boundary initial state |B⟩.

First, one can identify the coordinate y in (1) with the Euclidean time direction. At each time, one can associate a physical Hilbert space with the semi-infinite line x ∈ (−∞, 0). The Hamiltonian is written as

$H = \int_{-\infty}^{0} dx\, \hat{h}(x) + \theta_B \,, \qquad\qquad (2)$

where ĥ(x) is the bulk Hamiltonian density, while θ_B is an additional boundary term. Alternatively, one can introduce a Hamiltonian picture in the rotated channel, by identifying the time direction with the coordinate x. In this case, the Hilbert space at fixed
time corresponds to the infinite line y ∈ (−∞, +∞), and the Hamiltonian coincides with the bulk one, without boundary terms:

$H = \int_{-\infty}^{+\infty} dy\, \hat{h}(y) \,. \qquad\qquad (3)$

The boundary at x = 0 now plays the role of an initial condition in Euclidean time, and can be identified with an initial boundary state |B⟩, in analogy with the classical constructions in conformal field theory [99]. Consider now the bulk action density obtained from (1) by removing the boundary. If the latter is integrable, there exists an infinite set of integrals of motion, built out of local fields T_{s+1}, Θ_{s−1} (and their counterparts T̄_{s+1}, Θ̄_{s−1}) of positive (negative) spin s + 1 and s − 1, satisfying appropriate continuity equations [100]. The spin index s takes values in an infinite subset S of the positive integers, and describes the transformation properties under Lorentz boosts. Following [65], a boundary field theory is defined to be integrable if infinitely many conservation laws survive the addition of the boundary term. More precisely, in this case the boundary Hamiltonian (2) has an infinite number of integrals of motion, with the index s now taking values in an infinite subset S_B ⊂ S: the boundary integrals of motion are obtained by selecting a subset of the bulk ones, and adding an appropriate boundary term to them. This is completely analogous to the situation encountered in lattice models with open boundary conditions [101,102]. Remarkably, the existence of an infinite number of conservation laws for (2) can be related to a condition on the boundary state |B⟩ in the rotated picture. If the Hamiltonian (2) is integrable, then [65]

$\left( T_{s+1} - \Theta_{s-1} - \bar{T}_{s+1} + \bar{\Theta}_{s-1} \right) |B\rangle = 0 \,, \qquad s \in S_B \,. \qquad\qquad (6)$

The relevance of (6) for our purposes is that it represents a condition of integrability which can be tested by relying uniquely on the knowledge of the initial boundary state and of the bulk conservation laws of the theory. This provides the main source of inspiration for our work. The application of the ideas of [65] to lattice models is not evident. A one-dimensional quantum model is always equivalent to a two-dimensional model of classical statistical physics and, similarly to the QFT setting, it is always possible to define a "rotated channel". However, there is typically no Euclidean invariance on the 2D lattice, and the Hamiltonians (or transfer matrices) of the rotated channel differ from the original physical ones. Moreover, even if the identification of the initial states with the integrable boundary conditions of the rotated channel can be carried out, this construction is non-trivial and in any case model dependent.
On the other hand, a definition analogous to (6) can be straightforwardly introduced in a general way in lattice models, where conservation laws are well known in terms of explicit operators on the physical Hilbert space. In the next subsection we review some basic facts on lattice integrable models which are necessary in order to introduce and discuss our definition of integrable states. The latter will be finally presented in Sec. 3.
Lattice integrable models
We consider a generic one-dimensional model defined on the Hilbert space H = h_1 ⊗ · · · ⊗ h_L. Here h_j is a local d-dimensional Hilbert space associated with site j, while L is the spatial length of the system. The Hamiltonian is written as

$H = \sum_{j=1}^{L} \hat{h}_j \,, \qquad\qquad (7)$

where ĥ_j is a local operator. In the following, we will always assume periodic boundary conditions. In an integrable model there exists an infinite number of conserved operators commuting with the Hamiltonian (7). Furthermore, these can be written as sums over the chain of local densities, and are therefore called local charges. The latter can be obtained by a standard construction within the so-called algebraic Bethe ansatz [2,103], which we now briefly sketch.
One of the main objects of this formalism is the R-matrix R_{1,2}(u). The latter is an operator acting on the tensor product of two local spaces h_1 ⊗ h_2 (possibly of different dimensions), where u is an arbitrary complex number, the spectral parameter. The R-matrix has to satisfy a set of non-linear relations known as the Yang-Baxter equations,

$R_{1,2}(u-v)\, R_{1,3}(u)\, R_{2,3}(v) = R_{2,3}(v)\, R_{1,3}(u)\, R_{1,2}(u-v) \,. \qquad\qquad (8)$

From the R-matrix, another fundamental object can be constructed, namely the transfer matrix

$\tau(u) = \mathrm{tr}_0 \left[ T_0(u) \right] \,, \qquad\qquad (9)$

where the trace is over an auxiliary space h_0 and where we introduced the monodromy matrix

$T_0(u) = R_{0,L}(u)\, R_{0,L-1}(u) \cdots R_{0,1}(u) \,. \qquad\qquad (10)$

By means of (8), one can show that transfer matrices with different spectral parameters commute,

$\left[ \tau(u), \tau(v) \right] = 0 \,. \qquad\qquad (11)$

Using (11), an infinite set of commuting operators is readily obtained as

$Q_{n+1} \propto \partial_u^{\,n} \ln \tau(u) \big|_{u=\lambda^*} \,. \qquad\qquad (12)$

Importantly, λ* can be chosen in such a way that the operators Q_n are written as sums over the chain of local operators (or densities). For example, within our conventions, the charges Q_n are such that in the Heisenberg chains the corresponding densities span n sites. One can then define an integrable Hamiltonian, proportional to the first non-trivial charge, as H_L ∝ Q_2 (13). By construction, H_L is of the form (7), and has an infinite number of local charges Q_n. For the lattice models considered in this work, local conserved charges can be divided into two subsets: the even ones, Q_{2n}, and the odd ones, Q_{2n+1}, with n ≥ 1. These sets display different behavior under spatial reflection, namely‡

$\Pi\, Q_{2n}\, \Pi = Q_{2n} \,, \qquad \Pi\, Q_{2n+1}\, \Pi = -Q_{2n+1} \,, \qquad\qquad (14)$

where Π is the reflection operator,

$\Pi\, |k_1, k_2, \ldots, k_L\rangle = |k_L, \ldots, k_2, k_1\rangle \,. \qquad\qquad (15)$

Here we introduced the notation

$|k_1, \ldots, k_L\rangle = |k_1\rangle_1 \otimes \cdots \otimes |k_L\rangle_L \,, \qquad\qquad (16)$

where |k⟩_j are the basis vectors of the local space h_j, with k = 1, . . . , d.
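Since the objects (8)-(12) are the backbone of everything that follows, a minimal numerical sketch may be useful. The Python code below builds the transfer matrix of the XXZ spin-1/2 chain as a dense matrix on a short periodic chain, using the six-vertex R-matrix in the normalization quoted later in Eq. (46), and verifies the commutativity (11) as well as the fact that τ(0) acts as a one-site shift (the regularity property used in Sec. 3.1). The index conventions and the dense-matrix approach are our own choices for the illustration, not prescriptions from the text.

```python
import numpy as np
from functools import reduce

L, delta = 6, 1.3                 # even chain length, anisotropy Delta > 1
eta = np.arccosh(delta)

def R(u):
    """Six-vertex R-matrix on aux (x) phys, normalized so that R(0) = P."""
    b = np.sinh(u) / np.sinh(u + eta)
    c = np.sinh(eta) / np.sinh(u + eta)
    return np.array([[1, 0, 0, 0],
                     [0, b, c, 0],
                     [0, c, b, 0],
                     [0, 0, 0, 1]], dtype=complex)

def site_op(op, j):
    """Embed a 2x2 operator at site j of the L-site chain."""
    mats = [np.eye(2, dtype=complex)] * L
    mats[j] = op
    return reduce(np.kron, mats)

def transfer_matrix(u):
    """tau(u) = tr_0 [ R_{0,L}(u) ... R_{0,1}(u) ], dense on the 2^L space."""
    r = R(u).reshape(2, 2, 2, 2)  # indices: [a_out, s_out, a_in, s_in]
    dim = 2 ** L
    # operator-valued 2x2 matrix in the auxiliary space, initialized to identity
    M = [[np.eye(dim, dtype=complex) * (a == b) for b in range(2)] for a in range(2)]
    for j in range(L):            # multiply one Lax operator per site
        M = [[sum(site_op(r[a, :, k, :], j) @ M[k][b] for k in range(2))
              for b in range(2)] for a in range(2)]
    return M[0][0] + M[1][1]      # trace over the auxiliary space

t1, t2 = transfer_matrix(0.40), transfer_matrix(-0.70)
print(np.linalg.norm(t1 @ t2 - t2 @ t1))          # commutativity (11): ~1e-13

t0 = transfer_matrix(0.0)                         # regularity: tau(0) = U
sz = np.diag([1.0, -1.0]).astype(complex)
moved = t0 @ site_op(sz, 0) @ np.linalg.inv(t0)
print(min(np.linalg.norm(moved - site_op(sz, 1)),
          np.linalg.norm(moved - site_op(sz, L - 1))))  # one-site shift: ~0
```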
We are interested in global quantum quenches where the Hamiltonian driving the time evolution is integrable. For each model we will consider several families of initial states and, in order to allow for a general discussion, we will represent them as matrix product states (MPSs) [105]. MPSs are a class of states which display a number of important properties: they have exponentially decaying correlations and finite entanglement between two semi-infinite subsystems. Furthermore, it is known that MPSs can approximate ground states of gapped local Hamiltonians with arbitrary precision [106]. This provides a physical motivation to consider them as initial states.

‡ One of the simplest ways to prove the reflection properties (14) is by using the so-called boost operator B, which connects the charges through the formal relation Q_{k+1} = [B, Q_k] [104]. The boost operator is manifestly odd under space reflection, and this guarantees the alternating signs appearing in (14). Note that the overall normalization of the transfer matrix affects the parity properties of the charges: a rapidity-dependent overall factor introduces constant additive terms. However, in the models considered in this work there is always a natural normalization in which the reflection properties (14) hold.
We recall that a generic (periodic) MPS can be defined as

$|\Psi_0\rangle = \sum_{k_1,\ldots,k_L=1}^{d} \mathrm{tr}_0 \left[ A^{(k_1)}_1 A^{(k_2)}_2 \cdots A^{(k_L)}_L \right] |k_1, \ldots, k_L\rangle \,. \qquad\qquad (17)$

Here d is the dimension of the physical spaces h_j, while the A^{(k)}_j are d_j × d_{j+1} matrices, where the d_j are arbitrary positive integers, called bond dimensions. The trace is over the Hilbert space h_0 with dimension d_0. In a finite chain, every vector of the Hilbert space H can be represented in the form (17) [107]. It is common practice, however, to refer to a state as an MPS if the bond dimensions in the corresponding representation (17) do not grow with the system size L.
Finally, it is useful to introduce the following definition: we say that a state |Ψ_0⟩ is p-periodic if p is the smallest positive integer such that

$U^{p}\, |\Psi_0\rangle = |\Psi_0\rangle \,, \qquad\qquad (18)$

where U is the one-site shift (translation) operator along the chain (19). In order to ensure a proper thermodynamic limit, we will restrict to initial states that are p-periodic, with p arbitrary but finite (and not increasing with L). The constraints imposed so far (namely, finite bond dimensions and p-periodicity) are extremely loose, and allow one to consider a very large family of initial states. Integrable states will be defined in the following as a small subset of the latter.
Defining integrable states
We can now introduce our definition of integrable states, guided by the analogy with the picture in QFT outlined in Sec. 2.1. As we already stressed, the identification of initial states with boundary conditions in an appropriate rotated channel requires preliminary work in lattice models, and the construction can be model dependent. However, it is possible to introduce an immediate and general definition of integrable states in terms of annihilation by a subset of the local conserved charges {Q_n}_{n=1}^∞, similarly to Eq. (6). We propose the following definition: an initial state is integrable if it is annihilated by all local charges of the model that are odd under space reflection. In the lattice models that we consider these coincide with the set {Q_{2n+1}}, with n ≥ 1, cf. Eq. (14). Therefore we require

$Q_{2k+1}\, |\Psi_0\rangle = 0 \,, \qquad k \geq 1 \,, \qquad\qquad (20)$

in any finite volume L where the charge Q_{2k+1} is well defined. Typically Q_n is a sum along the chain of local operators spanning n sites, so that (20) is meaningful if (2k + 1) ≤ L. We stress that, even though the definition (20) is directly inspired by [65], the analogy with QFT is a loose one, and its usefulness should therefore be appreciated a posteriori. In particular, Eq. (20) seems to hold for all the initial states for which closed-form analytical results could be obtained, at least in the models considered in this work. Furthermore, we will see in Sec. 4 that the class of states satisfying (20) includes all of those which can be related to integrable boundaries in the rotated channel. These facts, together with additional considerations presented in the following, constitute strong evidence that (20) provides a meaningful and useful definition.
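As an illustration of the definition (20) — our own, self-contained numerical check, not a construction taken from the text — the sketch below works at the isotropic point Δ = 1, where the first odd charge is known to take the form Q_3 ∝ Σ_j (σ_j × σ_{j+1}) · σ_{j+2}; the overall normalization is immaterial for an annihilation test. The Néel state, which is integrable by the results reviewed below, is annihilated exactly in finite volume, while a random vector is not.

```python
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
pauli = [sx, sy, sz]

L = 8                                   # even chain length, periodic b.c.

def op_at(placed):
    """Tensor product with the given 2x2 operators at the listed sites."""
    mats = [np.eye(2, dtype=complex)] * L
    for site, op in placed:
        mats[site] = op
    return reduce(np.kron, mats)

# Q3 ~ sum_j (sigma_j x sigma_{j+1}) . sigma_{j+2}: first odd charge at Delta = 1
eps = {(0, 1, 2): 1, (1, 2, 0): 1, (2, 0, 1): 1,
       (0, 2, 1): -1, (2, 1, 0): -1, (1, 0, 2): -1}
Q3 = sum(sgn * op_at([(j, pauli[a]), ((j + 1) % L, pauli[b]),
                      ((j + 2) % L, pauli[c])])
         for j in range(L) for (a, b, c), sgn in eps.items())

up, dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])
neel = reduce(np.kron, [up if j % 2 == 0 else dn for j in range(L)]).astype(complex)
rng = np.random.default_rng(1)
rand = rng.normal(size=2 ** L) + 1j * rng.normal(size=2 ** L)
rand /= np.linalg.norm(rand)

print(np.linalg.norm(Q3 @ neel))        # ~1e-14: the Neel state is annihilated
print(np.linalg.norm(Q3 @ rand))        # O(1): a generic state is not
```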
Based on the experience with the known cases we expect that in new models and new quench situations analytic solutions can be found in the integrable cases. Therefore, it is important to develop tools to efficiently test the integrability of an initial state. In the following we provide such methods, which are connected directly to the definition (20), and are independent of the knowledge of the overlaps or other composite objects such as the Loschmidt echo.
We consider a model defined by a transfer matrix τ(u), with the so-called regularity condition

$\tau(0) = U \,, \qquad\qquad (21)$

where U is the translation operator (19). Furthermore, we fix the constants of proportionality in (12) as

$Q_{n+1} = \alpha_n\, \partial_u^{\,n} \ln \tau(u) \big|_{u=0} \,, \qquad\qquad (22)$

where the α_n are chosen such that the charges Q_{n+1} are Hermitian. From this definition, one can write down the following formal representation:

$\tau(u) = U \exp \left[ \sum_{n \geq 1} \frac{u^{n}}{n!\, \alpha_n}\, Q_{n+1} \right] \,. \qquad\qquad (23)$

Integrability of a p-periodic MPS |Ψ_0⟩ with finite bond dimension is equivalent to requiring

$\langle \Psi_0 |\, Q_{2k+1}^{2}\, | \Psi_0 \rangle = 0 \,, \qquad k \geq 1 \,, \qquad\qquad (24)$

where we used that the charges are Hermitian operators. In order to test (24), we introduce the quantities G(u) and G̃(u) (Eqs. (25) and (26)), built from normalized expectation values on |Ψ_0⟩ of the product of two transfer matrices with opposite spectral parameters; here p is the periodicity of |Ψ_0⟩. The motivation for introducing a product of two transfer matrices is to cancel those parts of the transfer matrix which are even and therefore irrelevant to the integrability condition. Our statement is that (24) holds if and only if the condition (27) on G(u) is satisfied in a neighborhood of u = 0. In the thermodynamic limit this leads to the analogous condition (28) on G̃(u), valid for |u| < K, for some K > 0. Indeed, if the initial state is annihilated by all the odd charges in a given finite volume L, then the action of τ(u)τ(−u) on |Ψ_0⟩ reduces to that of U² + O(u^L), and we obtain (27) and hence (28). The proof of the other direction of the statement is also easy, but for the sake of clarity it is reported in Appendix A. Both functions G(u) and G̃(u) can be efficiently computed on MPSs of finite bond dimension by standard techniques, as we will also show explicitly in Sec. 3.2. As a consequence, Eqs. (27) and (28) provide an efficient test for the integrability of given initial states.
An alternative definition of integrability can be given by requiring

$\tau(u)\, |\Psi_0\rangle = \Pi\, \tau(u)\, \Pi\, |\Psi_0\rangle \,, \qquad\qquad (29)$

where Π is the reflection operator (15). First, note that (29) directly implies two-site shift invariance,

$U^{2}\, |\Psi_0\rangle = |\Psi_0\rangle \,, \qquad\qquad (30)$

where we used τ(0) = U together with Π U Π = U^{−1}. Next, the annihilation by the odd charges follows from (29) simply by Taylor expanding τ(u) at u = 0. Since (29) implies two-site shift invariance, it is a stronger condition than (20). In fact, based on the analytic properties of the transfer matrix, it can be argued that (29) follows from (20) together with two-site invariance. However, the question of whether the latter property is actually a consequence of the annihilation by all odd charges is an open problem.
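Continuing the earlier sketch (and reusing its L and transfer_matrix), the condition (29) can be tested numerically on concrete states. Below, a minimal check on the Néel state — a two-site product state, hence expected to pass exactly by the results of Sec. 4 — and on a random vector, which fails generically. The implementation of the reflection operator Π is our own convention for the illustration.

```python
import numpy as np
from functools import reduce

# Reuses L and transfer_matrix(u) from the previous sketch.
def reflection():
    """Reflection operator Pi of Eq. (15): spin at site j -> site L+1-j."""
    dim = 2 ** L
    P = np.zeros((dim, dim), dtype=complex)
    for idx in range(dim):
        bits = [(idx >> (L - 1 - j)) & 1 for j in range(L)]   # site 0 first
        ridx = sum(bit << (L - 1 - j) for j, bit in enumerate(reversed(bits)))
        P[ridx, idx] = 1.0
    return P

Pi = reflection()
up, dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])
neel = reduce(np.kron, [up if j % 2 == 0 else dn for j in range(L)]).astype(complex)
rng = np.random.default_rng(2)
rand = rng.normal(size=2 ** L) + 1j * rng.normal(size=2 ** L)
rand /= np.linalg.norm(rand)

t = transfer_matrix(0.35)
print(np.linalg.norm(t @ neel - Pi @ t @ Pi @ neel))  # expected ~0: (29) holds
print(np.linalg.norm(t @ rand - Pi @ t @ Pi @ rand))  # expected O(1): fails
```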
Transfer matrix evaluation of the integrability condition
We consider a translationally invariant MPS, defined by site-independent matrices A^{(k)} as in Eq. (31), and address the computation of the functions G(u) and G̃(u) introduced in the previous section. It is convenient to employ a graphical notation close to the one routinely used in the tensor-network literature; this will help us reduce the level of technicality of our discussion. First, we represent the R-matrix as a four-legged vertex (32), so that the transfer matrix (9) acquires the graphical representation (33): the horizontal line denotes the auxiliary space h_0, the trace over h_0 is denoted with dashes, and the vertical lines are in one-to-one correspondence with the sites along the chain.
Using this notation the functions G(u) andG(u) in Eq. (25) and (26) can be represented as partition functions of appropriate 2D statistical physical models. This is pictorially depicted in Fig. 2, where filled boxes represent the matrices appearing in the MPS, whereas the internal lines stand for the insertion of the transfer matrices τ (u) and τ (−u).
The partition function of the model displayed in Fig. 2 can be evaluated by building an alternative transfer matrix (generally called the Quantum Transfer Matrix) which acts in the horizontal direction. The calculations are standard, and analogous to those reported, for example, in Ref. [60]. In particular, it is easy to express G(u) in terms of powers of this Quantum Transfer Matrix (Eqs. (34)-(35)), where N = ⟨Ψ_0|Ψ_0⟩, and where A_1 and A_2 are the spaces on which the matrices A^{(j)} act; here Ā^{(i)} indicates the complex conjugate of A^{(i)}. An analogous computation can be carried out for G̃(u), and also for MPSs that are p-periodic.
The integrability conditions (27) translate into a constraint for the eigenvalues {Λ_j(u)} of the Quantum Transfer Matrix (Eq. (36)). Comparing with the special point u = 0, one obtains the norm N in terms of the same eigenvalues. It can be seen that (36) is satisfied when the eigenvalue condition (37) holds for each relevant eigenvalue Λ_j(u).
Condition (37) implies strict annihilation in finite volume. If we only require asymptotic annihilation in the thermodynamic limit, then it is enough for (37) to hold for the eigenvalues with maximal magnitude. However, we suggest using exact annihilation as the definition of integrability, and we will show that all the cases previously studied in the literature fit this definition. Importantly, for a given MPS the matrix (35) has finite dimension (growing with the bond dimension of the MPS). Accordingly, its eigenvalues can be investigated either analytically or numerically, and it is therefore immediate to test the integrability of given MPSs. In Appendix B we report two examples where the integrability condition is tested, both for an integrable and for a non-integrable initial state.
It is a relevant question to find all integrable MPSs for a given model. One strategy would be to write down a set of conditions on the matrices A^{(j)} appearing in (31) such that the eigenvalues of the Quantum Transfer Matrix (35) have the necessary properties. However, this approach appears to be extremely complicated and not viable, and we have not succeeded in solving this problem in general. Nevertheless, in the following section we present a general method to construct integrable MPSs of arbitrary bond dimension. We will show later that the integrable MPSs studied in the recent literature all fit into our framework. The question of whether all integrable MPSs can be generated by our method is left for future work.
The pair structure
The integrability condition (20) has immediate consequences for the overlaps between integrable states and the eigenstates of the Hamiltonian (which in the following will also be called Bethe states). For a generic model, it is well known that the latter can be parametrized by sets of quasi-momenta or rapidities {λ_j}_{j=1}^N, where N is the number of quasi-particles associated with the eigenstate. Any initial state can be written as

$|\Psi_0\rangle = \sum_{\{\lambda_j\}_j} c_{\{\lambda_j\}_j}\, |\{\lambda_j\}_j\rangle \,, \qquad\qquad (38)$

where the sum is over all the sets of rapidities, and where c_{{λ_j}_j} are the corresponding overlaps with the initial state. Given a local charge Q_r, its action on Bethe states is

$Q_r\, |\{\lambda_j\}_j\rangle = \left[ \sum_{j=1}^{N} q_r(\lambda_j) \right] |\{\lambda_j\}_j\rangle \,, \qquad\qquad (39)$

where q_r(λ) is some known function. As a consequence, an integrable state can have a non-vanishing overlap c_{{λ_j}_j} only with the Bethe states |{λ_j}_j⟩ such that

$\sum_{j=1}^{N} q_{2n+1}(\lambda_j) = 0 \,, \qquad\qquad (40)$

for all n such that Q_{2n+1} exists in the chain of length L. In an interacting model this is a very strong constraint on the rapidities, because they also have to satisfy a set of additional quantization conditions known as Bethe equations. Accordingly, only special configurations are consistent with (40).
It follows from the space-reflection properties that the functions q_{2n+1}(λ) are odd with respect to λ. Depending on the specific model chosen, there can be a finite set S_λ of rapidity values such that q_{2n+1}(λ) = 0 for λ ∈ S_λ. Accordingly, the constraint (40) is obviously satisfied by states corresponding to a set of the form

$\{\lambda_j\}_{j=1}^{N} = \{\lambda_1, -\lambda_1, \lambda_2, -\lambda_2, \ldots, \lambda_R, -\lambda_R\} \,, \qquad\qquad (41)$

where R is some positive integer (possibly supplemented by rapidities belonging to S_λ); the short check below makes this explicit. Eq. (41) encodes the so-called pair structure. More precisely, we say that an initial state has the pair structure if it has non-vanishing overlaps only with Bethe states corresponding to sets of rapidities of the form (41). The presence of the pair structure was already observed in the seminal works [28,29,34,46,47] on the Lieb-Liniger and XXZ models, and has been repeatedly encountered in the recent literature for different initial states and systems. In the context of boundary QFT, it leads to the specific "squeezed" form of the boundary states [65], see also [31,38,42,67,108–113]. Furthermore, the pair structure has immediate consequences for the entropy of the steady state arising after a quench [57,58,114], in particular for its relation with the so-called diagonal entropy [114–120]. Therefore, it is an important question whether the pair structure follows in general from (40). In passing, we note that in the context of QFT it has been argued that for interaction quenches near criticality the pair structure does not occur: the initial state (the ground state of a critical Hamiltonian) does not consist uniquely of pairs of particles with opposite momenta [18], see also the related work [121].
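For completeness, here is the one-line worked check (our own addition) that paired sets satisfy (40); it uses only the oddness of q_{2n+1}(λ) stated above, and rapidities belonging to S_λ drop out trivially since q_{2n+1} vanishes there:

```latex
% Pairs of opposite rapidities cancel in every odd-charge eigenvalue:
\[
\sum_{j=1}^{N} q_{2n+1}(\lambda_j)
  = \sum_{k=1}^{R} \big[\, q_{2n+1}(\lambda_k) + q_{2n+1}(-\lambda_k) \,\big]
  = \sum_{k=1}^{R} \big[\, q_{2n+1}(\lambda_k) - q_{2n+1}(\lambda_k) \,\big]
  = 0 \,.
\]
```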
In a generic interacting model the Bethe equations are algebraically independent of (40) and a fine tuning of the couplings and/or an interplay with the volume parameter is needed to find exceptions to the pair structure. Such fine tuned examples can be found in the XXZ model at the special points ∆ = cos(pπ/q), including the free fermionic point ∆ = 0, which will be treated in more detail in section 6.2. In the case of the isotropic Heisenberg chain a rigorous proof of the pair structure can be given using a result of [122]. Since the argument is very simple, we present it here for completeness.
For an arbitrary Bethe state we have the relation Π|{λ}_N⟩ ∼ |{−λ}_N⟩. Taking the scalar product of the two sides of Eq. (29) with the Bethe state |{λ}_N⟩, we see that the overlaps can be non-zero only if

$\tau(u, \{\lambda\}_N) = \tau(u, \{-\lambda\}_N)$

for all u, where τ(u, {λ}_N) is the eigenvalue of the transfer matrix corresponding to |{λ}_N⟩. It was proven in [122] that the spectrum of the transfer matrix τ(u) is simple: if two different eigenstates share the same eigenvalue, then they belong to the same SU(2) multiplet. This implies that |{λ}_N⟩ and Π|{λ}_N⟩ are related by an SU(2) rotation. Since Π does not change the S^z eigenvalue, we conclude that Π|{λ}_N⟩ ∼ |{λ}_N⟩, and the pair structure holds.
To conclude this section, we stress that while the pair structure might or might not hold for a specific model (and has to be investigated separately), the conditions (20) and (40) are general unifying properties of integrable initial states.
Relation with integrable boundaries
In the previous sections we have introduced the definition of integrable states in terms of the bulk conserved charges of the Hamiltonian. This definition was inspired by the picture in QFT, where integrable boundaries are directly related to integrable initial states. It is then a natural question to ask, whether such a relation holds also in lattice models, and whether our definition (20) is compatible with it.
In this section we prove one direction of this relation: we present a method to relate integrable boundary conditions to initial states, and show that states obtained in this way indeed satisfy the condition (20). This construction naturally produces local two-site product states, which are presented in Sec 4.1. In Section 5 we show how integrable MPSs can also be taken into account in this framework.
The general construction
The construction relating integrable initial states to integrable boundaries relies on the lattice path integral evaluation of a partition function of the form

$Z = \langle \Psi_0 |\, e^{-w H}\, | \Psi_0 \rangle \,. \qquad\qquad (43)$

In QFT the exchange of the time and space directions is straightforward due to the Euclidean invariance of the path integral. However, this is less immediate in lattice models, where space is discrete and time is continuous. The standard method to circumvent this problem is to introduce a discretization in the time direction and then to develop a lattice path integral for the resulting partition function. This is achieved by employing a Trotter decomposition,

$e^{-wH} = \lim_{N \to \infty} \left( 1 - \frac{w}{N}\, H \right)^{N} \,. \qquad\qquad (44)$

In integrable models the Hamiltonian H can be related to the transfer matrices, and this makes it possible to introduce the lattice path integrals. For the sake of clarity, here we focus on the XXZ spin-1/2 model, defined by the Hamiltonian

$H = \sum_{j=1}^{L} \left[ \sigma^{x}_{j} \sigma^{x}_{j+1} + \sigma^{y}_{j} \sigma^{y}_{j+1} + \Delta \left( \sigma^{z}_{j} \sigma^{z}_{j+1} - 1 \right) \right] \,, \qquad\qquad (45)$

where the σ^α_j are the Pauli matrices, and periodic boundary conditions are assumed, σ^α_{L+1} = σ^α_1. In this case, the R-matrix is

$R(u) = \frac{1}{\sinh(u+\eta)} \begin{pmatrix} \sinh(u+\eta) & 0 & 0 & 0 \\ 0 & \sinh(u) & \sinh(\eta) & 0 \\ 0 & \sinh(\eta) & \sinh(u) & 0 \\ 0 & 0 & 0 & \sinh(u+\eta) \end{pmatrix} \,, \qquad\qquad (46)$

where η = arccosh(∆). The normalization of the R-matrix is such that the corresponding transfer matrix both satisfies the regularity condition (21) and yields charges Q_n with the correct even/odd behavior (14). Moreover, it satisfies a so-called crossing relation (47), which relates the partial transpose R^{t_0}_{0,1}(u) in the space h_0 to the R-matrix itself at a shifted spectral parameter, conjugated by a gauge matrix V_0 acting on h_0 and multiplied by a scalar function γ(u). In particular, for the R-matrix (46) one has γ(u) = sinh(u)/sinh(u + η) and V_0 = σ^y. While here we focus on the XXZ spin-1/2 chain, the construction described in this section can be carried out straightforwardly for all the integrable models whose R-matrix satisfies a crossing relation, and is thus very general. An example is given by higher-spin versions of the Hamiltonian (45), as we will see in the following. On the other hand, the generalization of this construction to models for which there is no relation of the type (47) (such as the SU(3)-invariant spin chain) is not evident and needs further research.
We follow the derivation of [60], to which we refer for all the necessary technical details, providing here only the main ideas. For simplicity, we start by considering a two-site product initial state of the form

$|\Psi_0\rangle = |\psi_0\rangle^{\otimes L/2} \,, \qquad |\psi_0\rangle \in h_1 \otimes h_2 \,, \qquad\qquad (48)$

while MPSs with arbitrary bond dimension will be considered in the next section. It is important to recall that, similarly to (2), one can supplement the Hamiltonian (45) with integrable open boundaries; these are generated by the two-row (boundary) transfer matrices (49)-(51), where the ξ_j are arbitrary inhomogeneities. The 2×2 matrices K^±_0(u) are boundary operators acting on the auxiliary space h_0, which in this case read K^±(u) = K(u ± η/2; ξ_±, κ_±, τ_±), with the general K-matrix given in Eq. (52); here ξ, κ and τ are arbitrary parameters. The K-matrix has to satisfy the so-called reflection (or boundary Yang-Baxter) equations (53), which guarantee the commutativity of the two-row transfer matrices (49).
We now have all the elements to evaluate the partition function (43) from the Trotter decomposition (44). In the particular case of the XXZ Heisenberg chain, the discretized evolution operator can be written in terms of products of transfer matrices (Eq. (54)) [123,124], where τ(λ) is the usual periodic transfer matrix and w* = (w sinh η)/2. Eqs. (44) and (54) can be interpreted as follows: the continuous time evolution can be approximated by a discrete one, where the elementary time step is obtained from the application of a two-row transfer matrix. In order to parallel the discussion of Sec. 2.1, we consider Euclidean time.
From the Trotter-Suzuki decomposition (44) and (54) it is evident that the computation of (43) reduces to a classical partition function in two dimensions. This is illustrated in Fig. 3, where we made use of the graphical notation introduced in Sec. 3.2. From Fig. 3 the analogy with the field theory case displayed in Fig. 1 becomes evident. Indeed, in the two-dimensional lattice the Euclidean time direction can now be chosen parallel to the boundaries, as shown in Fig. 4. Accordingly, the partition function can be thought of as generated by iterative application of an open transfer matrix in the rotated channel, which implements the discrete time evolution.
The question now is: for which initial states is the open transfer matrix generating the time evolution in the rotated channel integrable (cf. Fig. 4)? The answer can be found as follows. By working out the algebraic steps encoded in Figs. 3 and 4 (see Ref. [60]), the computation of (43) reduces to the problem of finding the leading eigenvalue at u = 0 of an operator T(u) acting in the rotated channel (Eq. (55), with the auxiliary objects defined in Eq. (56); a pictorial representation of the operator (55) accompanies these equations). Then, we find that the initial state is related to integrable boundaries if T(u) is proportional to an appropriate integrable open transfer matrix τ_B(u). Going further, one can easily infer that this can happen only if the initial state is written in terms of the matrix elements of K(u) in Eq. (52). Indeed, following the steps sketched above (and reported in full detail in Ref. [60]), one can explicitly write down a parametrization of the family of initial states related to integrable boundaries. For example, in the XXZ spin-1/2 chain, the two-site building block is expressed through the functions k_ij(u) of Eq. (52) evaluated at u = −η/2 (Eqs. (57)-(58)). Eq. (58) is the final result of this construction. Namely, we have identified a family of initial states which play the role of the boundary states in QFT, cf. Sec. 2.1. In the following, we will sometimes refer to them as lattice boundary states. Once again, the derivation presented in this section is completely general, provided that the R-matrix satisfies the crossing relation (47) for an appropriate matrix V. Repeating the steps above, one can write down an expression for the boundary states in a generic model, which reads, up to a global numerical factor,

$|\Psi_0\rangle = \bigotimes_{j=1}^{L/2} \left( \sum_{i,k=1}^{d} \left[ K(-\eta/2)\, V \right]_{i,k} |i\rangle_{2j-1} \otimes |k\rangle_{2j} \right) \,, \qquad\qquad (59)$

where we employed the notation (16) for the basis vectors of the local Hilbert spaces h_j. Here [K(−η/2)V]_{i,k} are the matrix elements of the product K(−η/2)V, while d is the dimension of the spaces h_j. It is straightforward to check that (59) reduces to (57) for the spin-1/2 chain (where V = σ^y), while an explicit expression for the spin-1 case will be provided in the following.
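As a concrete (and partly hypothetical) illustration of the structure of Eq. (59), the snippet below — reusing L, transfer_matrix and reflection from the earlier sketches — assembles a two-site product state from a 2×2 matrix K with placeholder entries, using V = σ^y, and verifies the condition (29). Since, as noted in the next paragraph, for the spin-1/2 chain any two-site state can be reached by this parametrization, an arbitrary K suffices for the test; the specific entries carry no meaning.

```python
import numpy as np
from functools import reduce

# Reuses L, transfer_matrix(u) and reflection() from the earlier sketches.
K = np.array([[0.8, 0.3], [-0.4, 1.1]], dtype=complex)  # placeholder K(-eta/2)
V = np.array([[0, -1j], [1j, 0]], dtype=complex)        # V = sigma^y
psi0 = (K @ V).reshape(4)                   # two-site block, components [KV]_{ik}
Psi0 = reduce(np.kron, [psi0] * (L // 2))   # |psi0> tensored over L/2 blocks

t, Pi = transfer_matrix(0.35), reflection()
print(np.linalg.norm(t @ Psi0 - Pi @ t @ Pi @ Psi0) / np.linalg.norm(Psi0))  # ~0
```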
In the next section we will prove that boundary states are integrable according to our definition, as they satisfy (20). Incidentally, we note that in the special case of the spin-1/2 chain any two-site state can be parametrized as in (58) (where one also allows for the parameters to go to infinity). We will see that this is not usually the case for arbitrary models, where only a subset of two-site product states are integrable.
Integrability from reflection equations
In the previous section we have seen that a two-site product state (48) is related to integrable boundaries in the rotated channel if its building block |ψ_0⟩ is written in terms of the elements of the reflection matrix K(u), see Eq. (57). In this section we show that for these states the condition (20) follows from general properties of integrability. As a consequence, we establish a direct connection between integrable boundaries in the rotated picture and integrability in terms of annihilation by bulk conserved charges. In turn, this unveils a direct connection with the pair structure discussed in Sec. 3.3.
For the sake of clarity we once again detail the case of the XXZ spin-1/2 chain. However, it will be clear from our discussion that our treatment is in fact much more general, and it holds straightforwardly in all the cases where the R-matrix satisfies the crossing relation (47) (see, for instance, Sec. 6.3 where the case of spin-s chains is worked out).
One first introduces a spectral-parameter-dependent generalization |Φ_0(u)⟩ of the initial state, built from a two-site block |φ(u)⟩ (Eqs. (60)-(61)). From this definition it is clear that |Φ_0(−η/2)⟩ = |Ψ_0⟩. Furthermore, it follows from the reflection equations (53) that the blocks |φ(u)⟩ satisfy the boundary relation (62).

Figure 5. Pictorial representation of the reflection equations (62). The orientation of the arrows reflects the fact that (62) is written in terms of boundary states, rather than boundary K-matrices.
Eq. (62) is a crucial relation, which is pictorially represented in Fig. 5. Even though we are now focusing on the spin-1/2 chain, it is in fact quite general: it is simply a rewriting of the boundary reflection equations in terms of states, rather than boundary K-matrices. As one should expect, for a generic model the state (61) will be replaced by a different one, related to the corresponding K-matrix. An explicit example will be given in Sec. 6.3 for the case of higher spin chains.
Consider now a chain of length L with two auxiliary spaces h_0 and h_{L+1}, where L is an even number. From repeated use of Eq. (62), as pictorially represented in Fig. 6, one can prove a global relation for the whole chain (Eqs. (63)-(64)). Setting v = 0, one obtains Eq. (65). From the definition of Ř_{i,j}, the l.h.s. of (65) can be rewritten as in Eq. (66), where we introduced |W⟩_{i,j} = P_{i,j}|φ(−η/2 + u)⟩_{i,j}. Analogously, the r.h.s. of (65) yields Eq. (67). We now make use of identities such as P_{L+1,L} · · · P_{1,2} P_{0,1} = U^{−1}, where U is the shift operator (19). Plugging these into (66) and (67), we finally obtain Eq. (70). From this equation it is now easy to conclude the proof, by writing down its components. In particular, it is shown in Appendix C that (70) implies

$\tau(u)\, |\Psi_0\rangle = \Pi\, \tau(u)\, \Pi\, |\Psi_0\rangle \,,$

which is exactly (29); a pictorial representation is given in Fig. 7. The proof presented in this section has far-reaching consequences. Most prominently, it directly relates boundary states on the lattice to the pair structure frequently encountered in the recent literature on quantum quenches. In particular, this also provides a direct relation between the presence of the latter and the validity of the so-called Y-system. Since this is a rather technical point, we relegate its discussion to Appendix D.
Our proof relied on a direct application of the boundary Yang-Baxter relations, which leads to the annihilation by the odd charges. However, it is possible to formulate an alternative proof, which derives the eigenvalue condition (37). The idea is to introduce the Quantum Transfer Matrices for the inhomogeneous states (60), and to use their commutativity and certain simple properties at degenerate points. For the sake of brevity we omit the details of this second proof.

Figure 7. Pictorial representation of (29) for a generic integrable initial state |Ψ_0⟩. It is proven in the main text that for states generated from boundary integrability this relation is an immediate consequence of (64), which is depicted in Fig. 6.
Constructing integrable matrix product states
In this section we address a systematic construction of integrable MPSs of arbitrary bond dimension. The main idea is to obtain new integrable MPSs starting from the known boundary two-site product states. Once again, for the sake of clarity we focus on the XXZ spin-1/2 chain, but it will be apparent that our construction is in fact more general.
As a first example consider the state

$|\Phi\rangle = \tau(u)\, |\Psi_0\rangle \,, \qquad\qquad (72)$

where |Ψ_0⟩ is a boundary two-site product state of the form (48) and τ(u) is the fundamental transfer matrix (9). By the proof presented in the previous section, |Ψ_0⟩ is integrable. Then, it follows straightforwardly from the commutativity of the transfer matrices that (72) is also integrable, as it is annihilated by all the odd charges. This simple observation is at the basis of our construction. More generally, further integrable states can be constructed using the so-called fused (higher-spin) transfer matrices {τ^{(d)}(u)}_{d=1}^∞, where we use the convention τ^{(1)}(u) = τ(u). These operators have a similar matrix product form and can be written as

$\tau^{(d)}(u) = \mathrm{tr}_0 \left[ L^{(1,d)}_{L,0}(u) \cdots L^{(1,d)}_{1,0}(u) \right] \,. \qquad\qquad (73)$

Here the L^{(1,d)}_{j,0}(u) are the fused Lax operators, which are matrices acting on the tensor product of the local space h_j ≃ C² and the auxiliary space h_0 ≃ C^{d+1}. The trace is taken over h_0. In the isotropic case the fused Lax operator takes the form (74), where 1 is the identity operator, σ^α are the Pauli matrices, while S^α are the operators corresponding to the standard (d + 1)-dimensional representation of SU(2). In the anisotropic case, ∆ ≠ 1, Eq. (74) has to be replaced by an appropriate deformed expression, involving the generators of the quantum group U_q(sl_2) [125]. The operators (73) are called fused transfer matrices, as they can be obtained from (9) by an appropriate procedure named fusion [126].
Importantly, all the transfer matrices (73) commute, so that we can immediately construct an infinite family of integrable MPSs. Consider

$|\Phi\rangle = \tau^{(d_1)}(u_1)\, \tau^{(d_2)}(u_2) \cdots \tau^{(d_n)}(u_n)\, |\Psi_0\rangle \,, \qquad\qquad (76)$

where |Ψ_0⟩ is an arbitrary boundary two-site product state of the form (48). It follows immediately that (76) is integrable, as

$Q_{2k+1}\, |\Phi\rangle = \tau^{(d_1)}(u_1) \cdots \tau^{(d_n)}(u_n)\, Q_{2k+1}\, |\Psi_0\rangle = 0 \,, \qquad\qquad (77)$

where we used that the fused transfer matrices commute with the local charges (which follows from (22) and (75)). The state (76) can be cast into the canonical MPS form (78), with matrices acting on an auxiliary space H_D of dimension D = ∏_{j=1}^{n}(d_j + 1); their explicit form can be derived from the knowledge of the operators L^{(1,d_j)}(u_j). Note that the state |χ⟩ in (78) is in general two-site invariant, analogously to the state |Ψ_0⟩.
We note that the MPSs (76) could be interpreted as lattice versions of the "smeared boundary states" introduced in the study of quantum quenches in the context of conformal field theories [17,63,64]. Importantly, the MPSs (76) can also be related to integrable boundaries in the rotated channel. Consider for example the state |Φ⟩ = τ^{(d)}(w)|Ψ_0⟩, where |Ψ_0⟩ is defined in (48), with its building block |ψ_0⟩ satisfying (57). Then, in analogy with (33), one has a pictorial representation in which the auxiliary row (corresponding to a Hilbert space of dimension d + 1) is indicated with a thick red line, to distinguish it from the 2-dimensional auxiliary space appearing in the usual transfer matrix (9). One can consequently repeat the steps outlined in Sec. 4: in this case the time evolution in the rotated channel is represented pictorially in Fig. 8. We see that the application of τ^{(d)}(w) only results in the insertion of an additional line in the two-dimensional partition function with respect to the situation displayed in Fig. 4. In particular, using the properties of fused transfer matrices, one can see that the open transfer matrix appearing in this construction is still integrable. Finally, it is clear that the same happens when applying a product of several fused transfer matrices, as in (76). From (76), we see that the set of integrable states is infinite, and their construction involves a large number of free parameters. Even after fixing the number n of transfer matrices, their bond dimensions d_j and spectral parameters can still be chosen arbitrarily. It is an important question whether this family exhausts all possibilities of integrable MPSs with finite bond dimension. While we cannot give a definite answer to this question, it is remarkable that all known cases indeed fit into this framework, including the MPSs studied in [35,39]. This will be detailed in the following section.
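Continuing the numerical sketches of Sec. 3, the mechanism of Eq. (77) can be checked directly in the simplest case n = 1, d = 1: applying the fundamental transfer matrix at some spectral parameter w to the (integrable) Néel state produces a bond-dimension-2 MPS which still passes the integrability test (29) at a different spectral parameter u. The parameter values are arbitrary choices for the illustration.

```python
import numpy as np

# Reuses transfer_matrix(u), neel and Pi from the sketches above.
phi = transfer_matrix(0.60) @ neel   # |Phi> = tau(w)|Neel>: bond-dimension-2 MPS
t = transfer_matrix(0.25)
print(np.linalg.norm(t @ phi - Pi @ t @ Pi @ phi) / np.linalg.norm(phi))  # ~0
```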
An alternative way to understand our construction is to interpret the MPSs as non-scalar solutions to the boundary Yang-Baxter equations. Then the task of finding all integrable MPSs can be split into two parts: determining whether the boundary Yang-Baxter equations are necessary for the integrability condition to hold, and finding all finite dimensional solutions in terms of local Lax operators acting on local K-matrices. We leave this problem for future research.
To conclude this section, we note that the overlaps between integrable MPSs of the form (76) and the eigenstates of the Hamiltonian are immediately obtained once the overlaps with the state |Ψ_0⟩ are known. This follows from the fact that the transfer matrices act diagonally on the Bethe states, with eigenvalues known from the algebraic Bethe ansatz; see for example [127] for explicit formulas. Employing the same notation as in Sec. 3.3, we have

$\langle \{\lambda_j\}_j | \Phi \rangle = \left[ \prod_{r=1}^{n} \tau^{(d_r)}(w_r | \{\lambda_j\}) \right] c_{\{\lambda_j\}_j} \,, \qquad\qquad (81)$

where τ^{(d_r)}(w_r|{λ_j}) is the eigenvalue of τ^{(d_r)}(w_r) corresponding to the eigenstate |{λ_j}⟩.
Integrable quenches: analysis of specific models
In this last section we review several recent studies of quantum quenches in different models, where closed-form analytical results could be obtained. We show that in all of these cases the initial states are integrable according to our definition. These include the MPSs constructed in the works [35,39], which are shown to fit into the framework of the previous sections. We also present new results by producing concrete formulas for the integrable two-site states of the spin-1 XXZ chain.
The XXZ spin-1/2 chain
We begin our analysis with the prototypical case of the XXZ spin-1/2 chain (45), where quench problems have been extensively investigated in the past few years [46,47,59,60,80,86,89,92,94,114,128-131]. In particular, a large number of analytical results have been obtained. Exact overlap formulas between assigned initial states and the eigenstates of (45) have been derived for special classes of two-site states [26-28,32-34] and, in the isotropic case, for more general matrix product states [35,39,40]. Closed-form results for the long-time limit of local observables were derived in [46,47] by means of the Quench Action method [44,45], while exact computations of the Loschmidt echo have been reported in [59,60] for arbitrary two-site product states. Finally, an exact result for the time evolution of entanglement entropies has been obtained in [132]. From the previous section, it is now clear that all two-site states considered in these works are integrable, as they are boundary states. These include, as special cases, the Néel and the so-called dimer states studied in [46,47], but also the tilted Néel and tilted ferromagnet states considered in [94]. Note that even though these states are very simple, the computation of their overlaps is extremely difficult: in fact, the latter are still unknown in the case of the tilted Néel and tilted ferromagnet states. Nevertheless, it follows from our derivation that for generic ∆ the pair structure holds for all local two-site states.
Among the most interesting recent developments was the discovery of exact overlap formulas for MPSs in the isotropic case ∆ = 1 [35,39,40]. The overlaps were shown to have the same structure as in the case of the Néel state: they included the same Gaudin-like determinants, and only the pre-factors were different. Here we show how these MPSs can be embedded into our framework of integrable initial states. In particular, we argue that they can be obtained by the action of (fundamental or fused) transfer matrices on simple two-site states; in the first few examples we explicitly calculate the corresponding dimensions and spectral parameters.
In [39] a family of MPSs, defined in (82), was considered, built from the matrices S^(k)_α. Here S^(k)_α are the (k + 1) × (k + 1) matrices corresponding to the standard representation of the SU(2) generators, and the trace in (82) is over the associated (k + 1)-dimensional auxiliary space.
In order to test integrability of these MPSs we numerically constructed the corresponding QTMs (35) in the first few cases. By evaluating the eigenvalue condition (37) we confirmed integrability of these states. In fact, this also directly follows from the corresponding overlap formulas which were computed in [39], explicitly displaying the pair structure.
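The annihilation condition can also be checked by brute force on small chains, independently of the QTM construction. The sketch below is our own minimal illustration (the names and the chain length are ours): it builds the first odd charge of the XXX chain as Q_3 ∝ Σ_j [P_{j,j+1}, P_{j+1,j+2}], with P the two-site permutation, using the elementary identity [P_{12}, P_{23}]|abc⟩ = |cab⟩ − |bca⟩, and verifies that Q_3 annihilates the Néel state but not a four-site domain-wall state.

```python
import numpy as np

L = 8                       # periodic chain length; vectors live in C^(2^L)
dim = 2 ** L

def bit(i, s):              # spin at site s encoded in basis index i
    return (i >> (L - 1 - s)) & 1

def set_bits(i, sites, vals):
    for s, v in zip(sites, vals):
        mask = 1 << (L - 1 - s)
        i = (i | mask) if v else (i & ~mask)
    return i

def apply_Q3(vec):
    """Q3 ~ sum_j [P_{j,j+1}, P_{j+1,j+2}]: on a configuration |abc> of the
    sites (j, j+1, j+2) the commutator of swaps gives |cab> - |bca>."""
    out = np.zeros_like(vec)
    for i in np.nonzero(vec)[0]:
        for j in range(L):
            s = (j, (j + 1) % L, (j + 2) % L)
            a, b, c = (bit(i, x) for x in s)
            out[set_bits(i, s, (c, a, b))] += vec[i]   # |cab>
            out[set_bits(i, s, (b, c, a))] -= vec[i]   # |bca>
    return out

def basis_state(pattern):   # pattern of 0/1 characters, one per site
    v = np.zeros(dim)
    v[int(pattern, 2)] = 1.0
    return v

neel = basis_state("01" * (L // 2))       # Néel state
wall = basis_state("0011" * (L // 4))     # four-site domain-wall state

print(np.linalg.norm(apply_Q3(neel)))     # 0.0: annihilated, as expected
print(np.linalg.norm(apply_Q3(wall)))     # nonzero: not annihilated
```

The cancellation for the Néel state is a telescoping effect: each cyclic-permutation term produces the Néel configuration with one adjacent pair swapped, and the plus and minus contributions from neighboring triples cancel pairwise around the periodic chain.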
The question of how these states fit into our framework of boundary integrability is not immediately clear from their MPS representation. Below we show by explicit calculations that the two simplest vectors |χ⟩ are the translationally invariant components of simple two-site product states; we believe that this is a new result. Going further, the structure of the next few states could be investigated using a recursion relation derived in [39], which expresses |χ^(k)⟩ in terms of lower MPSs. This relation explicitly involves fundamental transfer matrices of the form (9) with an auxiliary space of dimension 2, and is such that higher MPSs are expressed as sums of lower ones. Our goal is to relate these to integrable boundaries, namely to express the higher MPSs in the product form (76). This could be achieved by a careful analysis of the recursion formulas of [39] and by the so-called T-system, a set of functional relations for different fused transfer matrices [127]. However, the full implementation of this program is beyond the scope of this paper, and we content ourselves with deriving explicit formulas for the two simplest cases. The Hamiltonian is SU(2)-symmetric at the isotropic point, so we are allowed to perform global rotations of the physical space (if necessary, this rotation can easily be restored at the end of our calculations). We consider the operator (85), built as a product of local rotations W_j at each site j, described by the matrix (86). After applying the operator (85), the state (82) is rewritten, up to an irrelevant global numerical phase, in terms of rotated matrices Z̃^(j). For k = 1 it is immediate to simplify this expression: in this case (S^±)² = 0, and we get immediately

|Φ^(1)⟩ = (1 + U)|N⟩ ,   (87)

where U is the shift operator (19), while |N⟩ is the Néel state (91). We see that, for k = 1, (87) is nothing but the zero-momentum component of the Néel state. Restoring the rotation by (85), we see that in this simplest case the MPS (82) is the zero-momentum component of the Néel state tilted in the y-direction. A decomposition similar to (90) can be performed for generic k, using standard techniques from the MPS literature. In particular, one can prove that (87) generalizes to a decomposition of the form (93), which involves the projectors P_1 and P_2, diagonal matrices with the elements given in (99). One can now check, for each value of k, that |Φ^(k)⟩ is of the form (76) for an appropriate choice of the two-site state |ψ_0⟩. We have done this explicitly up to k = 5. In particular, we obtained product formulas of fused transfer matrices acting on (1 + U)|N⟩, where |N⟩ is the Néel state (91) and U is the shift operator. After rotating back with the inverse of (85), we obtain the corresponding list for the original MPSs. This representation is a new result of our work. We conjecture that all |χ^(k)⟩ can be written in a similar product form using higher-spin fused transfer matrices. The proof of this conjecture and the derivation of explicit formulas are beyond the scope of this work.
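Since the shift operator U and zero-momentum components appear repeatedly above, a minimal numerical sketch of these ingredients may be useful (conventions and names are ours):

```python
import numpy as np

L = 6
dim = 2 ** L

def shift_index(i):
    """Cyclic one-site shift of the spin configuration encoded in index i."""
    top = (i >> (L - 1)) & 1          # leftmost spin wraps around
    return ((i << 1) & (dim - 1)) | top

# One-site shift operator U as a permutation matrix in the computational basis
U = np.zeros((dim, dim))
for i in range(dim):
    U[shift_index(i), i] = 1.0

neel = np.zeros(dim)
neel[int("01" * (L // 2), 2)] = 1.0

# Zero-momentum component of a two-site invariant state: since U^2 leaves the
# Néel state invariant, (1 + U)|N> projects onto the momentum-zero combination.
phi = neel + U @ neel
phi /= np.linalg.norm(phi)

print(np.allclose(U @ phi, phi))      # True: one-site shift invariant
```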
As a final remark on the spin-1/2 XXZ model, we note that additional integrable MPSs can be constructed for special values of the anisotropy ∆, in the regime ∆ < 1. These correspond to the so-called "root of unity points", where additional representations of the underlying quantum group, and hence of transfer matrices, exist [125]. MPSs obtained using these additional transfer matrices can be incorporated into our discussion, because they still commute with the transfer matrix (9).
The XX model
At the non-interacting point ∆ = 0 the XXZ Hamiltonian (45) deserves special attention. In this case one recovers the so-called XX chain, which has served as a prototypical benchmark for countless studies of non-equilibrium dynamics in isolated many-body systems. In particular, analytic results for global quantum quenches have been presented in [37,128,129], where the dynamics arising from the Néel state was mainly addressed. This model provides an important example where integrable states do not display the pair structure discussed in Sec. 3.3. In particular, the special structure of the conserved charges leads to a different constraint on the rapidities of the eigenstates annihilated by the odd ones. This is briefly discussed in the following, while we refer to [37] for a systematic treatment of quantum quenches from the Néel state.
We recall that the XX model can be studied by introducing fermionic operators c_j through the Jordan-Wigner transformation, which satisfy canonical anticommutation relations. The Hamiltonian is then written as H = Σ_p ε_p c̃†_p c̃_p, where c̃_p = L^(−1/2) Σ_j e^(−ipj) c_j and ε_p = cos(p). Alternatively, the model can be studied by considering the limit ∆ → 0 (namely, η → iπ/2) of the Bethe ansatz solution of (45) for generic ∆. In the latter language the quasi-particles are associated to rapidities λ, which are related to the lattice momentum in the standard way. It follows from the Bethe equations that the Bethe rapidities are either real or have imaginary part equal to π/2, and the corresponding one-particle energy eigenvalues are

e(λ) = −1 / cosh(2λ) .
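The free-fermion structure is straightforward to verify numerically. The following sketch is our own check, using open boundary conditions to sidestep the fermion-parity boundary sectors of the periodic chain: it compares the many-body spectrum of a small XX chain with all possible fillings of the single-particle levels of the Jordan-Wigner hopping matrix.

```python
import numpy as np
from itertools import combinations
from functools import reduce

L = 6
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
I2 = np.eye(2)

def op(single, site):
    """Embed a single-site operator at `site` in the 2^L-dimensional space."""
    return reduce(np.kron, [single if s == site else I2 for s in range(L)])

# Open XX chain: H = 1/4 * sum_j (sx_j sx_{j+1} + sy_j sy_{j+1})
H = sum(0.25 * (op(sx, j) @ op(sx, j + 1) + op(sy, j) @ op(sy, j + 1)).real
        for j in range(L - 1))

# Jordan-Wigner image: free fermions hopping on an open chain
h1 = 0.5 * (np.eye(L, k=1) + np.eye(L, k=-1))
eps = np.linalg.eigvalsh(h1)

# Many-body energies = sums of occupied single-particle levels
free = sorted(sum(eps[list(c)]) for n in range(L + 1)
              for c in combinations(range(L), n))

print(np.allclose(sorted(np.linalg.eigvalsh(H)), free))  # True
```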
As in the case of generic ∆, a set of conserved charges Q_j can be generated via the transfer matrix construction explained in Sec. 2.2. Alternatively, a set of charges can be constructed using the fermionic operators, such that their commutativity follows from (108). The relation between the two sets of charges is non-trivial and has been studied in detail in [104]; see also [78,133]. From the fermionic point of view, one possible choice for the charges is simply the set of occupation numbers ñ_k of the Fourier modes. These operators are inherently non-local, but they form a complete commuting family. An alternative choice is to consider the local operators I_j built from the fermions, together with their Hermitian combinations I^±_j.
It follows directly from the commutation relations (108) that every I_j is conserved. The charges I^+_j are even and the I^−_j are odd under space reflection. On the other hand, it was shown in [104] that in the XX model all charges can be expressed using the operators

e^{αβ}_n = Σ_{j=1}^{L} e^{αβ}_{n,j} ,   e^{αβ}_{n,j} = σ^α_j σ^z_{j+1} ⋯ σ^z_{j+n−2} σ^β_{j+n−1} .
In particular, there are two families of charges, defined as

H^+_n = e^{xx}_n + e^{yy}_n (n even) ,   H^+_n = e^{xy}_n − e^{yx}_n (n odd) ,

and

H^−_n = e^{xy}_n − e^{yx}_n (n even) ,   H^−_n = e^{xx}_n + e^{yy}_n (n odd) .
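The conservation of these string-like charges can be verified directly on small chains. A minimal sketch (our notation; overall normalizations are immaterial for commutativity), building H^+_3 for the periodic XX chain and checking that it commutes with the Hamiltonian:

```python
import numpy as np
from functools import reduce

L = 6
pauli = {"x": np.array([[0, 1], [1, 0]], dtype=complex),
         "y": np.array([[0, -1j], [1j, 0]]),
         "z": np.array([[1, 0], [0, -1]], dtype=complex),
         "i": np.eye(2, dtype=complex)}

def string_op(labels, start):
    """Product of Pauli matrices `labels` on consecutive sites (periodic)."""
    mats = ["i"] * L
    for k, lab in enumerate(labels):
        mats[(start + k) % L] = lab
    return reduce(np.kron, [pauli[m] for m in mats])

def e_ab(a, b, n):
    """e^{ab}_n = sum_j sigma^a_j sigma^z ... sigma^z sigma^b_{j+n-1}."""
    return sum(string_op(a + "z" * (n - 2) + b, j) for j in range(L))

H_xx = sum(string_op(a + a, j) for a in "xy" for j in range(L))
H3_plus = e_ab("x", "x", 3) + e_ab("y", "y", 3)

print(np.linalg.norm(H_xx @ H3_plus - H3_plus @ H_xx))  # ~0: conserved
```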
The canonical charges Q_j obtained from the transfer matrix are linear combinations of the first family. Explicit formulas can be found in [104]: in the simplest example, Q_3 = −H^+_3. Using the Jordan-Wigner transformation it can be seen that the families {H^±_n} and {I^±_n} contain the same operators, but with an alternating identification between even and odd members. In the present case, integrability of the initial state is equivalent to annihilation by the charges H^+_n for all odd n. In the following we show that this does not imply the pair structure. Consider the rapidity transformation (117). It is straightforward to verify that the one-particle eigenvalues of Q_3 are invariant under it, and, analogously, one can see that the eigenvalues of all odd charges are invariant under (117).
Taking an arbitrary eigenstate, this transformation can be performed on every rapidity individually, so that a large number of states can be generated which share the same eigenvalues of all odd Q_j. Eigenstates corresponding to sets of rapidities of the form (41) are obviously annihilated by the odd charges, but so are all the other states obtained after repeated application of (117). The rapidities of these eigenstates no longer consist solely of pairs of opposite rapidities. Since integrable states will in general overlap with all eigenstates annihilated by the odd conserved charges, it follows from this discussion that they will not possess the pair structure. This is explicitly shown for the Néel state in [37], to which we refer for further details. As a final remark, we note that the XX chain is closely related to the q → ∞ limit of the q-boson model studied in [134,135]. However, the two models are connected by a highly non-local transformation, which alters the locality of the charges. A detailed analysis of the latter for the q-boson model is beyond the scope of the present paper.
XXZ chains with higher spin
Analytic results for quantum quenches in higher-spin chains were presented in [94]. In particular, closed-form characterizations of post-quench steady states were obtained for the spin-1 chain known as the Zamolodchikov-Fateev model [136]. In the corresponding Hamiltonian, the indices a, b of the biquadratic sum take the values x, y, z, and periodic boundary conditions are assumed, S^α_{N+1} = S^α_1. The coefficients A_{ab} are symmetric, A_{ab}(η) = A_{ba}(η), and η plays the role of the anisotropy parameter along the z-direction. Finally, the operators S^α_j are given by the standard three-dimensional representation (123) of the SU(2) generators. Two particular initial states, (124) and (125), were considered in [94]; they are built out of the basis vectors |j⟩, j = 1, 2, 3, of the local Hilbert space h ≃ C³. For these states analytical formulas were obtained by means of a Y-system, cf. Appendix D.
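For concreteness, the standard three-dimensional representation (123) used here can be written down and checked in a few lines (a minimal sketch of ours):

```python
import numpy as np

# Standard spin-1 representation of the SU(2) generators
Sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / np.sqrt(2)
Sy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]]) / np.sqrt(2)
Sz = np.diag([1.0, 0.0, -1.0])

# Algebra check: [Sx, Sy] = i Sz (and cyclic); Casimir = s(s+1) = 2
print(np.allclose(Sx @ Sy - Sy @ Sx, 1j * Sz))                  # True
print(np.allclose(Sx @ Sx + Sy @ Sy + Sz @ Sz, 2 * np.eye(3)))  # True
```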
Not surprisingly, one can show that these states are integrable according to our definition. In particular, they belong to the class of boundary states, as we detail in the following.
It is easy to check that the states (124) and (125) belong to the class of boundary states (139), parametrized as in (140). Then, from the proof of Sec. 4, which also applies in the spin-1 case, we immediately obtain that they satisfy the condition (20), and hence they are integrable. Note that the integrability of the class (139) can also be verified by constructing the Quantum Transfer Matrix (35) and numerically evaluating the corresponding eigenvalues in a neighborhood of u = 0. Contrary to the spin-1/2 formula (61), it is not true that all two-site states can be parametrized as in (140). In other words, boundary states of the spin-1 model form only a subset of the two-site states: one can explicitly check that for arbitrary choices of the latter, condition (28) does not hold. This is actually true for all the spin-s generalizations of the XXZ Hamiltonian (45) with s ≥ 1. These models can be obtained by the well-known fusion procedure [126], from which the corresponding K-matrices can also be built [138]. As the spin s increases, the dimension of the local Hilbert spaces h_j also increases. On the other hand, the fused K-matrices always contain just three free parameters, which are not enough to parametrize all the states in the tensor product h_j ⊗ h_{j+1}.
The SU(3)-invariant chain
Finally, we touch upon higher-rank generalizations of the SU(2) chain, focusing on the SU(3)-invariant Lai-Sutherland Hamiltonian [139]. Here the local Hilbert space is h_j ≃ C³, while the S^α_j are given once again by the standard three-dimensional representation (123) of the SU(2) generators. The analytical description of this model is significantly more involved, as it requires a nested Bethe ansatz treatment [5]. Nevertheless, many elements of the algebraic construction discussed in Sec. 2.2 remain valid in this case. The R-matrix of the model is again of rational form, built out of the permutation matrix P_{1,2} of (63), now acting on C³ ⊗ C³, and the transfer matrix is simply obtained by (9). Recently, a remarkable overlap formula was conjectured in [41] for a particular matrix product state, with a form reminiscent of the one found in SU(2) chains. The state (143) is defined through a trace over the auxiliary space h_0 ≃ C², with Pauli matrices σ^α_0 acting on h_0. Note that the auxiliary space has a different dimension from the physical spaces h_j ≃ C³. This state is translationally invariant, and the integrability conditions can easily be tested. We constructed the corresponding QTM and verified the eigenvalue condition (37) numerically, demonstrating that the state (143) is indeed integrable. Note that this also follows from the conjectured form of the overlaps with the Bethe states [41], from which the pair structure is evident.
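The rational structure of the R-matrix is easy to verify numerically. The sketch below is ours, and assumes the normalization R(u) = u·1 + iP (the paper's own normalization may differ by an overall scalar, which is irrelevant for the Yang-Baxter equation):

```python
import numpy as np

d = 3
# Permutation matrix P on C^d x C^d: P (x tensor y) = y tensor x
P = np.zeros((d * d, d * d))
for a in range(d):
    for b in range(d):
        P[b * d + a, a * d + b] = 1.0

def R(u):
    return u * np.eye(d * d) + 1j * P

def check_ybe(u, v):
    Id = np.eye(d)
    R12 = lambda w: np.kron(R(w), Id)
    R23 = lambda w: np.kron(Id, R(w))
    P23 = np.kron(Id, P)
    R13 = lambda w: P23 @ R12(w) @ P23   # conjugate R12 by the (2,3) swap
    lhs = R12(u - v) @ R13(u) @ R23(v)
    rhs = R23(v) @ R13(u) @ R12(u - v)
    return np.allclose(lhs, rhs)

print(check_ybe(0.37, -1.21))  # True: R(u) = u + iP satisfies Yang-Baxter
```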
An important question is whether this state can be understood in terms of integrable boundary conditions in the rotated channel. On the one hand, in the SU(3) case the study of integrable boundary transfer matrices is more complicated [140]. On the other hand, in the argument of Sec. 4.2 we explicitly used the crossing relation (47) of the R-matrix, which no longer holds for the SU(3) model [141]. Accordingly, care has to be taken in generalizing those constructions to this case. We hope to return to these topics in a future work.
Conclusions
In this work we have proposed a definition of integrable states for quantum quenches in lattice integrable systems, which is directly inspired by the classical work on boundary quantum field theory of Ghoshal and Zamolodchikov [65]. We have identified integrable states as those which are annihilated by all the odd local conserved charges of the Hamiltonian. We have proven that these include the states which can be related to integrable boundaries in an appropriate rotated channel, in loose analogy with QFT. Furthermore, we have shown that in all of the known cases where closed-form analytical results could be obtained, the initial state is integrable according to our definition.
In the prototypical case of XXZ spin-s chains we have shown that integrable states include two-site product states together with larger families of MPSs. In the spin-1/2 chain all two-site states are integrable, whereas for higher spin this is true only for a subset of them. We have characterized this subset in terms of the fused K-matrices.
The pair structure of the overlaps has appeared to be one of the characteristic properties of integrable quenches, as it was observed in simple cases in the spin-1/2 XXZ chain and also in the Lieb-Liniger model [28,29,34,46,47]. The pair structure has important consequences for the entropy of the steady state arising after a quantum quench [57,58,114]; it is therefore important to clarify its relation with the integrability of the initial state. We have argued that the pair structure indeed follows from integrability for generic values of the coupling constants (and can actually be proven for the isotropic Heisenberg chain). Together with our proof of integrability, this constitutes a general confirmation of the pair structure for a wide variety of states, including already known cases and new states for which the actual overlaps are not yet known. On the other hand, we have also discussed possible exceptions to the pair structure, such as the XX model detailed in Sec. 6.2. Nevertheless, the integrability condition (20) can always be imposed without modifications.
Remarkably, in almost all of the cases encountered, MPSs annihilated by the odd charges could be understood as boundary states in the rotated channel. It is an important open question whether this family exhausts all integrable MPSs with finite bond dimension. In the SU(3)-invariant model the MPS introduced in [41] and studied here in Sec. 6.4 is integrable according to our definition, but its interpretation in terms of boundary integrability is not known yet. We hope to return to this question in future work.
As a final remark, we stress that our results should not be interpreted as a no-go theorem for obtaining exact results for non-integrable initial states. However, our work provides a unified point of view on the many exact results that have appeared in recent years. Furthermore, if exact results for the time evolution are to be derived (be it for correlators, the Loschmidt echo or other quantities), one should first look at the integrable initial states, regardless of the model considered. The test of integrability is straightforward; therefore, our framework gives an extremely useful starting point for the study of models where quantum quenches have not yet been investigated. An example is the case of SU(N)-invariant spin chains, where only a small number of results are currently available [41,55].
Acknowledgments
We are very grateful to Pasquale Calabrese and Gábor Takács for inspiring discussions and useful comments. B. P. is grateful to Pasquale Calabrese and SISSA for their hospitality. E. V. acknowledges support by the ERC under Starting Grant 279391 EDEQS. B. P. acknowledges support from the "Premium" Postdoctoral Program of the Hungarian Academy of Sciences, and the K2016 grant no. 119204 of the research agency NKFIH.
Appendix A. Integrability condition for p-periodic states
In this appendix we show that the validity of (27) implies the annihilation by all odd conserved charges, namely (20). For simplicity, we assume that the initial state is two-site shift invariant with ⟨Ψ_0|Ψ_0⟩ = 1; an analogous derivation holds in the general case.
First, it follows from (23) that one can write down a formal expansion of the integrability condition in powers of u²; here we used U²|Ψ_0⟩ = |Ψ_0⟩ and the fact that for parity-invariant states G(u) = G̃(u). At the first order in u², if G(u) ≡ 1 then we have immediately that ⟨Ψ_0|Q_3|Ψ_0⟩ = 0. However, at the next order we have a sum of two terms. If the state is parity invariant, then ⟨Ψ_0|Q_5|Ψ_0⟩ = 0, and we also obtain the vanishing of the remaining term. If the state is not parity invariant, then we use the information coming from G̃(u): using that the latter is also identically 1, we get the vanishing of the two terms separately. If (A.3) holds, then Q_3|Ψ_0⟩ = 0, meaning that Q_3 can be neglected in the Taylor series completely. Going to the next order, we get the vanishing of the mean value of Q_7, and similarly we can prove Q_5|Ψ_0⟩ = 0. Proceeding iteratively, the annihilation by all odd charges is proven.
Appendix B. Numerical study of the Quantum Transfer Matrix for integrable and non-integrable MPSs
In this appendix we numerically study the matrix T̃(u) introduced in (35) and compute its eigenvalues for both an integrable and a non-integrable case.

[Figure B1: Magnitude of the eigenvalues of the operator T̃(u) corresponding to the (translationally invariant) dimer state (B.1). The eigenvalues which are non-zero at u = 0 remain constant in a non-vanishing neighborhood of u = 0; in fact, they are completely u-independent, although level crossings can occur. The eigenvalues with the second largest magnitude describe the exponentially decaying overlap between the original and the one-site shifted dimer states.]

For the integrable case, we consider the zero-momentum dimer state defined in (B.1). It is easy to check that this state is generated by a translationally invariant MPS of the form (31), with an appropriate explicit choice of the matrices. As a non-integrable state, we choose the translationally invariant component of a four-site domain-wall state, defined in (B.3). This state was already considered in [94] (and mentioned in [72]), where it was shown that the corresponding rapidity distribution functions do not satisfy the Y-system, cf. Appendix D. Therefore it is natural to expect that this state is not integrable according to our definition. The state in (B.3) can still be written as a translationally invariant MPS of the form (31), again with an explicit choice of the matrices. We computed the eigenvalues of the QTM T̃(u) for these two MPSs. For concreteness, we focused on the XXX chain, with the R-matrix normalization R(u) = (u + iP)/(u + i) (here P is the permutation operator defined in (63)). The QTM is symmetric with respect to the sign of u, and in the following we restrict to positive values of u. The numerical results for the magnitude of the eigenvalues are shown in Figs. B1 and B2. Note that a single curve here corresponds to at least two different eigenvalues due to the sign difference. Further degeneracies can be present, but we omit to specify them as they are irrelevant to the test of integrability.
In the case of the dimer state, we see from Fig. B1 that the non-zero eigenvalues remain constant in a finite neighborhood of u = 0, consistent with the integrability of the state.
Appendix C. Integrability of boundary states: technical details
In this appendix we provide further technical details on the proof presented in Sec. 4.2. We start by showing that (62) is a simple rewriting of the reflection equations (53).
Appendix D. The Y -system
In this appendix we briefly touch upon another property of the boundary states, which is related to the so-called Y-system, a ubiquitous structure of integrability [142]. The latter is a system of equations for a set of functions in the complex plane. In the case of the XXZ model, the Y-system takes the form

Y_j(λ + iη/2) Y_j(λ − iη/2) = [1 + Y_{j−1}(λ)][1 + Y_{j+1}(λ)] ,   (D.1)

where for generic values of ∆ one has j = 1, 2, . . . , ∞. In the framework of quantum quenches, the Y-system first appeared in the study of quenches from the Néel state [46]. In this case the Y-functions are obtained starting from the Bethe ansatz rapidity distributions of the corresponding long-time steady state. Subsequently, the same relations were found to hold more generally for all two-site product states in the XXZ spin-1/2 model [72,94,96,97], and for some initial states in the spin-1 chain [94]. An explanation for the Y-system in this context was found in Ref. [60]. Using the identification of two-site states with boundary states reviewed in Sec. 4, the Y-system emerged from the fusion properties of the corresponding boundary transfer matrices. From the practical point of view, the existence of a Y-system represents a major computational advance, allowing for a closed-form analytical characterization of the rapidity distribution functions of the post-quench steady state [60,72,94].
So far, it was not clear why the existence of the Y-system should be expected to imply the presence of the pair structure discussed in Sec. 3.3. The results of Sec. 4 give us further understanding of this point. Explicitly, we have proven that boundary states, which were shown to be characterized by a Y-system in [60], satisfy the condition (20). In turn, the latter implies the pair structure if no fine tuning of the couplings is made. Hence, in this case the Y-system and the pair structure are seen to have the same origin, rooted once again in integrability.
We remark that if the initial state is such that the overlaps can be factorized algebraically, i.e. they can be written in the form

⟨Ψ_0|{λ_j}⟩ = C({λ_j}) ∏_{j=1}^{N/2} v(λ_j) ,   (D.2)

such that v(λ) is a meromorphic function and C({λ_j}) is O(L⁰) in the thermodynamic limit, then the resulting Y-functions (as obtained within the Quench Action formalism [44]) necessarily satisfy the Y-system equations. This can be proven by a simple analytic manipulation of the resulting TBA equations, in analogy with the methods presented in an early work on Y-systems by Al. B. Zamolodchikov [143]. The same statement can be proven even for non-integrable quenches, where the pair structure does not hold: the Y-system would still hold if the overlaps factorize analogously over the full set of rapidities. It follows that if the Y-system does not hold, then an algebraic factorization of the overlaps is not possible. This argument suggests that the specific form (D.2) is a further characteristic property of integrable initial states.
Appendix E. Generalities on matrix product states
In this appendix we provide some technical details on MPSs which are useful for our discussion in Sec. 6.1. In particular, we show that the MPS (87) can always be decomposed as in Eq. (93). We start by noting a special property of the state (87). Consider the operator N built out of the matrices Z̃^(j) and their complex conjugates, where Z̃^(1) and Z̃^(2) are given in (88) and (89). The eigenvalues of N form pairs with the same magnitude and opposite sign. This can be seen by performing a similarity transformation with the matrix C^(k), which represents a rotation of π/2 around the z-axis and satisfies

(C^(k) ⊗ C^(k)) N (C^(k) ⊗ C^(k))^(−1) = −N .
It can be checked that the leading eigenvalues have no further degeneracies. From general theorems regarding MPSs [105], this implies that there exist two projectors P_{1,2} acting in the auxiliary space such that

P_1 + P_2 = 1 ,   P_1 Z̃^(j) = Z̃^(j) P_2 ,   P_2 Z̃^(j) = Z̃^(j) P_1 ,   j = 1, 2 .   (E.2)

In our case, the operators P_1 and P_2 are defined by the matrices in (99). Following [105], one can then decompose the trace accordingly. Using now P_{1,2} = P²_{1,2} and (E.2), it is straightforward to recover (93).
"Physics"
] |
Combined analysis of gestational diabetes and maternal weight status from pre-pregnancy through post-delivery in future development of type 2 diabetes
We examined the associations of gestational diabetes mellitus (GDM) and women’s weight status from pre-pregnancy through post-delivery with the risk of developing dysglycaemia [impaired fasting glucose, impaired glucose tolerance, and type 2 diabetes (T2D)] 4–6 years post-delivery. Using Poisson regression with confounder adjustments, we assessed associations of standard categorisations of prospectively ascertained pre-pregnancy overweight and obesity (OWOB), gestational weight gain (GWG) and substantial post-delivery weight retention (PDWR) with post-delivery dysglycaemia (n = 692). Women with GDM had a higher risk of later T2D [relative risk (95% CI) 12.07 (4.55, 32.02)] and dysglycaemia [3.02 (2.19, 4.16)] compared with non-GDM women. Independent of GDM, women with pre-pregnancy OWOB also had a higher risk of post-delivery dysglycaemia. Women with GDM who were OWOB pre-pregnancy and had subsequent PDWR (≥ 5 kg) had 2.38 times (1.29, 4.41) the risk of post-delivery dysglycaemia compared with pre-pregnancy lean GDM women without PDWR. No consistent associations were observed between GWG and later dysglycaemia risk. In conclusion, women with GDM have a higher risk of T2D 4–6 years after the index pregnancy. Pre-pregnancy OWOB and PDWR exacerbate the risk of post-delivery dysglycaemia. Weight management during preconception and post-delivery represent early windows of opportunity for improving long-term health, especially in those with GDM.
Methods
Study participants. A total of 1450 participants were recruited into the on-going GUSTO mother-offspring cohort study (ClinicalTrials.gov identifier: NCT01174875), which studies the impact of gene-environment interaction on long-term maternal and child health 19 . Pregnant women aged 18 years and above were recruited at < 14 weeks gestation from two main public maternity hospitals in Singapore. The Chinese, Malay or Indian participants were Singapore citizens or permanent residents. Mothers receiving chemotherapy, psychotropic drugs or who had type 1 diabetes mellitus were excluded. Mothers with possible pre-existing T2D and chronic hypertension were not excluded at the outset, but we conducted sensitivity analyses excluding these participants in the current study. The design of the study has been detailed elsewhere 19 . This study was approved by the National Health Care Group Domain Specific Review Board (reference D/09/021) and the SingHealth Centralized Institutional Review Board (reference 2009/280/D). All research was performed in accordance with the relevant guidelines and informed consent was obtained from all participants upon recruitment.
Maternal data. Ethnicity, educational attainment, family history of diabetes were self-reported at study enrolment. Maternal age at delivery was calculated from the date of birth retrieved from national registration and the date of delivery. Parity, personal history of chronic hypertension and pregnancy-induced hypertension (including pre-eclampsia and non-proteinuric pregnancy-induced hypertension) were abstracted from medical records. Cigarette smoking, breastfeeding duration and medical history were obtained through intervieweradministered questionnaires.
Ascertainment of GDM and dysglycaemia after delivery. Both GDM during pregnancy and dysglycaemia at 4-6 years post-delivery were diagnosed using a 2-h (2 h) 75 g oral glucose tolerance test (OGTT) after an overnight fast. GDM was defined by the WHO 1999 criteria, which were in use at the time of the study (fasting glucose ≥ 7.0 mmol/L and/or 2 h glucose ≥ 7.8 mmol/L). Any dysglycaemia at 4-6 years post-delivery was defined as having pre-diabetes [impaired fasting glucose (IFG, fasting glucose 6.0-6.9 mmol/L), impaired glucose tolerance (IGT, 2 h glucose 7.8-11.0 mmol/L)] or type 2 diabetes mellitus (T2D; fasting glucose ≥ 7.0 mmol/L and/or 2 h glucose ≥ 11.1 mmol/L) (see Supplemental Table 1) 20,21 . T2D was also investigated as a separate outcome.
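For illustration, the diagnostic rules above translate into simple threshold checks. A minimal sketch with the cut-offs taken from the text (function and variable names are ours; a participant meeting both IFG and IGT criteria is labelled here by the first matching category):

```python
def gdm_who1999(fasting_mmol, two_hr_mmol):
    """WHO 1999 criteria in use at the time of the antenatal OGTT."""
    return fasting_mmol >= 7.0 or two_hr_mmol >= 7.8

def postdelivery_status(fasting_mmol, two_hr_mmol):
    """Classify the 4-6 year post-delivery OGTT result."""
    if fasting_mmol >= 7.0 or two_hr_mmol >= 11.1:
        return "T2D"
    if 6.0 <= fasting_mmol <= 6.9:
        return "IFG"      # pre-diabetes
    if 7.8 <= two_hr_mmol <= 11.0:
        return "IGT"      # pre-diabetes
    return "normal"

print(gdm_who1999(4.6, 8.1))          # True: elevated 2 h glucose alone
print(postdelivery_status(6.2, 7.1))  # 'IFG'
```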
Maternal anthropometry. Pre-pregnancy weight was self-reported at recruitment. Routinely measured weights during pregnancy, at up to nine time-points spanning the first to the last antenatal visit, were abstracted from the medical records. Additionally, maternal weight and height at 26-28 weeks' gestation were measured in duplicate using a SECA 803 Weighing Scale and SECA 213 Stadiometer (SECA Corp, Hamburg, Germany).

Pre-pregnancy BMI. Pre-pregnancy BMI was calculated by dividing participants' self-reported pre-pregnancy weight in kilograms (kg) by the participants' measured height in metres squared (m 2 ). Participants were then categorized as being underweight (< 18.5 kg/m 2 ), normal weight (18.5-22.9 kg/m 2 ), overweight (23.0-27.4 kg/m 2 ), or obese (≥ 27.5 kg/m 2 ) using established Asian cut-offs 22,23 .
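A minimal sketch of the BMI computation and the Asian cut-offs listed above (code and names ours):

```python
def asian_bmi_category(weight_kg, height_m):
    """Return (BMI, category) using the Asian cut-offs from the text."""
    bmi = weight_kg / height_m ** 2
    if bmi < 18.5:
        return bmi, "underweight"
    if bmi < 23.0:
        return bmi, "normal weight"
    if bmi < 27.5:
        return bmi, "overweight"
    return bmi, "obese"

print(asian_bmi_category(65.0, 1.58))  # (26.04..., 'overweight')
```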
Gestational weight gain (GWG). Participants were classified into groups of inadequate, adequate and excessive weight gain based on the Institute of Medicine (IOM) recommended absolute weight gain (for total gestational weight gain) and rate of weight gain (kg/week) (for weight gain during the second and third trimesters) according to pre-pregnancy BMI category (see Supplemental Table 2) 24,25 . Total gestational weight gain was computed by subtracting the first antenatal visit weight from the last antenatal visit weight. To compute the rate of weight gain during the second and third trimesters, a linear mixed-effects model with the Best Linear Unbiased Predictor (BLUP) was used to estimate the linear trajectory of GWG per week 26 . Because participants might have changed their lifestyle behaviours after GDM diagnosis (at approximately 26-28 weeks' gestation), weight gain rates for the periods before and after GDM diagnosis (or the OGTT in non-GDM cases) were generated separately. For both weight gain rates, we only included participants with at least two weight measurements within the defined periods [(1) at or after 12 weeks' gestational age but before OGTT/GDM diagnosis and (2) at or after OGTT/GDM diagnosis until delivery]. Inadequate GWG was defined as an absolute weight gain or weight gain rate less than the IOM recommended lower limit, whereas excessive weight gain was defined as an absolute weight gain or weight gain rate greater than the recommended upper limit. The remaining participants, with weight gain or weight gain rate within the recommended range, were classified as having adequate GWG, the reference group in our analyses.
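As an illustration of the rate-of-gain estimation, the sketch below fits a random-slope linear mixed model and extracts each woman's BLUP slope, assuming a long-format table with hypothetical columns subject_id, ga_weeks and weight_kg; the original analysis used its own model specification, so this is only an approximation of the approach:

```python
import statsmodels.api as sm

def gwg_rates(df):
    """df: long-format data restricted to the period of interest,
    e.g. measurements at/after OGTT/GDM diagnosis until delivery."""
    md = sm.MixedLM.from_formula(
        "weight_kg ~ ga_weeks", groups="subject_id",
        re_formula="~ga_weeks", data=df)
    fit = md.fit()
    fixed_slope = fit.params["ga_weeks"]
    # BLUP of each woman's slope = fixed slope + her random slope deviation
    return {sid: fixed_slope + re["ga_weeks"]
            for sid, re in fit.random_effects.items()}
```

Each returned value (kg/week) would then be compared against the IOM recommended range for the woman's pre-pregnancy BMI category.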
Statistical analyses.
Descriptive statistics are reported as n (%) for categorical variables and means (SD) for continuous variables. Chi-square tests and independent t-tests were used to compare characteristics. All statistical tests were two-sided, and a P value < 0.05 was considered statistically significant.
The primary outcomes were dysglycaemia and T2D post-delivery, while the exposures were GDM and weight status/gain/change from pre-pregnancy through post-delivery (i.e., pre-pregnancy BMI, GWG, PDWR). Relative risks (RR) and 95% confidence intervals (CI) of GDM and peri-pregnancy weight status for any dysglycaemia and T2D post-delivery were calculated using Poisson regression with robust standard errors. The regressions were conducted unadjusted and adjusted for important covariates based on the existing literature: ethnicity, age at delivery, education (as a measure of socioeconomic status), parity, family history of diabetes, insulin treatment during pregnancy and pregnancy-induced hypertension. Apart from investigating the risk factors individually, we also modelled the risk of any dysglycaemia by looking at combinations of these risk factors (there were insufficient numbers for T2D modelling). No data imputation was undertaken for missing data, and only cases with complete data for the relevant variables were included.
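A sketch of the relative-risk estimation by modified Poisson regression with robust (sandwich) standard errors, using statsmodels as a stand-in for the Stata workflow; the column names are hypothetical:

```python
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf

def relative_risks(df):
    """df: one row per woman; dysglycaemia and gdm are 0/1 columns,
    plus the adjustment covariates listed in the text."""
    model = smf.glm(
        "dysglycaemia ~ gdm + age + C(ethnicity) + C(education) + parity"
        " + family_history_dm + insulin_tx + pih",
        data=df, family=sm.families.Poisson())
    fit = model.fit(cov_type="HC1")       # robust (sandwich) standard errors
    rr = np.exp(fit.params)               # exponentiated coefficients = RRs
    ci = np.exp(fit.conf_int())           # 95% confidence intervals
    return rr, ci
```

With a binary outcome, the Poisson model estimates log relative risks directly; the robust variance corrects the standard errors for the model misspecification.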
In the sensitivity analyses, we excluded (1) participants who had antenatal OGTT conducted < 24 weeks or > 32 weeks (n = 30; outside the window period conventionally used to define glycaemia thresholds in pregnancy) and with possible pre-existing T2D suggested by fasting glucose ≥ 7.0 mmol/L and/or 2 h glucose ≥ 11.1 mmol/L (n = 4) and (2) participants with chronic hypertension (n = 14; a common co-morbid condition of T2D) to confirm the consistency and robustness of association between GDM and post-delivery dysglycaemia. To assess if adoption of the newer criteria could potentially alter the relationships between combination of risk factors and future dysglycaemia, we also retrospectively applied the IADPSG criteria in a partial manner (GDM diagnosed by ≥ 5.1 mmol/L for fasting glucose and/or ≥ 8.5 mmol/L for 2 h glucose, without the 1-h [1 h] measure which was not performed at that time). All analyses were performed using Stata software (version 15.1, Statacorp, College Station, Texas).
Results
Among recruited women, 1239 had singleton pregnancies and still remained in the study at 26-28 weeks' gestation. Of these, 1165 (94.0%) had antenatal OGTT results and 692 (59.4% of those with an antenatal OGTT) had both antenatal and post-delivery OGTTs (see Supplemental Fig. 1 for the participant flow chart). Participants who had both antenatal and postnatal OGTTs conducted with relevant covariate data, and were therefore included in this study (n = 692), were slightly older, more likely to be parous and less likely to have had pregnancy-induced hypertension compared with those who only had an antenatal OGTT and were therefore not included (n = 473) (Supplemental Table 3). Ethnicity, family history of diabetes, smoking status, breastfeeding duration and peri-pregnancy BMI were comparable between the groups (Supplemental Table 3).
Among included subjects, 142 (20.5%) had GDM, of which 99.3% were diagnosed based on an elevated antenatal 2 h glucose measure alone. Participant characteristics are summarized in Table 1. Compared with participants with normal glucose tolerance post-delivery, participants with dysglycaemia 4-6 years post-delivery (18.6%) were older, had higher BMI from preconception to post-delivery, and were more likely to have a family history of diabetes, pregnancy-induced hypertension, and insulin treatment for GDM in the index pregnancy (Table 1).
GDM and post-delivery dysglycaemia. In both unadjusted and adjusted analyses, GDM was associated with a significantly higher risk of having any dysglycaemia (IFG/IGT/T2D) and T2D post-delivery. Among mothers with a GDM-complicated pregnancy, 43.4% developed dysglycaemia at 4-6 years post-delivery.

Pre-pregnancy BMI and post-delivery dysglycaemia. Compared to normal weight women, women who were overweight or obese pre-pregnancy had a significantly higher risk of developing any dysglycaemia and T2D post-delivery, with a gradation of effect with increasing BMI. In adjusted models, overweight and obese women had approximately two times and three times the risk (both P < 0.01), respectively, of developing dysglycaemia compared to normal weight mothers (Table 2). These associations were independent of GDM diagnosis, as both pre-pregnancy overweight and GDM remained statistically significant risk factors for dysglycaemia when they were mutually adjusted for (results not shown). The relative risks of developing T2D were even greater: almost four times for overweight and seven times for obese women (Table 2).
Gestational weight gain (GWG) and post-delivery dysglycaemia.
Overall, we did not observe any consistent association between total GWG, or GWG rate before or after GDM diagnosis, and the risk of developing any dysglycaemia or T2D post-delivery (Table 2). An exception was noted for inadequate total GWG, which was associated with a higher risk of T2D [RR (95% CI) 3.03 (1.03, 8.92)] compared with adequate total GWG (Table 2).
Post-delivery weight retention (PDWR), BMI change and post-delivery dysglycaemia. PDWR (≥ 5 kg with reference to pre-pregnancy weight) at 4 years post-delivery was associated with 1.5 times the risk of dysglycaemia; no consistent associations were observed for PDWR at 18 months. However, when weight change was categorised according to pre-pregnancy and post-delivery lean (< 23 kg/m 2 ) and overweight/obese (OWOB; ≥ 23 kg/m 2 ) status, women who were OWOB pre-pregnancy and remained OWOB at 18 months or 4 years post-delivery had consistently higher risks of developing any dysglycaemia (approximately three times) and T2D (approximately four times), as compared with women who were lean at both time-points (Table 2). Moreover, albeit based on small numbers, participants who transitioned from pre-pregnancy lean to post-delivery OWOB at 18 months also showed an increased risk of dysglycaemia. Likewise, women who transitioned from pre-pregnancy OWOB to post-delivery lean at 4 years retained a higher risk of post-delivery dysglycaemia (Table 2).
Combinations of risk factors and post-delivery dysglycaemia.
We further investigated the combined influence of GDM, substantial PDWR, and pre-pregnancy lean/OWOB status on dysglycaemia. Participants with the lowest risk (i.e. non-GDM, no substantial PDWR at 4 years, and pre-pregnancy lean) were used as the reference group. Compared to this reference group, substantial PDWR alone (in pre-pregnancy lean and non-GDM participants) was associated with 2.46 times (95% CI 1.09, 5.55) the risk of dysglycaemia at 4-6 years post-delivery; the risk was further doubled [4.82 (2.31, 10.07)] if participants had also been OWOB pre-pregnancy in addition to having substantial PDWR (Fig. 1). GDM alone (lean and without substantial PDWR) carried 4.47 times (2.00, 9.98) the risk of dysglycaemia compared with the reference, a magnitude similar to that of the non-GDM group with both of the two other risk factors (i.e. pre-pregnancy OWOB and substantial PDWR). Having these two further risk factors on top of GDM incrementally increased the relative risk of post-delivery dysglycaemia. In participants with all three risk factors, the risk of developing dysglycaemia 4-6 years post-delivery was 10.64 times as high (5.02, 22.58) as in participants with none of the risk factors (Fig. 1). When the reference group was changed to GDM participants without PDWR who were lean pre-pregnancy, GDM participants with PDWR who were OWOB pre-pregnancy had an adjusted relative risk of 2.38 (1.29, 4.41) of developing post-delivery dysglycaemia, indicating that having PDWR and pre-pregnancy OWOB exacerbated the adverse influence of GDM. In a sensitivity analysis with GDM defined using partial IADPSG criteria without the 1 h glucose measure, the overall trends of a higher risk with an increasing number of risk factors, as compared to participants without any risk factors, were similar, but the effect estimates were attenuated (see Supplemental Fig. 2).
Discussion
In this multi-ethnic Asian prospective cohort, women who had a GDM-complicated pregnancy had 12 times the risk of developing T2D within 4-6 years after the index pregnancy compared to non-GDM cases. Overall, 43.4% of women who had GDM developed dysglycaemia within 4-6 years post-delivery, representing a substantial proportion who required clinical management. Independent of GDM but to a lesser extent, pre-pregnancy OWOB and (separately) substantial PDWR also increased the risk of dysglycaemia post-delivery when compared to the lowest risk group. Although each of these risk factors (GDM, pre-pregnancy OWOB, and substantial PDWR) incrementally increased the risk of dysglycaemia, having GDM alone contributed a risk equivalent to the combination of having pre-pregnancy OWOB and substantial PDWR without GDM. The highest risk was observed when all three risk factors were present, with ten times the risk of post-delivery dysglycaemia compared to those with none of these risk factors. No consistent associations were observed between GWG and post-delivery dysglycaemia. To reduce the risk of long-term dysglycaemia, our study highlights the need for a combination of public health messaging to maintain BMI in a healthy range prior to pregnancy, combined with weight management interventions (such as improvement of diet and increased physical activity) 29 after pregnancy, especially in those who had pregnancies complicated by GDM.
Our work contributes to the existing knowledge on the development of type 2 diabetes after a GDM-complicated pregnancy 4 , particularly amongst multi-ethnic Asian women. In accord with published observations 17,30,31 , women with a history of GDM in our cohort demonstrated a high incidence of impaired glucose regulation (43.4%, of which 12.6% was consistent with new-onset T2D) within a relatively short period of 4-6 years after delivery. This increased risk of T2D [unadjusted and adjusted RR (95% CI) 13.84 (5.26, 36.94) and 12.07 (4.55, 32.02), respectively] is higher than those reported in the meta-analyses by Bellamy et al. 4 [pooled unadjusted RR (95% CI) 7.43 (4.79, 11.51)] and Vounzoulaki et al. 6 [pooled adjusted RR 9.51 (7.14-12.67)], which included studies conducted over longer periods of time (up to 28 years after delivery), many of them performed in White Caucasian and Western populations. The magnitude of risk we report here is more akin to other studies conducted in an Asian context. An Indian cohort revealed that 32.5% of women with a history of GDM progressed to T2D when screened at a median of 14 months post-delivery 32 . A Korean study also reported that 17% of women with a history of GDM developed T2D by 4 years post-delivery 17 . Universal GDM screening, an approach increasingly advocated by international authorities 33 and used in our study population, addresses a common limitation in the literature, as no assumptions were made on the GDM status of those not screened. We and others have shown that selective screening of GDM based on risk factors could result in close to half of the GDM cases being missed 34,35 , and therefore misclassified as non-GDM. Moreover, studies of populations who were only selectively screened during pregnancy are more likely to be biased towards inclusion of those who already had pre-existing risk factors and thus a higher baseline metabolic risk; if such women screened negative and were treated as 'controls' in assessments of associations between GDM and T2D, the impact of GDM on T2D development could be underestimated due to dilution of contrast.
Previous studies have primarily investigated the impact of pre-pregnancy weight, GWG and post-delivery weight retention cross-sectionally, at specific points in time, on the development of T2D 12,16-18 . A strength of our study is that we considered the weight status of a woman longitudinally, from pre-pregnancy through postpartum, to assess its combined influence with GDM status on the development of post-delivery dysglycaemia.
Our result is in accordance with another Asian study, which showed that post-delivery weight retention or gain during 4 years of follow-up, adjusted for pre-pregnancy BMI and last post-delivery follow-up BMI, was associated with an increased risk of T2D in women with a history of GDM 17 . However, in our study we also demonstrated that post-delivery weight retention or gain even without a history of GDM was associated with an increased risk of dysglycaemia 4-6 years post-delivery. In our population, GWG had limited implications for the development of post-delivery dysglycaemia. A recent meta-analysis assessing the effectiveness of lifestyle interventions for T2D prevention also reported that, among women with GDM, interventions initiated during pregnancy were not effective in reducing the risk of post-delivery T2D; nonetheless, only four studies were included 36 . Our observation that women with inadequate total GWG had a higher risk of T2D may represent reverse causation, where some women with metabolic risk factors chose to adopt a healthier lifestyle while pregnant, thus gaining less weight. Lifestyle intervention after a GDM-complicated delivery has been shown to be highly effective for the prevention of T2D [pooled RR (95% CI) from ten randomized controlled trials: 0.57 (0.42, 0.78)] 36 . It is also cost-effective, if not cost-saving 37,38 . Using a mathematical model, it was estimated that at least two disability-adjusted life years (DALYs) were averted with proper post-delivery lifestyle management 38 . Among women who were diagnosed with GDM in our study, 43.4% had an abnormal OGTT finding 4-6 years post-delivery and would have benefited from early intervention immediately after delivery. This includes 30.8% who had IFG or IGT, in whom the progression towards T2D can be prevented or delayed 39,40 .
A common underlying mechanism for GDM development is relative pancreatic insufficiency (β-cell dysfunction) 41 , which is possibly the predominant mechanism in women with normal BMI and among East Asian ethnicities including the Chinese 42 . An increase in insulin resistance is an important normal physiological change with advancing gestation to preserve nutritional supply to the fetus 43 , but the resulting increased pancreatic demand of such maternal adaptation is postulated to accelerate ongoing pancreatic β-cell exhaustion, leading to increased T2D risk post-delivery 44 . Alternatively, in OWOB women, excessive adiposity may promote a pro-inflammatory state and insulin resistance, which contribute to both GDM development and later T2D 45 . Both types of mechanisms could thus result in additive effects that may underlie our study observations. Several limitations of this study need to be acknowledged. The pre-pregnancy weight, which was self-reported by the participants at study enrolment, may be affected by recall error. Nonetheless, the self-reported pre-pregnancy weight and measured booking weight in the GUSTO cohort were highly correlated (ρ = 0.96). BMI is used in this study as a measure of adiposity, as is common in epidemiological studies. However, we acknowledge that the use of BMI is suboptimal since it does not take differences in body composition into account. The antenatal OGTT at the time of the study visit in 2010 was conducted based on 2 time-points (fasting and 2 h), and GDM was diagnosed using the WHO 1999 criteria, prior to the release of the IADPSG/WHO 2013 criteria. We had previously reported that if we had adopted the IADPSG/WHO 2013 criteria, without the 1 h glucose measurement, the GDM incidence in GUSTO would have been reduced because of the raised threshold for 2 h glucose (and the lack of 1 h glucose), but the post-delivery dysglycaemia risk would have remained similar 46 . Here, we observed in our sensitivity analysis that had the IADPSG/WHO 2013 criteria been adopted, the trends of a higher risk of developing future dysglycaemia with an increasing number of risk factors (IADPSG-GDM, PDWR, pre-pregnancy OWOB) remained, with some attenuation of the effect estimates. This could be due to several factors that diluted between-group contrasts and BMI effects: the new non-GDM group may have been contaminated by (1) previously diagnosed GDM cases under the WHO 1999 criteria with an intermediate 2 h glucose between 7.8 and 8.4 mmol/L, who received healthy lifestyle advice and treatment during pregnancy with possible persistent effects post-delivery, and are now reclassified as non-GDM cases, and (2) missed diagnoses of new GDM cases under the IADPSG/WHO 2013 criteria (due to lack of data) in whom there would only have been an isolated abnormal 1 h glucose. Therefore, our results based on the retrospective adoption of the newer criteria should be interpreted with caution. The maternal postnatal OGTT was conducted only at 4-6 years post-delivery and not before; thus the timing of disease onset is unknown and a Cox proportional-hazards regression analysis could not be conducted. In addition, of the initial 1165 participants who had a pregnancy OGTT conducted, only 59.4% (n = 692) went on to have a postnatal OGTT. The modest sample size is a limitation in such modelling work and our findings warrant replication in other cohorts.
Furthermore, there could be potential selection bias, as the women with both antenatal and postnatal OGTTs were older, tended to have higher educational attainment, and were less likely to be nulliparous or to have had pregnancy-induced hypertension; our observed associations in women who were generally healthier and of higher socio-economic status could be an underestimate for populations with higher underlying risks.
In conclusion, GDM, pre-pregnancy overweight/obesity and post-delivery weight retention independently increase the risk of dysglycaemia at 4-6 years after delivery, although GDM itself poses the highest risk. Overall, the greatest increase in risk is observed in women with all three risk factors: a GDM-complicated pregnancy, overweight/obesity pre-pregnancy and subsequent substantial post-delivery weight retention. As obesity is a modifiable risk factor, the results of this study support the importance of attaining a healthy weight before pregnancy and avoiding weight retention or gain post-delivery. Unfavourable peri-pregnancy weight status and the high risk of women with a history of GDM progressing to prediabetes and T2D within a relatively short period of time are factors driving further escalation of the epidemic of non-communicable diseases, at immense personal, societal, and global health and economic cost. Effective prevention strategies are urgently needed. Pregnancy and post-delivery are times of intensive engagement with healthcare professionals and represent potential opportunities for education and management. However, focusing only on gestational weight gain and interventions during pregnancy alone is unlikely to have a major impact on women's future health. Instituting preconception care 47 and effective post-delivery follow-up, especially for those who had GDM, can provide windows of opportunity for promoting long-term health.
Data availability
Data are available upon request to the GUSTO team for researchers who meet the criteria for access to confidential data.
"Medicine",
"Biology"
] |
Classical Prognostic Factors Predict Prognosis Better than Inflammatory Indices in Locally Advanced Cervical Cancer: Results of a Comprehensive Observational Study including Tumor-, Patient-, and Treatment-Related Data (ESTHER Study)
Systemic inflammation indices have been found to correlate with therapeutic outcome in several cancers. This study retrospectively analyzes the predictive role of a broad range of systemic inflammatory markers in patients with locally advanced cervical cancer (LACC), including patient-, tumor-, and treatment-related potential prognostic factors. All patients underwent definitive chemoradiation, and pretreatment values of several inflammatory indices (neutrophil/lymphocyte ratio (NLR), platelet/lymphocyte ratio, monocyte/lymphocyte ratio, systemic immune-inflammation index (SII), leukocyte/lymphocyte ratio, combination of platelet count and NLR, aspartate aminotransferase/platelet ratio index, aspartate aminotransferase/lymphocyte ratio index, systemic inflammatory response index, and aspartate transaminase/neutrophil ratio index) were calculated. Their correlation with local control (LC), distant metastasis-free (DMFS), disease-free (DFS), and overall survival (OS) was analyzed. One hundred and seventy-three patients were included. At multivariable analysis, significant correlations were recorded between clinical outcomes and older age, advanced FIGO stage, lower hemoglobin levels, larger tumor size, and higher body mass index values. Among the inflammatory indices, the multivariate analysis showed only a significant correlation between higher SII values and lower DMFS rates (p < 0.01). Our analysis showed no significant correlation between the indices and DFS or OS. Further studies are needed to clarify the role of inflammation indices as candidates for inclusion in predictive models in this clinical setting.
Introduction
Cervical cancer is one of the most common cancers worldwide [1]. Concurrent chemoradiation (CRT) is the standard treatment option for patients with locally advanced cervical cancer (LACC). Although CRT achieves high rates of local tumor control [2], about one third of patients show treatment failure after the treatment [3,4]. In the literature, several prognostic models, including for cervical cancer patients, have been published in recent years. They could help clinicians predict clinical outcomes, allowing increasingly personalized treatment based on stage, risk of recurrence, and demographic characteristics. Multiple predictors have been studied and included in predictive models and, in the LACC setting, tumor size, histological type, lymph node metastases, and FIGO stage are prognostic factors significantly related to overall survival (OS) [5,6]. Furthermore, anemia has been known for decades to be a negative prognostic factor in LACC patients [7-10]. However, among published predictive models there is often heterogeneity in clinical setting, analyzed outcomes and included predictors, which sometimes makes it difficult to apply a model in real daily practice.
However, most of these studies analyzed only one index or a limited number of indices, with partial assessment of potential confounders. Therefore, the aim of this study was to analyze the predictive role of a broad range of pre-treatment nutritional and systemic inflammatory markers in a large population of patients with LACC treated with standard CRT, including clinical prognostic factors such as clinical, nutritional, tumor-related, and treatment-related data.
Aim and Design of the Study
The aim of this study was to assess the prognostic impact in LACC of different pretreatment nutritional and systemic inflammation indices on the following clinical endpoints: local control (LC), distant metastasis-free survival (DMFS), DFS, and OS. We retrospectively analyzed patients treated in our institution from July 2007 to July 2021 and enrolled in an observational study approved by our local Ethical Committee (ESTHER study, code CE 973/2020/Oss/AOUBo). Patients signed an informed consent to participate in the study. No patients were excluded from our analysis, in order to keep the results as close as possible to daily practice.
Staging, Treatment, and Follow-Up
LACCs were retrospectively classified according to the 2018 FIGO staging system [28]. Patients underwent definitive CRT using a combination of external beam radiotherapy (EBRT) to the pelvis (45-50 Gy, 1.8-2 Gy per fraction) and intracavitary interventional radiotherapy (brachytherapy, BRT, either pulsed or high dose rate) to reach a total equivalent dose of 85-90 Gy on the macroscopic primary tumor. The clinical target volume (CTV) was defined as the gross tumor volume, the uterus, the upper third of the vagina, the parametria, and the pelvic nodes (internal, external, and common iliac, obturator, and presacral nodes) with a 7 mm expansion. Para-aortic lymph nodes were irradiated only in case of nodal metastases in this nodal region. The planning target volume was defined as the CTV plus a 10 mm isotropic expansion. Suspicious or metastatic pelvic nodes received a sequential or simultaneously integrated boost up to a total equivalent dose of 55-65 Gy. A daily check of the patient set-up was performed by electronic portal imaging device until 2015 and subsequently by on-board cone-beam CT [29]. Concurrent chemotherapy consisted of intravenous cisplatin (40 mg/m² weekly). Patients were followed up with physical examination every three months for two years and then every six months for the next three years. A thoracic-abdominal-pelvic computed tomography (CT) scan was performed if clinically indicated, or every six months in the first two years and every year in the following three years.
Evaluated Parameters
Patient-Related Data
The following data were included in this analysis: age, body mass index (BMI, calculated as weight (kg) divided by the square of height (m)), hemoglobin level (Hb, in g/100 mL), and prognostic nutritional index (PNI, calculated as 10 × serum albumin (g/dL) + 0.005 × total lymphocyte count (per mm³)). All values refer to measurements taken before the start of CRT.
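For concreteness, the sketch below (Python) shows how these patient-related variables and the blood-count-based indices listed earlier can be computed. The SII and SIRI formulas are the definitions in common use, since the paper does not restate them; all variable names and example values are illustrative.

```python
# Illustrative calculation of the nutritional and inflammation indices used in
# this study (counts in cells per mm^3; definitions as commonly used in the
# literature, not restated explicitly in the paper).

def body_mass_index(weight_kg: float, height_m: float) -> float:
    """BMI = weight (kg) / height (m)^2."""
    return weight_kg / height_m ** 2

def prognostic_nutritional_index(albumin_g_dl: float, lymphocytes: float) -> float:
    """PNI = 10 x serum albumin (g/dL) + 0.005 x total lymphocyte count."""
    return 10.0 * albumin_g_dl + 0.005 * lymphocytes

def inflammation_indices(neutrophils, lymphocytes, platelets, monocytes, leukocytes):
    return {
        "NLR": neutrophils / lymphocytes,
        "PLR": platelets / lymphocytes,
        "MLR": monocytes / lymphocytes,
        "LLR": leukocytes / lymphocytes,
        "SII": platelets * neutrophils / lymphocytes,    # systemic immune inflammation index
        "SIRI": neutrophils * monocytes / lymphocytes,   # systemic inflammatory response index
    }

print(prognostic_nutritional_index(albumin_g_dl=4.0, lymphocytes=1800))    # 49.0
print(round(inflammation_indices(4200, 1800, 250_000, 500, 7000)["SII"]))  # 583333
```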
Tumor-Related Data
The following data were included in this analysis: histological type (squamous cell carcinoma, adenocarcinoma), International Federation of Gynecology and Obstetrics (FIGO) stage (2018 version), clinical tumor stage, clinical nodal stage, and maximum tumor diameter.
Treatment-Related Data
The following data were included: radiotherapy technique (3-D conformal radiotherapy, intensity modulated radiotherapy, or volumetric modulated arc therapy), EBRT dose (Gy) and fractionation on the pelvis, brachytherapy boost dose (Gy), total tumor dose (Gy), and overall treatment time (EBRT plus BRT, in days).
Statistical Analysis
Patient, tumor, and treatment characteristics were reported using descriptive statistics. Categorical data were reported as numbers and percentages, continuous data as medians and ranges. LC was calculated as the time from CRT start to local-regional recurrence, as evidenced by imaging studies or clinical findings, or until last follow-up in patients without pelvic recurrence. DMFS was calculated as the time from CRT start to distant failure, as evidenced by imaging studies or clinical findings, or until last follow-up in patients without extra-pelvic recurrence. DFS was calculated as the time from CRT start to any treatment failure, or until last follow-up in patients without LACC recurrence. OS was calculated as the period from CRT start until death or the date of the last follow-up. For each of the four endpoints, a univariate Cox regression was performed including all the variables specified above. A multivariate Cox regression was then performed including all variables showing a p-value less than 0.25 in univariate analysis. A 5% level of statistical significance was used (p < 0.05). In both univariate and multivariate analysis, the inflammation indices were treated as continuous variables, without dichotomizing them at prespecified cut-offs; the same was done for other continuous variables, such as age, BMI, and tumor diameter. Data were analyzed using SPSS for Windows (version 20.0; SPSS Inc., Chicago, IL, USA).
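A minimal sketch of this two-step selection, assuming survival data in a pandas DataFrame and using the open-source lifelines package (the authors used SPSS 20.0, so this is a re-implementation of the procedure, not their code; column names are hypothetical):

```python
import pandas as pd
from lifelines import CoxPHFitter

def screen_then_multivariate(df, time_col, event_col, predictors, alpha_in=0.25):
    # Step 1: one univariate Cox regression per candidate predictor;
    # keep those with p < 0.25, as described above.
    selected = []
    for var in predictors:
        cph = CoxPHFitter()
        cph.fit(df[[time_col, event_col, var]], duration_col=time_col, event_col=event_col)
        if cph.summary.loc[var, "p"] < alpha_in:
            selected.append(var)
    # Step 2: multivariate Cox model on the screened variables;
    # significance is then read off at p < 0.05.
    final = CoxPHFitter()
    final.fit(df[[time_col, event_col] + selected], duration_col=time_col, event_col=event_col)
    return final

# Example call (continuous variables entered as-is, without cut-offs):
# model = screen_then_multivariate(df, "os_months", "death", ["age", "BMI", "SII", "NLR"])
# print(model.summary[["coef", "p"]])
```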
Patient Characteristics
One hundred and seventy-three patients were included in this analysis. Patient characteristics are reported in Table 1. Median follow-up was 36 months (range: 3-151 months).
Treatment Characteristics
All patients underwent concurrent CRT with weekly cisplatin. Treatment characteristics are shown in Table 1. Positive lymph nodes were treated in 57 patients with an additional dose delivered either as a sequential or a simultaneous boost. BRT was delivered in all patients as pulsed or high dose rate BRT. Our retrospective analysis included all LACC patients treated in our institution from July 2007 to July 2021, including those who interrupted or modified the treatment, mainly for clinical reasons. In this regard, one patient with a known psychiatric pathology prematurely stopped her EBRT treatment (26 Gy) due to poor compliance, so the dose of the BRT boost was personalized (42 Gy). Moreover, one patient with a very large and irregular tumor was boosted with EBRT after the first BRT fraction (4 Gy), because the anatomy of the tumor and organs at risk did not permit an accurate and safe BRT treatment.
Clinical Outcomes
During follow-up, 30 patients showed a local-regional recurrence, while distant metastases were recorded in 42 patients. Overall, 60 patients showed treatment failure and 42 patients died. Two-year LC, DMFS, DFS, and OS were 83.0%, 79.9%, 69.1%, and 87.4%, respectively; 5-year LC, DMFS, DFS, and OS were 82.1%, 74.7%, 64.0%, and 71.5%, respectively. Median LC, DMFS, and DFS were not reached, while median OS was 122 months (95%CI: 117-NR). Older patient age was significantly correlated with lower DMFS rates at both univariate and multivariate analysis. Similarly, older patients had lower OS rates at both univariate and multivariate analysis. Furthermore, higher BMI values were significantly correlated with worse DFS and worse OS, both in univariate and multivariate analysis. Moreover, patients with Hb values > 12 g/dL showed, compared to patients with Hb < 10 g/dL, better LC, better DFS, and higher OS rates. Finally, patients with Hb > 12 g/dL showed better DFS at multivariate analysis even compared to patients with Hb levels between 10 and 12 g/dL (Table 2).
Compared to patients with FIGO stage I-II LACC, patients with FIGO stage III showed, at univariate analysis, worse results in terms of LC, DMFS, DFS, and OS; at multivariate analysis, only the negative correlations with DMFS, DFS, and OS were confirmed. Furthermore, patients with FIGO stage IV, compared with stage I-II, showed worse LC and DFS (both: p < 0.01) at univariate analysis, but these correlations were not confirmed at multivariate analysis. Finally, larger tumor diameter correlated with worse LC, DFS, and OS at univariate analysis, while multivariate analysis confirmed only the negative correlation with LC (Table 2).
Moreover, none of the treatment-related parameters was significantly correlated with any of the analyzed outcomes.
Inflammatory Indices
Higher COP-NLR scores and higher ANRI values were significantly correlated with lower LC rates at univariate analysis, but these correlations were not confirmed at multivariate analysis. Higher SII values were significantly correlated with lower DMFS rates at both univariate and multivariate analysis, as well as with lower DFS rates at univariate analysis only. None of the analyzed indices showed significant correlations with OS (Table 2).
Discussion
In this comprehensive analysis of inflammatory indices together with patient-, tumor-, treatment-, and nutrition-related parameters, a negative prognostic impact of older age, advanced FIGO stage, lower hemoglobin levels, and larger tumor size was recorded in LACC patients treated with CRT plus BRT boost. These findings have been known for a long time, even if, at least with regard to age, the relationship with outcomes appears to follow a complex J-shaped nonlinear correlation, with some studies showing a worse prognosis even in younger patient subgroups [13,27].
In terms of nutritional parameters, our study showed a negative effect of BMI on DFS and OS, in line with previous studies [30]. However, as with age, the complex relationship between BMI and prognosis should be highlighted: not only high values but also lower than normal values (BMI < 18.5) seem to be associated with a worse prognosis [31,32]. Furthermore, studies based on the analysis of sarcopenia have given conflicting results in this setting, with some analyses showing a significantly unfavorable impact of this parameter on OS [33] and others failing to demonstrate this effect [34,35]. Our analysis did not show an impact of PNI on any of the evaluated endpoints, contrary to what was recorded in a previous study [12]. These contradictory results could arise from the different methodologies of the two analyses. Indeed, in the study by Haraga et al., the impact of the PNI was analyzed in combination only with age, nodal metastasis, FIGO stage, histological type, maximum tumor size, and PLR, while our study also included BMI, anemia, multiple inflammatory indices, and treatment characteristics. Moreover, Gangopadhyay reported a significant impact of PNI on complete response rate after CRT, but correlation with survival outcomes was not analyzed in that study [27].
Contrary to literature data [36], no impact of treatment-related parameters on any of the analyzed endpoints was recorded in our analysis. A possible explanation is the relative homogeneity of the delivered CRT and BRT, prescribed in a single center by the same group of radiation oncologists.
In terms of inflammation indices, our multivariate analysis confirmed only a significant correlation between increasing SII values and worse DMFS, in contrast to other studies reporting a significant correlation between pretreatment index values and DFS [13,18,19,22,37] or OS [16][17][18][19][20]22,24,26,37]. This difference can be explained by several factors. First, our study included the largest number of potentially confounding factors in the analysis (Table 3). Furthermore, unlike other studies, we did not evaluate the indices using predefined cut-offs, or cut-offs defined by ROC curve analysis, but considered their values as continuous variables. In fact, our aim was to screen several indices in order to identify those able to impact prognosis, even when accounting for multiple confounding factors.

Legend: ALRI: aspartate aminotransferase to lymphocyte ratio index; ANRI: aspartate transaminase to neutrophil ratio index; APRI: aspartate aminotransferase/platelet count ratio index; BLR: basophil/lymphocyte ratio; BMI: body mass index; cN+: clinical positive nodes; COP-NLR: combination of platelet count and neutrophil to lymphocyte ratio; CR: complete response; CRT: chemoradiation; DFS: disease free survival; ELR: eosinophil/lymphocyte ratio; FIGO: International Federation of Gynecology and Obstetrics; Hb: hemoglobin; LLR: leukocyte to lymphocyte ratio; MLR: monocyte to lymphocyte ratio; N: nodal; NLR: neutrophil to lymphocyte ratio; OS: overall survival; PFS: progression free survival; PLR: platelet to lymphocyte ratio; PNI: prognostic nutritional index; RT: radiotherapy; SII: systemic immune inflammation index; SIRI: systemic inflammatory response index; T: tumor.
However, it should be noted that other studies also did not observe a significant impact of pre-treatment inflammation indices on survival outcomes [11,14,15,23], or reported a significant correlation only with DFS but not with OS [13]. Here too, methodological or sample size differences could explain these dissimilarities.
In our study we analyzed only pre-treatment nutritional and inflammation indices. The reason for this choice is that predictive models in LACC patients seem useful only before, and not after, CRT. In fact, attempts to improve outcomes with treatments following CRT (e.g., adjuvant systemic treatments) have not been successful, as demonstrated by very recent publications [38,39]. However, it should be noted that some studies showed a significant prognostic impact of post-treatment inflammatory indices, or of pre-/post-treatment changes, even without significant correlations with pre-treatment values [19,23]. The results of these studies could be useful for planning further trials testing post-CRT adjuvant systemic therapies only in patient subgroups with a higher risk of treatment failure.
Our study has obvious limitations. The number of analyzed patients, although relatively large, may be too small to identify significant differences, at least for some subgroup analyses. Furthermore, even though we had planned a comprehensive analysis, some known prognostic factors were unavailable in our series. For example, the squamous cell carcinoma antigen (SCC), useful in monitoring during follow-up [40] but also able to predict prognosis [41], was not included in the analysis due to the small number of patients with available data. Furthermore, even if our aim was to provide a comprehensive analysis of the inflammation indices in LACC, some of the indices used in the literature were not considered, such as the platelet-to-neutrophil ratio, monocyte-to-neutrophil ratio, platelet-to-white blood cell ratio, platelet-to-monocyte ratio, lymphocyte-to-monocyte ratio, eosinophil-to-lymphocyte ratio, and eosinophil-to-monocyte ratio [42]. Finally, even if all our patients fell into the LACC category, the series was still rather inhomogeneous, since FIGO stage ranged from IB to IVA. This issue could have further limited the possibility of detecting a prognostic effect of inflammation indices, considering that the latter, on the basis of a meta-analysis [43], seems to vary according to tumor stage and patient age.
Even considering these limitations, based on our study and other published series, further analyses of the prognostic impact of inflammation indices in LACC seem warranted, also considering their favorable cost-effectiveness ratio. From a clinical practice point of view, however, incorporating the assessment of inflammatory indices into LACC management, while potentially beneficial, currently seems premature given the variability of the scientific evidence.
Therefore, based on the available reports, the most promising candidate for inclusion in predictive models seems to be the NLR, given the significant prognostic impact recorded in several analyses [16][17][18][19][20]24,37], even though this was not confirmed in others [15,23] or in our study. Moreover, future studies should analyze the possible combined impact of multiple inflammation indices. Indeed, the analysis by Lee et al. [14] showed worse OS only in case of increased pretreatment values of both NLR and PLR.
Finally, further analyses are needed to correlate the values and variations in inflammatory indices with the biological and molecular characteristics of the tumor and in particular to understand how these markers may be related to tumor response to therapies, both locally and out-of-target.
Table 2. Univariate and multivariable Cox analysis.
Table 3. Comparison between the results of previous analyses and those of our series. | 3,922.8 | 2023-08-01T00:00:00.000 | [
"Medicine",
"Biology"
] |
mT6: Multilingual Pretrained Text-to-Text Transformer with Translation Pairs
Multilingual T5 pretrains a sequence-to-sequence model on massive monolingual texts, which has shown promising results on many cross-lingual tasks. In this paper, we improve multilingual text-to-text transfer Transformer with translation pairs (mT6). Specifically, we explore three cross-lingual text-to-text pre-training tasks, namely, machine translation, translation pair span corruption, and translation span corruption. In addition, we propose a partially non-autoregressive objective for text-to-text pre-training. We evaluate the methods on seven multilingual benchmark datasets, including sentence classification, named entity recognition, question answering, and abstractive summarization. Experimental results show that the proposed mT6 improves cross-lingual transferability over mT5.
Multilingual pretrained models are typically trained on multilingual unlabeled text with unsupervised language modeling tasks, e.g., masked language modeling (Devlin et al., 2019), causal language modeling (Conneau and Lample, 2019), and span corruption (Raffel et al., 2020). These unsupervised tasks are built upon large-scale monolingual texts. In addition, several studies propose cross-lingual tasks that utilize translation data from multilingual parallel corpora, such as translation language modeling (Conneau and Lample, 2019), cross-lingual contrast (Chi et al., 2021a), and bidirectional word alignment (Hu et al., 2020a). Thanks to the translation data, the pretrained models produce better-aligned cross-lingual representations and obtain better cross-lingual transferability.
Recently, the multilingual text-to-text transfer Transformer (MT5; Xue et al. 2020) achieves state-of-the-art performance on several cross-lingual understanding benchmarks. MT5 inherits the benefits of T5 (Raffel et al., 2020), which treats every text processing problem as a text-to-text problem, i.e., the problem of generating some target text conditioned on the input text. Despite the effectiveness of MT5, how to improve MT5 with translation data is still an open problem.
In this paper, we present MT6, which improves the multilingual text-to-text transfer Transformer with translation data. MT6 differs from MT5 in terms of both pre-training tasks and the training objective. We present three cross-lingual tasks for text-to-text Transformer pre-training, i.e., machine translation, translation pair span corruption, and translation span corruption. In the translation span corruption task, the model is trained to predict text spans based on an input translation pair. The cross-lingual tasks encourage the model to align representations of different languages. We also propose a new objective for text-to-text pre-training, called partially non-autoregressive (PNAT) decoding. The PNAT objective divides the target sequence into several groups, and constrains the predictions to be conditioned only on the source tokens and the target tokens from the same group.
We conduct experiments on both multilingual understanding and generation tasks. Our MT6 model yields substantially better performance than MT5 on eight benchmarks. We also provide an empirical comparison of the cross-lingual pre-training tasks, where we evaluate several variants of MT6 under the same pre-training and fine-tuning procedure.
Moreover, our analysis indicates that the representations produced by MT6 are more cross-lingually transferable and better-aligned than those of MT5. The contributions are summarized as follows:
• We introduce three cross-lingual tasks for text-to-text Transformer pre-training, which improve MT5 with translation data.
• We propose a partially non-autoregressive objective that pretrains the decoder to use more information from the source sequence.
• We provide extensive evaluation results of various pre-training tasks and training objectives.
Background on T5 and MT5
Multilingual text-to-text transfer Transformer (MT5; Xue et al. 2020) is the multilingual variant of T5 (Raffel et al., 2020) pretrained on the mC4 (Xue et al., 2020) dataset, which consists of natural text in 101 languages drawn from the public Common Crawl web scrape. The backbone architecture of MT5 is the simple encoder-decoder Transformer (Vaswani et al., 2017), which is trained in a unified text-to-text manner. Specifically, text-based NLP problems are formulated as text-to-text transfer, i.e., the model is trained to predict the target text conditioned on the input source text. For example, in text classification, the model predicts the label text rather than a class index. This feature enables MT5 to be fine-tuned with the same training objective for every task. Formally, let x and y denote the input sequence and the output sequence; the loss function of training the x → y transfer is

$$\mathcal{L}(x \to y) = -\sum_{i=1}^{|y|} \log p(y_i \mid y_{<i}, x), \quad (1)$$

where $y_{<i} = y_1, \cdots, y_{i-1}$. With the unified text-to-text formulation, a pre-training task can be designed by constructing the input and output text sequences. Specifically, MT5 employs the span corruption task as its pre-training task, an unsupervised masked language modeling task; Figure 1 provides an example of constructing the input and output sequences. Given a natural sentence s, several spans of s are first randomly selected for masking. The input sequence is then constructed by replacing the selected spans with unique mask tokens. The output sequence is the concatenation of the original tokens of the masked spans, each of which starts with a unique mask token to indicate the span to be decoded. We denote these two operations as $g_i$ and $g_o$, standing for converting the original sentence s into the input or the output format of span corruption. Thus, the loss function of the span corruption task can be written as

$$\mathcal{L}_{\text{SC}}(s) = \mathcal{L}\big(g_i(s) \to g_o(s)\big). \quad (2)$$

Figure 1: Example of the span corruption task (Raffel et al., 2020) used in T5 and MT5.
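The following short sketch illustrates the $g_i$/$g_o$ construction on whitespace tokens (the real implementation works on subword tokens and samples the spans randomly; here the spans are passed in explicitly for readability):

```python
def span_corrupt(tokens, spans):
    """spans: sorted, non-overlapping (start, end) index pairs to mask."""
    inp, out, cursor = [], [], 0
    for k, (start, end) in enumerate(spans):
        sentinel = f"[M{k + 1}]"
        inp.extend(tokens[cursor:start])
        inp.append(sentinel)          # g_i: replace each span with a unique mask token
        out.append(sentinel)          # g_o: each decoded span starts with its mask token
        out.extend(tokens[start:end])
        cursor = end
    inp.extend(tokens[cursor:])
    return inp, out

tokens = "Thanks for your invitation last week .".split()
inp, out = span_corrupt(tokens, [(2, 4), (5, 6)])
print(inp)  # ['Thanks', 'for', '[M1]', 'last', '[M2]', '.']
print(out)  # ['[M1]', 'your', 'invitation', '[M2]', 'week']
```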
Methods
In this section, we first present three text-to-text pre-training tasks for improving MT5 with translation data. Then, we introduce the partially non-autoregressive decoding objective, and provide the detailed fine-tuning procedures for the classification, question answering, and named entity recognition tasks.
Cross-lingual Pre-training Tasks with Translation Pairs
As shown in Figure 2, we illustrate an overview of our cross-lingual text-to-text pre-training tasks. Given the same translation pair, the three tasks construct different input and output sequences.
Machine Translation
Machine translation (MT) is a typical text-to-text task with the goal of translating a sentence from the source language into a target language. It is a natural design to use MT as a text-to-text pre-training task for sequence-to-sequence learning (Chi et al., 2020). Let e and f denote a sentence and its corresponding translation. We directly use e and f as the input and output sequences, respectively. The loss function of MT is

$$\mathcal{L}_{\text{MT}}(e, f) = \mathcal{L}(e \to f).$$

Figure 2: Overview of three cross-lingual text-to-text pre-training tasks. For each task, an example of the input and target text is shown (e.g., the pair "Thanks for your invitation last week ." / "Merci pour votre invitation la semaine dernière ."). The words marked with "×" are randomly replaced with unique mask tokens like [M1]. Notice that in the translation span corruption task, tokens are masked in only one language.
Translation Pair Span Corruption
Inspired by the translation masked language modeling (Conneau and Lample, 2019) task, we propose the translation pair span corruption (TPSC) task that aims to predict the masked spans from a translation pair instead of a monolingual sentence. Let e and f denote a sentence and its corresponding translation.
We concatenate e and f into a single sequence and perform span corruption on the concatenation. Formally, we construct the input and output sequences as $g_i([e; f])$ and $g_o([e; f])$, so the loss function of TPSC is

$$\mathcal{L}_{\text{TPSC}}(e, f) = \mathcal{L}\big(g_i([e; f]) \to g_o([e; f])\big).$$
Translation Span Corruption
A potential issue of translation pair span corruption is that the spans in the target sequence can be organized in an unnatural word order. As shown in Figure 2, in the output sequence of TPSC the French word "invitation" can appear after the English word "week", which could harm the language model of the decoder. This motivates us to propose the translation span corruption (TSC) task, where we only mask and predict spans in one language. Given a translation pair (e, f), we randomly select e or f to perform span corruption. Without loss of generality, we consider e as the sentence for span corruption. Then, the input and output sequences are constructed as $[g_i(e); f]$ and $g_o(e)$, respectively. With the resulting input and output sequences, the loss function of TSC can be written as

$$\mathcal{L}_{\text{TSC}}(e, f) = \mathcal{L}\big([g_i(e); f] \to g_o(e)\big).$$
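Reusing span_corrupt from the earlier sketch, a TSC training example can be assembled as below (a simplification: the span positions are assumed to be valid for whichever side is selected):

```python
import random

def make_tsc_example(e_tokens, f_tokens, spans):
    # Randomly pick one language to corrupt; the other side is kept intact.
    corrupt, intact = (e_tokens, f_tokens) if random.random() < 0.5 else (f_tokens, e_tokens)
    g_i, g_o = span_corrupt(corrupt, spans)
    return g_i + intact, g_o   # input = [g_i(e); f], output = g_o(e)
```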
Pre-training Objective: Partially Non-autoregressive Decoding
Recall that the predictions in MT5 are conditioned on both the source tokens and the target tokens to the left. When predicting tokens closer to the end, the model can use more information from the target sequence, resulting in insufficient training of the encoder. To encourage the model to utilize more information from the encoding side while preserving the ability of autoregressive decoding, we propose a new training objective for text-to-text training, called partially non-autoregressive decoding (PNAT); Figure 3 provides an example. Specifically, given a target sequence containing several spans, we divide the target sequence into groups, and train the model to decode each group separately. With the PNAT objective, a prediction is conditioned only on the source tokens and the target tokens from the same group. Consider a target sequence consisting of m spans. We divide the spans into $n_g$ groups, each of which contains $m/n_g$ consecutive spans. For the j-th group, we denote $l_j$ and $r_j$ as the start position and the end position, respectively. The PNAT objective is defined as

$$\mathcal{L}_{\text{PNAT}}(x \to y) = -\sum_{j=1}^{n_g} \sum_{i=l_j}^{r_j} \log p\big(y_i \mid y_{l_j}, \ldots, y_{i-1}, x\big).$$

The text-to-text loss $\mathcal{L}(x \to y)$ is a special case of $\mathcal{L}_{\text{PNAT}}(x \to y)$ with $n_g = 1$.
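In terms of implementation, PNAT amounts to replacing the usual causal decoder self-attention mask with a block-wise causal mask, as in the following sketch (group_ids maps each target position to its group; True marks an allowed attention edge):

```python
import numpy as np

def pnat_self_attention_mask(group_ids):
    g = np.asarray(group_ids)
    n = len(g)
    causal = np.tril(np.ones((n, n), dtype=bool))   # position i may see j <= i ...
    same_group = g[:, None] == g[None, :]           # ... but only within its own group
    return causal & same_group

# Six target positions split into n_g = 2 groups:
print(pnat_self_attention_mask([0, 0, 0, 1, 1, 1]).astype(int))
```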
The MT6 model is jointly pretrained on both monolingual and parallel corpora, using span corruption together with one of the three cross-lingual text-to-text tasks. For both tasks, we use partially non-autoregressive decoding as the training objective, dividing the target sequence into $n_g$ groups. The overall pre-training objective is to minimize

$$\mathcal{L} = \mathcal{L}^{\text{SC}}_{\text{PNAT}} + \mathcal{L}^{X}_{\text{PNAT}},$$

where $\mathcal{L}^{X}_{\text{PNAT}}$ stands for one of the loss functions of machine translation (MT; Section 3.1.1), translation pair span corruption (TPSC; Section 3.1.2), and translation span corruption (TSC; Section 3.1.3), with PNAT as the training objective.
Cross-lingual Fine-tuning
We fine-tune all parameters of the MT6 model with Equation (1) regardless of the end task. Unlike language generation tasks, language understanding tasks need to be pre-processed into the text-to-text format. We describe how to convert the following three types of language understanding tasks into the text-to-text format, i.e., how to construct the input and output sequences from the original examples.
Classification The goal of the text classification task is to predict the label of a given text. Following T5 (Raffel et al., 2020), we directly use the label text as the output text sequence. As an example, consider the MNLI natural language inference task. Given an input sentence pair of "You have access to the facts ." and "The facts are accessible to you .", the goal is to classify the input into the relationships of "entailment", "contradiction", or "neutral". The input and target sequences are constructed as

Input: bos You have access to the facts. eos The facts are accessible to you. eos
Output: bos entailment eos

Since multi-task fine-tuning is not the focus of this work, we do not prepend a task prefix to the input text. We also adopt a constrained decoding process, where the decoded text is constrained to be one of the labels.
Question Answering For the extractive question answering (QA) task, we concatenate the passage and the question as the input, and directly use the answer text as the target instead of predicting answer span positions. We provide an example of converting a QA training example into the text-to-text format.
Input: bos It has offices in Seoul, South Korea. eos Where is the office in South Korea? eos
Output: bos Seoul eos

We use constrained decoding for the QA tasks, where the tokens appearing in the input passage form the decoding vocabulary.
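One simple way to realize this constraint at inference time is to mask the logits of all tokens that do not occur in the input, along the lines of the hypothetical sketch below (the paper does not describe its implementation):

```python
import numpy as np

def constrain_to_input(logits, input_token_ids, special_ids=(0, 1)):
    """Keep only tokens from the input passage (plus bos/eos) decodable."""
    allowed = set(input_token_ids) | set(special_ids)
    masked = np.full_like(logits, -np.inf)
    for tid in allowed:
        masked[tid] = logits[tid]
    return masked   # greedy/beam search now selects only allowed tokens
```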
Named Entity Recognition In named entity recognition (NER), we do not directly use the original tag sequence as the output. We find that the model tends to repeatedly decode the "O" tag if it learns to decode tag sequences directly. Alternatively, we construct the target text by concatenating the entity spans, each of which starts with the entity tag and ends with the entity tokens. We show an example of converting a NER training example into the text-to-text format.
Input: bos Italy recalled Marcello Cuttitta . eos
Output: bos loc Italy sep per Marcello Cuttitta sep eos

Here loc and per are entity tags denoting location and person, and the sep tag marks the end of an entity span. We use the following constrained decoding rules: (1) after a bos token or a sep token, the model should decode entity tags or the end-of-sentence tag (eos); (2) in all other situations, the model should decode tokens from the input sentence or the sep token.
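A sketch of this target construction, assuming the source tags come in the usual BIO scheme (an assumption; the paper only shows the resulting format):

```python
def ner_target(tokens, bio_tags):
    out, k = ["<bos>"], 0
    while k < len(tokens):
        if bio_tags[k].startswith("B-"):
            out.append(bio_tags[k][2:].lower())   # entity tag, e.g. loc or per
            out.append(tokens[k]); k += 1
            while k < len(tokens) and bio_tags[k].startswith("I-"):
                out.append(tokens[k]); k += 1
            out.append("<sep>")                   # marks the end of the entity span
        else:
            k += 1                                # "O" tokens are skipped entirely
    return out + ["<eos>"]

print(ner_target("Italy recalled Marcello Cuttitta .".split(),
                 ["B-LOC", "O", "B-PER", "I-PER", "O"]))
# ['<bos>', 'loc', 'Italy', '<sep>', 'per', 'Marcello', 'Cuttitta', '<sep>', '<eos>']
```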
Setup
Data Following previous work on cross-lingual pre-training (Conneau et al., 2020; Chi et al., 2021a), we use natural sentences from CC-Net (Wenzek et al., 2019) in 94 languages for the monolingual text-to-text tasks. For the cross-lingual text-to-text tasks, we use parallel corpora of 14 English-centric language pairs, collected from MultiUN (Ziemski et al., 2016), IIT Bombay (Kunchukuttan et al., 2018), OPUS (Tiedemann, 2012), and WikiMatrix. Details of the pre-training data are described in the Appendix.
Training Details
In the experiments, we consider the small-size Transformer model (Xue et al., 2020), with $d_{\text{model}} = 512$, $d_{\text{ff}} = 1024$, 6 attention heads, and 8 layers for both the encoder and the decoder. We use the vocabulary provided by XLM-R (Conneau et al., 2020), and extend it with 100 unique mask tokens for the span corruption tasks. We pretrain our MT6 for 0.5M steps with batches of 256 length-512 input sequences. The model is optimized by the Adam optimizer (Kingma and Ba, 2015) with a linear learning rate scheduler. The pre-training procedure takes about 2.5 days on an Nvidia DGX-2 Station. Details of the pre-training hyperparameters are described in the Appendix.
XTREME Cross-lingual Understanding
To validate the performance of MT6, we evaluate the pretrained models on XTREME (Hu et al., 2020b), a widely used benchmark for cross-lingual understanding. Following MT5 (Xue et al., 2020), we consider six downstream tasks included in XTREME: named entity recognition (NER) on the WikiAnn (Pan et al., 2017; Rahimi et al., 2019) dataset in 40 languages, question answering (QA) on MLQA (Lewis et al., 2020b), XQuAD (Artetxe et al., 2020), and TyDiQA-GoldP (Clark et al., 2020), cross-lingual natural language inference on XNLI (Conneau et al., 2018), and cross-lingual paraphrase adversaries on PAWS-X. The models are evaluated under the cross-lingual transfer setting (Conneau et al., 2020; Hu et al., 2020b), in which models are fine-tuned only on English training data but evaluated on all target languages. Moreover, for each pretrained model, only one model is used for all languages, rather than selecting fine-tuned models separately for each language. Details of the fine-tuning hyperparameters are described in the Appendix.
As shown in Table 1, we present the evaluation results of the pretrained models on the XTREME benchmark. We observe that MT6 achieves the best performance on XTREME, improving the average score from 45.0 to 50.4 as we go from MT5 to MT6. It is worth mentioning that the model pretrained only with the machine translation task performs even worse than MT5. We note that several target languages in TyDiQA and WikiAnn are not covered by our parallel corpora. However, the NMT-pretrained model still shows poor results on the other four tasks, where all target languages are covered by the training data. Detailed results can be found in the Appendix.
Comparison of Pre-training Tasks
To provide a clear comparison among the pretraining tasks, we implement the text-to-text pretraining methods presented in Section 3, and pretrain variants of MT6 with the same training data and resources for fair comparisons. Table 1 compares the evaluation results of the models pretrained with seven different combinations of span corruption (SC), machine translation (MT), translation pair span corruption (TPSC), translation span corruption (TSC), and partially non-autoregressive decoding (PNAT). It can be observed that jointly training SC+TSC with PNAT achieves the best overall performance on the XTREME benchmark, with substantial gains over the models trained on monolingual data only. The same trend can be observed for the other models pretrained on both monolingual data and parallel data. This demonstrates that introducing translation data to text-to-text pre-training can improve the performance on the end tasks of cross-lingual understanding. Moreover, PNAT provides consistent gains over SC and SC+TSC, showing that PNAT is effective on both monolingual and cross-lingual tasks. Surprisingly, SC+PNAT obtains comparable results to SC+MT without any parallel data. Comparing TSC with MT and TPSC, we observe that SC+TSC brings noticeable improvements on question answering tasks. Although SC+MT shows competitive results on XNLI, the results on the other tasks are relatively low, indicating that simply jointly training SC with MT is not the most effective way to pretrain MT6.
Abstractive Summarization
Multilingual Summarization In addition to language understanding tasks, we also evaluate our MT6 model on the abstractive summarization task. Abstractive summarization aims to generate a summary of the input document while preserving its original meaning. We use the Gigaword dataset provided by Chi et al. (2020), constructed by extracting the first sentences and headlines as the input documents and summaries, respectively. The dataset consists of examples in English, French, and Chinese. For each language, it contains 500K, 5K, and 5K examples for training, validation, and test, respectively. We fine-tune the models for 20 epochs with a batch size of 32 and a learning rate of 0.00001. During decoding, we use greedy decoding for all evaluated models. As shown in Table 2, we report the ROUGE (Lin, 2004) scores of the models on Gigaword multilingual abstractive summarization. We observe that MT6 consistently outperforms MT5 on all three target languages. Compared with the XLM (Conneau and Lample, 2019) and XNLG (Chi et al., 2020) models with 800M parameters, our MT6 model achieves similar performance with only 300M parameters. Moreover, under the setting with less training data, MT6 shows larger improvements over MT5.
Cross-Lingual Summarization
The cross-lingual summarization task aims to generate summaries in a language different from that of the input. We use the Wikilingua (Ladhak et al., 2020) dataset, containing passage-summary pairs in four language pairs. We fine-tune the models for 100K steps with a batch size of 32 and a learning rate of 0.0001, and use greedy decoding for all evaluated models. The evaluation results are shown in Table 3, where MT6 outperforms MT5 on the test sets of all four language pairs.
Cross-lingual Transfer Gap
To explore whether our MT6 model achieves better cross-lingual transferability, we compare the cross-lingual transfer gap scores of MT6 with those of MT5. The cross-lingual transfer gap (Hu et al., 2020b) is defined as the difference between the performance on the English test set and the average performance on the non-English test sets. The transfer gap indicates how much of the end-task knowledge is preserved when transferring from English to the other target languages; empirically, a lower transfer gap score indicates better cross-lingual transferability. Following Hu et al. (2020b), we compute the transfer gap scores over the sentence classification and question answering tasks. As shown in Table 4, MT6 consistently reduces the transfer gap across all five tasks, demonstrating that our model is more effective for cross-lingual transfer than MT5.
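The metric itself is a one-liner, shown here with made-up scores:

```python
def transfer_gap(scores):
    """English score minus the average non-English score (lower is better)."""
    non_en = [v for lang, v in scores.items() if lang != "en"]
    return scores["en"] - sum(non_en) / len(non_en)

print(transfer_gap({"en": 80.0, "fr": 72.0, "zh": 64.0}))  # 12.0
```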
Cross-lingual Representations
We analyze the cross-lingual representations produced by our MT6 model. Following Chi et al. (2021a), we evaluate the representations on the Tatoeba (Artetxe and Schwenk, 2019) cross-lingual sentence retrieval task. The test sets consist of 14 English-centric language pairs covered by the parallel data in our experiments. Figure 4 illustrates the average accuracy@1 scores of cross-lingual sentence retrieval, averaged over the 14 language pairs and both the xx → en and en → xx directions. From the figure, we observe that MT5 shows a parabolic trend across the layers, which also appears in other cross-lingual encoder models (Jalili Sabet et al., 2020; Chi et al., 2021a). In contrast, we obtain better performance as we use higher layers of our MT6 model. At layer 8, our MT6 model achieves an average accuracy@1 of 43.2, outperforming the MT5 model by 35.6 points, which means our MT6 model produces better-aligned text representations. We believe the better-aligned representations potentially improve cross-lingual transferability. Furthermore, the results also indicate that our pre-training objective is more effective for training the encoder than that of MT5.

Table 5: Evaluation results on word alignment. We report alignment error rate scores (lower is better).
Table 6: Effects of noise density. We report the average results over different task types and over all six XTREME tasks, varying the noise density of the translation span corruption task from 15% to 100%. All results are averaged over five runs.
Word Alignment
In addition to cross-lingual sentence retrieval, which evaluates sentence-level representations, we also explore whether the representations produced by MT6 are better-aligned at the token level. Thus, we compare our MT6 with MT5 on the word alignment task, where the goal is to find corresponding word pairs in a translation pair. We use the hidden vectors from the last encoder layer, and apply the SimAlign (Jalili Sabet et al., 2020) tool to obtain the resulting word alignments. Table 5 shows the alignment error rate (AER) scores on the test sets provided by Jalili Sabet et al. (2020). For all three language pairs, MT6 achieves lower AER scores than MT5, indicating that the cross-lingual representations produced by MT6 are also better-aligned at the token level.
Effects of Noise Density
In the translation span corruption (TSC) task, the input parallel sentences provide redundant information in two languages, which is different from the standard monolingual span corruption task. Thus, we explore the effects of noise density by varying the noise density in the translation span corruption task, with the other hyperparameters fixed. To reduce the computational load, we do not apply the partially non-autoregressive decoding, i.e., we pretrain the models with the original text-to-text objective. We pretrain MT6 models with the noise density of 0.15, 0.3, 0.5, and 1.0 respectively. It means 15%, 30%, 50%, or all of the source or target tokens are replaced with the masked tokens. Notice that setting the noise density as 1.0 is identical to machine translation, where the decoder is required to decode the whole target sentence.
In Table 6, we report the average scores on the XTREME benchmark. We observe that MT6 achieves its best results with a noise density of 0.5, rather than an even higher value such as 1.0. The results indicate that the TSC task prefers a noise density higher than the 15% typically used for monolingual span corruption, so that the model learns to use more cross-lingual information. This finding differs from that reported for T5 (Raffel et al., 2020), where the span corruption task works best with a noise density of 0.15 under the monolingual setting.
Related Work
Cross-lingual LM Pre-training Cross-lingual language models are typically built with the Transformer (Vaswani et al., 2017) architecture, and pretrained with various pre-training tasks on large-scale text data. Multilingual BERT (mBERT; Devlin et al. 2019) and XLM-R (Conneau et al., 2020) are pretrained with masked language modeling (MLM; Devlin et al. 2019) on large-scale unlabeled text in about 100 languages. MASS (Song et al., 2019) and mBART are pretrained in an auto-encoding manner, which provides improvements on neural machine translation tasks. MT5 (Xue et al., 2020) is pretrained with the span corruption task (Raffel et al., 2020) under the text-to-text formulation. Cross-lingual pretrained models also benefit from translation data. XLM (Conneau and Lample, 2019) jointly learns MLM and the translation language modeling (TLM) task. Unicoder (Huang et al., 2019) presents three cross-lingual tasks to learn mappings among languages. ALM converts translation pairs into code-switched sequences as training examples. Word-aligned BERT models (Cao et al., 2020; Zhao et al., 2020) improve cross-lingual representations by fine-tuning mBERT with the objective of minimizing the distance between aligned tokens. AMBER (Hu et al., 2020a) proposes to maximize the agreement between the forward and backward attention matrices of the input translation pair. InfoXLM (Chi et al., 2021a) proposes a cross-lingual contrastive learning task that maximizes the InfoNCE (Oord et al., 2018) lower bound on the mutual information between the two sides of a translation pair. XLM-Align (Chi et al., 2021b) leverages token-level alignments implied in translation pairs to improve cross-lingual transfer. XNLG (Chi et al., 2020) introduces cross-lingual transfer for NLG tasks, and achieves zero-shot cross-lingual transfer for question generation and abstractive summarization. VECO (Luo et al., 2020) pretrains a variable cross-lingual pre-training model that learns unified language representations for both NLU and NLG. ERNIE-M (Ouyang et al., 2020) utilizes a back-translation masked language modeling task that generates pseudo parallel sentence pairs for learning TLM.
Encoder-Decoder Pre-training Raffel et al. (2020) use span corruption to pretrain a text-to-text Transformer, where both language understanding and generation tasks are formulated as sequence-to-sequence fine-tuning. Song et al. (2019) propose masked sequence-to-sequence pre-training, where the model predicts a randomly masked span. BART (Lewis et al., 2020a) designs various denoising autoencoding tasks to recover the whole original sentence. PEGASUS introduces the gap sentence generation task for abstractive summarization pre-training. Chi et al. (2020) use both denoising autoencoding and machine translation for cross-lingual language generation. Another strand of research follows unified language model pre-training (Dong et al., 2019; Bao et al., 2020; Luo et al., 2020), where the encoder and the decoder share parameters. Ma et al. (2020, 2021) reuse a pretrained multilingual encoder for sequence-to-sequence pre-training.
Conclusion
In this paper, we propose MT6, which improves the multilingual text-to-text transfer Transformer with translation data. We introduce three text-to-text pre-training tasks built on parallel corpora, and a training objective for improving text-to-text pre-training. Furthermore, we present a comprehensive comparison of the text-to-text tasks, and show that our MT6 model outperforms MT5 on both cross-lingual understanding and generation benchmarks. For future work, we would like to pretrain MT6 models at a larger scale, and explore more applications, such as machine translation.
B Hyperparameters for Pre-Training
As shown in Table 9, we present the hyperparameters for pre-training MT6. We extend the vocabulary of XLM-R (Conneau et al., 2020) with 100 additional unique mask tokens to form the vocabulary of MT6 and of our MT5 re-implementation.
C Hyperparameters for Fine-Tuning
In Table 10, we present the hyperparameters used for fine-tuning.
D Results on XTREME Cross-Lingual Understanding
We present the detailed results of the MT6 and our re-implemented MT5 models on XTREME in Tables 11-16.
E Results on Wikilingua Cross-Lingual Summarization
As shown in Table 17, we present the detailed results of the MT6 and our re-implemented MT5 models on Wikilingua cross-lingual summarization. | 6,253.8 | 2021-04-18T00:00:00.000 | [
"Computer Science"
] |
Aberration-free aspherical lens shape for shortening the focal distance of an already convergent beam
The ideal lens surface for refocusing an already convergent beam is found to be one sheet of a Cartesian oval. This result is applied to the optimal construction of a compound refractive lens for X-ray nanofocusing.
Introduction
Compound refractive lenses (CRLs) have been used to focus X-ray beams since Snigirev et al. (1996) demonstrated that the extremely weak refraction of X-rays by a single lens surface could be reinforced by lining up a series of lenses. As the X-ray focal spot size has been brought down below 1 µm, more lenses have been necessary to achieve the very short focal lengths required. Because the absorption of X-rays in the lens material is generally significant, it becomes critical to design the CRL with the shortest length possible for the given focal length, in order to minimize the thickness of refractive material through which the X-rays must pass.
In recent years, designs for novel nanofocusing lenses have been proposed. X-ray refractive lenses will deliver an ideally focused beam of nanometer size if the following conditions can be satisfied: (1) The lens material should not introduce unwanted scattering.
(2) The fabrication process should not introduce shape errors or roughness above a certain threshold.
(3) Absorption should be minimized in order to increase the lens effective aperture.
(4) The lens designs should not introduce geometrical aberrations.
In response to the first three of these conditions, planar micro-fabrication methods including electron beam lithography, silicon etching and LIGA have successfully been used to fabricate planar parabolic CRLs, single-element parabolic kinoform lenses (Aristov et al., 2000) and single-element elliptical kinoform lenses (Evans-Lutterodt et al., 2003) from silicon. However, the best focus that can be obtained is strongly dependent on material and X-ray energy. A collimating-focusing pair of elliptical silicon kinoform lenses with a focal length of 75 mm has successfully focused 8 keV photons from an undulator source of 45 ± 5 µm full width at half-maximum (FWHM) into a spot of 225 nm FWHM (Alianelli et al., 2011). On the other hand, although silicon lenses can be fabricated with very high accuracy, they are too absorbing to deliver a focused beam below 100 nm at energies below 12 keV, unless kinoform lenses with extremely small sidewalls can be manufactured. As a result, diamond has come to be viewed as a useful material for CRLs. Diamond, along with beryllium and boron, is one of the ideal candidates for X-ray lenses due to its good refractive power, low absorption and excellent thermal properties. Planar refractive lenses made from diamond were demonstrated by Nöhammer et al. (2003). Fox et al. (2014) focused an X-ray beam of 15 keV to a 230 nm spot using a microcrystalline diamond lens, and an X-ray beam of 11 keV to a 210 nm spot using a nanocrystalline diamond lens. Designs of diamond CRLs proposed by Alianelli et al. (2016) would potentially be capable of focusing down to 50 nm beam sizes. Many technical problems remain to be solved in the machining of diamond; however, our current aim is to provide an ideal lens design to be used when the technological issues are overcome. We assume that technology in both reductive and additive techniques will advance in the coming decade and that X-ray refractive lenses with details of several tens of nanometers will be fabricated. This will make X-ray refractive optics more competitive than they are today for ultra-short focal lengths. When that happens, lens designs that do not introduce aberrations will be crucial.

The aim of this paper is to define a nanofocusing CRL with the largest possible aperture that can be achieved without introducing aberrations to the focus. The determination of the ideal shape of each lens surface becomes more critical as the desired aperture grows. Suzuki (2004) states that the ideal lens surface for focusing a plane wave is an ellipsoid, although in fact this is true only if the index of refraction increases as the X-rays cross the surface [see Sanchez del Rio & Alianelli (2012) and references therein, as well as §2.4 of this paper]. The same author also proposes the use of two ellipsoidal lenses for point-to-point focusing, but a nanofocusing X-ray CRL requires a much larger number of lenses because of the small refractive power and the short focal length. Evans-Lutterodt et al. (2007), in their demonstration of the ability of kinoform lenses to exceed the numerical aperture set by the critical angle, used Fermat's theorem to calculate the ideal shapes of their four lenses, but explicitly described the shape of only the first lens (an ellipse). Sanchez del Rio & Alianelli (2012) pointed out the general answer, known for centuries, that the ideal shape of a lens surface for focusing a point source to a point image is not a conic section (a curve described by a second-degree polynomial). Rather, it is a Cartesian oval, which is a type of quartic curve (i.e. a curve described by a fourth-degree, or quartic, polynomial). An array of sections of Cartesian ovals is therefore one possible solution to the task of designing an X-ray lens for single-digit nanometer focusing. Previous authors in X-ray optics have not calculated analytical solutions (which do exist), but instead relied on numerical calculations of the roots without asking how well conditioned the quartic polynomial is; that is, how stable the roots of the polynomial are against small changes in its coefficients. However, in this paper it will be shown that finite numerical precision can cause errors in the calculation of the Cartesian oval when the change in refractive index across the lens surface becomes very small, as is usually the case with X-rays. Moreover, it has not been made explicit in the literature when it is reasonable to approximate the ideal Cartesian oval with various conic sections (ellipses, hyperbolas or parabolas). As a result, for the sake of rigor, the authors have considered it worthwhile to find the analytical solutions explicitly. This has not been done before in any recent papers. Finally, Alianelli et al. (2015) state that no analytical solution exists for a lens surface that accepts an incident beam converging to a point and that focuses this beam to another point closer to the lens surface. This paper will concentrate on that very case and will show that in fact such a lens surface can be described by a Cartesian oval. The existence of such solutions removes the necessity of using pairs of lens surfaces of which the first slightly focuses the beam and the second collimates the beam again.
The aim of this paper is to define a nanofocusing CRL with the largest possible aperture that can be achieved without introducing aberrations to the focus. The determination of the ideal shape of each lens surface becomes more critical as the desired aperture grows. Suzuki (2004) states that the ideal lens surface for focusing a plane wave is an ellipsoid, although in fact this is true only if the index of refraction increases as the X-rays cross the surface [see Sanchez del Rio & Alianelli (2012) and references therein, as well as x2.4 of this paper]. The same author also proposes the use of two ellipsoidal lenses for point-to-point focusing, but a nanofocusing X-ray CRL requires a much larger number of lenses because of the small refractive power and the short focal length. Evans-Lutterodt et al. (2007), in their demonstration of the ability of kinoform lenses to exceed the numerical aperture set by the critical angle, used Fermat's theorem to calculate the ideal shapes of their four lenses, but explicitly described the shape of only the first lens (an ellipse). Sanchez del Rio & Alianelli (2012) pointed out the general answer, known for centuries, that the ideal shape of a lens surface for focusing a point source to a point image is not a conic section (a curve described by a second-degree polynomial). Rather, it is a Cartesian oval, which is a type of quartic curve (i.e. a curve described by a fourth-degree, or quartic, polynomial). An array of sections of Cartesian ovals is therefore one possible solution to the task of designing an X-ray lens for single-digit nanometer focusing. Previous authors in X-ray optics have not calculated analytical solutions (which do exist), but instead relied on numerical calculations of the roots without asking how well conditioned the quartic polynomial is; that is, how stable the roots of the polynomial are against small changes in its coefficients. However, in this paper it will be shown that finite numerical precision can cause errors in the calculation of the Cartesian oval when the change in refractive index across the lens surface becomes very small, as is usually the case with X-rays. Moreover, it has not been made explicit in the literature when it is reasonable to approximate the ideal Cartesian oval with various conic sections (ellipses, hyperbolas or parabolas). As a result, for the sake of rigor, the authors have considered it worthwhile to find the analytical solutions explicitly. This has not been done before in any recent papers. Finally, Alianelli et al. (2015) state that no analytical solution exists for a lens surface that accepts an incident beam converging to a point and that focuses this beam to another point closer to the lens surface. This paper will concentrate on that very case and will show that in fact such a lens surface can be described by a Cartesian oval. The existence of such solutions removes the necessity of using pairs of lens surfaces of which the first slightly focuses the beam and the second collimates the beam again. Schroer & Lengeler (2005) proposed the construction of 'adiabatically' focusing CRLs, in which the aperture of each lens follows the width of the X-ray beam as the beam converges to its focus. Fig. 1 displays a schematic of an adia-
Figure 1
Schematic drawing of an adiabatically focusing CRL for X-rays. The X-ray beam runs along the central axis from left to right. A i , R i and L i are, respectively, the geometrical aperture, the radius at the apex and the length along the beam direction of the ith lens surface. If the lens surfaces are assumed to be parabolic as in Schroer & Lengeler (2005), L i = A 2 i =ð8R i Þ. q 1i and q 2i are, respectively, the distance of the object and the distance of the image of the ith lens surface from that surface's apex. batic CRL. A potential example of such a CRL, which would be made from diamond, is given in Table 1. This example treats X-rays of energy 15 keV, for which diamond has an index of refraction 1 À Á where Á = 3.23 Â 10 À6 . The very small difference between the index of refraction of diamond and that of vacuum is typical for X-ray lenses. Calculations of surfaces 2, 24 and 48 of Table 1 will be demonstrated in the following treatment. The loss of numerical precision when using the exact analytical solutions of the Cartesian oval at small Á will be avoided by an approximation of the quartic Cartesian oval equation to lowest order in Á. The cubic equation resulting from this 'X-ray approximation' will be shown to be numerically stable. The approximate cubic and the exact quartic equation will be shown to agree when the latter is numerically tractable. As in Sanchez del Rio & Alianelli (2012), conic approximations will be made to the ideal surface in the paraxial case. The results of this paper agree with theirs in showing that Cartesian ovals, even when calculated by the cubic 'X-ray' approximation, introduce no detectable aberrations, and that elliptical or hyperbolic lens surfaces introduce less aberration than the usually used parabolic surfaces. However, this paper will also demonstrate that at sufficiently high apertures even the elliptical or hyperbolic approximation will produce visible tails in the focal spot. This is especially true for surface 48, the final surface and the one with the smallest focal length, where the elliptical and hyperbolic approximation produces tails in the focus at an aperture only slightly larger than that given in Table 1.
At the end of this treatment, the focal spot profiles calculated by ray tracing for the compound refractive lens (CRL) of Table 1 will be compared with the diffraction broadening that inevitably results from the limited aperture. The absorption in the lens material limits the passage of X-rays through a CRL to an effective aperture $A_{\text{eff}}$ that is smaller than the geometrical aperture. This in turn restricts the numerical aperture (NA) of a CRL of N surfaces to a value $A_{\text{eff}}/(2q_{2N})$, where $q_{2N}$ is the distance from the last (N-th) surface to the final focus. Lengeler et al. (1999) derive a FWHM of $0.75\lambda/(2\,\mathrm{NA})$ for the Airy disk at the focal spot, where $\lambda$ is the X-ray wavelength. This yields the diffraction broadening and hence the spatial resolving power of the CRL. Formulas for the effective aperture of a CRL in which all lens surfaces are identical have been derived by Lengeler et al. (1998, 1999), and Schroer & Lengeler (2005) have derived formulas for the effective aperture of an adiabatically focusing CRL. Very recently, Kohn (2017) has re-examined the calculation of the effective aperture, surveying the various definitions appearing in the literature and distinguishing carefully between one-dimensionally and two-dimensionally focusing CRLs. In this paper the effective aperture and the numerical aperture will be estimated numerically by ray tracing, taking full account of the absorption to which each ray is subjected along the path from the source to the focus. For simplicity, the lens surfaces will all be assumed to be one-dimensionally focusing parabolic cylinders. It will be shown that the parabola is an adequate approximation to the ideal Cartesian oval within the effective aperture of the CRL in Table 1.
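As a quick plausibility check of these quantities, the sketch below evaluates the diffraction-limited FWHM for the geometry of Table 1, assuming the Lengeler et al. (1999) form FWHM = 0.75λ/(2 NA); the effective-aperture value is a placeholder, since in this paper it is obtained by ray tracing with absorption:

```python
energy_keV = 15.0
wavelength_nm = 1.2398 / energy_keV     # lambda [nm] ~= 1.2398 / E [keV]
q_2N_mm = 11.017                        # last surface to final focus (Table 1)
A_eff_mm = 0.020                        # placeholder effective aperture (20 um)

NA = A_eff_mm / (2.0 * q_2N_mm)
fwhm_nm = 0.75 * wavelength_nm / (2.0 * NA)
print(f"NA = {NA:.2e}, diffraction-limited FWHM ~ {fwhm_nm:.0f} nm")
```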
Definitions and derivation of ideal lens surface
The first task of this article is to calculate the exact surface y(x) of a lens that bends an already convergent bundle of rays into a new bundle of rays converging toward a closer focus.

Table 1: Lens surfaces proposed for a diamond nanofocusing CRL for X-rays of energy 15 keV.
The distance from the apex of the last lens surface to the final focal plane is 11.017 mm. i is the place of each surface in the CRL. $R_i$ is the radius at the apex. $f_i$ is the focal length. $A_i$ is the geometrical aperture. $q_{1i}$ is the downstream distance of the object of lens surface i. (A negative value of $q_{1i}$ means that the object of lens surface i is upstream.) $q_{2i}$ is the downstream distance of the image of lens surface i. The distance between the apices of consecutive lens surfaces is 0.005 mm. For legibility, $R_i$ and $A_i$ are rounded to four significant digits, and $f_i$, $q_{1i}$ and $q_{2i}$ are rounded to three decimal places.

The required quantities are labelled and defined in Fig. 2. According to Snell's law taken for the rays at an arbitrary point P,

$$n \sin\theta_P = n' \sin\theta'_P. \quad (1)$$

Let the coordinate vector of P be $[x\,\hat{x} + y(x)\,\hat{y}]$. Fig. 2 shows that

$$\hat{k}_P = -\frac{x\,\hat{x} + [q_1 + y(x)]\,\hat{y}}{\{x^2 + [q_1 + y(x)]^2\}^{1/2}}, \quad (2)$$

$$\hat{k}'_P = -\frac{x\,\hat{x} + [q_2 + y(x)]\,\hat{y}}{\{x^2 + [q_2 + y(x)]^2\}^{1/2}}, \quad (3)$$

$$\hat{t}_P = \frac{\hat{x} + y'(x)\,\hat{y}}{[1 + y'(x)^2]^{1/2}}. \quad (4)$$

It is also seen in Fig. 2 that

$$\sin\theta_P = |\hat{t}_P \cdot \hat{k}_P|, \qquad \sin\theta'_P = |\hat{t}_P \cdot \hat{k}'_P|. \quad (5)$$

Substitution of equations (1)-(4) into (5) yields the first-order ordinary differential equation

$$n\,\frac{x + y'(x)[q_1 + y(x)]}{\{x^2 + [q_1 + y(x)]^2\}^{1/2}} = n'\,\frac{x + y'(x)[q_2 + y(x)]}{\{x^2 + [q_2 + y(x)]^2\}^{1/2}}. \quad (6)$$

One can rearrange this to find an expression for the surface slope y'(x), which will be useful for design calculations once a solution for y(x) has been obtained,

$$y'(x) = \frac{x\left(n'/\{x^2 + [q_2 + y(x)]^2\}^{1/2} - n/\{x^2 + [q_1 + y(x)]^2\}^{1/2}\right)}{n[q_1 + y(x)]/\{x^2 + [q_1 + y(x)]^2\}^{1/2} - n'[q_2 + y(x)]/\{x^2 + [q_2 + y(x)]^2\}^{1/2}}. \quad (7)$$

This is a nonlinear differential equation, but nevertheless it can be solved by noticing that the numerators of each fraction of equation (6) are the derivatives of that fraction's denominator. A simple variable substitution thus presents itself,

$$V_1(x) = x^2 + [q_1 + y(x)]^2, \quad (8)$$

$$V_2(x) = x^2 + [q_2 + y(x)]^2. \quad (9)$$

As the initial condition, one may set y(x = 0) = 0 as shown in Fig. 2. In that case, $V_1(x = 0) = q_1^2$ and $V_2(x = 0) = q_2^2$. Integration of both sides of equation (6) starting from x = 0 can then be written

$$n \int_0^x \frac{V_1'(s)}{2[V_1(s)]^{1/2}}\,ds = n' \int_0^x \frac{V_2'(s)}{2[V_2(s)]^{1/2}}\,ds, \quad (10)$$

where s is a dummy variable. Now, $V_1'(s)\,ds = dV_1$ and $V_2'(s)\,ds = dV_2$, allowing equation (10) to be rewritten in the very simple form

$$n \int_{q_1^2}^{V_1} \frac{dV_1}{2 V_1^{1/2}} = n' \int_{q_2^2}^{V_2} \frac{dV_2}{2 V_2^{1/2}}. \quad (11)$$

The integrals on both sides of equation (11) are elementary and yield the result

$$n\left(\left\{x^2 + [q_1 + y(x)]^2\right\}^{1/2} - q_1\right) = n'\left(\left\{x^2 + [q_2 + y(x)]^2\right\}^{1/2} - q_2\right). \quad (12)$$

Equation (12) describes a Cartesian oval with the two foci F₁ and F₂ shown in Fig. 2. Note that the distances of any point on y(x) from F₁ and F₂ are $r_1 = \{x^2 + [q_1 + y(x)]^2\}^{1/2}$ and $r_2 = \{x^2 + [q_2 + y(x)]^2\}^{1/2}$, respectively. Equation (12) can then be written in the standard form for a Cartesian oval (Weisstein, 2016),

$$n r_1 - n' r_2 = n q_1 - n' q_2. \quad (13)$$

Figure 2: Schematic drawing of the surface y(x) (to be calculated in this paper) across which incident rays in a medium of refractive index n converging to a focus F₁ are refracted into rays in a medium of refractive index n′ converging to a new closer focus F₂. q₁ and q₂ are, respectively, the distances of F₁ and F₂ from the coordinate origin O along the central axis, which is parallel to ŷ. x̂ and ŷ are the coordinate unit vectors. P is an arbitrary point along y(x). t̂_P and N̂_P are, respectively, the unit tangent and the unit inward normal to y(x) at P. k̂_P and k̂′_P are, respectively, the unit wavevectors of the incident ray and the refracted ray passing through P. θ_P is the angle of the incident ray to the inward normal at P, and θ′_P is the angle of the refracted ray to the inward normal at P.
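Because equation (12) is an implicit relation between x and y, the surface can also be obtained numerically by one-dimensional root finding, which sidesteps the quartic entirely; the sketch below does this with SciPy for a hypothetical single diamond surface at 15 keV (the q₁, q₂ values are illustrative and are not taken from Table 1):

```python
from math import hypot
from scipy.optimize import brentq

def oval_y(x, q1, q2, n, n_prime, y_hi=0.1):
    """Solve n*(r1 - q1) = n'*(r2 - q2), equation (12), for the sag y at abscissa x."""
    def residual(y):
        r1 = hypot(x, q1 + y)   # distance of the surface point from F1
        r2 = hypot(x, q2 + y)   # distance of the surface point from F2
        return n * (r1 - q1) - n_prime * (r2 - q2)
    return brentq(residual, 0.0, y_hi)  # y(0) = 0, so the root sits near zero

delta = 3.23e-6                         # diamond at 15 keV
y = oval_y(x=0.010, q1=11.5, q2=11.4, n=1.0, n_prime=1.0 - delta)  # lengths in mm
print(f"sag at x = 10 um: {y * 1e3:.1f} um")   # on the order of 10 um
```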
The case $q_1 - (n'/n)\,q_2 = 0$ may be of some interest, and is physically achievable for the problem of this article ($q_1 > q_2$) if $n' > n$. In this case, equation (12) can be squared and rearranged into

$$x^2 + [y(x) + \rho]^2 = \rho^2, \qquad (14)$$

where, noting that $q_1 - (n'/n)^2 q_2 = -(n'/n)[(n'/n) - 1]\,q_2 < 0$, this is a circle of radius

$$\rho = \frac{(n'/n)\,q_2}{(n'/n) + 1} \qquad (15)$$

centred at $(0, -\rho)$.

2.2. Closed-form solutions of ideal lens surface

2.2.1. Derivation of algebraic equation. By adding $q_1$ to both sides of equation (12) and then squaring the equation, one obtains

$$V_1 = (n'/n)^2 V_2 + 2(n'/n)\left[q_1 - (n'/n)q_2\right]V_2^{1/2} + \left[q_1 - (n'/n)q_2\right]^2. \qquad (16)$$

Rearranging this equation to put the radical alone on one side, then squaring it again, yields

$$\left\{V_1 - (n'/n)^2 V_2 - \left[q_1 - (n'/n)q_2\right]^2\right\}^2 = 4(n'/n)^2\left[q_1 - (n'/n)q_2\right]^2 V_2. \qquad (17)$$

Equation (17) is a quartic polynomial equation in both $x$ and $y$. It can be written as a quadratic equation in $x^2$, since no odd powers of $x$ appear in it, and by using the quadratic formula a closed-form expression of $x(y)$ can be calculated. However, the inversion of this function to obtain $y(x)$ is difficult, and $y(x)$ would be far more useful for design calculations of the lens surface. It was thus decided to solve equation (17) for $y(x)$ explicitly. Writing equation (17) in powers of $y$ yields a quartic of the form

$$a y^4 + b y^3 + c y^2 + d y + e = 0, \qquad (18)$$

whose leading coefficient is $a = [1 - (n'/n)^2]^2$ and whose remaining coefficients follow from the expansion of equation (17). The calculation of $y(x)$ therefore amounts to finding the roots of the quartic polynomial equation (18) for any $x$. As a quartic polynomial equation with real coefficients, equation (18) is guaranteed to have four solutions, of which

- all four may be real, or
- two may be real, while the other two are complex and conjugates of each other, or
- all four may be complex, forming two pairs of complex conjugates.
No more than one of these roots can satisfy the original equation of the Cartesian oval, equation (12). To be physically significant, that root must be real. If equation (18) produces no real root that satisfies equation (12) when calculated at some particular x, the ideal lens surface does not exist at that x. This raises the possibility that the ideal lens surface may be bounded; that is, it has a maximum achievable aperture.
Calculation of roots of quartic polynomial equation.
Analytical procedures for calculating the roots of cubic and quartic equations were worked out in the 16th century. Nonetheless, explicit solutions of such equations are published so rarely that a detailed description of the method will be useful for the reader. Note that the procedure for quartic equations includes the determination of one root of a cubic equation. Many standard mathematical texts explain the solution of cubic and quartic equations; Weisstein (2016) has been followed closely here.
The first step in solving a general quartic equation $a y^4 + b y^3 + c y^2 + d y + e = 0$ is the application of a coordinate transformation to a new variable $\bar{y}$ given by

$$y = \bar{y} - \frac{b}{4a}, \qquad (19)$$

and the division of both sides of the general equation by $a$, such that one obtains a 'depressed quartic', that is, a quartic with no cubic term, in $\bar{y}$. This has the form $\bar{y}^4 + p\bar{y}^2 + q\bar{y} + r = 0$, where

$$p = \frac{8ac - 3b^2}{8a^2}, \qquad q = \frac{b^3 - 4abc + 8a^2 d}{8a^3}, \qquad r = \frac{-3b^4 + 16ab^2c - 64a^2bd + 256a^3e}{256a^4}. \qquad (20)$$

By substituting the coefficients of equation (18) into equations (19) and (20), one obtains the coordinate transformation [equation (21)] and the depressed equation in $\bar{y}$, whose coefficients $p$, $q$ and $r$ are given by equations (22)-(24). The strategy now is to add to both sides of the depressed equation a quantity $(\bar{y}^2 u + u^2/4)$, where $u$ is a real quantity that will be determined shortly. Knowing that $\bar{y}^4 + \bar{y}^2 u + u^2/4 = (\bar{y}^2 + u/2)^2$, one obtains from the depressed equation the following,

$$\left(\bar{y}^2 + \frac{u}{2}\right)^2 = (u - p)\,\bar{y}^2 - q\,\bar{y} + \left(\frac{u^2}{4} - r\right). \qquad (25)$$

The left side of equation (25) is thus a perfect square. Notice that the right side of equation (25) is a quadratic in $\bar{y}$. Therefore it too will be a perfect square if $u$ can be chosen to make its two roots equal; that is, if its discriminant $D$ equals zero,

$$D = q^2 - 4(u - p)\left(\frac{u^2}{4} - r\right) = 0, \quad \text{i.e.} \quad u^3 - p\,u^2 - 4r\,u + (4pr - q^2) = 0. \qquad (26)$$

Equation (26) is known as the 'resolvent cubic'. As a cubic equation with real coefficients, it is guaranteed to have three roots, of which either one or all will be real. The analytical solution of a general cubic equation $u^3 + f u^2 + g u + h = 0$ begins with the calculation of two quantities $A$ and $B$,

$$A = \frac{3g - f^2}{9}, \qquad B = \frac{9fg - 27h - 2f^3}{54}; \qquad (27)$$

their values for the resolvent cubic follow by setting $f = -p$, $g = -4r$ and $h = 4pr - q^2$. The discriminant of a general cubic equation is $D_c = A^3 + B^2$. The next step depends on the value of $D_c$.
(i) $D_c > 0$. One root of the cubic equation is real and the other two are complex conjugates. The real root is

$$u = S + T - \frac{f}{3}, \qquad S = \left(B + D_c^{1/2}\right)^{1/3}, \qquad T = \left(B - D_c^{1/2}\right)^{1/3}. \qquad (28)$$

For the resolvent cubic, the real root is $u_1 = S + T + p/3$.

(ii) $D_c = 0$. All roots of the cubic equation are real and at least two are equal. $S$ and $T$ in equation (28) are then equal. Thus, for the resolvent cubic,

$$u_1 = 2B^{1/3} + \frac{p}{3}. \qquad (29)$$

(iii) $D_c < 0$. All roots of the cubic equation are real and unequal. In this case, an angle $\varphi$ is defined such that

$$\cos\varphi = \frac{B}{(-A^3)^{1/2}}. \qquad (30)$$

One of the real roots of the general cubic equation is then given by

$$u = 2(-A)^{1/2}\cos(\varphi/3) - \frac{f}{3}, \qquad (31)$$

which for the resolvent cubic yields

$$u_1 = 2(-A)^{1/2}\cos(\varphi/3) + \frac{p}{3}. \qquad (32)$$

Note that in all these cases one may write $u_1 = U_1(x^2)$, since the coefficients of the resolvent cubic depend on $x$ only through $x^2$. Substitution of $u_1$ into equation (25) then yields a perfect square on both sides,

$$\left(\bar{y}^2 + \frac{u_1}{2}\right)^2 = (u_1 - p)\left[\bar{y} - \frac{q}{2(u_1 - p)}\right]^2. \qquad (33)$$

If $u_1 > p$, then equation (33) falls into two cases. In the first, one simply equates the square root of both sides,

$$\bar{y}^2 + \frac{u_1}{2} = (u_1 - p)^{1/2}\,\bar{y} - \frac{q}{2(u_1 - p)^{1/2}}. \qquad (34)$$

Equation (34) is a quadratic equation in $\bar{y}$. Its two solutions are

$$\bar{y}_{\rm I\pm} = \frac{1}{2}\left[(u_1 - p)^{1/2} \pm \left(-u_1 - p - \frac{2q}{(u_1 - p)^{1/2}}\right)^{1/2}\right]. \qquad (35)$$

In the second case, one equates the square root of the left side of equation (33) with the negative of the square root of the right side, so that

$$\bar{y}^2 + \frac{u_1}{2} = -(u_1 - p)^{1/2}\,\bar{y} + \frac{q}{2(u_1 - p)^{1/2}}. \qquad (36)$$

Equation (36), like equation (34), is a quadratic equation in $\bar{y}$. Its two solutions are

$$\bar{y}_{\rm II\pm} = \frac{1}{2}\left[-(u_1 - p)^{1/2} \pm \left(-u_1 - p + \frac{2q}{(u_1 - p)^{1/2}}\right)^{1/2}\right]. \qquad (37)$$

$\bar{y}_{\rm I\pm}$ and $\bar{y}_{\rm II\pm}$ are the four solutions of the depressed quartic. The explicit expressions in $P$, $Q$ and $U_1$ were calculated assuming that $1 - (n'/n)^2 > 0$; however, the same expressions are also valid if $1 - (n'/n)^2 < 0$. The only change is that the explicit expression for $\bar{y}_{\rm I\pm}$ would appear like that for $\bar{y}_{\rm II\pm}$ in equation (37), and the explicit expression for $\bar{y}_{\rm II\pm}$ would appear like that for $\bar{y}_{\rm I\pm}$ in equation (35). Because this does not affect the solution, it will not be mentioned further.
The solutions of the original quartic equation (18) are easily obtained from equations (35) and (37) by using equation (21),

$$y_{\rm I\pm} = \bar{y}_{\rm I\pm} - \frac{b}{4a}, \qquad y_{\rm II\pm} = \bar{y}_{\rm II\pm} - \frac{b}{4a}. \qquad (38),\ (39)$$

In principle, any of these roots could be the one that satisfies the original equation for the Cartesian oval, equation (12).
The simplest way to find this root is to evaluate equations (38) and (39) at $x = 0$. The root that equals zero there is the correct one. Equation (18) will have one and only one root equal to zero at $x = 0$, because then the constant ($y^0$) term vanishes but the linear ($y^1$) term does not. If $u_1 = p$, the formulas for the roots become somewhat simpler [equations (40) and (41)]. These roots must also be checked to determine which one fulfills equation (12) for the Cartesian oval. If $u_1 < p$, then equation (33) can have a real solution only if both of the squared factors are equal to zero. This would require that $\bar{y}^2 + u_1/2 = 0$ and $\bar{y} - q/[2(u_1 - p)] = 0$ simultaneously. If that is not possible for the values of $p$, $q$ and $u_1$ calculated above, then no real solution exists.

Figure 3. Examples of solutions of the Cartesian oval equation (18) at various values of $n'/n$ for a representative set of values for $q_1$ (23.0929 m) and $q_2$ (8.6912 m). The solutions $y(x)$ were calculated by using the exact equations (38) and (39) for $x > 0$; note that $y(-x) = y(x)$. The solutions $x(y)$ were calculated by solving equation (18) as a quadratic equation in $x^2$ with $y$-dependent coefficients [see equation (45)]. The top row demonstrates three cases in which $n'/n > 1$ and the bottom row demonstrates three cases in which $n'/n < 1$. In each row, $n'/n \to 1$ from left to right. The sheet labelled 'surface of lens' is the one that fulfills the original lens equation (12). If $n'/n > 1$, it is the inner sheet; if $n'/n < 1$, it is the outer sheet.

It is evident that as $n'/n \to 1$ the outer sheet becomes very much larger than the inner sheet. How large the outer sheet becomes at $x = 0$ can be estimated as follows. First, define $P_0$, $Q_0$ and $R_0$ as the limiting values of the coefficients of equations (22)-(24) at $x = 0$ as $n'/n \to 1$ [equations (42)-(44)]. Substitution of these values into equation (27) yields $A = 0$ and $B = 0$. Thus the discriminant $D_c$ of the resolvent cubic is zero, and from equation (29) one can define $U_{10} = \lim_{n'/n \to 1} U_1(x = 0) = P_0/3$. Because $P_0 < 0$, $U_{10} > P_0$ and equations (38) and (39) apply. Letting $\varepsilon = 1 - (n'/n)^2$ and recalling the initial assumption $q_1 > q_2$, one finds three solutions that remain bounded while the fourth solution $y_{\rm II-}(x = 0)$ diverges as $-4\varepsilon^{-1}(q_1 - q_2)$. This sensitivity of the fourth root to the exact value of $\varepsilon$ makes the quartic equation (18) ill-conditioned. Therefore the numerical evaluation of equations (38) and (39) is very sensitive to roundoff errors caused by the limited precision in cases in which $\varepsilon$ is very small, even though the equations themselves remain theoretically exact. Such cases are not only common but normal in X-ray optics, for which $\delta = (n'/n) - 1$ generally has a magnitude on the order of $10^{-5}$ or less. An approximation that can capture the three bounded roots of equation (18) with high accuracy while ignoring the divergent root is therefore justified.
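The root selection and the ill-conditioning just described are easy to reproduce numerically. The sketch below builds the quartic (18) directly from the twice-squared form (17) by polynomial convolution, finds its four roots with MATLAB's roots(), and keeps the real root that satisfies the original oval equation (12); the parameters are illustrative. Reducing m toward 1 shows the fourth root diverging and the root finding degrading.

```matlab
% Quartic (18) at one x, assembled by expanding equation (17), and
% selection of the physical root via equation (12). Illustrative values.
q1 = 23.0929; q2 = 8.6912; m = 1 + 1e-4;      % n'/n
x  = 2e-3;                                     % transverse position in m
K  = q1 - m*q2;
% V1 - m^2*V2 - K^2 as a polynomial in y (descending powers):
p1 = [1 - m^2, 2*(q1 - m^2*q2), (1 - m^2)*x^2 + q1^2 - m^2*q2^2 - K^2];
v2 = [1, 2*q2, x^2 + q2^2];                    % V2 as a polynomial in y
quart = conv(p1, p1) - 4*m^2*K^2*[0, 0, v2];   % equation (18)
r = roots(quart);
r = real(r(abs(imag(r)) < 1e-9));              % keep the real roots
oval = @(y) sqrt(x^2 + (q1 + y)^2) - q1 - m*(sqrt(x^2 + (q2 + y)^2) - q2);
[~, k] = min(abs(arrayfun(oval, r)));
y_lens = r(k)                                  % root satisfying (12)
```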
The 'X-ray approximation' to the ideal lens surface
Equation (18) can be rewritten as a quadratic equation in $x^2$ simply by rearranging terms. The quadratic formula can then be applied to determine an equation for $x^2$ in terms of $y$ [equation (45)]. The '$\pm$' accounts for the two roots of any quadratic equation. However, if the plus sign is chosen, the resulting equation cannot be satisfied by the condition $y(x = 0) = 0$ as is required. Therefore only the minus sign yields a useful set of solutions for the lens surface. Equation (45) is exact. However, the radical can be expanded by using the binomial theorem if the magnitude $|z|$ of its argument's deviation from unity is much less than 1. [For the X-ray case where $n'/n \simeq 1$, this condition reduces approximately to $|2(\varepsilon y)/(q_1 - q_2)| \ll 1$.] The binomial theorem yields

$$(1 + z)^{1/2} = 1 + \tfrac{1}{2}z - \tfrac{1}{8}z^2 + \tfrac{1}{16}z^3 - \ldots \qquad (46)$$

Hence $1 - (1 + z)^{1/2} \simeq -\tfrac{1}{2}z + \tfrac{1}{8}z^2 - \tfrac{1}{16}z^3$ plus higher-order terms that will be discussed later. Note that this has no term in $z^0$. Substituting this into equation (45) and summing terms with the same power of $(\varepsilon y)$ on the right-hand side, one obtains the approximate equation

$$x^2 \simeq C_1\,(\varepsilon y) + C_2\,(\varepsilon y)^2 + C_3\,(\varepsilon y)^3, \qquad (47)$$

with coefficients $C_1$, $C_2$ and $C_3$ that are functions of $n'/n$, $q_1$ and $q_2$. The exact expression for the linear-term coefficient $C_1$ [equation (48)] becomes very small as $n'/n \to 1$ because the terms in its numerator almost cancel out. Equation (48) therefore becomes subject to numerical errors caused by limited precision. However, remembering that in the X-ray case $(n'/n) = 1 + \delta$ where $|\delta|$ is much less than 1, one can make a power series expansion of $C_1$ in $\delta$, whose lowest term is given by equation (49). Like $C_1$, the quadratic-term coefficient $C_2$ [equation (50)] also approaches zero as $(n'/n) \to 1$; the lowest term of its power series expansion in $\delta$ is given by equation (51). For the cubic-term coefficient $C_3$ [equation (52)], the power series in $\delta$ is found simply by setting $\delta = 0$ [equation (53)]. Finally one can calculate the lowest term in the power series expansion of $\varepsilon$ itself,

$$\varepsilon = 1 - (n'/n)^2 = -2\delta - \delta^2 \simeq -2\delta. \qquad (54)$$
Substituting equations (49), (51), (53) and (54) into equation (47) and keeping only the lowest-order terms in $\delta$ yields

$$x^2 \simeq -\frac{2\delta}{q_1 - q_2}\left[q_1 q_2\,y + (q_1 + q_2)\,y^2 + y^3\right]. \qquad (55)$$

It is justified to keep all terms up to cubic on the right-hand side of equation (55) because all are multiplied by the same power of $\delta$. The neglected higher-order terms $Y_n$ ($n \geq 4$) on the right-hand side of equation (55), which arise from the binomial expansion in equation (46), are given to lowest order in $\delta$ by equation (56). In the X-ray approximation, these terms diminish rapidly with increasing $n$. Therefore the fourth-order term is already much less than the cubic term included in the right-hand side of equation (55), and higher-order terms are smaller still. This justifies the neglect of terms beyond the cubic in equation (55). In standard form, equation (55) is

$$y^3 + (q_1 + q_2)\,y^2 + q_1 q_2\,y + \frac{q_1 - q_2}{2\delta}\,x^2 = 0. \qquad (57)$$

This equation can be solved analytically. When $x = 0$ the roots are trivial: $0$, $-q_1$ and $-q_2$. The solutions for general $x$ can be determined by the same methods used to calculate the resolvent cubic of the exact equation. The discriminant of equation (57) is

$$D_{\rm XR} = A_{\rm XR}^3 + B_{\rm XR}^2, \qquad (60)$$

where

$$A_{\rm XR} = \frac{3q_1q_2 - (q_1 + q_2)^2}{9}, \qquad (58)$$

$$B_{\rm XR} = \frac{9(q_1 + q_2)q_1q_2 - 2(q_1 + q_2)^3}{54} - \frac{q_1 - q_2}{4\delta}\,x^2 = B_{\rm XR0} - \frac{q_1 - q_2}{4\delta}\,x^2, \qquad (59)$$

with $B_{\rm XR0} = B_{\rm XR}(x = 0)$. Notice that, since $q_1 > 0$ and $q_2 > 0$, $A_{\rm XR} < 0$ because $q_1^2 - q_1q_2 + q_2^2 = (q_1 - q_2)^2 + q_1q_2 > 0$. Now one needs to determine the sign of $D_{\rm XR}$ at any given $x$. Notice that $D_{\rm XR}$ depends quadratically on $x^2$. Therefore one can use the quadratic formula to find the values of $x^2$ at which $D_{\rm XR} = 0$,

$$x_{D0\pm}^2 = \frac{4\delta}{q_1 - q_2}\left[B_{\rm XR0} \pm (-A_{\rm XR})^{3/2}\right]. \qquad (61)$$

From this one finds that

$$x_{D0+}^2\,x_{D0-}^2 = \left(\frac{4\delta}{q_1 - q_2}\right)^2 D_{\rm XR0}, \qquad (62)$$

where $D_{\rm XR0} = D_{\rm XR}(x = 0)$. Inspection of equation (60) shows that $D_{\rm XR0} < 0$ and that therefore $x_{D0+}^2\,x_{D0-}^2 < 0$, which proves that $x_{D0+}^2$ and $x_{D0-}^2$ have opposite signs. Equation (61) shows that, if $\delta/(q_1 - q_2) > 0$, $x_{D0+}^2 > 0$ and $x_{D0-}^2 < 0$, since $(-A_{\rm XR})^{3/2} > 0$. Likewise, if $\delta/(q_1 - q_2) < 0$, $x_{D0+}^2 < 0$ and $x_{D0-}^2 > 0$. Only the positive squared $x$ can yield real values of $x$; the negative squared $x$ is discarded. The admissible value defines $x_{D0}$ [equations (63) and (64)]. There are thus two values of $x$, $\pm x_{D0}$, at which the discriminant $D_{\rm XR}(x) = 0$:

(i) $\delta/(q_1 - q_2) > 0$.
The positive coefficient of the $x^4$ term in equation (60) shows that $D_{\rm XR}(x)$ increases with increasing $x^2$. Therefore, the solutions of equation (57) are as follows:

(a) $|x| < x_{D0}$, all values of $\delta$. Here the discriminant $D_{\rm XR}$ of equation (57) is negative. In this case the three roots of equation (57) are all real and unequal. The calculation of the roots begins with the calculation of an angle $\theta_{\rm XR}$ such that

$$\cos\theta_{\rm XR} = \frac{B_{\rm XR}}{(-A_{\rm XR})^{3/2}}. \qquad (65)$$

The roots are then

$$y_{\rm XR1}(x) = 2(-A_{\rm XR})^{1/2}\cos\!\left(\frac{\theta_{\rm XR}}{3}\right) - \frac{q_1 + q_2}{3}, \qquad (66)$$

$$y_{\rm XR2}(x) = 2(-A_{\rm XR})^{1/2}\cos\!\left(\frac{\theta_{\rm XR} + 2\pi}{3}\right) - \frac{q_1 + q_2}{3}, \qquad (67)$$

$$y_{\rm XR3}(x) = 2(-A_{\rm XR})^{1/2}\cos\!\left(\frac{\theta_{\rm XR} + 4\pi}{3}\right) - \frac{q_1 + q_2}{3}. \qquad (68)$$

One can now check these roots at $x = 0$, defining $\theta_{\rm XR0} = \theta_{\rm XR}(x = 0)$. By using the trigonometric identity $\cos\theta = 4\cos^3(\theta/3) - 3\cos(\theta/3)$, one can show that if $\frac{1}{3}(q_1 + q_2)\big/\left[2(-A_{\rm XR})^{1/2}\right] = \cos(\theta_{\rm XR0}/3)$, then $\cos\theta_{\rm XR0} = B_{\rm XR0}/(-A_{\rm XR})^{3/2}$, thus satisfying equation (65). One can also use the common trigonometric identity $\sin^2\theta + \cos^2\theta = 1$ to find that, for $q_1 > q_2$ as assumed here, $\sin(\theta_{\rm XR0}/3) = (1/\sqrt{3})\,(q_1 - q_2)\big/\left[2(-A_{\rm XR})^{1/2}\right]$. Thus $y_{\rm XR1}(x = 0) = 0$, $y_{\rm XR2}(x = 0) = -q_1$ and $y_{\rm XR3}(x = 0) = -q_2$, as expected. Note that $y_{\rm XR1}(x)$ is the solution for the shape of the lens.
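The trigonometric formulas (65)-(68) are straightforward to evaluate. The following MATLAB sketch computes the lens branch $y_{\rm XR1}(x)$ of the cubic (57) in the standard form given above; the aperture range and the value of $\delta$ are illustrative values for diamond at 15 keV.

```matlab
% Lens branch y_XR1(x) of the cubic X-ray approximation, equation (57),
% evaluated with the trigonometric root formulas (65)-(66).
q1 = 23.0929; q2 = 8.6912; delta = 3.23e-6;    % illustrative values
x  = linspace(-30e-6, 30e-6, 601);             % aperture coordinate in m
A  = (3*q1*q2 - (q1 + q2)^2)/9;                % A_XR, equation (58)
B0 = (9*(q1 + q2)*q1*q2 - 2*(q1 + q2)^3)/54;   % B_XR at x = 0
B  = B0 - (q1 - q2)*x.^2/(4*delta);            % B_XR(x), equation (59)
theta = acos(B/(-A)^1.5);                      % equation (65), valid for D_XR < 0
y1 = 2*sqrt(-A)*cos(theta/3) - (q1 + q2)/3;    % equation (66): lens surface
plot(x*1e6, y1*1e6), xlabel('x (\mum)'), ylabel('y_{XR1} (\mum)')
```

A quick check: at x = 0 the code returns y1 = 0 to machine precision, as required by the initial condition.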
(b) $|x| = x_{D0}$, $\delta > 0$ (assuming $q_1 > q_2$). The discriminant $D_{\rm XR}(\pm x_{D0}) = 0$. Therefore the roots are all real and two of them are equal. Substitution of equation (63) into equation (59) shows that in this case $B_{\rm XR}(\pm x_{D0}) = -(-A_{\rm XR})^{3/2}$. Therefore, according to equation (65), $\theta_{\rm XR}(\pm x_{D0}) = \arccos(-1) = \pi$. Using equations (66)-(68), one finds the roots

$$y_{\rm XR1}(\pm x_{D0}) = y_{\rm XR3}(\pm x_{D0}) = (-A_{\rm XR})^{1/2} - \frac{q_1 + q_2}{3}, \qquad (69),\ (71)$$

$$y_{\rm XR2}(\pm x_{D0}) = -2(-A_{\rm XR})^{1/2} - \frac{q_1 + q_2}{3}. \qquad (70)$$

Therefore, in this case, $y_{\rm XR1}(x)$ and $y_{\rm XR3}(x)$ together form the inner sheet of the Cartesian oval. Since we know that $y_{\rm XR1}(x)$ is the desired lens surface, this is consistent with the exact quartic equation.
(d) $|x| > x_{D0}$, $\delta > 0$ (assuming $q_1 > q_2$). The discriminant $D_{\rm XR}$ is now positive [see equation (60)]. Therefore only one real root exists. (The other two are complex and hence not physically significant.) This root must join up with $y_{\rm XR2}(x)$ in equation (70). The real root is given by the expression

$$y = \left[B_{\rm XR} + D_{\rm XR}^{1/2}\right]^{1/3} + \left[B_{\rm XR} - D_{\rm XR}^{1/2}\right]^{1/3} - \frac{q_1 + q_2}{3},$$

which at $|x| = x_{D0}$ takes the value $-2(-A_{\rm XR})^{1/2} - \frac{1}{3}(q_1 + q_2)$ and thus does indeed join up with equation (70) as expected. Note that this does not form part of the solution to the lens surface, but it is included here for completeness.
(e) $|x| > x_{D0}$, $\delta < 0$ (assuming $q_1 > q_2$). Again, as the discriminant $D_{\rm XR}$ is positive, only one real root exists. This root must join up with $y_{\rm XR1}(x)$ in equation (72). It is given by the same expression as in case (d), which at $|x| = x_{D0}$ [where now $B_{\rm XR}(\pm x_{D0}) = +(-A_{\rm XR})^{3/2}$] takes the value $2(-A_{\rm XR})^{1/2} - \frac{1}{3}(q_1 + q_2)$ and does indeed join up with equation (72) as expected. This does form part of the solution to the lens surface.
Examples of the cubic X-ray approximation of the Cartesian oval are shown in Fig. 4. The outputs displayed in these graphs were calculated using MATLAB (MathWorks, 2004) in the default double precision. At $|\delta| = 10^{-4}$, the exact equation for the Cartesian oval is still well conditioned enough to deliver stable output, and the X-ray approximation already agrees well with it. It is at values of $|\delta|$ below this that the usefulness of the X-ray approximation becomes obvious. Attempts to use the exact formula result in inconsistent output, while the output of the X-ray approximation remains stable. Fig. 5 displays a series of SHADOW3 ray-tracing simulations (Sanchez del Rio et al., 2011) of the lens surfaces determined by using the X-ray approximation for the six cases in Fig. 4. All of the 500,000 rays originate from a two-dimensional Gaussian source of 1 µm height and width. This is much smaller than a normal synchrotron electron beam source, but was chosen to keep down aberrations that appear when the source size becomes comparable with the lens aperture. The rays are randomly sampled in angle over a uniform distribution of horizontal width 1.6 µrad and vertical width 1.6 µrad. These widths were chosen in order to just exactly cover the full aperture of the lens surface. Two optical elements are used in each simulation. The first optical element is used solely to turn the divergent rays from the source point into a convergent beam. It is a perfectly reflecting, ideally shaped ellipsoidal mirror located 23.10287 m from the source point. This mirror is set to a central grazing incidence angle of 3 mrad. It is shaped so that its source point coincides with the original source of the rays and its image point lies 23.10287 m downstream. The second optical element is the lens surface. It is located 0.010 m downstream from the first optical element. The image point lies 8.6912 m downstream. The basic shape is a plane surface of aperture 0.07425 mm horizontal × 0.07425 mm vertical. To this plane is added a spline interpolation generated by the SHADOW3 utility PRESURFACE from a 501 × 501 mesh of points calculated by MATLAB from the X-ray approximation of this section. The index of refraction is taken as constant in the medium upstream from the lens surface and in the medium downstream from the lens surface. Absorption is neglected in both media. The displayed plots are all taken at the final focal point. In all six simulations, the distribution of rays in the image fits well to Gaussians of FWHM very close to 0.886 µm, the geometrically demagnified source size, in both height and width. A calculation of the spot size using the SHADOW3 utility RAY_PROP on 17 frames over a range within ±0.8 m of the image point at 8.6912 m showed that this point was indeed, as required, the point at which the rays converged (see Fig. 6). It is therefore demonstrated that the X-ray approximation can indeed generate lens surfaces that focus a convergent beam.
The paraxial approximation to a conic section
If the incident rays deviate from the central line x = 0 by only a small amount, the calculations of the ideal lens surface and of the lens surface in the X-ray approximation both show that the value yðxÞ of the lens surface will also be small. In this case, one can assume that the cubic term in equation (47) is much smaller than the quadratic term, thus leading to the condition for which the cubic term in equation (47) can be neglected.
[Recall that $\varepsilon = 1 - (n'/n)^2$ and that $C_2$ and $C_3$ are defined in equations (50) and (52), respectively.] In the X-ray approximation, equation (77) reduces to the simple condition

$$\left|\frac{y}{q_1 + q_2}\right| \ll 1. \qquad (78)$$

If equation (77) (for the general case) or equation (78) (for the X-ray approximation) is fulfilled, then the paraxial approximation is valid. Equation (47) then reduces to its quadratic truncation [equation (79)], and equation (57) for the X-ray approximation reduces to

$$(q_1 + q_2)\,y^2 + q_1 q_2\,y + \frac{q_1 - q_2}{2\delta}\,x^2 = 0. \qquad (80)$$

The solutions $y(x)$ of these equations are conic sections. The type of conic section depends on the sign of the quadratic term $y^2$. Beginning with general values of $n'/n$, one can complete the square of equation (79) for two cases:

(i) $C_2 < 0$. The paraxial approximation to the ideal lens surface is an ellipse [equation (81)].

(ii) $C_2 > 0$. The paraxial approximation to the ideal lens surface is a hyperbola [equation (82)].

Figure 4. Examples of solutions of the cubic X-ray approximation [equation (57)] at various values of $\delta$ for a representative set of values for $q_1$ (23.0929 m) and $q_2$ (8.6912 m). The solutions of the cubic approximation are labelled $y(x)$. The solutions $x(y)$ were calculated by solving equation (18) as a quadratic equation in $x^2$ with $y$-dependent coefficients [see equation (45)]. The top row demonstrates three cases in which $\delta > 0$ and the bottom row demonstrates three cases in which $\delta < 0$. In each row, $\delta \to 0$ from left to right. Note the loss of numerical precision in the solutions of the exact formula as $|\delta|$ decreases, even while the cubic approximation remains numerically stable.
In the X-ray approximation, one can complete the square of equation (80) for two cases:

(i) $\delta > 0$. The paraxial approximation to the lens surface is the ellipse

$$(q_1 + q_2)\left[y + \frac{q_1 q_2}{2(q_1 + q_2)}\right]^2 + \frac{q_1 - q_2}{2\delta}\,x^2 = \frac{q_1^2 q_2^2}{4(q_1 + q_2)}. \qquad (83)$$

(ii) $\delta < 0$. The same completed-square form, now with a negative coefficient of $x^2$, describes a hyperbola [equation (84)].

Figure 5. Six ray-tracing simulations performed by SHADOW3 on the cases displayed in Fig. 4, except for a slight modification in (e) and (f) to accommodate the limited precision given by SHADOW3 to the index of refraction. The values of $\delta = n'/n - 1$ are shown on each diagram. In all simulations, $q_1$ is 23.0929 m and $q_2$ is 8.6912 m as in Fig. 4. All of the rays originate from a Gaussian source of 1 µm r.m.s. height and width. They are caused by an ideal primary focusing element to converge to the initial focus $x = 0$, $y = -q_1$. The refractive surface given by the X-ray approximation in each case is calculated on a 501 × 501 mesh of points over an aperture 0.07425 mm × 0.07425 mm. The SHADOW3 utility PRESURFACE is used to determine a spline function over this mesh. The resulting spline is then added to a refractive plane surface, which thus causes the rays from the primary focusing element to converge onto the final focal point $x = 0$, $y = -q_2$. Units of distance are centimetres. See text for further details.
Figure 6. Depth of focus calculation of the lens surface calculated by the X-ray approximation. A distance of zero denotes the image point at 8.6912 m downstream from the lens surface.
In the limit $q_1 \to +\infty$ and $|\delta| \ll 1$, equations (83) and (84) approach the conic sections calculated by Sanchez del Rio & Alianelli (2012). An even stricter paraxial approximation is obtained if, in addition to the condition given in equations (77) or (78), one demands that the quadratic $y^2$ term in equations (79) or (80) be much smaller than the linear $y$ term. For general $n'/n$, this imposes the additional requirement given in equation (85), which in the X-ray approximation becomes

$$\left|\frac{(q_1 + q_2)\,y}{q_1 q_2}\right| \ll 1. \qquad (86)$$

[One can see that equation (86) is in fact more stringent than equation (78) by showing that $1/(q_1 + q_2) < (q_1 + q_2)/(q_1 q_2)$ for positive $q_1$ and $q_2$.] If the conditions of equations (85) or (86) are fulfilled, the lens surface may be approximated as a parabola. For general $n'/n$, the lens surface is then approximately the parabola obtained by keeping only the linear term of equation (79), $x^2 = C_1(\varepsilon y)$ [equation (87)]; and in the X-ray approximation the lens surface is approximately

$$y \simeq -\frac{x^2}{2\,\delta F}, \qquad F = \frac{q_1 q_2}{q_1 - q_2}, \qquad (88)$$

where $F$ is the geometrical focal length. Double differentiation of equation (88) yields the well known relationship between the radius $R$ and the focal length $F$ of a single lens surface in the X-ray approximation, $F = R/\delta$.
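For orientation, the relation $F = R/\delta$ and the parabolic sag are evaluated below for a single diamond surface at 15 keV; the $q_1$ and $q_2$ values are the representative pair used throughout this section.

```matlab
% Geometrical focal length and apex radius of a single refracting surface
% in the X-ray approximation [equation (88) and F = R/delta].
q1 = 23.0929; q2 = 8.6912;           % object and image distances in m
delta = 3.23e-6;                     % |n'/n - 1| for diamond at 15 keV
F = q1*q2/(q1 - q2);                 % geometrical focal length in m
R = delta*F;                         % apex radius of curvature, F = R/delta
x = 20e-6;                           % sample aperture coordinate in m
sag = x^2/(2*R);                     % parabolic surface depth at x
fprintf('F = %.3f m, R = %.2f um, sag = %.2f um\n', F, R*1e6, sag*1e6);
```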
Testing the paraxial approximation
Figs. 7 and 8 demonstrate how, in the paraxial approximation, the best conic section (ellipse for $\delta > 0$, hyperbola for $\delta < 0$) and the best parabola deviate from the X-ray approximation to the ideal Cartesian oval for $\delta = \pm 3.23 \times 10^{-6}$, the value for diamond at 15 keV. Surfaces 2, 24 and 48 were selected from Table 1 to demonstrate that the conic section approximations fail at decreasing apertures as the curvature of the surface increases. As mentioned by previous authors, the parabola deviates from the X-ray approximation at much smaller apertures than does the best ellipse or hyperbola. Each plot's horizontal axis is scaled to make visible the aperture at which even the best ellipse or hyperbola begins to deviate from the X-ray approximation. Thus, for surface 2, which has an aperture of 74.25 µm, one would expect the parabolic approximation to be sufficient because it matches the X-ray approximation well out to $|x| < 2500$ µm. For surface 24, which has an aperture of 59.52 µm, the parabolic approximation could still be sufficient, but, as the parabola only matches the X-ray approximation out to $|x| < 200$ µm, one might prefer to give this surface an elliptical or hyperbolic shape. For surface 48, which has an aperture of 46.760 µm, even the elliptical/hyperbolic approximation begins to fail at the edges; therefore, this surface must follow the ideal curve. As a result, surface 48 was chosen for the SHADOW ray traces of Fig. 9.

Figure 7. In all plots, $\delta = +3.23 \times 10^{-6}$. The label 'Cubic' means that the X-ray approximation of the ideal Cartesian oval was used to calculate the curve. Refer to Table 1 for the list of surfaces. (a, c, e) Comparison of cubic curve to paraxial ellipse and parabola of surfaces 2, 24 and 48, respectively. (b, d, f) Deviation of paraxial ellipse and parabola from cubic curve of surfaces 2, 24 and 48, respectively. Solid circles at the ends of a curve indicate that the curve terminates there because the slope ${\rm d}y/{\rm d}x$ diverges.
To emphasize the improvement offered by the X-ray approximation over the ellipse/hyperbola, the aperture of surface 48 was slightly widened to 63.600 µm, at which Figs. 7 and 8 show that the ellipse or hyperbola fails severely at the edges. A value $\delta = \pm 3.2 \times 10^{-6}$ was chosen because of the limited precision given to the index of refraction input in SHADOW. In each simulation, 500,000 rays were randomly selected from a Gaussian source of root mean square width 0.1 µm (FWHM 0.23548 µm) and uniform angular distribution. Although the chosen size of the source is much smaller than the electron beam sizes of real synchrotron storage rings, it is applied here to approximate a true point source, eliminating aberrations that would appear in the focal spot if the source size were comparable with the lens surface's aperture. Two optical elements were created. The first was a purely theoretical spherical mirror designed to reflect all rays from the source at normal incidence. This element exists only to produce the necessary convergent beam for the second element, which is the lens surface itself. The second element is situated 12.227 mm upstream from the focus of the spherical mirror. It is simulated with a plane figure to which a spline file generated by the SHADOW utility PRESURFACE is added. MATLAB was used to calculate a cylinder for one-dimensional focusing with 501 points over a width of 46.760 µm in the non-focusing direction and 681 points over a width of 63.600 µm in the focusing direction. The rays in the calculated profiles were sorted into 250 bins according to their position. Figs. 9(a) and 9(c) show the beam profiles generated at the nominal focus 10.999 mm downstream from surface 48, comparing them with the original source. The profiles generated by the ellipse or hyperbola are slightly but noticeably lower at the peak and have slightly larger tails than those generated by the X-ray approximation to the ideal curve. The profiles generated by the parabola show a loss of about 50% of the peak intensity and correspondingly severe tails. Depth of focus plots showing the variation of the beam size versus the distance along the beam direction from the nominal focus were generated by the SHADOW3 utility RAY_PROP and are displayed in Figs. 9(b) and 9(d). The X-ray approximation yields a lens surface that minimizes the beam width at the nominal focus, as required. The FWHM of the beam profile at this minimum is 0.210 µm, which matches the geometrically demagnified source size. The ellipse/hyperbola shifts the minimum of the beam width closer to the lens surface by 10-20 µm, and this minimum is still not quite as small as that achieved by the X-ray approximation. The parabola shifts the minimum of the beam width by about 60-70 µm from the nominal focus, and this minimum is considerably larger than that achieved by either the X-ray approximation or the ellipse/hyperbola. These results again demonstrate that the X-ray approximation can generate lens surfaces that focus a convergent beam better than the approximate conic sections can.
Diffraction broadening
SHADOW was used to calculate the effective aperture of the CRL in Table 1. 500,000 rays of 15 keV energy were created from a point source. They were uniformly distributed in angle so that the entire geometrical aperture of the first lens surface (0.075 mm × 0.075 mm) was illuminated. Each lens surface was assumed to be a parabolic cylinder, focusing in the vertical direction only. All of the lens surfaces except the first were taken to be unbounded so that their limited geometrical apertures would not cut off any of the rays propagating inside the CRL. The lens material is diamond, which for 15 keV X-rays has an index of refraction differing from 1 in its real part by $-3.23 \times 10^{-6}$. The linear absorption coefficient is 0.282629 mm$^{-1}$. The calculated intensity distribution on the last surface (number 48) and the calculated angular distribution of the intensity converging onto the final focus are displayed in Figs. 10(a) and 10(b), respectively. The effective aperture is the FWHM of the plot in Fig. 10(a), 26.47 µm. The corresponding numerical aperture is half the FWHM of the angular plot in Fig. 10(b), 1.204 mrad. The diffraction broadening therefore amounts to $0.75\lambda/(2\,\mathrm{NA}) = 25.74$ nm, which is much less than the focal spot widths in Fig. 9. Moreover, within the FWHM effective aperture in Fig. 10(a), a parabola is still a sufficiently good approximation to the ideal shape of the final lens surface, as shown in Figs. 7(e) and 7(f).

Figure 8. In all plots, $\delta = -3.23 \times 10^{-6}$. The label 'Cubic' means that the X-ray approximation of the ideal Cartesian oval was used to calculate the curve. Refer to Table 1 for the list of surfaces. (a, c, e) Comparison of cubic curve to paraxial hyperbola and parabola of surfaces 2, 24 and 48, respectively. (b, d, f) Deviation of paraxial hyperbola and parabola from cubic curve of surfaces 2, 24 and 48, respectively.
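The scaling of the effective aperture with absorption can be illustrated without a full ray trace. The sketch below assumes, purely for illustration, $N$ identical parabolic surfaces of a common apex radius $R$; each such surface adds a material path of $x^2/(2R)$, so the transmitted intensity is $T(x) = \exp[-\mu N x^2/(2R)]$ (Lengeler et al., 1999). The adiabatic CRL of Table 1 has a different radius on every surface, which is why it is evaluated here by ray tracing instead; the value of $R$ below is merely an assumed figure representative of the strongly curved downstream surfaces.

```matlab
% Effective aperture of an idealized CRL of N identical parabolic
% surfaces with common apex radius R: T(x) = exp(-mu*N*x^2/(2*R)).
mu = 0.282629e3;            % linear absorption coefficient, 1/m (diamond, 15 keV)
N  = 48;  R = 2e-6;         % assumed surface count and common apex radius
x  = linspace(-40e-6, 40e-6, 2001);
T  = exp(-mu*N*x.^2/(2*R));            % intensity transmission profile
A_eff = 2*sqrt(2*R*log(2)/(mu*N));     % FWHM of T(x) = effective aperture
fprintf('A_eff = %.2f um\n', A_eff*1e6);
plot(x*1e6, T), xlabel('x (\mum)'), ylabel('T(x)')
```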
Conclusions
The immediate goal of this paper was to prove that an analytical solution, namely a Cartesian oval, exists for a lens surface that is to refocus an incident beam converging to a point into a new beam converging to a point closer to the lens surface. This result serves the long-term goal of designing aberration-free aspherical CRLs that will in future produce X-ray beam spots of 50 nm width and, further on, even 10 nm width. Numerical difficulties that arose in the analytical calculation of the Cartesian oval when the change in refractive index across the lens surface is small, as is usual for X-ray optics, were overcome by a cubic approximation that was numerically stable. The focusing performance of lens surfaces following the cubic 'X-ray' approximation was compared with that of lens surfaces shaped either as ellipses or hyperbolas, or as parabolas, as previous authors have suggested. Elliptical or hyperbolic lens surfaces yield stronger peaks and lower tails at the focus than do parabolic lens surfaces, but surfaces that follow the cubic X-ray approximation provide better focal profiles than either. Examples taken from a proposed adiabatically focusing lens, in which the radius of curvature and the aperture of the lens surfaces both decrease along the beam direction, indicate that the advantages of the X-ray approximation over conic sections are most apparent in the final, most strongly curved, lenses.

Figure 9. SHADOW ray-tracing calculations using surface 48 of Table 1 over a geometrical aperture $A_x$ of 63.600 µm. The source is a Gaussian with a root mean square width of 0.1 µm. (a, c) Profiles of source and of focal spots produced by the cubic curve, the paraxial conic section (ellipse/hyperbola) and the parabola for $\delta = +3.2 \times 10^{-6}$ and $\delta = -3.2 \times 10^{-6}$, respectively. (b, d) FWHM of focal spots produced by the cubic curve, the paraxial conic section (ellipse/hyperbola) and the parabola for $\delta = +3.2 \times 10^{-6}$ and $\delta = -3.2 \times 10^{-6}$, respectively, as a function of distance along the beam from the nominal focus. See text for details.
"Engineering",
"Physics"
] |
Performance modelling of direct contact membrane distillation using a hydrophobic/hydrophilic dual-layer membrane
An HFP-co-PVDF/N6 hydrophobic/hydrophilic dual-layer membrane was used to study desalination with direct contact membrane distillation (DCMD). A one-dimensional (1-D) model was proposed to predict the flux and thermal efficiency. Heat and mass transfer equations were solved numerically for the combined hydrophilic and hydrophobic layers. The membrane characteristics of the hydrophobic layer were considered for the calculation of the mass transfer coefficients, while the hydrophilic layer was ignored since it was assumed to be filled with water. However, the hydrophilic layer was taken into account during the calculations of conductive heat transfer. Therefore, the equations differ from those for single-layer hydrophobic membranes. It was found that, with the same hydrophobic membrane characteristics, the single-layer membranes performed with better flux and thermal efficiency than the dual-layer membranes. Furthermore, an improvement of flux and thermal efficiency by the addition of a hydrophilic layer has not been observed experimentally, and it is suggested that the improved performance for dual-layer membranes reported previously is due to improved permeability from thinner and more porous hydrophobic layers that can be mechanically reinforced by the hydrophilic layer. The model was validated by comparing the experimental results for single- and dual-layer membranes with the modelling results. The flux and thermal efficiency predicted by the model were within 10% error of the experimental results.
INTRODUCTION
Membrane distillation (MD) is a separation process that has been known for over 50 years (Camacho et al. 2013). It has been considered for different applications such as desalination, wastewater treatment, and dairy applications (Mostafa et al. 2017). Various configurations of MD have also been considered, such as direct contact MD (DCMD), air gap MD, vacuum MD, and sweeping gas MD. MD is a thermally driven process, in which heat and mass transfer occur simultaneously across the membrane (Camacho et al. 2013). In DCMD, evaporation and condensation take place on the feed and permeate side, respectively. Vapour molecules transport through the membrane pores and condense on the permeate side. This requires the membrane pores to be kept non-wetted during the process. In comparison with other desalination technologies, highly concentrated solutions and a pure water product can be achieved with MD (Drioli & Criscuoli 2009; Susanto 2011; Alkhudhiri et al. 2012). MD also has some disadvantages: high thermal energy consumption, heat loss by conduction, and membrane wetting and fouling (Pangarkar et al. 2011; Alkhudhiri et al. 2012; Qtaishat & Banat 2013). The performance of MD depends on the membrane properties, the process conditions, and the module design (Susanto 2011; Winter et al. 2013). Proper module design should provide a high rate of mass transfer, high turbulence for feed and permeate, and efficient evaporation. A suitable membrane for the process must resist wetting, high temperature, fouling, and scaling (Wang & Chung 2015). Mass transfer resistance from the membrane can be minimised by using membranes with low thickness and tortuosity (Adnan et al. 2012). Thermal resistance can be increased by using a thicker membrane, so that heat loss can be reduced. Membranes with higher porosity can increase both the MD permeability and the thermal resistance, in which case MD flux and thermal efficiency both increase (Khayet et al. 2004). Modelling and computation are vital tools when optimising the effects of these parameters on the performance of MD (Susanto 2011).
Different types of hydrophobic/hydrophilic composite membranes have been prepared over the last decade (Essalhi & Khayet 2014). The concept of dual-layer composite membranes was claimed to improve MD flux for desalination in DCMD because of their low resistance to mass flux, achieved by decreasing the water vapour transport path length through the thin hydrophobic top layer, and their low conductive heat loss, attributed to the thick hydrophilic layer (Khayet et al. 2005; Qtaishat et al. 2009a; Essalhi & Khayet 2014). Hou et al. (2012) fabricated a polyvinylidene fluoride (PVDF) flat-sheet composite membrane using hydrophilic polyester non-woven fabric for MD. Through DCMD tests, the composite membrane achieved a permeate flux as high as 47.6 kg/m²·h with feed and permeate inlet temperatures of 80.5 and 20.0 °C, respectively (Hou et al. 2012). In Qtaishat et al. (2009b), two different types of surface modifying macromolecules (SMMs) were blended into the host hydrophilic polymer polyetherimide (PEI) to prepare hydrophobic/hydrophilic porous composite membranes. They found that most of the dual-layer composite membranes performed at 55% higher fluxes than commercial polytetrafluoroethylene (PTFE) membrane. Qtaishat et al. (2009c) used hydrophobic/hydrophilic dual-layer membranes, in which hydrophilic polysulfone was blended with hydrophobic SMMs. Some of the dual-layer membranes exhibited higher DCMD fluxes than commercial PTFE membrane. Although the M1 (SMMs/PS) dual-layer membrane used in Qtaishat et al. (2009c) had the highest flux among the membranes in that study, the M12 (SMMs/PEI) membrane from the study of Khayet et al. (2005) achieved a higher flux than the M1 membrane due to its thinner hydrophobic top layer. The hydrophobic layer prevents the penetration of water into the membrane pores and provides mass transfer resistance to vapour flow, while both the hydrophobic and hydrophilic layers contribute to heat transfer (Khayet 2011). On the whole, it has been claimed that the hydrophobic layer of the membrane should be as thin as possible, whereas its pore size and porosity should be as large as possible, to achieve high MD permeability (Khayet 2011).
Mathematical modelling of MD can lead to further awareness of process mechanisms (Hitsov et al. 2017). There have been a number of models that simulated various types of MD phenomena, and those models have their strengths along with some limitations (Hitsov et al. 2015). Models can be divided into three types: 0-D models, 1-D models, and 2-D models (Alsaadi et al. 2013). 0-D models do not consider the changes in fluid conditions along the module. Bulk averaged fluid conditions and module properties are used as inputs in this type of model. 1-D models divide the module into small elements along its length in the flow direction, so that in each element, temperature and flow properties can change along the membrane. 2-D models involve complex computational fluid dynamic approaches to describe the heat and mass transfer across the feed and permeate channels and membrane modules, which can be useful when changes in flow and temperature occur in two dimensions. 2-D models are computationally more expensive and require detailed calculations and longer time to solve compared to 1-D models (Alsaadi et al. 2013).
Modelling heat and mass transfer in DCMD is an approach to understanding the effects of different design parameters on the performance of DCMD (Deshpande et al. 2017). MD modelling is based on the heat and mass transfer equations of the process and incorporates membrane properties such as porosity and pore size. Previous work has mainly focused on theoretical models and experimental studies for various operating conditions (Khayet 2011).
Performance modelling of hydrophobic membranes has been undertaken in previous studies for different MD configurations (Zhang et al. 2012, 2013; Lawal et al. 2014). Flux predictions were performed for PTFE membrane in DCMD under various process parameters such as velocity, module length, and feed temperature (Zhang et al. 2009). Ibrahim & Alsalhy (2013) predicted flux by performance modelling for DCMD using hollow fibre membranes and found that the permeate flux was less sensitive to the feed or permeate flow rate than to the feed temperature. The MD performance of compressed PTFE membrane was modelled by Zhang et al. (2012, 2013), and flux and thermal efficiency were predicted at different pressures. Model validations were undertaken by comparing the predicted flux with experimental results, as well as the exit temperatures of the feed and permeate, and good agreement was found between the experimental results and the model predictions. Lee et al. (2015) studied theoretical modelling of the DCMD process using a commercial hydrophobic microporous PTFE/PP composite membrane. The model predicted the flux successfully in comparison with experimental data, and the surface porosity was found to be a more significant factor for process performance than parameters such as the thicknesses of the active and support layers (Lee et al. 2015). Winter et al. (2013) introduced a new integrated modelling approach for DCMD using different membranes with and without backing. The model included parameters based on the geometrical specifications of the membrane and of the scrim/non-woven backing structures. Choosing optimised backing structures that provide high porosity and small coverage of the effective membrane surface improved membrane performance. Zhang et al. (2011) developed a model for flat-sheet hydrophobic PTFE membrane in DCMD. The effects of process parameters, such as temperature and module length, on flux were observed. The model predicted the permeate flux at different module lengths and found that the permeate flux decreased as the module length increased. One of the advantages of mathematical models is that, once validated, they can be used to scale up MD from laboratory scale to industrial scale. 0-D modelling has been applied to hydrophobic/hydrophilic dual-layer membranes. The first model was developed by Qtaishat et al. (2009a), and it was used to observe the influence of the hydrophobic and hydrophilic layer characteristics on permeate flux. Optimum membrane characteristics were identified for high-efficiency MD. Heat and mass transfer equations were also derived for the model. Other modelling work examined the optimisation of membrane characteristics for hollow fibre hydrophobic/hydrophilic membranes to enhance the permeate flux (Bonyadi & Chung 2007). The effect of the thermal conductivity of the hydrophilic layer of a dual-layer hollow fibre membrane was examined experimentally and with the model (Su et al. 2010). The study showed that increasing the thermal conductivity of the hydrophilic layer increased the permeate flux. Nevertheless, the trend for flux changed once the thermal conductivity of the hydrophilic layer reached a certain level, because of the limited temperature difference across the hydrophilic layer.
In this study, a 1-D model was developed to predict the permeate flux and energy efficiency for hydrophobic and hydrophobic/hydrophilic membranes in DCMD under various operating conditions. The predictions from the model were compared with the experimental results and showed very good agreement. From this investigation, the effect of the hydrophilic layer on the flux was predicted, and it was found that, for the dual-layer membrane used, the hydrophilic layer resulted in a lowering of the flux. The reasons for this observation are discussed. Figure 1 shows the schematic diagrams of the single- and dual-layer membranes for the current DCMD study. The temperature and concentration variations across the feed and permeate channels, as well as those across the membrane, are shown.
THEORY
The difference between the single-and dual-layer membranes is the extra hydrophilic layer in the dual layer. Figure 1 shows that for the dual-layer membrane, the thickness of the hydrophobic layer is in general much less than that in a single-layer membrane. The reasons for using the dual layer include: (1) the distance for vapour transport is reduced, and (2) the hydrophilic layer can be made strong to support the hydrophobic layer. In developing the 1-D model for heat and mass transfer across the membrane, the length of the module in the flow direction is generally discretised into small elements. Mass and heat transfer in both the flow and cross-flow directions are then calculated in each element based on energy and mass conservations. The equations given in the following sections apply to all membrane elements.
Heat transfer
The heat transferred through a hydrophobic membrane passes through three regions: bulk feed to the membrane surface, the membrane itself, and the permeate surface to the bulk permeate. The heat transfer mechanisms for each of these regions can be described as below.

Bulk feed to membrane surface:

$$Q_f = h_f\,(T_{b,f} - T_{m,f}) \qquad (1)$$

Through the membrane:

$$Q_m = h_m\,(T_{m,f} - T_{m,p}) + J\,\Delta H_v \qquad (2)$$

Membrane surface to bulk permeate:

$$Q_p = h_p\,(T_{m,p} - T_{b,p}) \qquad (3)$$

where $Q_f$, $Q_m$, and $Q_p$ are the heat transfers across the feed, the membrane, and the permeate, respectively, $T_{b,f}$ and $T_{b,p}$ are the bulk temperatures of the feed and permeate, $J$ is the permeate flux, $\Delta H_v$ is the latent heat of vaporisation, and $T_{m,f}$ and $T_{m,p}$ are the temperatures at the membrane surfaces on the feed and permeate sides, respectively. $h_f$, $h_m$, $h_p$ are the heat transfer coefficients at the feed, membrane, and permeate sides, respectively, in Equations (1)-(3), and the heat transfer is defined as being per unit area. Assuming no heat loss from the module, the total heat transfer at steady state can be written as follows:

$$Q = Q_f = Q_m = Q_p \qquad (4)$$

Eliminating the interface temperatures then gives the overall heat flux for single-layer membranes:

$$Q = \frac{h_m\,(T_{b,f} - T_{b,p}) + J\,\Delta H_v}{1 + h_m/h_f + h_m/h_p} \qquad (5)$$

Heat transfer through the hydrophobic/hydrophilic dual-layer membrane is different from that through hydrophobic single-layer membranes, due to the additional transport section, namely the hydrophilic membrane layer. Heat flux through the hydrophilic layer is considered to be conductive because the layer is filled with water. Heat transport associated with the flow of water through the hydrophilic layer is considered negligible because the flux is low, and it is ignored.

Through the hydrophilic membrane layer:

$$Q_s = h_s\,(T_{m,p} - T_{s,p}) \qquad (6)$$

where $h_s$ is the conductive heat transfer coefficient of the hydrophilic layer and $T_{s,p}$ is the temperature at the interface between the hydrophilic layer and the permeate. At steady state,

$$Q = Q_f = Q_m = Q_s = Q_p, \qquad (7)$$

with $Q_p = h_p\,(T_{s,p} - T_{b,p})$ in this case. The convective heat transfer coefficients can be estimated by using correlations of dimensionless numbers such as the Nusselt, Reynolds, and Prandtl numbers.
The Nusselt number is defined as follows:

$$Nu = \frac{h\,d}{k} \qquad (9)$$

The Prandtl number is used as a correction factor for the Nusselt number (Lawal et al. 2014):

$$Pr = \frac{C_p\,\mu}{k} \qquad (10)$$

And the Reynolds number:

$$Re = \frac{\rho\,u\,d}{\mu} \qquad (11)$$

where $k$ is the thermal conductivity of water, $\rho$ is the density of the water, $C_p$ is the specific heat of water, $\mu$ is the dynamic viscosity of water, $d$ is the hydraulic diameter of the channel, and $u$ is the average velocity of the liquid. The Nusselt number correlation is chosen according to the flow regime in the channel (Khayet 2011; Zhang et al. 2012).
$Re < 2{,}100$: laminar regime; $2{,}100 < Re < 10{,}000$: transitional regime; $Re > 10{,}000$: turbulent regime. An empirical Nusselt correlation appropriate to each regime is applied (Khayet 2011; Zhang et al. 2012). A worked example follows after this section. The conductive heat transfer coefficients of the hydrophilic sub-layer and the hydrophobic top layer can be calculated from the following equations:

$$h_s = \frac{\varepsilon_s\,k_w + (1 - \varepsilon_s)\,k_s}{\delta_s}$$

$$h_t = \frac{\varepsilon_t\,k_g + (1 - \varepsilon_t)\,k_t}{\delta_t}$$

where $k_s$, $k_w$, $k_t$, and $k_g$ are the thermal conductivities of the hydrophilic membrane polymer, water in the pores, hydrophobic membrane polymer, and the gas contained in the pores; $\delta_s$, $\delta_t$, $\varepsilon_s$, and $\varepsilon_t$ are the thicknesses and porosities of the hydrophilic and hydrophobic layers of the composite membrane, respectively. The thermal efficiency (EE) was calculated from the ratio of the flux ($J$) multiplied by the latent heat ($h_{latent}$) to the product of the mass flow rate ($\dot m_f$), the specific heat capacity ($C_p$), and the temperature difference between the bulk inlet temperature ($T_{fi}$) and the bulk outlet temperature ($T_{fo}$), as described by the following equation (Zhang et al. 2010; Swaminathan et al. 2016):

$$EE = \frac{J\,h_{latent}}{\dot m_f\,C_p\,(T_{fi} - T_{fo})}$$
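To make the use of these correlations concrete, the sketch below evaluates a film coefficient and the hydrophobic-layer conduction coefficient for one channel element. The Dittus-Boelter form $Nu = 0.023\,Re^{0.8}Pr^{0.33}$ is used here only as an assumed example of a turbulent-regime correlation; the specific correlations adopted in this work follow the cited references, and all property and geometry values below are illustrative rather than the measured M1/M2 values.

```matlab
% Film and membrane heat transfer coefficients for one channel element.
% The Nusselt correlation below is an assumed turbulent-regime example.
k_w = 0.63; rho = 990; Cp = 4180; mu_w = 6.0e-4;  % water properties near 50 C
u = 3.5; d = 2.0e-3;               % channel velocity (m/s), hydraulic diameter (m)
Re = rho*u*d/mu_w;                 % equation (11); > 10,000 here, so turbulent
Pr = Cp*mu_w/k_w;                  % equation (10)
Nu = 0.023*Re^0.8*Pr^0.33;         % assumed correlation (Dittus-Boelter form)
h_f = Nu*k_w/d;                    % film coefficient from equation (9), W/(m^2 K)
% Conduction through the hydrophobic layer (gas and polymer in parallel):
k_g = 0.026; k_t = 0.25;           % gas and polymer conductivities, W/(m K)
eps_t = 0.8; delta_t = 60e-6;      % assumed porosity and thickness
h_t = (eps_t*k_g + (1 - eps_t)*k_t)/delta_t;
fprintf('Re = %.0f, h_f = %.0f, h_t = %.0f W/(m^2 K)\n', Re, h_f, h_t);
```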
Thermal efficiency for MD can be also described as the proportion of vaporisation latent heat to the total heat transferred (via latent heat and conductive losses) from the feed to the permeate (Alkhudhiri et al. 2012). Thermal efficiency can be enhanced by adequate membrane thickness, high feed temperature, and flow rates (Al-Obaidani et al. 2008;Duong et al. 2015).
Temperature polarisation is the difference between the interface temperatures and the bulk temperatures (Zhang 2011). It reflects the heat transfer resistance of the boundary layers adjacent to the membrane. The temperature polarisation coefficient can be given as in the following equation (Ge et al. 2014):

$$TPC = \frac{T_{m,f} - T_{m,p}}{T_{b,f} - T_{b,p}}$$
Mass transfer
Mass transfer through the hydrophobic single-layer membrane occurs in three regions: mass transport through the boundary layer at the feed side, mass transfer through the membrane pores, and mass transfer from the membrane surface to the permeate side. Although dual-layer membranes have an additional hydrophilic layer, the equations remain the same as for hydrophobic single-layer membranes. The hydrophilic layer of the membrane is assumed to be filled with water, and the velocity of water flow through the hydrophilic layer is assumed to be low, so that the hydraulic resistance to mass transfer from the hydrophilic layer can be neglected. The mass flux across the membrane can be calculated by the following equation:

$$J = C_m\,(p_{v_{m,f}} - p_{v_{m,p}}) \qquad (21)$$

where $C_m$ is the mass transfer coefficient, and $p_{v_{m,f}}$, $p_{v_{m,p}}$ are the partial pressures on the feed and permeate sides, respectively. The partial pressures can be calculated by the Antoine equation in the following equation.
$$p_v = \exp\left(23.328 - \frac{3{,}841}{T - 45}\right) \qquad (22)$$

where $T$ is the mean temperature, $(T_{b,f} + T_{b,p})/2$, at the membrane interface. The mass transfer coefficient is chosen according to the Knudsen number. If the Knudsen number is greater than 1, Knudsen flow dominates and $C_m$ can be described as follows (Khayet et al. 2004):

$$C_m^{Kn} = \frac{2}{3}\,\frac{\varepsilon_t\,r_{p,t}}{\tau_t\,\delta_t}\left(\frac{8M}{\pi R T}\right)^{1/2} \qquad (23)$$

If $K_n < 0.01$, molecular diffusion dominates and $C_m$ can be described as follows:

$$C_m^{D} = \frac{\varepsilon_t}{\tau_t\,\delta_t}\,\frac{P D}{P_a}\,\frac{M}{R T} \qquad (24)$$

If $0.01 < K_n < 1$, the combined Knudsen/molecular diffusion mechanism dominates mass transport (Qtaishat et al. 2008; Soni et al. 2009), and $C_m$ is obtained by adding the Knudsen and molecular diffusion resistances in series,

$$C_m^{C} = \left[\frac{3}{2}\,\frac{\tau_t\,\delta_t}{\varepsilon_t\,r_{p,t}}\left(\frac{\pi R T}{8M}\right)^{1/2} + \frac{\tau_t\,\delta_t}{\varepsilon_t}\,\frac{P_a}{P D}\,\frac{R T}{M}\right]^{-1} \qquad (25)$$

where $\delta_t$, $\tau_t$, $\varepsilon_t$, and $r_{p,t}$ are the thickness, tortuosity, porosity, and pore size of the top hydrophobic layer of the membrane; $D$ is the diffusion coefficient of water vapour; $P_a$ is the air pressure; $P$ is the total pressure inside the pore; $M$ is the molecular weight of water; $R$ is the gas constant; and $T$ is the absolute temperature. The mass transfer equations are considered to be the same for the single- and dual-layer membranes, since the hydrophobic layer of the dual-layer membrane was placed on the feed side. During mass transfer, salt ions cannot pass through the membrane, which causes an accumulation of salt ions in the feed and an increase of salt concentration near the membrane surface. This phenomenon is termed concentration polarisation (Hitsov et al. 2015). Concentration polarisation occurs in the MD feed channels and reduces the transmembrane flux. However, for low concentrations of feed solution, this phenomenon may be negligible, as the vapour pressure is not greatly affected. Concentration polarisation does not add a significant computational burden to the model.
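The sketch below strings these mass transfer relations together for a single element: it evaluates the Knudsen number, applies the Knudsen-regime coefficient of equation (23), and converts the interface temperatures into a flux through the Antoine equation (22). All membrane properties and interface temperatures are assumed illustrative values, not the measured M1/M2 characteristics, and temperature polarisation is ignored.

```matlab
% Flux estimate for one element: Knudsen number, C_m from equation (23),
% vapour pressures from the Antoine equation (22). Illustrative values.
M = 0.018; Rg = 8.314;                    % kg/mol and J/(mol K)
T = 273.15 + 42.5;                        % mean membrane temperature, K
r = 0.05e-6; eps = 0.7; tau = 2; delta = 150e-6;  % assumed pore geometry
mfp = 0.11e-6;                            % vapour mean free path, m (assumed)
Kn = mfp/(2*r);                           % Knudsen number (> 1 here)
Cm = (2/3)*(eps*r/(tau*delta))*sqrt(8*M/(pi*Rg*T));  % equation (23)
antoine = @(T) exp(23.328 - 3841./(T - 45));         % equation (22), Pa
Tmf = 273.15 + 60; Tmp = 273.15 + 25;     % assumed interface temperatures
J = Cm*(antoine(Tmf) - antoine(Tmp));     % equation (21), kg/(m^2 s)
fprintf('Kn = %.2f, J = %.1f kg/(m^2 h)\n', Kn, J*3600);
```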
EXPERIMENTAL
The characterisation of the membranes and the DCMD experiments were undertaken as follows. Characterisation was undertaken to determine the porosity, thickness, pore size, and contact angles of the membranes. The membranes used for the tests were a commercially available single-layer polyethylene (PE) membrane (Aquastill) and a bespoke dual-layer PVDF-co-HFP/N6 (PH/N6) membrane fabricated via electrospinning and sourced from the University of Technology Sydney. The membranes used, their code names, and their compositions are listed in Table 1. As Equation (27) shows, the mass flux depends on membrane properties such as pore size $r$, porosity $\varepsilon$, thickness $\delta$, and tortuosity $\tau$ (Zhang et al. 2010), i.e.

$$J \propto \frac{r^{\alpha}\,\varepsilon}{\tau\,\delta} \qquad (27)$$

where $\alpha$ is the exponent of the pore size, which is in the range of 1-2.
The correlation above shows that increasing the porosity and pore size of the membrane increases the permeate flux, whereas increasing the tortuosity and thickness reduces the permeate flux. Therefore, the ideal membrane should have high porosity, adequate pore size (between 0.1 and 0.5 ± 0.08 μm; Hou et al. 2012; Prince et al. 2013), and low thickness and tortuosity. These characteristics are required in the modelling and were measured in this study.
Porosity test
Porosities of the membranes were measured using acetone and water, owing to their different wetting properties for the hydrophobic and hydrophilic layers (Zhang 2011). The water wetted only the hydrophilic layer, and the acetone wetted both the hydrophilic and hydrophobic layers. The mass differences between a dry and a wet membrane were used to determine the porosity of the hydrophilic and hydrophobic layers. Water was first used to measure the porous volume of the hydrophilic membrane support layer, so that the porosity of the hydrophilic layer could be determined using the hydrophilic layer thickness and area. The volume of the membrane ($V_m$) and support layer ($V_{support}$) was found using acetone, as it wetted both the membrane and hydrophilic layer pores. The porous volume of the support layer was calculated from the corresponding equations, in which $V_a$ is the active layer volume, $m_{support}$ is the mass of the support layer, $\rho_{support}$ is the density of the support layer, $\rho$ is the membrane density, and $m_{total}$ is the membrane mass with active and support layers (Zhang et al. 2010). In essence, the pore volume of a layer is the wet-dry mass difference divided by the density of the wetting liquid, and the porosity is this pore volume divided by the layer volume,

$$\varepsilon = \frac{(m_{wet} - m_{dry})/\rho_{liquid}}{V_{layer}}.$$

The density of the active layer can be calculated by liquid pycnometry from the following equation:

$$\rho_{active} = \frac{m_{active\,layer}}{V_{flask} - (m_{acetone+membrane} - m_{active\,layer})/\rho_{acetone}}$$

where $m_{active\,layer}$ is the active layer mass, $V_{flask}$ is the volume of the volumetric flask, and $m_{acetone+membrane}$ is the total mass of the acetone and membrane in the volumetric flask.
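A minimal numerical sketch of the gravimetric porosity calculation, with invented sample masses and dimensions purely for illustration:

```matlab
% Gravimetric porosity: pore volume filled by the wetting liquid divided
% by the layer volume. All sample values are illustrative.
m_dry  = 0.120e-3;        % dry sample mass, kg
m_wet  = 0.280e-3;        % mass after soaking in acetone, kg
rho_l  = 790;             % acetone density, kg/m^3
A      = 25e-4;           % sample area, m^2 (5 cm x 5 cm)
delta  = 120e-6;          % layer thickness, m
V_pore = (m_wet - m_dry)/rho_l;   % liquid-filled pore volume, m^3
eps    = V_pore/(A*delta);        % porosity
fprintf('porosity = %.2f\n', eps);
```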
Pore size (porometer)
The mean pore size, maximum pore size, minimum pore size, and pore size distributions of the membranes were determined with a Quantachrome 3GZ porometer. Wet and dry runs were conducted consecutively. Isopropyl alcohol was used as the wetting liquid due to its low surface tension. The test was conducted by increasing the transmembrane pressure: gas was sent through the holder, and the gas flow was measured as a function of the transmembrane pressure.
Thickness (scanning electron microscope)
A scanning electron microscope (SEM) was used to observe the membrane surfaces and cross-sections. The thicknesses of the membranes were measured from the cross-sections. Membranes were frozen in liquid nitrogen and then cut with a blade to expose clean cross-sections.
Contact angle measurement
The hydrophobicity of the membranes was measured with a Kruss DSA25 contact angle analyser. Contact angles of the membranes were determined by the sessile drop method. A 4-μl drop was placed on the membrane surface, and the contact angle was determined using a camera and image analysis. The mean contact angle was determined from two replicate measurements. Both sides of the dual-layer membrane were tested. Figure 2 shows a schematic diagram of the experimental set-up for the DCMD tests, similar to that used in Zhang (2011). It consisted of a membrane module, a heater, a chiller, the feed and permeate pumps, a balance, and a conductivity meter. A 1% w/w (10 g/L) salt solution (saline water) was used as the feed. The feed solution was held in the feed tank, and the permeate was collected in the product reservoir. The product reservoir was weighed throughout the experiment. DCMD tests were run for 4 h for each experimental condition, and the inlet and outlet temperatures of the feed and permeate streams at the module were recorded by temperature logging. The conductivity of the permeate was recorded throughout the experiment. DCMD tests were conducted at a variety of feed and permeate inlet temperatures and flow rates for the different membranes. The experimental conditions can be found in Table 2. For all the experiments, the mass flow rates for the feed and permeate channels were kept the same in each experiment.
MD test
Experimental conditions were kept constant throughout each experiment. The effects of different experimental conditions on flux and energy efficiency were examined and modelled. The membrane dimensions of M1 and M2 were 135 × 135 mm and 135 × 95 mm (L × W), respectively. As the width of the M2 membrane differed from that of the M1 membrane, two different module arrangements were used to accommodate the variation in the size of the membrane samples between the single- and dual-layer membranes. The module and spacer dimensions can be found in Table 3. These dimensions were used in modelling the DCMD process. The measured membrane characteristics are summarised in Table 4, and Figures 3 and 4 show the SEM images of the M1 and M2 membranes.
Characteristics of the membranes were used in the modelling to calculate the mass transfer and conductive heat transfer coefficients for the M1 single-layer and M2 dual-layer membranes.
The M2 dual-layer membrane had a more porous surface than the M1 single-layer membrane, as can be seen from Figure 3(a)-3(c). High porosity can lower the conductive heat flux and increase the water vapour transport coefficient through the membrane (Zhang et al. 2010). Although high porosity is desirable for better MD flux, other characteristics such as pore size and thickness are also crucial factors.
The contact angles of the membranes indicate their hydrophobicity; for the hydrophobic layers of both M1 and M2, the contact angles were greater than 100°. Hydrophobic membranes are desirable for DCMD to prevent wetting, so that the membrane can be used for longer. Although the contact angle values were not used in the modelling, confirming the hydrophobicity was important for assessing the wetting tendency of the membranes in contact with the feed.
SEM images of the membrane surfaces show the surface porosities, while images of the cross-sections identify the membrane thicknesses. Greater thickness can limit mass transfer across the membrane and may therefore result in a lower permeate flux.
From the images below, it can be seen that both the hydrophobic and hydrophilic layers of the M2 membrane had more porous surfaces than the M1 membrane.
The layers of M2 were detached to allow SEM imaging to identify the layer thicknesses more easily, where hydrophobic and hydrophilic layers are represented as red and black arrows, respectively. The hydrophobic layer was thicker than the hydrophilic layer, as shown in Figure 4(b).
The difference in thickness between the hydrophobic layers of M1 and M2 can be seen in Figure 4(a) and 4(b): the hydrophobic layer of M2 was thicker than that of M1. In DCMD, however, thinner membranes are preferable to thicker ones because of their better performance.
Model validation
Programmes were written in MATLAB to solve the heat and mass transfer equations given in Section 2. A flow chart of the solution procedure is given in the Supplementary Appendix. The permeate flux was predicted at different feed inlet temperatures and different flow rates for the M1 and M2 membranes, and the predicted fluxes were validated against the experimental results. In the following figures, the experimental results are presented as points and the modelling results as lines.
The following assumptions were made to simplify the model for predicting the flux and thermal efficiency:
(1) The temperature gradient across the width of the membrane was neglected.
(2) Heat loss through the membrane module to the environment was neglected.
(3) Sensible heat transferred by the permeate was neglected.
(4) The permeate mass passing through the membrane was neglected, as the single-pass recovery is approximately 5%.
(5) The concentration polarisation effect was neglected, because the salt concentration used in the experiments is low compared with the concentration at which significant vapour pressure depression occurs. Hence, the model is only valid for low salt concentrations.
A minimal sketch of a solution under these assumptions is given below.
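The sketch below, written in Python rather than the MATLAB used for the actual model, marches a co-current module cell by cell under the assumptions above: in each cell the interface temperatures are iterated until the heat delivered through the feed film balances the latent plus conductive heat crossing the membrane, and the bulk temperatures are then updated from the energy balance. All coefficients are illustrative placeholders, not the fitted values for M1 or M2; the Antoine constants are the standard ones for water.

```python
# Minimal co-current 1-D DCMD model (illustrative coefficients throughout).
def p_sat(T_C):
    """Antoine equation for water: saturation pressure in Pa, T in deg C."""
    return 133.322 * 10 ** (8.07131 - 1730.63 / (233.426 + T_C))

C_m = 3e-7           # membrane mass transfer coefficient, kg/(m2 s Pa) (assumed)
h_m = 600.0          # membrane conductive heat transfer coefficient, W/(m2 K) (assumed)
h_f = h_p = 2000.0   # feed/permeate film coefficients, W/(m2 K) (assumed)
dHv, cp = 2.33e6, 4180.0      # latent heat, J/kg; specific heat, J/(kg K)
m_f = m_p = 0.01              # equal feed and permeate mass flow rates, kg/s
L, W, N = 0.135, 0.135, 100   # module length/width (M1) and number of cells
dA = L * W / N

Tf, Tp = 60.0, 20.0   # co-current inlet temperatures, deg C
J_tot = 0.0
for _ in range(N):
    Tmf, Tmp = Tf, Tp
    for _ in range(100):  # damped fixed-point iteration for the interfaces
        J = C_m * (p_sat(Tmf) - p_sat(Tmp))   # local flux, kg/(m2 s)
        q = J * dHv + h_m * (Tmf - Tmp)       # heat flux through the membrane
        Tmf = 0.5 * Tmf + 0.5 * (Tf - q / h_f)
        Tmp = 0.5 * Tmp + 0.5 * (Tp + q / h_p)
    Tf -= q * dA / (m_f * cp)   # feed cools along the channel
    Tp += q * dA / (m_p * cp)   # permeate heats up (co-current)
    J_tot += J * dA
print(f"outlet Tf = {Tf:.1f} C, Tp = {Tp:.1f} C, "
      f"mean flux = {J_tot / (L * W) * 3600:.1f} kg/(m2 h)")
```

A counter-current module can be handled with the same cell equations by sweeping the permeate profile iteratively, since its inlet condition is imposed at the opposite end of the channel.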
The temperature profiles of M1 and M2 along the membrane length for co- and counter-current flows can be seen in Figures 5 and 6. For counter-current flow, the feed and permeate bulk temperatures decrease in parallel along the module, whereas for co-current flow they approach each other. T_bf and T_bp are the bulk temperatures on the feed and permeate sides, and T_mf and T_mp are the membrane interface temperatures on the feed and permeate sides, respectively. The temperatures at the membrane interfaces were calculated by the model.
The temperature profiles of the single- and dual-layer membranes were found to be similar; however, the profile for M2 has an additional temperature point, T_sp, at the interface between the hydrophilic layer and the permeate solution.
The temperature difference between T_sp and T_bp was observed to be very small because the thin hydrophilic layer of the membrane was filled with water, in which only conductive heat transfer takes place, without latent heat transfer.
Predicting the permeate flux at different temperatures and velocities
The permeate flux increased for both membranes with increasing flow rate, owing to enhanced turbulence along the channel. Figure 7 shows the flux at different flow rates for co-current flow with a 60°C feed and 20°C permeate inlet temperature.
The reproducibility of the experimental results was within the maximum error range of 10%. The permeate flux was observed to be higher for the M1 membrane across the flow rates. Figure 8 shows the flux at different flow rates for counter-current flow; the flux trend was similar to that of co-current flow for both the M1 and M2 membranes. M2 achieved its highest flux at 0.105 m/s with counter-current flow. Figure 9 shows the permeate fluxes at different feed inlet temperatures for M1 and M2. Increasing the feed temperature while keeping the permeate inlet temperature at 20°C increased the permeate flux for both membranes, and the M1 membrane achieved a higher permeate flux than the M2 membrane. The lower flux of the M2 dual-layer membrane can be attributed to its heat and mass transfer coefficients, which are determined by the membrane characteristics: heat and mass transfer across the membrane were limited by the greater thickness of M2, and its additional hydrophilic layer increased the thermal resistance and decreased the overall heat transfer. The agreement between the model predictions and the experimental results was better here than when varying the feed inlet velocities, as in Figures 7 and 8. Figure 10 shows the permeate flux versus feed inlet temperature for counter-current flow. For the M1 membrane, the increase in flux with counter-current flow was not as large as with co-current flow, because of the higher temperature difference between the feed and permeate bulk solutions.
The model accuracy against the experimental results was better when changing the feed inlet temperature than when changing the feed and permeate velocities (see Figures 7-10). This is probably because the heat transfer through the membrane was modelled more accurately than the mass transfer, particularly given the simplified assumption made for mass transfer through the hydrophilic support layer. In addition, the model has more parameters influenced by temperature than by velocity. The experiments showed the same sensitivity as the model: once the flow was fully developed, further increases in flow rate changed the thermal driving force only marginally.
Higher porosity, larger pore size, and a thinner layer were favourable for higher flux. Although the M2 membrane had the higher porosity, its performance was lower than that of the M1 membrane. The greater thickness and relatively smaller pore size of the dual-layer M2 membrane could account for its relatively poor mass transfer.
Validation of energy efficiencies at various temperatures and velocities
Energy efficiencies for the hydrophobic single-layer and hydrophobic/hydrophilic dual-layer membranes were compared, and the thermal efficiencies obtained from the calculations were validated against the experimental data, as shown in Figures 11 and 12. It can be seen that MD with co-current flow was more efficient than with counter-current flow. The single- and dual-layer membranes exhibited different energy efficiencies. As can be seen from Equation (19), the energy efficiency is the ratio of the heat transferred by vapour transport across the membrane to the heat lost from the feed. Figure 11 shows the variation of energy efficiency with feed inlet temperature for the co-current flow regime. The M1 membrane was more energy-efficient than the M2 membrane.
The lower energy efficiency of the M2 membrane can be explained by its lower flux and its hydrophilic layer. The temperature difference between the interface and bulk temperatures was higher for the dual-layer membrane than for the single-layer membrane, corresponding to stronger temperature polarisation as quantified by the temperature polarisation coefficient of Equation (20). The additional hydrophilic layer can be a limitation for heat transfer, as it increases conductive heat losses to the permeate and so reduces the thermal efficiency of the membrane. Therefore, the dual-layer membrane performed worse than the single-layer membrane in terms of both permeate flux and energy efficiency.
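For concreteness, the efficiency definition quoted above can be evaluated as in the short sketch below: the latent heat carried by the permeating vapour divided by the total heat released by the feed stream. All values are assumed placeholders, not the measured data of Figures 11 and 12.

```python
# Thermal efficiency in the sense of Equation (19), with illustrative values.
J    = 3.0e-3          # permeate flux, kg/(m2 s)
A    = 0.135 * 0.135   # membrane area, m2
dHv  = 2.33e6          # latent heat of vaporisation, J/kg
m_f  = 0.01            # feed mass flow rate, kg/s
cp   = 4180.0          # specific heat of water, J/(kg K)
dT_f = 8.0             # feed inlet-to-outlet temperature drop, K (assumed)

eta = (J * A * dHv) / (m_f * cp * dT_f)   # vapour heat / heat lost by feed
print(f"thermal efficiency = {eta:.2f}")
```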
Compared with the thinner M1 membrane, the thicker hydrophobic layer of M2 also increased the resistance to vapour transport, so a larger fraction of the heat crossing the membrane was lost by conduction rather than carried as latent heat, even though the greater layer thickness itself reduces the conductive heat transfer across the layer.
Thermal efficiencies with counter-current flow were lower than with co-current flow. The reason is the higher temperature difference between the feed inlet and outlet arising from the flow directions, which leads to greater localised temperature polarisation.
The thermal efficiency of the DCMD system can be increased by increasing the membrane pore size and porosity, which can also lead to higher flux.
CONCLUSIONS
A novel 1-D numerical model for dual-layer membranes was developed and verified experimentally with a hydrophobic/hydrophilic electrospun PH/N6 membrane. The model was developed to predict the permeate flux and energy efficiency in DCMD.
The mass and heat transfer equations for the M1 single-layer and M2 dual-layer membranes were solved numerically. To do so, the heat and mass transfer coefficients were calculated from the measured membrane characteristics. The equations for the hydrophobic/hydrophilic dual-layer membrane accounted for conductive heat transfer across the hydrophilic layer and neglected mass transfer effects in that layer, as the flow through it was assumed to be negligible.
The hydrophilic/hydrophobic dual-layer membrane model was validated by comparing the experimental and modelling results. The model accuracy was generally within 10% error (5% experimental and 5% modelling error), which makes the model reliable for DCMD performance prediction. The accuracy for flux variation with velocity was lower than that with feed inlet temperature, indicating that the hydrodynamic simplifications made for the dual-layer membrane introduce greater error into the heat transfer equations used.
The increase in permeate flux with feed inlet temperature was greater than that with flow rate. Despite the M2 membrane having higher porosity than the M1 membrane, M1 performed better in terms of both mass flux and energy efficiency.
This 1-D model can be used to predict the flux and thermal efficiency of either single- or dual-layer membranes. This successful application of a 1-D model to dual-layer membranes opens the way to an in-depth analysis of the heat and mass transfer phenomena in such membranes, and can also be extended to a 2-D model.
"Engineering"
] |
Molecular Identification of β-Citrylglutamate Hydrolase as Glutamate Carboxypeptidase 3*
β-Citrylglutamate (BCG), a compound present in adult testis and in the CNS during the pre- and perinatal periods, is synthesized by an intracellular enzyme encoded by the RIMKLB gene and hydrolyzed by an as yet unidentified ectoenzyme. To identify β-citrylglutamate hydrolase, this enzyme was partially purified from mouse testis and characterized. Interestingly, in the presence of Ca2+, the purified enzyme specifically hydrolyzed β-citrylglutamate and did not act on N-acetyl-aspartylglutamate (NAAG). However, both compounds were hydrolyzed in the presence of Mn2+. This behavior, together with the fact that the enzyme was glycosylated and membrane-bound, suggested that β-citrylglutamate hydrolase belonged to the same family of proteins as glutamate carboxypeptidase 2 (GCP2), the enzyme that catalyzes the hydrolysis of N-acetyl-aspartylglutamate. The mouse tissue distribution of β-citrylglutamate hydrolase was strikingly similar to that of the glutamate carboxypeptidase 3 (GCP3) mRNA, but not to that of the GCP2 mRNA. Furthermore, like β-citrylglutamate hydrolase purified from testis, recombinant GCP3 specifically hydrolyzed β-citrylglutamate in the presence of Ca2+ and acted on both N-acetyl-aspartylglutamate and β-citrylglutamate in the presence of Mn2+, whereas recombinant GCP2 hydrolyzed only N-acetyl-aspartylglutamate, and this in a metal-independent manner. A comparison of the structures of the catalytic sites of GCP2 and GCP3, as well as mutagenesis experiments, revealed that a single amino acid substitution (Asn-519 in GCP2, Ser-509 in GCP3) is largely responsible for the ability of GCP3 to hydrolyze β-citrylglutamate. Based on the crystal structure of GCP3 and kinetic analysis, we propose that GCP3 forms a labile catalytic Zn-Ca cluster that is critical for its β-citrylglutamate hydrolase activity.
β-Citrylglutamate (BCG) is a pseudodipeptide first identified in newborn rat brain, where it is present at a concentration of 0.5 to 1 µmol/g. BCG is also detected in kidneys and heart, and to a much lower extent in intestine, spinal cord, and lungs of young rats. The BCG content of all organs decreases rapidly after birth, with the notable exception of the testes, where its concentration increases during sexual maturation and remains constant during adulthood (1)(2)(3). Although the exact physiological function of BCG is presently unknown, different observations suggest that it may play an important role during brain development and spermatogenesis (4,5). Recently, BCG has been proposed to be an iron and copper chelator (6,7).
BCG is structurally close to N-acetyl-aspartylglutamate (NAAG), the most abundant dipeptide in the adult brain. NAAG is secreted by neurons upon calcium-dependent depolarization and has long been thought to bind to the metabotropic glutamate receptor mGluR3, possibly attenuating glutamate-induced excitotoxicity (8). However, this neurotransmitter function of NAAG is still debated (9-11). NAAG is hydrolyzed into N-acetyl-aspartate (NAA) and glutamate by glutamate carboxypeptidase 2 (GCP2), a membrane-bound, glycosylated ectoenzyme with its catalytic site oriented toward the extracellular environment, in good agreement with the observation that NAAG can be released from neurons (12). GCP2 has also been designated FOLH1, because of its hydrolase activity on folyl-polyglutamate, and prostate-specific membrane antigen (PSMA), as it is highly expressed in prostate cancer (13)(14)(15). Inhibitors of GCP2 have shown neuroprotective effects in animal models of cerebral ischemia, as well as analgesic activity (16). Notably, NAAG can also be hydrolyzed by glutamate carboxypeptidase 3 (GCP3), also designated N-acetylated α-linked acidic dipeptidase 2 (NAALAD2), which shares about 67% sequence identity with GCP2 but has a 10-fold lower catalytic efficiency (17,18). Like GCP2, GCP3 is a membrane-bound, glycosylated ectoenzyme; its mRNA is abundant in testes and ovaries and detectable in placenta, spleen, prostate, and brain, whereas the mRNA of GCP2 is mainly expressed in kidneys, prostate, liver, and brain (17). Several GCP2 and GCP3 crystal structures have been solved; they reveal a high conservation between the catalytic sites as well as the presence of a Zn-Zn cluster typical of this type of metallohydrolase (19,20).
Unlike that of NAAG, the fate of BCG is less well studied. BCG is converted to citrate and glutamate by a glycosylated, membrane-bound hydrolase that had so far not been identified. BCG hydrolase activity has been detected in testis, lungs, kidneys, and heart (21). The enzyme had been partially purified from rat testis and displayed properties indicating that it is different from NAAG hydrolase(s): first, the partially purified BCG hydrolase did not hydrolyze NAAG; second, it was strongly stimulated by Co2+, Mn2+, or Ca2+ but not by Zn2+; finally, endoglycosidase treatment revealed that the glycosylation of BCG hydrolase was not essential for its enzymatic activity. In contrast, NAAG hydrolase is a bi-zinc hydrolase, its activity is not stimulated by the addition of metals, and it is strictly dependent on the presence of glycans (22,23).
We recently identified the enzymes responsible for the synthesis of NAAG and BCG as RIMKLA and RIMKLB, two distant homologues of bacterial glutamate ligases (24). RIMKLA specifically synthesizes NAAG from NAA and glutamate using ATP as energy source and is exclusively expressed in brain and spinal cord. RIMKLB is also capable of synthesizing NAAG but, in addition, it catalyzes the synthesis of BCG from citrate, glutamate and ATP. RIMKLB is expressed in brain and spinal cord, but also in tissues where NAT8L, the NAA synthesizing enzyme, is not expressed such as testes, ovaries, and oocytes (25), indicating that in these tissues RIMKLB most likely acts exclusively as a BCG synthase. To gain more information about the physiological function of BCG, we attempted to purify BCG hydrolase from mouse testis extracts and to identify it.
MATERIALS AND METHODS
Purification of β-Citrylglutamate Hydrolase from Mouse Testis-5.5 g of mouse testis were homogenized in 27.5 ml of Buffer A (25 mM Hepes, pH 7.1, 120 mM NaCl) and centrifuged for 30 min at 15,000 × g. The pellet, containing >90% of the BCG hydrolase activity, was resuspended in Buffer A containing 5% Triton X-100 (v/v) in a Dounce homogenizer, agitated for 30 min at room temperature, and centrifuged for 30 min at 15,000 × g. The supernatant was diluted 10-fold in Buffer B (25 mM Tris, pH 7.8, 0.5% Triton X-100) and loaded onto a 25-ml DEAE-Sepharose column equilibrated with Buffer B. The column was washed with 75 ml of Buffer B, and proteins were eluted with a linear NaCl gradient (0 to 0.5 M NaCl, 2 × 125 ml of Buffer B). Fractions (4 ml) were collected, and NAAG hydrolase and BCG hydrolase activities were measured in the presence of 1 mM MnCl2. Fractions containing BCG hydrolase activity (76 ml) were pooled, diluted 5-fold in Buffer C (25 mM Tris, pH 8.5, 0.5% Triton X-100), and loaded onto a 25-ml Q-Sepharose column equilibrated with Buffer C, and a linear NaCl gradient (0 to 0.5 M NaCl in 2 × 125 ml of Buffer C) was applied. Fractions containing BCG hydrolase activity (42 ml) were pooled, concentrated to 2.5 ml on an Amicon Ultra 30 kDa concentration unit (Millipore), and loaded onto a 5-ml ConA-Sepharose column as follows: the BCG hydrolase preparation was diluted 10-fold in Buffer D (25 mM Hepes, pH 7.4, 0.5 M NaCl, 0.05% Triton X-100, 1 mM MnCl2, 1 mM CaCl2) and loaded onto the ConA-Sepharose column equilibrated with the same buffer. The column was washed with 50 ml of Buffer D, and the proteins were eluted with a stepwise gradient of methylglucose (0, 25, 50, 100, 250, and 500 mM). The active fractions were pooled, concentrated to 2 ml, and loaded onto a Superdex S-200 gel filtration column (GE Healthcare) as described previously (24).
Identification of Proteins by Mass Spectrometry-The purified enzyme was loaded on a 10% (w/v) polyacrylamide-SDS gel; after electrophoresis the gel was stained with colloidal Coomassie Blue (Fermentas, Lithuania). The bands of interest were cut out and digested with trypsin. Peptides were analyzed by capillary LC-tandem mass spectrometry in an LTQ XL ion trap mass spectrometer (ThermoScientific, San Jose, CA) fitted with a microelectrospray probe. The data were analyzed with the ProteomeDiscoverer software (ThermoScientific), and the proteins were identified with SEQUEST against a target-decoy nonredundant mouse protein database obtained from NCBI. The false discovery rate was below 5%.
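The target-decoy strategy mentioned here estimates the false discovery rate as the ratio of decoy to target matches passing the same score threshold; a minimal sketch with assumed counts:

```python
# Target-decoy FDR sketch (counts are illustrative, not the actual search).
n_target = 412   # peptide-spectrum matches to the target (forward) database
n_decoy  = 17    # matches to the decoy database at the same score threshold
fdr = n_decoy / n_target
print(f"estimated FDR = {fdr:.1%}")   # the threshold is tuned to keep this < 5%
```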
Tissue Distribution of β-Citrylglutamate Hydrolase-Mouse tissues were homogenized with 3 ml/g of Buffer A (25 mM Hepes, pH 7.1, 120 mM NaCl) and centrifuged for 30 min at 18,000 × g. The pellets were washed by three cycles of resuspension in Buffer A in a Dounce homogenizer and 30 min centrifugation at 18,000 × g. They were finally resuspended in Buffer A containing 5% Triton X-100 (v/v) and used for the measurement of BCG hydrolase and NAAG hydrolase activities. Protein was assayed by the method of Bradford using bovine γ-globulin as a standard (26). Quantitative PCR experiments were performed as described previously (27).
Cloning and Preparation of Expression Vectors-The DNA sequences of mouse GCP3 (NAALAD2) and GCP2 (FOLH1) (GenBank reference sequences NM_028279 and NM_001159706) were PCR-amplified from I.M.A.G.E. clones IRCLp5011C0120G and IRCLp5011C0626D (ImaGene GmbH) using 5′ primers (GCP3: ctg acc caa gga tcc atg gca agg cct agg cat ctc cg; GCP2: cta aag ttg gga tcc atg tgg aac gca ctg cag gac) containing a BamHI restriction site (in bold) and 3′ primers (GCP3: tcg aca gaa gaa ttc cta taa cac att ggt cag tgt ccc; GCP2: ttg gtg aga gaa ttc tta agc tac ttc cct cag agt ctc tgc) containing an EcoRI site. The PCR products were inserted in the pEF6Myc-His eukaryotic expression vector at the BamHI-EcoRI sites and checked by sequencing.
For the expression of a soluble form of GCP3 and GCP2, the first 43 (GCP3) and 55 (GCP2) amino acids were replaced by the following leader sequence: MDKLRVPLWPRVGPLCLLLAGAAWA*PSPSLYPYDVPDYAPDPKFE, which consists of the first 36 N-terminal amino acids of the murine EPO receptor in which the 9 amino acids of the HA epitope (underlined) have been inserted after the signal peptide cleavage site (indicated by *) (28). A BamHI restriction site (in bold) was introduced by site-directed mutagenesis in the nucleotides coding for residues 42-43 (GCP3) or 53-54 (GCP2) using 5′ primers (GCP3: ctc aaa gaa aca act act tct gct gga tcc cat caa agt ata caa cag; GCP2: cca atg aag cta ctg gta atg gat ccc att ctg gca tga aga agg), respectively, to generate GCP3-BamHI*-pEF6Myc-His and GCP2-BamHI*-pEF6Myc-His constructs. The above leader sequence was amplified from (28) using a 5′ primer (cca tac tgg gta cca tgg aca aac tca ggg tgc ccc tct ggc) containing a KpnI site and a 3′ primer (gga tat ggg gat ccc tca aac ttg ggg tcc ggg gcg tag tct gg) containing a BamHI site, and inserted in the GCP3-BamHI*-pEF6Myc-His and GCP2-BamHI*-pEF6Myc-His constructs at the KpnI-BamHI sites. Site-directed mutagenesis of S509N (GCP3) and of N519S (GCP2) was performed using the following 5′ primers: gaa tca ata agc ttg gat ctg gga atg att ttg agg ctt act tcc (GCP3) and gca agc tgg ggt ctg gca gtg att ttg aag tgt tct tcc (GCP2), respectively. All expression vectors were checked by sequencing.
Expression, Purification, and Quantification of Mouse Recombinant GCP3 and GCP2-HEK-293T cells were cultured and transfected essentially as described in (29), using the jetPEI procedure. For the expression of the soluble, HA-tagged forms of wild-type and mutated GCP3 and GCP2, HEK-293T cells were transfected in serum-free medium. After 48 h at 37°C, the culture medium containing soluble GCP3 or GCP2 was collected, concentrated on an Amicon Ultra 100 kDa concentration unit, and applied onto a Superdex S-200 gel filtration column (GE Healthcare) equilibrated with buffer B (25 mM Hepes, pH 7.1, 200 mM NaCl), and fractions were collected. SDS-PAGE analysis indicated that the recombinant proteins represented between 1 and 5% of total protein. Quantification of the band corresponding to the recombinant GCPs was achieved using molecular weight markers (Fermentas, Lithuania) as standards. A more accurate relative quantification of the different GCP preparations was also performed by quantitative Western blot on an Odyssey Infrared Imager (LI-COR) using anti-HA epitope antibodies (diluted 1:2000) and IRDye 680 anti-mouse secondary antibodies (LI-COR, diluted 1:5000) as described. Membrane-bound forms of GCP3 and GCP2 were produced by transfection of HEK-293T cells as described above. 48 h after transfection, the cells were washed once with 5 ml of PBS, scraped into 0.8 ml of Buffer A (25 mM Hepes, pH 7.1, 120 mM NaCl), frozen in liquid nitrogen, thawed, and lysed by vortex-mixing. The resulting homogenates were centrifuged for 30 min at 18,000 × g and 4°C. The supernatants were discarded, and the pellets were washed by two cycles of resuspension in Buffer A and centrifugation (30 min at 18,000 × g and 4°C) and finally solubilized in Buffer A supplemented with 5% Triton X-100.
Enzymatic Assays-The recombinant soluble forms of GCP3 and GCP2 were assayed radiochemically in a mixture (200 µl final volume) comprising, unless otherwise stated, 25 mM Hepes, pH 7.1, 0.1 mg/ml BSA, and either 30,000 cpm β-citryl-L-[U-14C]glutamate plus β-citryl-L-glutamate, or N-acetyl-aspartyl-L-[U-14C]glutamate plus N-acetyl-aspartyl-L-glutamate, at concentrations ranging from 0 to 0.1 mM, and the indicated concentration of divalent cation as a chloride salt. For enzymatic assays of BCG hydrolase purified from mouse testis or of the membrane-bound forms of recombinant GCP2 and GCP3, Triton X-100 was added to the assay mixture to a final concentration of 0.25%. After 30 min at 30°C, the reaction was stopped by heating for 5 min at 80°C, and the heated incubation mixture was centrifuged for 20 min at 15,000 × g. The resulting supernatant was diluted with 0.8 ml of water and applied onto a 1-ml Dowex AG1-X8 column (Cl− form, 100-200 mesh, Acros Organics) prepared in a Pasteur capillary pipette. The resin was washed with 2 ml of water, followed by 5 ml of 150 mM NaCl to elute the released glutamate. Radioactivity was counted in the presence of Ultima Gold (PerkinElmer) in a liquid scintillation counter.
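A minimal sketch of the rate calculation implied by this assay: the fraction of the label recovered as [14C]glutamate, scaled by the amount of substrate present in the 200-µl mixture and the 30-min incubation. The counts are illustrative.

```python
# Radiochemical assay bookkeeping (illustrative numbers).
cpm_released = 3200.0    # counts eluted with 150 mM NaCl (glutamate fraction)
cpm_total    = 30000.0   # total counts added to the assay
S_nmol = 5e-6 * 200e-6 * 1e9   # 5 uM substrate in 200 ul -> 1 nmol
t_min  = 30.0                  # incubation time, min
rate = (cpm_released / cpm_total) * S_nmol / t_min
print(f"hydrolysis rate = {rate:.4f} nmol/min")
```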
Synthesis of β-Citryl-L-glutamate, β-Citryl-L-[U-14C]glutamate, and N-Acetyl-aspartyl-L-[U-14C]glutamate-β-Citryl-L-glutamate was synthesized and purified as described previously (24). For the preparation of radiolabeled BCG and NAAG, 1 mU of purified His-tagged RIMKLB was added to a 5-ml solution containing 25 mM Tris, pH 8.0, 5 mM citrate or N-acetylaspartate, 5 mM ATP-Mg, 5 mM MgCl2, 10 mM DTT, 0.2 mg/ml bovine serum albumin, and 5 × 10^6 cpm L-[U-14C]glutamate. The reaction mixture was incubated for 4 h at 30°C under stirring and then heated for 10 min at 80°C. The mixture was centrifuged for 30 min at 18,000 × g to remove proteins, and the supernatant was treated with 2% (w/v) activated charcoal to remove nucleotides. The charcoal was filtered off, and the filtrate was loaded onto a 25-ml AG1-X8 Dowex column (Cl− form). The column was washed with 100 ml of water, a linear gradient of NaCl was applied (0 to 1 M NaCl in 300 ml), and fractions (5 ml) were collected. Fractions containing radioactivity corresponding to BCG or NAAG were pooled, concentrated to 2 ml in a lyophilizer, and freed from NaCl by filtration on a Bio-Gel P2 (Bio-Rad) column (50 cm × 1.0 cm) equilibrated with water.
Modeling of the Binding Mode of BCG in the GCP3 Active Site-A model for the β-citrylglutamate ligand was built from a description of the covalent structure of the molecule using the PRODRG server (30). The GCP3-BCG complex model was made using the PyMOL molecular graphics system (31), starting from PDB entry 3FF3. Protein side chains and BCG functional groups were adjusted to minimize steric clashes and optimize the hydrogen-bonding network.
Purification and Characterization of BCG Hydrolase from Mouse Testes-During its purification and characterization, BCG hydrolase was assayed through the release of [14C]glutamate from radiolabeled β-citryl-L-[U-14C]glutamate prepared enzymatically with RIMKLB (see "Materials and Methods"). BCG hydrolase was purified from a mouse testis extract. Most (>90%) of the activity was pelleted upon centrifugation and could be solubilized with 5% Triton X-100, in agreement with the fact that BCG hydrolase is membrane-bound (22). Consistent with previous reports, we found that BCG hydrolase was strictly dependent on the addition of metals: Mn2+ and Ca2+ stimulated the activity more than Co2+ and Mg2+, whereas Zn2+ had no stimulatory effect (not shown) (22). A testis membrane extract solubilized with Triton X-100 was chromatographed on DEAE-Sepharose (not shown) and Q-Sepharose, and BCG hydrolase activity was measured in the presence of 1 mM Mn2+. As shown in Fig. 1, the BCG hydrolase activity was eluted from the Q-Sepharose column with the salt gradient. To check the specificity of the enzymatic preparation, we also measured NAAG hydrolase activity under the same ionic conditions (1 mM Mn2+) and found, surprisingly, a substantial NAAG hydrolase activity that coeluted perfectly with the BCG hydrolase activity. Neither NAAG hydrolase activity nor BCG hydrolase activity could be detected when Mn2+ was omitted from the assay medium. The perfect co-elution of the BCG hydrolase and NAAG hydrolase activities suggested that BCG hydrolase was endowed with NAAG hydrolase activity, although it did not exclude the possibility that this NAAG hydrolase activity was contributed by a contaminating peptidase present in this partially purified preparation.
The metal dependence and substrate specificity of partially purified BCG hydrolase were studied in more detail, and the results are presented in Fig. 2. When assayed in the presence of 1 mM Mn2+, the enzyme displayed a lower Km (2.0 versus 18.7 µM) and a higher Vmax (52 versus 24 fmol/min/ml of purified enzyme preparation) for BCG than for NAAG. In the presence of 5 mM Ca2+, the Vmax for BCG was increased 4-fold, amounting to 192 fmol/min/ml, and the Km was unchanged, whereas no NAAG hydrolysis was observed (Fig. 2, panels A and B), indicating that the enzyme was specific for BCG under this condition. A study of the metal dependence (Fig. 2, panels C and D; assayed in the presence of 5 µM substrate) showed that the BCG hydrolase activity was undetectable in the absence of added Me2+. Mn2+ and Ca2+ half-maximally stimulated the activity at concentrations of 25 µM and 1.8 mM, respectively. Mn2+ stimulated the NAAG hydrolase activity with a Ka of 16 µM, whereas Ca2+ had no effect on this activity at any of the concentrations tested (from 0.1 to 10 mM).
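Kinetic constants such as these are typically obtained by non-linear least-squares fitting of the Michaelis-Menten equation to rate versus substrate-concentration data. The sketch below illustrates the procedure with synthetic data approximating the BCG numbers above (Vmax ≈ 52, Km ≈ 2 µM); it is not the authors' analysis.

```python
# Michaelis-Menten fit sketch with synthetic data.
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(S, Vmax, Km):
    return Vmax * S / (Km + S)

S = np.array([0.5, 1, 2, 5, 10, 20, 50, 100])   # substrate, uM
v = np.array([10, 17, 26, 37, 43, 47, 50, 51])  # rate, fmol/min/ml (synthetic)
popt, pcov = curve_fit(michaelis_menten, S, v, p0=[50.0, 5.0])
Vmax, Km = popt
print(f"Vmax = {Vmax:.1f} fmol/min/ml, Km = {Km:.1f} uM")
```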
The partially purified preparation was next chromatographed on a ConA-Sepharose column, which retains glycosylated proteins.
The BCG hydrolase activity was found to be retained on this column and to be eluted from it with a methylglucose gradient, thus confirming that it is a glycosylated protein (not shown). Treatment with PNGase, which removes N-glycans, caused a complete loss of BCG hydrolase activity (results not shown), indicating that N-glycosylation is important for activity, in disagreement with previous reports (22). Finally, after purification on ConA-Sepharose, the BCG hydrolase preparation was chromatographed on a Superdex S-200 gel filtration column in the presence of molecular weight markers. An apparent molecular mass of 200,000 Da was calculated from its elution profile (not shown).
Taken together, these results indicated that the properties of BCG hydrolase (its activity on NAAG and the fact that it is membrane-bound, glycosylated, and sensitive to endoglycosidase treatment) were similar to those of GCP2 or GCP3. MS analysis of tryptic fragments of the most purified Superdex S-200 gel-filtration fraction disclosed the presence of peptides corresponding to GCP3, but not to GCP2 (not shown), suggesting that GCP3 was a potential candidate. This prompted us to compare the tissue distributions of the BCG hydrolase and NAAG hydrolase activities with those of the GCP2 and GCP3 mRNAs.
Comparison of the Tissue Distributions of BCG Hydrolase and NAAG Hydrolase Activities with the Tissue Distributions of GCP3 and GCP2 mRNAs in Mouse-When measured in the presence of near-physiological concentrations of Ca2+ (2.5 mM CaCl2) and Mg2+ (1 mM MgCl2), the BCG hydrolase activity was highest in testis, uterus, and bladder, intermediate in kidneys and lung, and very low, if detectable, in liver, heart, spleen, eyes, and brain (Fig. 3, panel A, left). The distribution of the NAAG hydrolase activity, measured under the same ionic conditions, was distinctly different from the BCG hydrolase profile, the highest activity being found in the kidneys and very low levels in other tissues (Fig. 3, panel A, right). When the enzymatic assays were performed in the presence of 1 mM Mn2+, the BCG hydrolase and NAAG hydrolase distribution profiles were not very different from each other (not shown), presumably because BCG hydrolase then also acted on NAAG.
The tissue distributions of the GCP2 and GCP3 mRNAs were determined by quantitative RT-PCR. As evident from Fig. 3 (panel B, left), the tissue distribution of the GCP3 mRNA, but not that of the GCP2 mRNA (panel B, right), matched the tissue distribution of BCG hydrolase, with the possible exception of the liver. Taken together, these findings indicated that BCG hydrolase was likely to correspond to GCP3.
Expression and Characterization of Recombinant GCP2 and GCP3-To test the ability of GCP2 and GCP3 to hydrolyze BCG and NAAG, these proteins were expressed in HEK cells, either as intact membrane-bound enzymes (not shown) or as soluble forms in which the N-terminal membrane anchor had been removed. Briefly, the first ~40 amino acids of GCP2 and GCP3 comprise the signal peptide as well as a hydrophobic transmembrane helix that anchors these proteins on the external face of the plasma membrane. To produce soluble, fully glycosylated recombinant GCPs, the first ~40 amino acids were replaced by an artificial amino acid sequence comprising the signal peptide of the human EPO (erythropoietin) receptor followed, after the signal peptidase cleavage site, by an HA epitope (see "Materials and Methods" and Ref. 28). Recombinant soluble GCPs were collected from the culture medium, concentrated, and chromatographed on a Superdex S-200 gel-filtration column, and their enzymatic activities were studied (Figs. 4 and 5). SDS-PAGE analysis indicated that the recombinant GCPs represented ~1-5% of total protein. As shown in Fig. 4, recombinant GCP3 displayed a dual hydrolase activity on both BCG and NAAG in the presence of Mn2+, but hydrolyzed only BCG in the presence of Ca2+. When assayed in the presence of 1 mM Mn2+, the Km of recombinant GCP3 for BCG was 3.5 µM and the Vmax 1900 nmol/min/mg; for NAAG the Km was 7.6 µM and the Vmax 550 nmol/min/mg. In the presence of Ca2+, the Vmax for BCG hydrolysis was increased up to 4400 nmol/min/mg whereas the Km was unchanged, and no NAAG hydrolysis could be measured (Fig. 4, panels A and B). The metal dependence of GCP3 showed that this enzyme was almost inactive in the absence of added divalent cation (~40 nmol/min/mg). Mn2+ stimulated the BCG hydrolase activity half-maximally at a concentration of 16 µM, and Ca2+ at 1.1 mM. The NAAG hydrolase activity of GCP3 was stimulated by Mn2+, but not by Ca2+ (Fig. 4, panels C and D). Mg2+ stimulated the activity of GCP3 only very poorly: activities of 96 nmol/min/mg on BCG and 54 nmol/min/mg on NAAG were measured at a 5 mM concentration of this metal (not shown). Taken together, these characteristics are very similar to those reported above for BCG hydrolase purified from mouse testis and indicate that BCG hydrolase is GCP3.
Recombinant GCP2 (Fig. 5) displayed activity toward NAAG, with a Vmax of 542 nmol/min/mg and a Km of 4.3 µM (panel B); however, it did not hydrolyze BCG, even in the presence of Mn2+ or Ca2+ (panel A). In contrast to GCP3, GCP2 activity was not stimulated by the addition of divalent cations (Fig. 5, panels C and D). Similar results were obtained with the intact (membrane-bound) GCP2 and GCP3 enzymes, indicating that the removal of the first 40 amino acids did not influence the kinetic properties of these enzymes (not shown).
Effect of Mutation of the Homologous Residues GCP2-Asn519 and GCP3-Ser509-As previously noted by others, the major difference between the catalytic sites of GCP2 and GCP3 is the replacement of a conserved asparagine (Asn-519) in GCP2 by a conserved serine (Ser-509) in GCP3 (18). Structural modeling of β-citrylglutamate in the catalytic sites of GCP2 and GCP3 (see "Discussion") suggested that the bulkier asparagine could impede proper binding of β-citrylglutamate to the GCP2 binding site, explaining why the latter enzyme is not able to hydrolyze the citrate derivative. It should also be noted that Asn-519 in GCP2 binds an Asp residue (Asp-453) that coordinates Me2.
In agreement with our predictions, replacement of Asn-519 by a serine in GCP2 (see Fig. 6) made this enzyme able to hydrolyze BCG in a Ca2+- and Mn2+-dependent manner, although the activity was markedly (10-60-fold) lower than with GCP3. The capacity of GCP2 to hydrolyze NAAG was largely unchanged by the mutation, except that it became somewhat sensitive to divalent cations: the mutated form of GCP2, which hydrolyzed NAAG in the absence of added divalent cation (as does wild-type GCP2), was stimulated by Mn2+ (like GCP3, but unlike GCP2) and inhibited by Ca2+. This inhibitory effect was not seen with wild-type GCP2.
The reciprocal mutation, i.e., the replacement of Ser-509 by an asparagine in GCP3, abolished its activity on BCG but did not completely suppress its activity on NAAG (Fig. 7). Unlike what was observed with GCP2, this activity on NAAG was completely dependent on the presence of Mn2+; it could not be evoked with Ca2+.
As Zn2+ appears to be critical for the activity of GCP2 and GCP3 (see "Discussion"), we also tested the effect of the addition of Zn2+ on the activity of all four enzymes, using both NAAG and BCG as substrates (Fig. 8). Addition of Zn2+ did not affect the activity of GCP2 on NAAG and did not evoke any activity of this enzyme on BCG. Zn2+ stimulated the activity of GCP3 and GCP2-N519S on NAAG but strongly inhibited their activity on BCG. In the case of GCP3-S509N, it stimulated the activity on NAAG but did not evoke any activity on BCG.
DISCUSSION
Identification of β-Citrylglutamate Hydrolase as GCP3-A major conclusion of the present work is that β-citrylglutamate hydrolase corresponds to GCP3. This is based on the finding that BCG hydrolase purified from mouse testis and recombinant GCP3 display very similar kinetic properties: both act with high affinity on BCG in the presence of Ca2+ and, to a lesser extent, of Mn2+, but not in the presence of Zn2+; and both act on NAAG in the presence of Mn2+ or Zn2+ but not in the presence of Ca2+. Furthermore, they show affinities for substrates and for divalent cations that are in the same range. In addition, the tissue distribution of the BCG hydrolase activity is in good agreement with that of the GCP3 mRNA, and GCP3 was present in the purified preparation of BCG hydrolase, as determined by tandem mass spectrometry. These findings therefore solve the riddle of the function of GCP3, which was initially identified as a NAAG hydrolase. Its low activity on this substrate, but substantial activity on BCG, in the presence of physiological concentrations of divalent cations indicates that a major function of GCP3 is to hydrolyze BCG.
GCP2 and GCP3 differ from each other in their substrate specificity and their metal dependence, and these two properties are linked to each other. The major features of the kinetic properties of the two enzymes are the following: 1) GCP2 acts on NAAG but not on BCG, whatever the ionic condition; furthermore, this enzyme is insensitive to added metals. 2) GCP3 acts preferentially on BCG in the presence of Ca2+; its activity on this substrate is lower in the presence of Mn2+ and completely abolished in the presence of Zn2+. The latter two ions stimulate the activity on NAAG. 3) Replacement of Asn-519 by a serine in GCP2 makes this enzyme able to hydrolyze BCG and renders it sensitive to the addition of metals. 4) Replacement of the homologous residue Ser-509 in GCP3 by an asparagine suppresses its ability to hydrolyze BCG. In the next section, we try to explain these properties on the basis of the structural features of the catalytic sites of GCP2 and GCP3.
Role of a Heterometallic Cluster in the Hydrolysis of BCG-A structural model of the catalytic sites of GCP2 and GCP3 is shown in Fig. 9. Both enzymes contain a two-zinc cluster that serves to activate a water molecule (19). This cluster is held by five residues, which are strictly conserved between GCP2 and GCP3. There are, however, subtle differences between the catalytic sites of GCP2 and GCP3, most particularly the fact that the occupancy of the Zn2 atom is only 50-70% in GCP3 (20), as compared with 100% in GCP2 (19). This means that some Zn1-Zn2 clusters in GCP3 have lost Zn2 (Fig. 9A), which can therefore potentially be replaced by another divalent cation such as Ca2+ (Fig. 9B) or Mn2+ (not shown). This, in our view, explains why GCP2 is completely insensitive to metals, whereas GCP3 depends on the addition of divalent cations for its activity. The finding that GCP3 activity on BCG is higher with Ca2+ than with Mn2+ and nil in the presence of added Zn2+ leads us to propose that this activity is optimal when a Zn-Ca cluster forms, intermediate with a Zn-Mn cluster, and nil with a Zn-Zn cluster. Reciprocally, a Zn-Ca cluster does not support activity on NAAG, in contrast to Zn-Mn or Zn-Zn clusters. The kinetic results obtained with the two mutants can also be rationalized on the basis of this hypothesis.
Attempts at modeling NAAG and β-citrylglutamate in the catalytic sites of GCP2 and GCP3 indicate that the regions of BCG and NAAG that differ between these two molecules are close to Me2. Our suggestion is that the replacement of the (neutral) acetamide moiety of NAAG by the (charged) carboxymethyl group of BCG adds a negative charge in the neighborhood (2.8 Å) of Me2. This charged group will tend to bind to Me2, causing more distortion if Me2 is Zn than if it is Ca or Mn: the latter two ions are indeed known to bind more ligands in protein structures (6 or even 7 in the case of Ca) than Zn usually does (4 or 5 ligands) (32).

FIGURE 9. Structure of the catalytic site of GCP2 and GCP3. Panel A shows the metal cluster of the ligand-free GCP2 structure (PDB 2OOT) and of GCP3 in complex with glutamate (not depicted, PDB 3FF3). For the sake of clarity, and to emphasize its low occupancy, Zn2 is not depicted in the GCP3 structure. Noteworthy, Asp-443, which coordinates Zn2 in the GCP3 structure, occupies two positions, only one of which allows coordination of the Zn2 atom, in good agreement with the lower occupancy of this metal. Ser-509 also displays two conformations. In marked contrast, the Zn2 atom occupancy is 100% in the ligand-free GCP2 structure, and no rotamers are observed among the amino acids constituting the catalytic site. Panel B shows GCP2 in complex with NAAG (PDB 3BXM) and a model of GCP3 complexed with BCG and the Zn-Ca cluster. The distances between the two metals are the actual distances observed by x-ray crystallography (20). In the GCP2 structures, the two zinc atoms are separated by 3.3 Å, each Zn being equally distant from the catalytic water molecule (2.0 Å), whereas in GCP3 the distance between the two metals is 3.7 Å, and the distance between Me2 and the catalytic water (2.4 Å) is larger than that between Zn1 and this water (2.1 Å), in agreement with a bulkier calcium occupying the position of Me2.
Role of Ser-509 in GCP3-The presence of Ser-509 in GCP3 in place of Asn-519 in GCP2 is also critical for the hydrolysis of BCG, as indicated by site-directed mutagenesis. A potential explanation is that the bulkier side chain of Asn causes steric hindrance with the hydroxyl group of β-citrylglutamate (which is replaced by a hydrogen in NAAG). However, replacing Asn-519 in GCP2 by a serine also has a marked effect on the metal dependence of this enzyme, as the mutated form of GCP2 is stimulated by Ca2+ (for its activity on BCG), by Mn2+ (for its activity on BCG and NAAG), and by Zn2+ (for its activity on NAAG). This effect of the Asn-519 replacement is likely ascribable to the fact that Asn-519 binds Asp-453, one of the residues that bind Zn2. Thus, replacement of Asn-519 by a serine facilitates the replacement of Zn2 by a divalent cation (e.g., Ca2+) that supports activity on BCG. What may argue against this last interpretation is the fact that making the converse replacement in GCP3 does not restore the high affinity for Zn2+, whereas it does suppress the activity on BCG. One has to admit, however, that this lack of restoration of high metal affinity means that other subtle differences between the catalytic sites of GCP2 and GCP3 play a role in determining the metal affinity.
Physiological Considerations-The identification of BCG hydrolase as GCP3 implies that its catalytic site is oriented toward the extracellular environment. The concentration of Mn2+ is extremely low in serum (~25 nM) (33), and presumably in extracellular fluids, whereas that of Ca2+ amounts to about 1.5 mM. This means that under physiological conditions, GCP3 will only function as a BCG hydrolase and not as a NAAG hydrolase.
Another consequence of the extracellular orientation of the catalytic site of BCG hydrolase is that BCG is a molecule that is most likely secreted, like NAAG. Further work is needed to identify the mechanisms by which BCG can be released and under which circumstances. Co-occurrence of the mRNAs encoding BCG hydrolase (GCP3) and BCG synthase (RIMKLB) is found in testis, eye, fertilized eggs, oocytes, and ovaries (our results; BioGPS database (34)), indicating that β-citrylglutamate plays a role in these organs or cells. Interestingly, the gene encoding BCG hydrolase is expressed in the ovarian tissue in an ovulation-dependent manner (35). In lung, the expression of BCG synthase (RIMKLB) appears to be remarkably low, whereas GCP3 is abundantly expressed in this tissue, raising the possibility that other, yet unidentified compounds may be substrates of GCP3.
Ontogeny-The GCP2 and GCP3 genes arose through the duplication of an ancestral gene after the fish radiation. Danio rerio has only one copy, which comprises a serine at the position equivalent to Ser-509 in GCP3 or Asn-519 in GCP2. This suggests that this primitive enzyme is mostly a BCG hydrolase, in agreement with the observation that β-citrylglutamate is much more abundant than NAAG in fish (3). Both GCP2 and GCP3, with the characteristic serine or asparagine residue, are found in amphibians, birds, and mammals, suggesting that these species possess two distinct enzymes to hydrolyze NAAG and BCG. This is consistent with the finding that both NAAG and BCG are present in these species.
CONCLUSION
In conclusion, we have identified the enzyme that hydrolyzes BCG as GCP3, an enzyme previously thought to act on NAAG, on which we now show it acts poorly under physiological ionic conditions. As GCP3 is an ectoenzyme, these findings indicate that BCG has to be excreted from cells to be hydrolyzed. This suggests that BCG exerts its physiological function outside cells, rather than inside cells as had previously been proposed. The remarkable parallelism between the synthesis and the degradation of NAAG and β-citrylglutamate argues for these molecules playing similar functions, but in different cell types or organs.
"Biology",
"Chemistry"
] |
Revisiting the non-resonant Higgs pair production at the HL-LHC
We study the prospects of observing non-resonant di-Higgs pair production in the Standard Model (SM) at the high luminosity run of the 14 TeV LHC (HL-LHC), upon combining multiple final states chosen on the basis of their yield and cleanliness. In particular, we consider the $b\bar{b}\gamma \gamma, b\bar{b} \tau^+ \tau^-, b\bar{b} WW^*, WW^*\gamma \gamma$ and $4W$ channels, mostly focusing on final states with photons and/or leptons, and study 11 final states. We employ multivariate analyses to optimise the discrimination between signal and backgrounds and find them to perform better than simple cut-based analyses. The various differential distributions for Higgs pair production have non-trivial dependencies on the Higgs self-coupling ($\lambda_{hhh}$). We thus explore the implications of varying $\lambda_{hhh}$ for the most sensitive search channel for double Higgs production, \textit{viz.}, $b\bar{b}\gamma\gamma$. The number of signal events originating from SM di-Higgs production in each final state is small, and for this reason the measurement of differential distributions may not be possible. Furthermore, we consider various physics beyond the Standard Model scenarios to quantify the effects of contamination while trying to measure the SM di-Higgs signals in detail. In particular, we study generic resonant heavy Higgs decays to a pair of SM-like Higgs bosons or to a pair of top quarks, a heavy pseudoscalar decaying to an SM-like Higgs and a $Z$-boson, charged Higgs production in association with a top and a bottom quark, and also various well-motivated supersymmetric channels. We set limits on the cross-sections for the aforementioned new physics scenarios, above which these can be seen as excesses over the SM background and affect the measurement of the Higgs self-coupling. We also discuss the correlations among the various channels, which can be useful to identify the new physics model.
Introduction
The existence of a scalar boson with a mass around 125 GeV has been unambiguously confirmed by both the ATLAS and CMS collaborations at the Large Hadron Collider (LHC). It is, however, still early to conclude whether this discovered scalar is the Higgs boson as conjectured in the Standard Model of particle physics (SM) 1 . Therefore, it is of paramount importance to precisely measure its couplings to the various SM particles, its width, spin and CP properties. As already seen from the Run I data and gradually being reiterated by the Run II data, the Higgs couplings to the SM electroweak gauge bosons are in excellent agreement with the SM expectations [1][2][3][4][5][6][7][8]. The Yukawa couplings to the first two generations of fermions are extremely difficult to measure owing to their smallness [9]. However, the couplings to the third generation quarks and leptons are gradually gaining in significance [10][11][12][13][14][15][16]. The only other measurable coupling which also describes the scalar potential of the theory is the elusive Higgs self-coupling (λ_hhh); the first generation Yukawa couplings, being extremely small, will be considerably challenging to measure in the near future. The focus of the present work is to study in considerable detail various possible final states of Higgs pair production and the effects of contamination due to the presence of several new physics effects. The only direct probe of this coupling is the production of a pair of Higgs bosons which further decay to various SM final states. However, it has been shown in Refs. [17][18][19][20][21][22][23] that an indirect measurement of the Higgs trilinear coupling is possible through radiative corrections of single Higgs processes, both at the HL-LHC and at future e+e− colliders. Ref. [22] has shown that this coupling can be constrained in the range of [0.1, 2.3] times its SM expectation at 68% confidence level. It has also been shown in Ref. [24] that it is possible to constrain λ_hhh from the electroweak oblique parameters. The triumph of the experiments in having already probed most of the standard Higgs couplings urges the community to constrain the self-coupling in a plethora of channels. Such measurements have received considerable attention in recent times from both the theoretical and experimental communities [21,. However, a precise direct measurement of the self-coupling is extremely challenging at the LHC because the SM production cross-section is small even at √s = 14 TeV. The dominant di-Higgs production process proceeds through top quark loop diagrams in the gluon fusion channel. An interesting aspect of this process lies in the fact that there is a fine cancellation owing to a destructive interference between the box and the triangle diagrams. This results in an extremely small cross-section, viz., 39.56 (+7.32%, −8.38%) fb at the NNLO+NNLL level [68][69][70] (with full top mass effects at NLO [71]) for the 14 TeV run of the LHC. However, the various decay channels of the Higgs provide phenomenologically rich final states, and appropriate combinations might help in improving the discovery potential at the high luminosity run of the LHC (HL-LHC), provided we identify optimised sets of selection cuts to reduce the backgrounds (B) and improve the signal (S) over background ratio (S/B) and the statistical significance (S/√B).
Searches for both resonant and non-resonant Higgs pair production have been performed in various channels by both the ATLAS and CMS experiments [72][73][74][75][76][77][78][79][80][81][82]. At present, one of the strongest bounds on non-resonant Higgs pair production comes from the 4b search performed by ATLAS [73] with an integrated luminosity of 13.3 fb−1, putting an upper bound of 29 times the SM expectation. Very recently, the bbττ search by CMS [79,83] has put a strong observed limit at 30 times the SM number, with an integrated luminosity of 35.9 fb−1. The strongest (second strongest) constraint, at 13 (19.2) times the SM expectation, comes from the bbbb (bbγγ) search by ATLAS [84] (CMS [85]) with an integrated luminosity of 36.1 (35.9) fb−1. As for the resonant searches, at present the strongest limits are obtained from the hh → bbγγ [85], hh → bbbb [84,86] and bbτ+τ− [83] modes, competing in the mass range [∼250 GeV, 3 TeV]. However, the bbWW* channel is also predicted to be a competitive probe in the future runs of the LHC [87,88].
The di-Higgs production rate can be enhanced in various beyond the Standard Model (BSM) scenarios. Some such new physics scenarios involve new heavy coloured states propagating in both the box and triangle loops, e.g., supersymmetric and extra-dimensional theories; theories with heavy resonance(s) decaying into a pair of SM-like Higgs bosons, viz., a multitude of models with an extended Higgs sector; strongly interacting theories and composite Higgs models; and also various effective field theories (EFTs) modifying the tth coupling [21,. Since the Higgs discovery, many of the models exhibiting new coloured states have been severely constrained owing to the near-precise measurements in the single Higgs channels. Many of these extensions are responsible not only for an enhancement in the di-Higgs production cross-section, but also for certain distinct kinematic distributions, often having minimal overlap with their SM counterparts. We must, however, remember that even the enhanced cross-sections might not be sufficient to obtain an adequate significance, because large SM backgrounds, primarily ensuing from tt, ZZ, ZH, pure QCD and also fakes, may swamp the signal completely. In this regard, modified kinematics, especially the presence of resonances, might be somewhat helpful. In the quest to reduce backgrounds as far as possible, one has to envision a combination of optimal final states. In addition, for each such final state, one has to identify the most suitable set of selection cuts in order to enhance the signal-to-background ratio. A thorough literature survey points us to studies which show that the trilinear coupling can be best probed in multiple channels combining the numerous final states of the Higgses, chosen on account of the large Higgs branching ratios and their cleanliness with respect to the backgrounds. A more inclusive search procedure takes a closer look at various kinematic regions of di-Higgs processes. In particular, studies utilising variables reconstructed from boosted objects, jet substructure techniques, the stransverse mass (m_T2) and other novel variables have also been shown to be potentially important in the future runs of the LHC [89,90]. Multivariate analyses also turn out to be very efficient in segregating the signal from the backgrounds, thus offering encouraging results [91][92][93]. Nevertheless, an exhaustive study in the di-Higgs sector, involving detector simulations and also including the effects of new physics (as we shall discuss below) on such measurements, is by and large missing from the literature, since some of the aforementioned studies claiming very optimistic results have been performed at the parton level or with minimal detector effects. Hence, one of the primary goals of this work is to optimise the di-Higgs search strategy by systematically studying a number of final states, taking into account detector effects and conservative systematic uncertainties.
In the first part of our study, we focus on non-resonant di-Higgs production in the familiar bbγγ, bbτ+τ− and bbWW* channels and estimate the statistical significances at the HL-LHC. Being mostly agnostic to the previous studies, we try to identify the sets of optimised cuts which show the greatest sensitivities in these channels. The bbγγ and bbWW* channels have been shown to be the most promising in this regard [88,94,95]. The bbτ+τ− channel, however, suffers from large tt backgrounds. The reconstruction of τs, which are always accompanied by missing transverse energy (E_T^miss), is a complicated process at the colliders and involves identifying optimal τ-tagging and mistagging efficiencies. However, improvements in the reconstruction of the invariant mass of the di-tau system using the missing mass algorithm [96], dynamical likelihood techniques [97] or the modified m_T2 algorithm [90,98] may provide encouraging results in this channel. Before performing these studies, we stress that analyses involving these channels are not novel, and hence we will be cautious in our claims. CMS predicts final significances of 1.6σ, 0.39σ, 0.45σ and 0.39σ respectively in the bbγγ, bbτ+τ−, bbVV* and bbbb channels for non-resonant di-Higgs production at the end of the HL-LHC run with an integrated luminosity of 3 ab−1 [99]. ATLAS, on the other hand, predicts its best-case significance at 1.05σ for the bbγγ non-resonant channel at the HL-LHC [100]. Moreover, for the bbWW* channel, we study both the semi-leptonic and di-leptonic modes. Besides, we look into the γγWW* channel in both the semi-leptonic and di-leptonic final states. Finally, we also look for the 4W channel in the same-sign di-lepton (SS2ℓ), tri-lepton (3ℓ) and four-lepton (4ℓ) final states. We compare the numbers obtained from the experimental projections with our study, which includes detailed detector effects and conservative background systematics.
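The sensitivities quoted here are of the form S/√B, and multi-channel combinations are often approximated, neglecting correlated systematics, by adding the per-channel significances in quadrature. The sketch below illustrates this arithmetic with placeholder yields, not the numbers of this analysis.

```python
# Per-channel S/sqrt(B) and a naive quadrature combination (toy yields).
import math

channels = {                 # (signal, background) events at 3 ab^-1 (assumed)
    "bbgamgam":  (8.0, 25.0),
    "bbtautau":  (15.0, 1500.0),
    "bbWW_semi": (20.0, 4000.0),
}
z = {name: s / math.sqrt(b) for name, (s, b) in channels.items()}
for name, zi in z.items():
    print(f"{name}: S/sqrt(B) = {zi:.2f}")
print(f"combined (quadrature): {math.sqrt(sum(v * v for v in z.values())):.2f}")
```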
In this work, we will not concern ourselves with dedicated analyses for resonant di-Higgs searches. Neither will we focus on scenarios where the rescaling or the modification of the tth Yukawa coupling (y t ) may alter the nature of the interference between the triangle and box diagrams. However, we will briefly discuss the case where one can have λ hhh different from the SM expectation. These, in principle, can have drastic ramifications for the production cross-sections as well as the kinematics of the di-Higgs system. New physics contributions may also show up in the BR(h → XX), modifying the total rate. These will be considered in a separate future study. In the present work, we will however consider various BSM signatures which have the potential to contaminate the non-resonant SM di-Higgs production and affect the measurement of λ hhh . Observing any significant difference in the number of events for a particular channel, with respect to its SM expectation, may be interpreted as a modification in the value of λ hhh . This is one of the main aims of the present work. We want to quantify the degree to which we can discard such contamination after having established a robust set of cuts which optimises the SM signal. We will be using multivariate analyses for this purpose. We classify these contaminating scenarios into three broad categories, viz., hh(+X), h + X and X, where X denotes an object or a group of objects not coming from an SM Higgs decay. The hh(+X) mode is one of the most studied scenarios. Di-Higgs production from the decays of heavy scalar particles is the classic case considered in the literature [58,[101][102][103]. A heavy scalar particle arises naturally in many extensions of the SM, for instance, in the minimal supersymmetric standard model (MSSM) or in further extended scenarios [49,60,104], general two-Higgs doublet models (2HDMs) [29,30], extra-dimensional models [61], models with an extra U (1) gauge group [62][63][64], to name a few. In the present work, we do not focus on any particular model and consider a generic heavy resonance decaying to a pair of SM Higgses which further decay to various final states. We vary the mass of the heavy resonance but do not optimise the selection cuts for each benchmark, keeping them fixed at the optimisation obtained for the corresponding SM non-resonant Higgs pair production channel. Delving a bit more into well-motivated models, we consider certain different channels in the MSSM from which we can obtain a pair of SM-like Higgs bosons. For generic supersymmetric (SUSY) scenarios, we will encounter high effective masses (m eff ) and high missing transverse momentum ( / E T ). This will lead to a minimal or no overlap of kinematic variables with their SM di-Higgs counterparts. For a degenerate SUSY spectrum, however, we will obtain low m eff and low / E T and this may potentially contaminate several di-Higgs final states. The hh(+X) state may come from squark pair production, i.e., pp → q̃ i q̃ j → q i q j + hh + χ 0 1 χ 0 1 , where q̃ i refers to squarks (anti-squarks), q i to quarks (anti-quarks), with i being the flavour index, and χ 0 1 to the lightest supersymmetric particle (LSP), here the lightest neutralino. Thus, we obtain a hh + jets + / E T state which has the potential to contaminate the SM di-Higgs signal unless specific cuts are designed to subdue its effect.
For the second category, we consider a mono-Higgs production in association with other objects, which can specifically mimic some of the Higgs pair production final states. We consider a few such scenarios, viz., A → Zh, i.e., a pseudoscalar decaying to the Z boson along with the SM-like Higgs: this scenario is particularly interesting in the MSSM and also in classes of generic 2HDMs. We will encounter the bbγγ, bbW W * and bbτ + τ − final states from this channel. Besides, we will even have some contamination to the SS2ℓ, 3ℓ and 4ℓ final states. Furthermore, an electroweakino pair production may also exhibit a mono-Higgs final state with a significant rate. Processes like pp → χ 0 2 χ ± 1 → hW ± + χ 0 1 χ 0 1 , where the lightest chargino and the second-lightest neutralino are wino-like, can contribute significantly. For such a scenario, BR(χ 0 2 → hχ 0 1 ) can be dominant and BR(χ ± 1 → W ± + χ 0 1 ) is close to unity. From such channels, we can have possible contaminations to the semi-leptonic bbW + W − , γγW + W − and bbτ + τ − channels and also to the SS2ℓ and 3ℓ modes. The final category of BSM scenarios having potential contaminating effects on the SM di-Higgs production comprises processes with no SM-like Higgs bosons. In this paper, we study three such examples. We may have the production of a pair of top quarks emanating from a heavy (pseudo-)scalar resonance, which becomes relevant for resonant masses above the tt threshold. Besides, in various classes of models we have an associated production of a charged Higgs boson with a top and a bottom quark (H ± tb). For m H ± > m t , we have the tbtb production. Another potential contamination can come from stop–anti-stop pair production (t̃ i t̃ * i , where i = 1, 2), which can lead to the tt + / E T or the bW bW + / E T final states. All the above three channels can mimic the hh → bbW W * and bbτ + τ − modes. In the following, we make an attempt to study these contamination effects as functions of the neutral/charged heavy Higgs masses for certain well-chosen benchmark points. In the following sections, we will see the importance of multivariate analyses in discriminating the SM di-Higgs signal from the SM backgrounds and later also from possible new physics contaminations. Hence, the backbone of the analysis techniques used in this work is the boosted decision tree (BDT) algorithm.
Having described the various aspects studied in this work, we dissect our paper into the following sections. In section 2, we study the SM non-resonant di-Higgs final states in considerable detail and present the reach of the HL-LHC in observing the various channels. We discuss the variation of the Higgs self-coupling and its effects on the signal sensitivity in section 3. In section 4, we consider the contamination effects ensuing from the aforementioned three categories with the help of benchmark points. Finally, in section 5, we summarise our results, conclude and present a future outlook for the vast field of di-Higgs searches.
Non-resonant di-Higgs production
As discussed in the introduction, the objective of this present work is two-fold, viz., estimating the observability of SM di-Higgs production in multifarious channels at the HL-LHC and also to decipher the contamination to such SM processes from various new physics scenarios as we will discuss at length in section 4. In this section, we will focus on several possible final states of the SM Higgs pair production. Our guiding principles in choosing these final states are cleanliness and substantial production rates. Hence, we choose states containing either photons or leptons (e, µ and τ ) or both. Thus, we consider the bbγγ, bbτ + τ − , bbW W * , W W * γγ and 4W channels for the present work. We do not consider the 4τ , W W * τ + τ − , ZZ * τ + τ − , 4γ, ZZ * γγ and 4Z states on account of their negligible rates. We must mention however that some of these neglected channels at the 14 TeV study may have important ramifications for 100 TeV collider studies [105]. At this point, it is important to mention that we closely follow the ATLAS and CMS analyses whenever available. For channels where we are unable to find such studies, we optimise the cuts to maximise the significance.
As we have emphasised in the introduction, the gluon fusion mode prevails as the dominant contribution to the SM di-Higgs production when compared with the remaining modes, such as vector boson fusion, associated production with a vector boson [106], or double Higgs production in association with a pair of top quarks [94]. Hence, for the present study, we concern ourselves only with the former production mode. On the simulation front, we generate the di-Higgs signal samples at leading order (LO) using MG5 aMC@NLO [107]. To attain the final states discussed above, we decay these samples with Pythia-6 [108,109]. We also generate the background event samples at LO using MG5 aMC@NLO. Unless the decays are done at the MG5 aMC@NLO level, we perform them with Pythia-6. The generation level cuts for the various processes are listed in Appendix A. For all our simulations, the NN23LO parton distribution function (PDF) [112] has been employed. Also, for all our sample generations, we use the default factorisation and renormalisation scales as defined in MG5 aMC@NLO [113]. Next, we shower and hadronise the signal and background samples with Pythia-6. Following this, the final state jets are reconstructed with the anti-kT [114] algorithm with a minimum p T of 20 GeV and a jet parameter of R = 0.4 in the FastJet [115] framework. In order to simulate detector effects, we use Delphes-3.4.1 [116]. Unless otherwise stated, we demand the electrons, muons and photons to be isolated as follows: the total energy activity within a cone of ∆R = 0.5 around each such object is required to be smaller than 12%, 25% and 12% respectively of its p T . Besides, we consider the default identification efficiencies of the electrons, muons and photons as specified in the ATLAS detector card in Delphes-3.4.1. For channels with b-jets as final state objects, we consider a flat b-tagging efficiency of 70% [100]. We also consider flat j → b and c → b mistag rates of 1% and 30% respectively. Here we would also like to clarify that whenever, in the following sub-sections, we mention a lepton (ℓ) as a final state, we always refer to an electron or a muon.
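The relative-isolation requirement quoted above is straightforward to express in code. The sketch below is a minimal illustration, not the Delphes implementation; the event containers and the helper name `is_isolated` are hypothetical.

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance in the eta-phi plane, with phi wrapped into [-pi, pi]."""
    dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def is_isolated(obj, particles, cone=0.5, max_frac=0.12):
    """Relative isolation: the summed pT of all other particles within
    dR < cone must stay below a fraction of the object pT
    (12% for electrons/photons, 25% for muons in the text above)."""
    activity = sum(p["pt"] for p in particles
                   if p is not obj
                   and delta_r(obj["eta"], obj["phi"], p["eta"], p["phi"]) < cone)
    return activity < max_frac * obj["pt"]

# usage sketch (hypothetical containers):
# photons = [g for g in candidate_photons if is_isolated(g, event_particles, max_frac=0.12)]
```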
In almost all the channels which follow, we perform a cut-based analysis (whenever an equivalent analysis has been performed by CMS or ATLAS) for the signal optimisation. For these channels, and also for the rest where we do not perform a cut-based analysis, we carry out a multivariate analysis in order to capture the full machinery of an optimised search. For such studies, we choose numerous discriminatory variables, depending on the analysis, and use the TMVA framework [117] to discriminate between the signal and background samples. For the following analyses, we use the decorrelated boosted decision tree (BDTD) algorithm. We must admit here that it is possible to have a further improved algorithm, but here we stick to a standard discriminator. In all cases, we train the signal and background samples, carefully avoiding overtraining at each step. For this purpose, we demand that the Kolmogorov-Smirnov test results are always greater than 0.1. It is, however, mentioned in Ref. [118] that a non-oscillatory critical test value of 0.01 may also suffice as a test for overtraining. We systematically modulate the BDT optimisation procedure with a sufficiently large number of signal and background samples and always ensure a KS test value greater than 0.1 for both signal and background.
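The actual training uses TMVA's BDTD; as a rough, hedged stand-in, the sketch below trains a gradient-boosted classifier on toy arrays with scikit-learn and applies a Kolmogorov-Smirnov comparison between the train and test responses as an overtraining check. The feature arrays are placeholders, not our kinematic variables.

```python
import numpy as np
from scipy.stats import ks_2samp
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# toy stand-ins for kinematic variables (e.g. m_bb, pT_yy, dR, ...)
sig = rng.normal(loc=0.5, scale=1.0, size=(5000, 4))
bkg = rng.normal(loc=-0.5, scale=1.0, size=(5000, 4))
X = np.vstack([sig, bkg])
y = np.concatenate([np.ones(len(sig)), np.zeros(len(bkg))])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=1)
bdt = GradientBoostingClassifier(n_estimators=200, max_depth=3, learning_rate=0.1)
bdt.fit(X_tr, y_tr)

# overtraining check: compare train/test BDT response, separately for signal and background
for label, name in [(1, "signal"), (0, "background")]:
    r_tr = bdt.decision_function(X_tr[y_tr == label])
    r_te = bdt.decision_function(X_te[y_te == label])
    print(f"KS p-value ({name}): {ks_2samp(r_tr, r_te).pvalue:.3f}")  # we demand > 0.1
```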
With this machinery in hand, we outline and detail the prospects of the nonresonant di-Higgs process in various final states in the following sections. We also note that all our generated samples are at a centre of mass energy of 14 TeV and the final analyses are performed for an integrated luminosity of 3 ab −1 .
The bbγγ channel
Having set the stage, we begin by studying one of the most promising non-resonant di-Higgs search channels at the HL-LHC, viz., the bbγγ final state. Even though this channel is somewhat at a disadvantage from the point of view of the total rate, because of the extremely small branching ratio of h → γγ, the cleanliness of this channel makes way for an adequate compensation, as we will gather at the end of this section. Numerous studies in the literature [43,91,94,100,119,120] have attempted to constrain the Higgs self-coupling (λ) by focusing on this particular final state. In performing this study, we closely follow the analysis presented in Ref. [100].
The most dominant background stems from the QCD-QED bbγγ process. We generate this background upon merging with an additional jet by employing the MLM merging scheme [121]. We must also mention here that the pure QED contribution (not involving the Higgs) to bbγγ is O(1%) of its QCD-QED counterpart. Other significant backgrounds arise from the associated production of the Higgs with a pair of bottom (top) quarks, bbh (tth), and the associated production of the Higgs with a Z-boson (Zh). In addition to these backgrounds, contributions also arise from numerous fakes, having event yields comparable to the QCD-QED bbγγ process. Although the list of relevant fake backgrounds is long, viz., ccγγ, jjγγ, bbjj, bbjγ and ccjγ, it is considerably difficult to simulate them all. Thus, for the ccγγ and jjγγ channels, which bear a similar topology to the QCD-QED bbγγ process, we estimate the fake event yields upon employing a simple scaling: N_{ccγγ(jjγγ)} = (N^{ATLAS}_{ccγγ(jjγγ)} / N^{ATLAS}_{bbγγ}) · N_{bbγγ}, where the superscript ATLAS denotes the event yields as listed in Ref. [100], while N_{bbγγ} is our simulated estimate. In an analogous manner, we simulate the bbjγ and bbjj backgrounds and scale N_{ccjγ} = (N^{ATLAS}_{ccjγ} / N^{ATLAS}_{bbjγ}) · N_{bbjγ}. Following Ref. [100], we consider a j → γ fake probability of ∼ 0.1%. Also, at this point, we would like to mention that the fake rates are p T /η dependent functions and, for precise analyses, these must be treated with more care.
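The rescaling of the hard-to-simulate fake backgrounds amounts to a simple ratio; a minimal sketch of the bookkeeping is given below, with purely illustrative placeholder numbers rather than the actual ATLAS or simulated yields.

```python
def scaled_fake_yield(n_fake_atlas, n_ref_atlas, n_ref_sim):
    """Scale a fake background by the ratio quoted by ATLAS,
    e.g. N_ccyy = (N_ccyy^ATLAS / N_bbyy^ATLAS) * N_bbyy^sim."""
    return (n_fake_atlas / n_ref_atlas) * n_ref_sim

# placeholder numbers, for illustration only
n_ccyy = scaled_fake_yield(n_fake_atlas=120.0, n_ref_atlas=400.0, n_ref_sim=350.0)
print(f"estimated cc + diphoton fake yield: {n_ccyy:.1f}")
```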
Upon generating the samples, for every event we require exactly two b-tagged jets and two photons in the final state. The leading (sub-leading) b-jet is required to have p T,b 1 (b 2 ) > 40 (30) GeV and must lie within a pseudo-rapidity range of |η b 1 ,b 2 | < 2.4. The two photons are required to have p T,γ > 30 GeV and must lie within |η γ | < 1.37 (barrel) or 1.52 < |η γ | < 2.37 (endcap). Additionally, we veto events having one or more isolated leptons with p T > 25 GeV and within |η| < 2.5. The following selection cuts are implemented and are also tabulated in Table 1. We demand that the jet multiplicity N j be less than 6 in order to reduce the large tth background when either or both of the top-quarks decay hadronically via the decays of the W -bosons. We also find that the ∆R cuts are highly effective in tackling the QCD-QED bbγγ background. Here, ∆R ab refers to the distance between the final state particles a and b in the η-φ plane. In addition, we impose upper and lower limits on the invariant masses of the two b-jets (100 GeV < m bb < 150 GeV) and the two photons (122 GeV < m γγ < 128 GeV), which substantially reduces the QCD-QED bbγγ background and also affects all the other backgrounds. Lastly, we impose a lower bound on the transverse momentum of the b-jet pair (p T,bb > 80 GeV) and of the di-photon pair (p T,γγ > 80 GeV).
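The cut flow just described can be summarised in a single boolean selection. The sketch below encodes the listed requirements on a simplified event record; the field names are hypothetical and the snippet only serves to make the cuts explicit.

```python
def passes_bbyy_selection(ev):
    """ev is a dict of reconstructed quantities (hypothetical field names):
    n_jets, dr_yy, dr_bb, dr_yb_min, m_bb, m_yy, pt_bb, pt_yy (GeV)."""
    return (
        ev["n_jets"] < 6                                # suppress hadronic tth
        and 0.4 < ev["dr_yy"] < 2.0                     # photons from a moderately boosted Higgs
        and 0.4 < ev["dr_bb"] < 2.0                     # b-jets from a moderately boosted Higgs
        and ev["dr_yb_min"] > 0.4                       # photon / b-jet separation
        and 100.0 < ev["m_bb"] < 150.0                  # Higgs mass window (b-jets)
        and 122.0 < ev["m_yy"] < 128.0                  # Higgs mass window (photons)
        and ev["pt_bb"] > 80.0 and ev["pt_yy"] > 80.0   # Higgs-candidate transverse momenta
    )
```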
We tabulate the signal and background yields for each selection cut in Table 2. We also quote the statistical significance S/ √ B, where S represents the signal yield and B refers to the sum of all relevant backgrounds. Upon applying all the aforementioned cuts, we obtain a final significance of 1.46, assuming zero systematic uncertainty. Because this first part of our paper somewhat serves as a validation of the studies performed by the ATLAS and CMS collaborations, we would like to confirm that our statistical significance is consistent with the results obtained by ATLAS [100].
Table 1: Selection cuts for the bbγγ channel: N j < 6; 0.4 < ∆R γγ < 2.0, 0.4 < ∆R bb < 2.0, ∆R γb > 0.4; 100 GeV < m bb < 150 GeV; 122 GeV < m γγ < 128 GeV; p T,bb > 80 GeV, p T,γγ > 80 GeV.

Before moving on to discussing the multivariate analyses, we slightly digress to discuss the effects of certain possible cuts in improving the significance when compared to the one derived just above. One of the largest background yields, even after imposing all the aforementioned cuts, is tth. However, it is interesting to note that this channel is associated with missing transverse energy even at the parton level when at least one of the W -bosons decays leptonically. Our signal, on the other hand, does not have any missing energy other than the / E T emanating from experimental noise. Hence, we demand an upper limit of / E T < 50 GeV and show in Table 3 (a) that the tth background reduces to almost half its previous value. The bbγγ and Fake 1 backgrounds also incur modest reductions. The signal, on the other hand, reduces only marginally. This improves the S/B from 0.17 to 0.19. Accordingly, the signal significance with zero systematics increases slightly, to 1.51.
On a slightly different note, the ATLAS analysis [100] that we follow has considered jet energy corrections to account for the parton radiation falling outside the jet cone. This results in the invariant mass distribution of the bb pair coming from the Higgs boson peaking at a value below the Higgs mass. In the present study, we have, however, only implemented the default jet energy correction considered in Delphes. As a result, we study the consequence of modifying the range of the selection cut on m bb to 90 GeV < m bb < 130 GeV. We present the new results in Table 3 (b). This modified selection cut results in an increase in the Zh background, but the signal also receives a relatively large increase, resulting in an S/B of 0.19 and a significance of 1.64. We leave these last two modified cuts at the discussion level, as issues concerning both / E T and the jet-energy correction are primarily experimental and it is non-trivial to predict whether our modified cuts can be incorporated seamlessly in an experimental setup. In the last leg of this subsection, we perform a multivariate analysis of the bbγγ final state by utilising the BDT algorithm in an attempt to isolate the signal and backgrounds more efficiently and improve upon the signal significance. The BDT optimisation procedure is performed using a set of kinematic variables in which the numerical subscripts signify the p T ordering of an object, with the subscript 1 corresponding to the hardest object. In the course of training the BDT, the kinematic variables m bb , p T,γγ , ∆R b 1 γ 1 and ∆R bb proved to be the most powerful in discriminating the signal from the background. We present the normalised distributions of these variables for the signal and the dominant backgrounds in Fig. 1 after the basic selection cuts. The corresponding signal and background yields along with the final significance are tabulated in Table 4. We observe that the multivariate analysis features a ∼ 20% improvement in the significance (S/ √ B = 1.76) over its cut-based counterpart. Table 4: Signal and background yields after the BDT analysis along with the significance.
The bbτ τ channel
Having studied the cleanest di-Higgs channel, we now turn our focus towards the channel which at present imposes one of the stronger limits on the di-Higgs cross-section. The bbτ + τ − channel has a considerably larger rate compared to bbγγ and has the advantage of three different final states, as we shall discuss in detail below. The τ -lepton can decay either leptonically, with a ∼ 34% branching ratio, or hadronically.
This yields rich final states, viz., bbℓℓ, bbℓj and bbjj, all accompanied by / E T . The jets are formed from the hadronic τ -decays and we will τ -tag them in order to better discriminate against the backgrounds.
The major backgrounds for these channels stem from the fully hadronic, semileptonic and fully leptonic decays of pair produced tt. The QCD-QED background, gg → bbZ ( * ) /γ * → bbτ + τ − is also substantial. As we will see, demanding a large invariant mass in the τ + τ − system, eradicates the γ * contribution almost completely. Other backgrounds include bbh, Zh, ttW , ttZ and tth. Besides, we also have the bbjj background, with jets faking hadronic τ s. In context of the Zh channel, we once decay the Z-boson to a pair of bottom quarks while forcing the Higgs to decay to a pair of τ -leptons and then interchange these decay modes in order to have all possible bbτ + τ − final states. The cross-sections of the backgrounds are large and hence in order to improve statistics in our final analyses, we generate the samples with hard generation level cuts (see Appendix A). We neglect W (→ τ ν) + jets, W h, W Z, h → ZZ * and single top production owing to their very small production rate.
On the one hand, the tt backgrounds are significantly large when compared to the small signal rate; however, boosted techniques and several kinematic variables do provide us some handle over the situation [89]. On the other hand, the reconstruction of the invariant mass of the τ -pair is a delicate issue at the LHC since it is always accompanied by missing transverse energy. Several m τ τ reconstruction techniques have been discussed in the literature [96,122,123] and extensively used in various previous analyses. In this work, we will focus primarily on the collinear mass approximation technique [96]. This approximation is based on two important assumptions, viz., the visible decay products of a τ lepton and the neutrinos coming from it are all nearly collinear (i.e., θ vis = θ ν and φ vis = φ ν ), and the total missing energy in the event is solely due to these neutrinos. Upon utilising these two assumptions, the x- and the y-components of / E T can be easily expressed in terms of the momenta of the neutrinos. Solving this system, one obtains the individual momentum of each neutrino. The above method has a drawback: only in cases where the τ τ system is boosted against a hard object (for example an energetic jet or another boosted object) do we recover a reasonable mass. In our present scenario, the τ τ system (h → τ τ ) is boosted against the other Higgs, which decays to a pair of b-quarks. The reason for this drawback is that the technique is extremely sensitive to the / E T resolution and may overestimate the reconstructed mass, M τ τ . Another drawback is that the solutions of the / E T equations diverge when the visible τ decay products are produced back to back in the transverse plane. We discuss another τ τ reconstruction technique, viz., the Higgs-bound technique [124,125], in Appendix B.
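A minimal sketch of the collinear approximation follows: the neutrino transverse momenta are taken proportional to the visible τ momenta, the two proportionality constants are solved for from the measured / E T components, and the visible di-τ mass is rescaled by the momentum fractions. The variable names and simplified four-vector handling are our assumptions for illustration, not the ATLAS or CMS implementation.

```python
import numpy as np

def collinear_mass(vis1, vis2, met_x, met_y):
    """Collinear-approximation di-tau mass.
    vis1, vis2: visible tau four-vectors as (E, px, py, pz);
    met_x, met_y: missing transverse momentum components.
    Returns None when the system is (nearly) back to back or unphysical."""
    a = np.array([[vis1[1], vis2[1]],
                  [vis1[2], vis2[2]]])
    b = np.array([met_x, met_y])
    try:
        r1, r2 = np.linalg.solve(a, b)      # neutrino pT = r_i * visible pT (collinearity)
    except np.linalg.LinAlgError:
        return None
    x1, x2 = 1.0 / (1.0 + r1), 1.0 / (1.0 + r2)   # visible momentum fractions
    if x1 <= 0 or x2 <= 0:
        return None
    e, px, py, pz = (vis1[i] + vis2[i] for i in range(4))
    m_vis = np.sqrt(max(e**2 - px**2 - py**2 - pz**2, 0.0))
    return m_vis / np.sqrt(x1 * x2)
```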
We are aware of the fact that the ATLAS [96] and CMS [97] collaborations use different algorithms to reconstruct a resonance decaying into a pair of τ -leptons.
In the following sub-subsections, we present the analyses with sets of optimised cuts aimed at the HL-LHC. For the major part, we closely follow the predicted performance of an upgraded ATLAS detector [126] to model the detector effects and tagging efficiencies. For this part of the study, we use a different isolation criterion for the leptons (e, µ), following the ATLAS reference [127]: we demand that the total energy activity around the lepton, within a cone of radius ∆R = 0.2, be less than 10 GeV. Following Ref. [126], we fix the medium-level τ selection efficiencies for candidates with p T > 20 GeV and |η| < 2.3 at 55% and 50% respectively for the one-pronged and three-pronged τ candidates. We also allow for QCD-jets faking τ -jets with mistag rates of 5% and 2% respectively for one and three tracks passing the medium-level τ identification.
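Applying such flat efficiencies at analysis level amounts to a simple accept-reject step per candidate. The snippet below is a schematic illustration with hypothetical object fields (`is_true_tau`, `n_prong`); the numbers mirror those quoted above.

```python
import random

TAU_EFF = {1: 0.55, 3: 0.50}    # medium tau-ID efficiency for 1- and 3-prong candidates
JET_FAKE = {1: 0.05, 3: 0.02}   # QCD-jet -> tau mistag rate for 1 and 3 tracks

def tau_tagged(candidate, rng=random):
    """Decide whether a candidate is tau-tagged using flat (pT-independent)
    efficiencies. 'candidate' carries hypothetical fields:
    pt, eta, is_true_tau (bool), n_prong (1 or 3)."""
    if candidate["pt"] < 20.0 or abs(candidate["eta"]) > 2.3:
        return False
    prob = (TAU_EFF if candidate["is_true_tau"] else JET_FAKE)[candidate["n_prong"]]
    return rng.random() < prob
```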
We dissect the analysis into three independent parts corresponding to the decay modes of the τ -leptons, viz., the bbτ h τ h , bbτ h τ ℓ and bbτ ℓ τ ℓ final states, where the subscript h (ℓ) denotes the hadronic (leptonic) decay mode of the τ . For the following three sub-analyses, we demand a common set of cuts. We select events with exactly two reconstructed b-tagged jets with a minimum p T requirement of 40 (30) GeV for the leading (subleading) jet. We also require these b-tagged jets to be within a pseudorapidity range of |η| < 2.5. We require m bb > 50 GeV in order to bring the signal and backgrounds on the same footing, because the backgrounds have been generated with this cut at the generation level. For the Higgs decaying to a τ pair, we take ∆R bτ > 0.4, ∆R τ τ > 0.4 and m vis τ τ > 30 GeV, the latter being a minimum requirement on the invariant mass of the visible products of the τ -pair. On top of these basic requirements, we also apply a common set of selection cuts. In each of these sub-analyses, we first consider the variable m vis τ τ , constructed out of the visible τ objects, and afterwards we consider the collinear mass variable, M τ τ .

The bbτ h τ h channel

In addition to the aforementioned common cuts, we require exactly two τ -tagged jets with a minimum p T of 30 GeV and within a pseudorapidity range of |η| < 2.5.
For the first case, we further optimise p T,bb , m T 2 and m vis τ τ in order to have the best possible signal over background ratio.
Upon performing the optimised cut-based analysis, we obtain a final significance of 0.44 for the HL-LHC. The cut-flow and the final significance are tabulated in Table 5 (the "Others" category in the tables includes tth, ttW and ttZ). In contrast to the bbγγ channel, the S/B ratio here is ∼ 0.67% and hence one needs data-driven background techniques and a drastic reduction in systematic uncertainties in order for this channel to be relevant in the future. Next, we use the collinear approximation technique, discussed above, to reconstruct the invariant mass of the Higgs decaying to a pair of τ leptons. To overcome the limitations discussed above, we select events by imposing an additional cut, ∆φ τ τ < 3.0 radians. For the BDT analysis, we impose an upper cut on the collinear mass, M τ τ < 200 GeV. The cut-flow and the statistical significance are tabulated in Table 6 with the corresponding optimised cuts on top of the other variables. We obtain a significance of 0.65, which shows a small improvement over the previous analysis with the m vis τ τ variable. To check whether our optimised cuts can be improved further, we employ a multivariate analysis using the BDT algorithm after the basic selection cuts. We train our signal and background samples with 12 kinematic variables
for the case with the m vis τ h τ h variable. For the other case, with the M τ τ variable, we train our signal and background samples with 9 kinematic variables, where the symbols have their usual meanings: ∆φ ab is the azimuthal angle separation for the ab system and M τ h τ h is the collinear mass of the Higgs reconstructed from the hadronic τ decays. The signal and background yields after this multivariate analysis are shown in Table 7.
The normalised distributions of the four best discriminating kinematic variables, viz., M τ h τ h , m T 2 , m bb and p T,bb , are shown in Fig. 2. We find that the S/B ratio increases slightly and we also obtain a non-negligible increase in the significance, to 0.74, assuming zero systematic uncertainty. Table 7: Signal, background yields and final significance for the bbτ h τ h channel after the BDT analysis with (a) m vis τ τ (b) M τ τ variable.
The bbτ h τ ℓ channel
In the present instalment, we choose events containing exactly one isolated lepton and one reconstructed τ -tagged jet, over and above the common requirements. We also require the isolated lepton to have p T > 20 GeV and |η| < 2.5. After imposing the additional optimised selection cuts for this mode, involving m vis τ τ , we obtain a signal significance of 0.26 for the HL-LHC. The event yields along with the significance are shown in Table 8. Table 8: Same as in Table 5 for the bbτ h τ ℓ mode. The various orders of the signal and backgrounds are the same as in Table 5.
With the M τ τ variable, we obtain a corresponding set of optimised cuts on top of the other variables. The event yields at the HL-LHC are shown in Table 9, with a significance of 0.44. Here also we perform a BDT analysis to see its potential. We choose 13 kinematic variables to train our signal and background event samples with the m vis τ h τ ℓ variable. Table 9: Same as in Table 5 for the bbτ h τ ℓ mode with the collinear mass variable. The various orders of the signal and backgrounds are the same as in Table 5.
Furthermore, we consider 9 kinematic variables to train our signal and background event samples when using the M τ h τ ℓ variable, ensuring a proper training of the event samples. In Table 10, the signal and background yields and the significance after the multivariate analysis are presented. The normalised distributions of the four most discriminating kinematic variables, viz., M τ h τ ℓ , m T 2 , m bb and p T,bb , are shown in Fig. 3. Upon imposing a suitable cut on the BDT variable, we find that the zero-systematics significance is 0.49. Table 10: Same as in Table 7 for the bbτ h τ ℓ mode with (a) m vis τ τ (b) M τ τ variable.
The bbτ ℓ τ ℓ channel
The last segment of the bbτ + τ − channel consists of two leptonically decaying τ s. We demand events containing exactly two oppositely charged isolated leptons with p T > 20 GeV, over and above the requirements stated above. We impose the following optimised cuts on top of the other variables for the scenario where we consider the invariant mass from the visible products of the τ -leptons.
A final signal significance, S/ √ B, of 0.044 is obtained upon assuming zero systematic uncertainties. We show the event yields and the significance in Table 11. Table 11: Same as in Table 5 for the bbτ ℓ τ ℓ mode. The various orders of the signal and backgrounds are the same as in Table 5.
For the second category, involving the collinear mass variable, we choose a corresponding set of optimised cuts on top of the other variables. The results are tabulated in Table 12. Table 12: Same as in Table 5 for the bbτ ℓ τ ℓ mode with the collinear mass variable. The various orders of the signal and backgrounds are the same as in Table 5.
In an analogous manner to the previous two cases, we perform a multivariate analysis with the following 11 kinematic variables for the first case: p T,bb , m bb , ∆R bb , m vis τ τ , ∆φ τ τ , ∆φ τ 1 / E T , ∆φ τ 2 / E T , m vis hh , p vis T,hh , ∆R b 1 τ 2 and m T 2 .

Figure 4: Normalised distributions of M τ ℓ τ ℓ , m T 2 , ∆φ τ 1 / E T and p T,bb for the signal and dominant backgrounds in the bbτ ℓ τ ℓ channel before applying basic selection cuts.
Following this, we perform another multivariate analysis with 8 kinematic variables for the case involving the collinear mass. In Table 13, the signal and background yields and the significance after the BDT analysis are presented. We also show the normalised distributions of the four kinematic variables, viz., M τ ℓ τ ℓ , m T 2 , ∆φ τ 1 / E T and p T,bb , in Fig. 4. The BDT optimisation yields a statistical significance of 0.077 for the latter scenario, where we use the collinear mass observable. Table 13: Same as in Table 7 for the bbτ ℓ τ ℓ mode with (a) m vis τ τ (b) M τ τ variable.

The bbW W * channel

A channel often neglected in terms of rigour and clarity is the bbW W * final state, having three markedly different sub-states, viz., the fully leptonic (bbℓℓ + / E T ), the semi-leptonic (bbℓ + jets + / E T ) and the fully hadronic (bb + jets), where ℓ denotes an electron, muon or tau lepton. Out of these three possible final states, the fully leptonic one (which has an overlapping final state from bbτ τ ; see section 2.2.3) is the cleanest owing to its smaller backgrounds. The semi-leptonic channel has a larger background compared to the former. The fully hadronic final state, on the other hand, will be swamped, mostly by QCD backgrounds, and hence is omitted from any further discussion in this study. For both the leptonic and semi-leptonic channels, the major background comes in the form of tt. The fully leptonic tt decay is the dominant background for the leptonic signal, and both the fully leptonic and semi-leptonic decays of tt act as the dominant backgrounds for the semi-leptonic signal. For the semi-leptonic channel, the second-most dominant background arises in the form of W bb + jets. The much less dominant backgrounds comprise bbh, tth, ttV , V h, V bb and V V V , where V denotes a W or a Z boson. For both analyses, we implement a common set of trigger cuts, viz., p T,b/j > 30 GeV, p T,e (µ) > 25 (20) GeV, |η b,ℓ | < 2.5 and |η j | < 4.7. Furthermore, in order to deal with the large tt backgrounds, we apply, at the generator level, a hard cut of m bb > 50 GeV. We apply the same for the bbℓℓ background and hence, in order to be consistent, we implement this cut for all the samples at the analysis level. In the following two sub-subsections, we focus only on multivariate analyses. We pass the signal and background samples to the BDTD algorithm upon implementing the aforementioned cuts.
The 2b2ℓ + / E T channel
Inspired by the CMS HL-LHC studies [131], we focus on the dileptonic mode of the bbW W * channel in this part. Differing slightly from CMS, we do not impose cuts on m ℓℓ , ∆R ℓℓ and ∆φ ℓℓ,bb . Moreover, instead of using their neural network discriminator, we consider the BDTD algorithm. Besides, in addition to their analysis, we include various subdominant backgrounds on top of the dominant tt backgrounds, as listed above. For this study, we select events with exactly two b-tagged jets and two isolated leptons with opposite charges. Upon inspecting various kinematic distributions, we choose ten of them for our multivariate analysis, the last being the azimuthal angle separation between the reconstructed di-b-tagged-jet and di-lepton systems. Having tt as by far the dominant background, i.e., the weight of this background being several orders of magnitude larger than the rest, we train our BDTD algorithm with the signal sample along with this background only. We analyse the other backgrounds upon using this training. The final numbers of signal and background events along with the significance are listed in Table 14.
The distributions of the four best discriminatory variables, viz., m bb , m ℓℓ , p T,bb and p T,ℓℓ , after the basic cuts listed above, are shown in Fig. 5. Finally, with a judicious cut on the BDTD observable, we find ∼ 35 signal and ∼ 3197 background events, yielding a significance of ∼ 0.62 upon neglecting systematic uncertainties. These numbers are in excellent agreement with the ones obtained by CMS [131]. This channel can thus act as an important combining channel to enhance the total SM di-Higgs significance at the HL-LHC and also serves as an important search mode for a resonant di-Higgs scenario [88].
The 1ℓ2j2b + / E T channel
Before concluding this subsection, we make an attempt to decipher the potential of the semi-leptonic final state for the bbW W * channel. On the analysis front, we choose events with exactly two b-tagged jets, one isolated lepton and at least two light jets meeting the trigger criteria as discussed above. We consider the same set of cuts as for the dileptonic channel before performing the multivariate analysis. For this case, we find the following variables to have the best discriminatory properties.
where p T,ℓjj , ∆φ bb,ℓjj and ∆R ℓjj refer to the visible p T of the ℓjj system (for the signal, ensuing from the h → W W * → ℓνjj decay), the azimuthal angle separation between the di-b-tagged-jet system and the ℓjj system, and the ∆R separation between the lepton and the di-jet system, respectively. Here the dominant backgrounds are the semi-leptonic and the leptonic decays of tt. Hence, in an analogous way to the dileptonic case, we train the BDTD with the signal and the tt samples, albeit with proper weight factors for the leptonic and semi-leptonic backgrounds. We then utilise this training for the rest of the backgrounds as well, which are clearly subdominant with respect to the tt backgrounds. The results are summarised in Table 15. The distributions of the four best observables, viz., m bb , p T,ℓ 1 , p T,bb and / E T , are shown in Fig. 6. We do not find a promising significance for this scenario: we obtain a negligible S/B and a significance of 0.13, assuming zero systematic uncertainties. A somewhat more promising result has been obtained in Ref. [87] using jet substructure techniques. Table 15: Signal, background yields and final significance for the 1ℓ2j2b + / E T channel after the BDT analysis. The various orders of the signal and backgrounds are the same as in Table 14.
The γγW W * channel

We now move on to the γγW W * channel, considering both the purely leptonic and the semi-leptonic decays of the W -boson pair. Following this, we discuss some of the most significant kinematic variables which distinguish the signal from the backgrounds most efficiently. Finally, we present the results from the multivariate analysis.
Pure leptonic decay
The signal yield in this scenario is much smaller in comparison to the most-studied di-Higgs search channels like bbγγ and bbτ + τ − . However, as we will see below, this channel has a significantly lower background yield.
We require each event to have exactly two isolated photons and two isolated leptons having opposite electric charge. Sizeable backgrounds to this final state arise from the tth associated production, the Higgs-strahlung Zh process (merged up to three jets), and from the ℓℓγγ (where ℓ = e, µ, τ for this case) final state. The irreducible background to this search channel comes from ℓνℓνγγ (mostly from V V γγ), which has a relatively smaller cross-section as compared to the aforementioned backgrounds, and hence has not been considered in the current analysis. While generating the ℓℓγγ background, we merge the samples up to one extra jet and we also impose a generation-level cut on the invariant mass of the γγ pair, viz., 120 GeV < m γγ < 130 GeV.
Before listing the variables we use for the multivariate analysis, we note that we also impose a b-jet veto on the events. This reduces the tth background substantially. For this analysis, as well as for the semi-leptonic analysis that follows, we require the invariant mass of the di-photon system to satisfy 122 GeV < m γγ < 128 GeV. As an optimised cut-based analysis for this channel is not available in the literature, we implement a BDT optimisation approach. The following are the variables used to train the signal and background samples.
where the last term denotes the azimuthal angle separation between the di-lepton and the di-photon systems. In Fig. 7, we show the kinematic distributions of the four variables, viz., m ℓℓ , / E T , p T,γγ and m γγ , which distinguish the signal from the weighted background samples most efficiently.
We find that upon imposing a cut on the BDT variable, the S/B improves from 4.4×10 −3 (after the basic selection) to 0.40. This is a significant improvement and is perhaps one of the best signal-over-background ratios amongst all the channels studied so far. Unfortunately, this channel is plagued by very small branching ratios, rendering a signal yield of less than unity. Given the dearth of signal events, we cannot define a statistical significance. We must, however, note that this channel can be one of the most important channels for a 28 TeV / 33 TeV collider. The signal and background yields are listed in Table 16. Hence we conclude that, in order for this channel to contribute significantly in the combination of the various final states, one requires either a larger luminosity or higher energies.
Semi-leptonic decay
This channel has been studied by ATLAS [74] with an integrated luminosity of 13.3 fb −1 . However, given the extremely small branching ratio of h → γγ, this channel is not yet sensitive and imposes a very weak observed upper limit on the non-resonant di-Higgs cross-section of 25.0 pb (95% confidence level). Here, we concern ourselves with the ℓγγ + jets + / E T final state. This process, however, has an additional complexity since the kinematics of the final state depends on whether the ℓν (jj) pair comes from the on-shell or the off-shell W -boson decay. Even though the event rate of the semi-leptonic scenario is larger than that of its purely leptonic counterpart, the presence of additional jets leads to considerably larger backgrounds.
For the event selection, we do not follow the analysis sketched in Ref. [74], as it is designed to maximise the signal events given the small integrated luminosity available for such a process. We perform a multivariate analysis with looser basic selection cuts. We demand exactly one isolated lepton, two isolated photons and at least one light jet, with the p T and |η| ranges mentioned above. The irreducible background to this process comes from ℓνγγ, merged up to one hard jet, with a tree level cross-section of ∼ 3.28 fb. In addition, ℓℓγγ (ℓ = e, µ, τ for both cases), merged up to one hard jet and having a generation level cross-section of 1.05 fb, also contributes to the background when one of the leptons goes missing. These two backgrounds have been generated with a hard cut at the generation level, as discussed for the di-leptonic scenario. Similar to the previous analysis for the fully leptonic case, tth and Zh+jets also contribute significantly to the background. In addition, we consider the W h process, merged up to 3 jets, as an important background.
We perform our standard multivariate analysis upon employing these nine kinematic variables.
where ∆φ ℓj,γγ is the azimuthal angle separation between the ℓj system and the reconstructed di-photon system, with j being the hardest jet, and m T is the transverse mass variable. It is found that ∆R ℓj , p T,γγ , m γγ and m T are the most effective variables in distinguishing the signal from the backgrounds, as can be seen in Fig. 8. We find that, after a proper BDT implementation, the signal-over-background ratio improves from 4.8×10 −3 (after basic selection) to 0.11. The signal and background yields after imposing an appropriate cut on the BDTD variable are summarised in Table 17.
Here also we find that similar to its precursor, i.e., the purely leptonic scenario, the S/B is much better than most of the channels considered thus far. However, the low rate due to the small branching ratio of h → γγ acts as a hindrance to render this final state useful at present. Going to high energy machines, higher integrated luminosities of around 5000 fb −1 with the 14 TeV collider, performing a combination of integrated luminosities from CMS and ATLAS at the HL-LHC, and lastly a modification to the SM cross-section, will enhance this channel's potential. In summary, the γγW W * final states yield extremely good S/B ratios.
The 4W channel
In this subsection, we focus on the yet-untouched final states ensuing from the di-Higgs production mode, viz., the 4W channel. For completeness, we consider both semi-leptonic and fully leptonic decay modes. We lose cleanliness upon including more and more jets in the final state, i.e., upon considering the semi-leptonic decays. On the other hand, for a fully leptonic final state, the cross-section yield is extremely small. Considering two, three and four leptons, we choose the following final states. Table 17: Signal and background yields for the ℓγγ + jets + / E T channel after the BDT analysis. The various orders for the signal and backgrounds are the same as in Table 16. The order for W h + jets (ℓνγγ + jets) is the same as for Zh + jets (ℓℓγγ + jets).
In the following, we discuss the three cases as listed above.
The SS2ℓ final state
Before implementing the multivariate analysis, we require each event to have exactly two leptons carrying the same electric charge and having p T > 25 GeV. Furthermore, we require events with at least two jets, with a veto on b-tagged and τ -tagged jets. The W Z (W → ℓν, Z → ℓℓ), tt and same-sign W -boson pair production constitute the most dominant backgrounds for this channel. Besides, we have the V h production (with V = W ± , Z decaying leptonically and the Higgs decaying to W W * or ZZ * ) and ttX (X = W ± , Z, h). The tt channel is a fake background for this process, entering when either jets fake leptons or lepton charges are misidentified. Save for the same-sign W -boson pair, all the other dibosonic backgrounds are merged up to 3 jets. We must also note that by demanding a veto on the b-tagged jets, we are able to reduce a significant portion of the tt and ttX backgrounds.
In a similar spirit to all the previous subsections, we embark upon our multivariate analysis by choosing the following six kinematic variables: m ℓ ± ℓ ± , ∆R ℓ i j k (where i, k = 1, 2, giving four combinations) and m jj , the invariant mass constructed out of the hardest two jets. We show the four most discriminatory variables in Fig. 9 and list the final signal and background yields, along with the zero-systematics significance, in Table 18. We find that upon performing a BDT optimisation, the S/B ratio improves from 2.2 × 10 −4 (after basic selection cuts) to 9.7 × 10 −4 . Unless the production cross-section is increased significantly or we find better techniques to control the S/B, this channel does not hold much hope for a standard di-Higgs search. A drastic change in kinematics might change the picture altogether. Figure 9: Normalised distributions of m ℓ ± ℓ ± , ∆R ℓ 2 j 1 , ∆R ℓ 1 j 2 and m jj for the signal and the most relevant backgrounds for the SS2ℓ final state. Table 18: Signal and background yields for the SS2ℓ channel after the BDT optimisation.
The 3ℓ final state
The trilepton analysis is somewhat similar in spirit to its SS2ℓ counterpart. We relax the lepton p T cuts somewhat in this analysis, requiring p T,ℓ 1 > 25 GeV, p T,ℓ 2 > 20 GeV and p T,ℓ 3 > 15 GeV, in order not to make the basic selection cuts too stringent. The pseudorapidity requirements for the leptons and the various requirements for the jets are as before. Furthermore, in order to remove events with leptons ensuing from the Z-boson, we require |m Z − m ℓℓ | > 20 GeV for leptons having opposite sign and same flavour. The main backgrounds for this channel come from W h, diboson production (mainly W Z) and the fake backgrounds coming from tt. Apart from these, the Zh (Z → ℓℓ, h → W + W − ), ttX (X = W ± , Z, h) and ZZ backgrounds also contribute significantly. All the dibosonic processes are merged up to three jets.
For this installment, we choose the following kinematic variables to train our BDTD algorithm.
where i, j run from 1 to 3; m eff is the effective mass, i.e., the scalar sum of / E T and the p T of the three leptons and of all the jets in the event; and n jet is the number of jets per event. The four best variables are shown in Fig. 10. The event yields and final significance are shown in Table 19. In this case, the S/B changes from 7.3×10 −4 (after basic selection cuts) to 2.8×10 −3 . We find that there is a slight improvement compared to the SS2ℓ scenario. Finally, we end up with a statistical significance of 0.20. Table 19: Signal, background yields and final significance for the trilepton channel after applying the most optimised BDT cut. The various orders for the signal and the backgrounds are the same as those in Table 18. The order for Zh + jets (ZZ + jets) is the same as that for W h + jets (W Z + jets).
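As a side note, the effective-mass variable defined above is just a scalar sum; a minimal sketch (with hypothetical event fields) is:

```python
def effective_mass(event):
    """m_eff = E_T^miss + scalar sum of the lepton and jet pT (all in GeV).
    'event' carries hypothetical fields: met, lepton_pts, jet_pts."""
    return event["met"] + sum(event["lepton_pts"]) + sum(event["jet_pts"])
```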
The 4ℓ final state
This brings us to our final non-resonant analysis, for which we perform a simple cut-based analysis. We require each event to have four isolated leptons. The dominant backgrounds are W h, tth, tt, ZZ and Zh. Besides, we have non-negligible contributions from ttV (V = W ± , Z). All the dibosonic backgrounds are merged up to three jets, save for the ZZ sample which is merged up to one extra jet. The leading and sub-leading leptons are required to have p T > 20 GeV. For the remaining two softer leptons, we demand p T > 10 GeV. Besides, we also employ the |m Z − m ℓ i ℓ j | > 20 GeV cut in order to reduce backgrounds having a pair of opposite-sign, same-flavour leptons coming from Z-bosons. Furthermore, we apply a cut on the missing transverse energy, viz., / E T > 50 GeV, to greatly reduce the 4ℓ background. These cuts reduce the backgrounds by a great deal. However, the extremely small signal yield shrinks to an even smaller number, which is not statistically significant for all practical purposes. In Table 20, we find an S/B of ∼ 2.5 × 10 −4 after imposing the aforementioned cuts; on adding the / E T cut the S/B increases to 7.8×10 −3 . However, given such small cross-sections, we do not perform a BDT analysis for this scenario.
Summarising the non-resonant search results
To summarise this long section, we find that the prospects of discovering the SM non-resonant di-Higgs channel at the HL-LHC (14 TeV with 3 ab −1 of integrated luminosity) are bleak. The most promising channel comes in the form of bbγγ, yielding an S/B ratio of ∼ 0.19 and a statistical significance of 1.76. The situation for the bbτ + τ − channels is more challenging unless we find an excellent algorithm to reconstruct the di-tau system. The purely leptonic final state of the bbW W * mode shows promise, but one will either require data-driven techniques to reduce systematic uncertainties on the backgrounds or even better ways to curb the backgrounds. Both the leptonic and semi-leptonic decay modes of the γγW W * channel yield excellent signal-to-background ratios; however, the extremely small event yields render these channels unimportant with the planned luminosity upgrade. The 4W channel has three distinct final states with leptons. Upon doing detailed analyses, we find that the signal yields are very small. The S/B improves upon increasing the number of leptons, but the signal yields fall rapidly. Upon combining all the statistically significant searches with at least 5 signal events after all the cuts, we end up with a combined significance of 2.08σ at the HL-LHC. We expect that, in the event of running the LHC to higher luminosity or upon considering the CMS and ATLAS results to be statistically independent (giving us 6 ab −1 of data), one can reach close to 2.95σ upon combining all the statistically significant channels (with 6 ab −1 of luminosity, we gain by a factor of √ 2). We must note that if we consider a flat systematic uncertainty on the background estimation, then upon using the formula S = N S / √(N S + N B + κ 2 N B 2 ), with S, N S , N B and κ being respectively the significance, the numbers of signal and background events after all possible cuts and the systematic uncertainty, we will face a reduction in the quoted statistical significance depending on the value of κ. Even κ = 0.1, 0.2, i.e., a 10%-20% systematic uncertainty, may completely dilute our significance. Hence, we need excellent control over systematics in order to observe any hints coming from the di-Higgs channels. A 100 TeV collider has the potential to measure the di-Higgs channel to a greater degree of accuracy. We also note that, in some channels, an enhancement in the production cross-section by a factor of 3 may aid discovery at the HL-LHC. Lastly, modified kinematics will alter this picture completely and we may see encouraging results with smaller integrated luminosities. In the following section, we discuss various BSM scenarios yielding the same final states as have been discussed in the present section.
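The significance formula with a flat background systematic, and the quadrature combination across independent channels used above, can be written compactly. The yields in the usage lines below are placeholders, not the numbers from our tables.

```python
import math

def significance(n_s, n_b, kappa=0.0):
    """S = N_S / sqrt(N_S + N_B + kappa^2 * N_B^2), with kappa the flat
    fractional systematic uncertainty on the background estimate."""
    return n_s / math.sqrt(n_s + n_b + (kappa * n_b) ** 2)

def combine(sigmas):
    """Combine statistically independent channels in quadrature."""
    return math.sqrt(sum(s * s for s in sigmas))

# illustrative only: effect of a 10% background systematic on one channel
print(significance(n_s=35.0, n_b=400.0, kappa=0.0))   # statistics only
print(significance(n_s=35.0, n_b=400.0, kappa=0.1))   # with 10% systematics
```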
Ramifications of varying the Higgs self-coupling
Before discussing the contaminations from various BSM scenarios to the standard double Higgs channels, we address the issue of the variation of the Higgs self-coupling from its SM expectation. The Higgs self-coupling in the SM leads to an extremely small di-Higgs rate, and the HL-LHC study by ATLAS [100] predicts a sensitivity of −0.8 < λ hhh /λ SM < 7.7 upon assuming SM-like values for the remaining couplings. In this regard, we must be wary of the differences in the kinematic distributions upon changing λ hhh , because a change in λ hhh modifies the magnitude of the destructive interference with the SM box diagram, as we shall see below. This not only modifies the rate of double Higgs production, but also alters the kinematics significantly. For the present study, we will consider the following values of λ hhh /λ SM , viz., -1, 1, 2, 5 and 7. Because we have seen that the bbγγ channel is the most sensitive channel for di-Higgs studies at the HL-LHC, we will restrict the anomalous self-coupling study to this channel only. Hence, referring to section 2.1, we proceed in the following steps. First, we consider double Higgs production with each of the aforementioned λ hhh values (one at a time) as our signal and pass them through the cut-based analysis which has been optimised (with the cuts listed in Table 1) to maximise the SM (λ hhh /λ SM = 1) signal. Following this, we pass each of the λ hhh samples through the BDT framework optimised for the SM double Higgs production (see Table 4). Thereafter, we train all the samples with an alternative coupling, viz., λ hhh /λ SM = 5. Finally, we train the BDT for each λ hhh point separately and compute the significance. We list the results in Table 21. The cross-sections are for the process pp → hh → bbγγ as a function of λ hhh /λ SM . The efficiencies are computed as the ratios of the final number of events (after the cut-and-count or the multivariate analysis) to the number of generated events. Finally, the yields are given for the signal and background samples for an integrated luminosity of 3 ab −1 . The cut efficiency is maximal for λ hhh /λ SM = 2, where incidentally the cross-section is the smallest. We had already seen that, going from a simple cut-and-count analysis to a BDT analysis rigorously trained to segregate the signal from the background, we gain in significance. This already holds true for the first two sub-tables, with an improvement varying between 13% and 23%. However, when we train the BDT with the corresponding λ hhh samples, the BDT becomes more attuned to the modified kinematic distributions and, in almost all cases, we find an improvement in significance compared to the counterpart where the training was performed with the SM signal sample. We can see the results in the fourth sub-table of Table 21. Also, in order to quantify the difference in distributions as the Higgs trilinear coupling is varied, we show the normalised distributions of the reconstructed Higgs p T in the di-photon channel (p T,γγ ) upon varying λ hhh /λ SM (see Fig. 11). Finally, we employ the log-likelihood CLs hypothesis test [132][133][134] upon assuming the SM (and also λ hhh /λ SM = 5) to be the null hypothesis. We obtain the following ranges of κ = λ hhh /λ SM :
• −0.86 < κ < 7.96 (CBA for κ = 1 optimisation; SM null hypothesis),
• −0.63 < κ < 8.07 (BDT analysis for κ = 1 optimisation; SM null hypothesis),
• −0.81 < κ < 6.06 (BDT analysis for κ = 5 optimisation; SM null hypothesis),
• −1.24 < κ < 6.49 (BDT analysis for κ = 5 optimisation; κ = 5 null hypothesis).
Note that for κ = 1, we come quite close to reproducing the HL-LHC prediction by ATLAS (i.e., −0.8 < λ hhh /λ SM < 7.7) in both the cut-based (CBA) and BDT optimisation procedures. However, κ is an unknown parameter (as the Higgs trilinear coupling has still not been measured) and hence, in principle, should be varied as well. Upon training with a value of κ other than 1, viz., κ = λ hhh /λ SM = 5, a shift in the allowed ranges for κ is obtained, which further depends on the hypothesis chosen. We find a rather stronger upper limit on the allowed range of the trilinear coupling upon training with the λ hhh /λ SM = 5 sample. To conclude this section, we emphasise that we must be prepared to tackle variations of the trilinear coupling from the SM expectation and must be able to distinguish them with the help of various kinematic distributions, up to a certain uncertainty. Table 21: Cross-sections, efficiencies and signal and background yields in the bbγγ channel for the various values of λ hhh /λ SM considered (see text).
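A toy version of such a coupling scan can be set up as a single-bin counting experiment: for each κ hypothesis one compares the expected yield with that of the null hypothesis through a Poisson log-likelihood ratio and applies the asymptotic one-degree-of-freedom χ² threshold. This is only a schematic counting-experiment analogue of the binned CLs procedure used above; the cross-section and efficiency parametrisations below are made up for illustration.

```python
import math

LUMI = 3000.0  # fb^-1

def expected_yield(kappa, xsec_fb, eff, n_bkg):
    """Total expected events (signal + background) for a kappa hypothesis.
    xsec_fb(kappa) and eff(kappa) are hypothetical parametrisations."""
    return LUMI * xsec_fb(kappa) * eff(kappa) + n_bkg

def q_poisson(n_obs, n_hyp):
    """-2 ln(likelihood ratio) for a single-bin Poisson counting experiment."""
    return 2.0 * (n_hyp - n_obs + n_obs * math.log(n_obs / n_hyp))

# toy parametrisations (placeholders): quadratic cross-section in kappa, flat efficiency
xsec = lambda k: 0.12 * (1.0 - 0.8 * k + 0.25 * k * k)   # fb, made up
eff = lambda k: 0.05
N_BKG = 300.0

n_null = expected_yield(1.0, xsec, eff, N_BKG)           # SM (kappa = 1) as the null hypothesis
allowed = [k / 10.0 for k in range(-30, 101)
           if q_poisson(n_null, expected_yield(k / 10.0, xsec, eff, N_BKG)) < 3.84]
print(f"toy 95% CL allowed range: [{min(allowed):.1f}, {max(allowed):.1f}]")
```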
Contaminations to non-resonant di-Higgs processes
Measuring the trilinear Higgs coupling has been the primary focus of all di-Higgs searches. However, as we have seen in detail in the previous section, the SM Higgs pair production cross-section is extremely small, which makes it challenging to look for its signatures even at the HL-LHC. In the previous section, we found that the combined significance upon assuming zero systematic uncertainties is ∼ 2.1σ. Up until now, however, we refrained from introducing any BSM effects. We saw that the number of signal events (or rather the S/B) is small for most of the final states and hence small contributions from any BSM physics can potentially distort or contaminate the signal. Statistically significant deviations from the expected SM di-Higgs yields may be considered as signatures of new physics. On the one hand, such deviations can be attributed solely to modifications in λ hhh or y t with respect to their SM values. On the other hand, markedly different new physics processes can also be responsible for a modification of the event rate in a particular production mode. Having performed boosted decision tree analyses designed solely to maximise the SM di-Higgs yield, a fair question to ask at this stage is whether any new physics can at all mimic the SM signatures. The answer is twofold. If the primary discriminatory kinematic variables of the new physics scenario in question overlap with their SM counterparts to a good degree, then there is a good chance of the new physics mimicking the SM signal. Secondly, even if the overlap is not significant, the largeness of the new physics cross-section may determine the degree of contamination. The purpose of this section is to study some such imposters ensuing from various well-motivated new physics scenarios which may potentially contaminate the non-resonant SM Higgs pair event yields in various final states. We will study the extent of these contaminations upon considering various benchmark scenarios. We will also identify correlated channels in the course of extracting the effects of contamination; by correlation we simply mean that some search channels for the non-resonant di-Higgs searches will allow for more contaminating new physics scenarios than others. Broadly, the following are the three scenarios which can contaminate the non-resonant Higgs pair production in certain final states:
• Double Higgs production, pp → hh(+X), through resonant or non-resonant production modes,
• Single Higgs production in association with some other particles, pp → h + X, and
• Null Higgs scenario, pp → X,
yielding some of the final states discussed in section 2, where X is an object or a group of objects not coming from an SM Higgs boson decay. In the following subsections we detail these three broad scenarios, citing examples from specific new physics models.
The hh(+X) channels
Several extensions of the SM, primarily those with an extended Higgs sector, may significantly enhance the Higgs pair production cross section and may also alter the kinematics of certain observables. More specifically, two Higgs doublet models (2HDM) [30,32] and complex scalar extensions [63,64,88] are some prime examples. In type-II 2HDM scenarios, which can be embedded in the MSSM, there is a CP-even Higgs, a CP-odd Higgs and two charged Higgs bosons on top of the SM-like Higgs with m h = 125 GeV. The SM-like Higgs pair can be produced from the decay of the heavy CP-even Higgs boson, H. The couplings of the various Higgses in 2HDM scenarios depend mainly on the Higgs mixing parameter α and on tan β, the ratio of the vacuum expectation values (vevs) of the two Higgs doublets. In order to abide by the LHC results and constraints pertaining to the discovered scalar at ∼ 125 GeV, one has to invoke the so-called alignment limit, where the lightest CP-even Higgs aligns itself with the SM-like Higgs, having couplings close to the SM predictions. The allowed masses of the pseudo-scalar (A) and the CP-even heavy Higgs lie in the range of a few hundred GeV. In the low tan β regime, the rate for the heavier CP-even Higgs decaying to a pair of SM-like Higgs bosons can become significant and may even surpass the SM di-Higgs cross-section [30,32]. The resonant production of a heavy CP-even Higgs can, in principle, contaminate the SM di-Higgs signal, thus affecting the measurement of the Higgs self-coupling. In particular, the low tan β region can affect the Higgs trilinear coupling measurement. For large tan β, the H → bb and H → τ τ modes become dominant, as the corresponding couplings scale as m b (m τ ) × tan β. Hence, we do not concern ourselves with the large tan β regime. We must also note that the high tan β-low m A regions are excluded [135].
In order to study the contamination from the process pp → H → hh, we generate the signal samples in Pythia-6 and demand a narrow width for H, i.e., in the GeV range, smaller than the detector resolution. The results are shown in Fig. 12 as upper limits on the cross-section of pp → H times the branching ratio of H → hh, viz., σ(pp → H → hh), as functions of the heavy Higgs mass, m H. We present the results in a somewhat model-independent fashion: the effects of tan β or any other theory parameter can be imagined to have been absorbed into the upper limit on the cross-section. The green (blue) region signifies the upper limit on the cross-section required to contaminate the SM yield at 2σ (5σ), where the cross-section upper limits are derived from the requirement that the new-physics yield reach S UL NP, the yield corresponding to an N σ excess over a background which includes the SM di-Higgs contribution as well. The grey region is the part of the new physics parameter space which does not contaminate the SM expectations. As we know, the invariant mass of the SM di-Higgs system peaks around 400 GeV, and hence, because of our robust BDT optimisation, which captures the shape of the non-resonant SM observables to a very precise degree, a heavy Higgs boson of mass m H ≲ 400 GeV gets literally treated as a background. Hence, as seen in Fig. 12, one requires larger cross-sections for m H ≲ 400 GeV in order to contaminate the SM signal even at the 95% confidence level. We see that the strongest bound on the upper limit on σ(pp → H → hh) comes from the bbγγ channel: the upper limit varies between 76 fb and 25 fb for m H between 400 GeV and 650 GeV. This is followed by bbτ + τ −, where we find the 2σ upper limit on the cross-section varying between 170 fb and 83 fb for the same mass range. The limit is also considerably strong in the fully leptonic decay of bbW W *, varying between 228 fb and 40 fb for m H between 450 GeV and 650 GeV. The upper limits from the W W * γγ channels are fairly strong as well: the 2σ upper limit plateaus between 129 fb and 282 fb for the fully leptonic case. Bounds from the other modes, especially from the 4W modes, are much weaker. Hence, the channels where we obtained the best S/√B values give the strongest bounds on the upper limits of the cross-section; thus, for the best optimised modes, smaller cross-sections from heavy Higgs production suffice to contaminate the non-resonant Higgs pair production. We emphasise once again that our BDT optimisation was done solely for the SM non-resonant Higgs pair production modes, and this subsection only shows the effects of the new physics contamination on the SM signal. In order to search for such a resonance, one needs to redo the optimisation upon treating it as the signal; this will be the subject matter of our forthcoming work. To summarise this part, we find that a cross-section of order 100 fb for a resonant Higgs of mass ≳ 400 GeV will contaminate the SM di-Higgs expectation at the level of at least 2σ.
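As a rough illustration of how such contamination thresholds can be obtained, the sketch below inverts a simple S/√B criterion: given the total background yield after the BDT cut (which already contains the SM di-Higgs contribution) and an assumed branching fraction and selection efficiency for the heavy resonance, it returns the production cross-section needed to shift the yield by N σ. All numerical inputs here are placeholders chosen only to give a plausible order of magnitude, not the values used in the analysis.

```python
import math

def contamination_xsec_limit(n_sigma, bkg_yield, branching, efficiency, lumi_fb):
    """Cross-section (fb) of pp -> H -> hh needed for an N-sigma excess over a
    background that already contains the SM hh contribution, using the simple
    criterion S_NP / sqrt(B) >= N."""
    s_needed = n_sigma * math.sqrt(bkg_yield)             # required NP event yield after all cuts
    return s_needed / (branching * efficiency * lumi_fb)  # sigma = S / (BR x eff x L)

# Placeholder inputs for illustration only (not the numbers of this analysis):
# 3 ab^-1, 20 background events after the BDT cut, BR(hh -> bb gamma gamma) ~ 0.26%,
# 10% selection-times-BDT efficiency for the heavy resonance.
for n in (2, 5):
    print(n, "sigma:", round(contamination_xsec_limit(n, 20.0, 0.0026, 0.10, 3000.0), 1), "fb")
```

With these assumed inputs the 2σ (5σ) thresholds come out at roughly 12 fb (29 fb), i.e. of the same order as the limits quoted above, which is all this back-of-the-envelope exercise is meant to show.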
Similarly, Higgs pair production in supersymmetric models [38,60,62,101] is also very well motivated. To put things into perspective, in this work we restrict ourselves to the MSSM, which predicts a supersymmetric partner for each SM particle and also requires two Higgs doublets. The decays of some of the supersymmetric scalar particles result in the SM-like Higgs along with their fermionic counterparts. The processes which can contaminate the di-Higgs search channels, other than the heavy Higgs resonance mentioned above, come from squark (anti-squark) pair production. Although the LHC has already imposed stringent bounds on the first and second generation squark masses, viz., ≥ O(TeV), this particular channel can still attain sizeable cross-sections owing to the strong coupling and the contribution from each light flavour. We choose a benchmark point (BP1) to study squark pair production (q̃ L q̃ L, q̃ L q̃ * L, q̃ * L q̃ * L) followed by the subsequent decay of each squark to a light quark and a Higgs boson accompanied by χ 0 1. This yields a final state of hh + / E T + jets. In Table 22, we list three benchmark points which are still allowed by all experimental constraints, particularly the constraints coming from the Higgs mass and coupling measurements; the first of these is relevant for our discussion in this subsection. The common parameters for the three benchmark points are M A = 1000 GeV, tan β = 10 and A t = 2500 GeV. From BP1, we see that the cross-section of hh + X is ∼ 10.8 fb, which is less than a third of the SM expectation. Moreover, we find that the / E T distribution from squark pair production is significantly different from that of the signal as well as of the backgrounds, as shown in Fig. 13. After applying the BDT cuts for the bbγγ analysis, we are left with ∼ 0.60 events, which is much smaller than the SM expectation and not statistically significant. Hence, in order to minimise the contamination of the bbγγ final state ensuing from SM di-Higgs production, one may impose certain exclusive cuts, especially on the / E T distribution; this will help reduce new physics contaminations with large / E T. Moreover, for certain SUSY scenarios, we may have cascade decays giving rise to multiple jets. Hence, the cut N j < 6 can come in handy to reduce such backgrounds, and we may also need to optimise this cut further in order to reduce such contamination effects. In other words, removing contamination effects can be tricky and somewhat model dependent if we are studying inclusive final states.
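A minimal sketch of the kind of exclusive selection suggested above: on top of the BDT-based bbγγ selection, one could veto events with large missing transverse energy or high jet multiplicity, which suppresses squark-cascade contamination while leaving the SM di-Higgs signal (which has little genuine / E T) largely untouched. The event container and the threshold values below are illustrative assumptions, not the optimised cuts of the analysis.

```python
def passes_exclusive_veto(event, met_max=100.0, njet_max=5):
    """Reject events that look like a SUSY cascade rather than SM hh production.

    event: dict with 'met' (missing E_T in GeV) and 'njets' (jets above threshold).
    Thresholds are illustrative placeholders, not optimised values.
    """
    if event["met"] > met_max:      # squark cascades produce hard neutralinos -> large MET
        return False
    if event["njets"] > njet_max:   # long cascades give extra jets (cf. the N_j < 6 cut)
        return False
    return True

sm_like   = {"met": 35.0,  "njets": 4}  # typical SM hh -> bb gamma gamma event
susy_like = {"met": 240.0, "njets": 7}  # typical squark-pair cascade event
print(passes_exclusive_veto(sm_like), passes_exclusive_veto(susy_like))  # True False
```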
The h + X channels
In the previous subsection, we saw that heavy resonance production, as well as di-Higgs production ensuing from the subsequent decays of a pair of (anti-)squarks, can potentially contaminate all the SM di-Higgs search channels studied in section 2. In this subsection, we look into two specific candidates which contaminate some di-Higgs final states and not others. If, after the HL-LHC run, one finds excesses in certain di-Higgs-like final states and not in others, then it might be possible to narrow down the new physics possibilities to a greater degree.
In 2HDMs, resonant production of the pseudoscalar Higgs, viz., pp → A → Zh followed by Z and h decaying to all possible final states, can, in principle, imitate various final states, as shown in Fig. 14. The decay rate of the pseudoscalar, A → Zh, is appreciable for m A below the tt threshold and for low values of tan β (≲ 5). The upper limits on the cross-sections are weaker than those from the resonant scalar production. One of the strongest bounds arises from bbγγ, varying from 330 fb (450 GeV) to around 197 fb (650 GeV). The strongest upper limits, however, come from the bbτ + τ − search, varying between 292 fb and 186 fb in the aforementioned mass range. For the di-leptonic bbW W * channel, the bound strengthens from 1236 fb at m A = 400 GeV to ∼ 110 fb at m A = 650 GeV. From the final state tailored for the 3ℓ mode coming from the 4W scenario, the 2σ upper limit varies between 555 fb (400 GeV) and around 341 fb (650 GeV). The upper limits on the cross-section required for contamination from the remaining final states are rather weak. In summary, the A → Zh channel contaminates in a slightly weaker fashion than the H → hh channel. One of the possible reasons is that the reconstructed Z peak is shifted from the reconstructed Higgs peak, as m bb serves as an important discriminatory variable in all the searches involving a b-jet pair. Hence, a larger cross-section is required here in order to contaminate the SM di-Higgs channels to the same degree as in the H → hh channel. As an aside, we mention that the process pp → Ah may also potentially contaminate the same final states as the A → Zh case; we do not, however, consider the details of this channel, for brevity.
As an extended scenario, we now shift our focus to supersymmetry. In the MSSM, electroweakino pair production often results in mono-Higgs type signals. The LHC has come down heavily on such SUSY scenarios, constraining much of the parameter space; the bounds on squark and gluino masses have already surpassed a TeV. In this situation, the observation of a SUSY signature will rely heavily on its electroweak sector, composed of charginos (χ ± i) and neutralinos (χ 0 j). In the presence of a decoupled Higgs sector, chargino-neutralino pair production is mediated through the W-boson propagator, with the W ± χ ∓ χ 0 1 coupling containing terms which depend on both the wino and the higgsino components of the electroweakinos involved; the contributions from the wino components, however, dominate over those from the higgsino terms. ATLAS and CMS have performed searches for chargino-neutralino pair production in the 3ℓ + / E T and the same-flavour opposite-sign 2ℓ + / E T final states for a non-generic scenario where both χ ± 1 and χ 0 2 are dominantly wino-like and mass degenerate, and have obtained correlated bounds on the masses of the LSP and NLSP [136][137][138][139] (much stronger limits have been obtained from the 13 TeV results in separate final states involving τ-leptons [140]; we do not, however, consider these limits in the present work). We carefully select a benchmark point (BP2) where the wino mass parameter, M 2, is much smaller than the higgsino mass parameter, µ, making the lightest chargino and the second-lightest neutralino wino-like. A wino-dominated χ 0 2 and χ ± 1 yields a much larger cross-section for the process pp → χ 0 2 χ ± 1 than other electroweakino production processes, for example χ 0 2 pair production, and hence we do not consider the latter, although they can, in principle, also contribute. This benchmark point is listed in Table 22 and lies marginally outside the projected exclusion obtained by ATLAS for the HL-LHC [141]. In this parameter space, χ 0 2 dominantly decays to hχ 0 1, while χ ± 1 has a 100% branching ratio to W ± χ 0 1. This essentially produces a W h + / E T final state with a cross-section of ∼ 400 fb, thus generating h + X signatures. Hence, the W h + / E T final state from chargino-neutralino pair production can modestly contaminate some of the di-Higgs search channels, viz., bbW W * → bb ℓ jj + / E T, γγW W * → γγ ℓ jj + / E T, 4W → ℓ ± ℓ ± jjjj + / E T and 3ℓ jj + / E T. In Table 23, we present the event yields for the benchmark point BP2 in three of the concerned di-Higgs channels, corresponding to the most optimised BDT score obtained for the non-resonant SM di-Higgs searches. We find that the contaminations are large in these channels, reminding us that a possible future observation of a significant number of events in these channels must be treated carefully. We also mention that the SM di-Higgs expectations in these channels are insignificant, leading to negligible signal-over-background ratios; thus, observation of a significant number of events over and above the SM backgrounds can be a potential signature of new physics.
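To give a feel for why a ∼ 400 fb electroweakino cross-section can matter even after a tight BDT cut, the snippet below propagates a cross-section through luminosity, branching fractions and an assumed selection-times-BDT efficiency to an expected event count. The branching and efficiency numbers are placeholders for illustration, not the values underlying Table 23.

```python
def expected_events(xsec_fb, lumi_fb, branching, efficiency):
    """Expected contaminating events = sigma * L * BR * (selection x BDT efficiency)."""
    return xsec_fb * lumi_fb * branching * efficiency

# Illustrative numbers only: pp -> chi2^0 chi1^+- with sigma ~ 400 fb at 3 ab^-1,
# the relevant W and h decay branchings lumped into 'branching' (assumed 2%),
# and an assumed 0.1% combined acceptance after the BDT cut.
n_contam = expected_events(400.0, 3000.0, 0.02, 1e-3)
print(f"expected contaminating events: {n_contam:.1f}")  # ~ a few tens of events
```

Even with such small assumed acceptances, the raw rate is large enough that tens of events can survive, which is why channels with a negligible SM di-Higgs expectation are the ones most easily contaminated.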
Null Higgs channels
Before closing this section, we discuss the final category of potential contaminants, viz., the ones with no SM-like Higgs boson in the production or decay modes. We start by revisiting the classic heavy resonant (pseudo-)scalar production. This (pseudo-)scalar is dominantly produced via gluon fusion and, when its mass is greater than the tt threshold, it can decay to a pair of top quarks, with a branching ratio depending on the H(A)tt Yukawa coupling. This channel can potentially contaminate the bbτ + τ − and bbW W * channels. We find from Fig. 15 that the upper limit on the cross-section times branching ratio, σ(pp → H(A) → tt), from the relatively clean bbW W * → bb ℓ + ℓ − + / E T channel is visibly weak. The semi-leptonic decay mode, viz., bbW W * → bb ℓ + / E T + jj, gives slightly stronger 2σ upper limits on the contamination cross-section, varying between ∼ 1.2 pb (m H = 500 GeV) and ∼ 0.5 pb (m H = 650 GeV). The upper limits from bbτ + τ − also do not fare well. Hence, the H → tt channel does not contaminate the SM di-Higgs channels to any considerable degree. One of the prime reasons is the fact that the BDT variable m bb is strongly discriminating, peaking at the SM-like Higgs boson mass for the non-resonant Higgs pair production, with the b-quark pair from the tt mode having a distinctly different shape, as shown in Fig. 16. Hence, one would require a very large production cross-section for the heavy resonant scalar in order to contaminate the SM signature significantly.
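The role of m bb as a discriminant can be illustrated in a few lines: pair the two b-tagged jets, compute their invariant mass, and keep a window around m h ≈ 125 GeV; b-jet pairs from tt decays populate a much broader distribution and are largely rejected. The four-vectors below are simple (E, px, py, pz) tuples with invented values, and the window edges are illustrative; a real analysis would use the experiment's own event data model and optimised cuts.

```python
import math

def inv_mass(p1, p2):
    """Invariant mass of two four-vectors given as (E, px, py, pz) in GeV."""
    e  = p1[0] + p2[0]
    px = p1[1] + p2[1]
    py = p1[2] + p2[2]
    pz = p1[3] + p2[3]
    return math.sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))

def in_higgs_window(b1, b2, m_low=100.0, m_high=140.0):
    """Keep b-jet pairs compatible with h -> bb; tt-bar pairs mostly fall outside."""
    return m_low < inv_mass(b1, b2) < m_high

# toy b-jets (E, px, py, pz) in GeV, illustrative values only
b1 = (80.0,  60.0,  30.0, 40.0)
b2 = (70.0, -50.0, -20.0, 42.0)
print(round(inv_mass(b1, b2), 1), in_higgs_window(b1, b2))
```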
Another interesting category arises in various extensions of the SM involving singly charged Higgs bosons. One can consider a scenario where a singly charged Higgs is produced in association with a top quark and a bottom quark, viz., pp → tbH + /tbH −, and the charged Higgs decays either to τ ν τ or to tb depending on its mass. These channels may contaminate the bbW W * and bbτ + τ − modes. We find from Fig. 17 that the tbtb channel poses the strongest contamination to the bb ℓ jj + / E T final state. The 2σ contamination cross-section for this final state varies between 393 fb (m H + = 250 GeV) and 204 fb (m H + = 650 GeV). The limits from the other channels are weaker. We also note in passing that all the aforementioned processes essentially affect the low tan β region of the parameter space.
As a final example, we study stop pair production, pp → t̃ 1 t̃ * 1, which can potentially mimic some of the di-Higgs signatures. The stop pair-production cross-section is fairly large for stop masses of the order of several hundred GeV. With an appropriate choice of parameters, listed as BP3 in Table 22, t̃ 1 can have a dominant branching ratio to bχ + 1, with χ + 1 eventually decaying to W + χ 0 1. This gives a final state of 2b + 2W + / E T, which potentially affects the hh → bbW W * and hh → bbτ + τ − search channels. We choose BP3 such that the mass difference between t̃ 1 and χ 0 1 is less than the top mass, ensuring the stop decays as t̃ 1 → W bχ 0 1. The final number of events at the HL-LHC for the relevant search channel, bb ℓ jj + / E T, is shown in Table 24. The contamination is found to be of the same order as the SM signal. We also note that the other decay mode of the stop, viz., tχ 0 1, would give rise to a tt + / E T final state, affecting the same channels.
We must stress here that the entire analysis has been performed using boosted decision tree optimisation techniques which have been trained on the SM di-Higgs samples. Hence, the BDT cuts are very efficient in segregating any contamination, i.e., non-SM contributions. If a new physics process is nevertheless able to contaminate, then it must be very efficient in passing all the cuts; this means it must come either with a large production cross-section or with a considerable overlap with the SM kinematic variables. In other words, we can impose stringent bounds on the cross-sections of the various BSM scenarios discussed above which can potentially contribute to the di-Higgs signals. The efficiency of the BDT cuts will, of course, depend on the particular channel considered: the bound on a given BSM scenario can be strong from one channel and not so strong from the rest. It is important to note that there are two quite different ways of interpreting our results. The first case is one where we are already aware of the presence of new physics (through some other channel). In such situations, we want to ask whether that new physics process might contaminate the di-Higgs signal; if so, we get an idea of how large the cross-section for such processes can be and can prepare our strategy accordingly. The second case is similar to our present situation, where we are still looking for new physics. This is a much more complex scenario, as we are looking for new physics in various directions. Our purpose in this work is to classify di-Higgs searches in multiple channels in a model-independent manner so as to extract the best possible information about potential contaminating channels. In this case, we can, at best, put bounds on the cross-sections of new physics scenarios. This gives us an idea of whether the measurement of the Higgs self-coupling is possible and, if so, in which channel to look.
We wish to conclude this section by reiterating our philosophy for the second part of our study with the following observations. In the fortunate case that we discover new physics in the near future, for instance heavy Higgs boson(s) or superpartners of quarks, to name a few, the measurement of λ hhh will be affected by the contamination of the SM channels, as quantified above. In the possible scenario where we have hints of new physics but these remain below the discovery significance, care must also be taken to study the effects of contamination, which can tell us more about the viability of such scenarios. A third possible scenario, which we did not address in the present study, is new physics that only modifies λ hhh; in such a case, we may see no new particles, and only the shapes of the kinematic distributions involving Higgs pair production can shed light on the new physics.
Summary and outlook
In the first part of this work, we evaluated the prospects of di-Higgs searches in numerous well-motivated final states. Optimised cut-based analyses were performed for the bbγγ and bbτ + τ − states. We followed this up with multivariate analyses using the boosted decision tree (BDT) algorithm for the majority of our search channels. The multivariate analyses improved the signal-to-background ratio (S/B) and the overall statistical significance. The bbγγ final state presented itself as the most promising search channel, with a statistical significance of 1.46 (1.76) for the cut-based (multivariate) analysis. The bbτ + τ − channel was studied in the fully hadronic, semi-leptonic and leptonic sub-states. This channel, even with a higher yield than its predecessor, is marred by much larger backgrounds and by our limited ability to reconstruct the τ-pair invariant mass precisely. However, upon employing the collinear mass variable for reconstructing the Higgs decaying to a pair of τs, we finally obtain statistical significances of 0.65 (0.74), 0.44 (0.49) and 0.07 (0.08) for the cut-based (multivariate) analyses in the hadronic, semi-leptonic and leptonic modes, respectively. The signal-to-background ratio improves significantly upon using the collinear mass technique. The bbW W * state in the leptonic final state serves as a clean channel with a moderate S/B and a statistical significance of 0.62; this is the third most important contribution after the bbγγ and the fully hadronic bbτ + τ − channels. The semi-leptonic final state for bbW W * pales in comparison, with a much smaller S/B and a statistical significance of 0.13. Both the leptonic (S/B = 0.40) and semi-leptonic (S/B = 0.11) final states for the W W * γγ channel show great potential for higher-energy and higher-luminosity colliders; the limited design luminosity of the HL-LHC, in addition to the smallness of BR(h → γγ), prevents us from utilising these final states while computing the combined significance. We conclude the first part of this work by considering the SS2ℓ, 3ℓ and 4ℓ final states emerging from the hh → W W * W W * search channel. The tri-leptonic channel yields a statistical significance of 0.20, however with an insignificant S/B. One would require a manifold increase in the production cross-section in these three channels for them to become noteworthy, even at future colliders. For all channels with fewer than 5 signal events, we were unable to define a statistical significance. A combined zero-systematics significance of ∼ 2.1σ was obtained upon combining all the statistically significant channels of the HL-LHC analysis at 14 TeV. The quoted significance values can be severely diluted once systematic uncertainties are taken into account.
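For concreteness, a naive quadrature combination of the individual channel significances quoted above indeed reproduces a number close to 2.1σ. This is only a back-of-the-envelope check, assuming statistically independent channels and zero systematics; it is not necessarily the exact combination procedure used in the analysis.

```python
import math

# Zero-systematics (multivariate) significances of the individual channels, from the text.
channels = {
    "bb_gammagamma (BDT)":     1.76,
    "bb_tautau hadronic":      0.74,
    "bb_tautau semi-leptonic": 0.49,
    "bb_tautau leptonic":      0.08,
    "bb_WW* leptonic":         0.62,
    "bb_WW* semi-leptonic":    0.13,
    "4W tri-leptonic":         0.20,
}

# Assuming independent channels, combine in quadrature.
combined = math.sqrt(sum(z * z for z in channels.values()))
print(f"combined significance ~ {combined:.2f} sigma")   # ~ 2.1 sigma
```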
After this, we studied the importance of considering varying values of the Higgs trilinear coupling and how this affects our conclusions. We trained the boosted decision trees once with the SM sample and then with each of the λ hhh samples, and found that the significance can differ because of differences in the distributions of certain kinematic variables. We faithfully recover the expected exclusion on the Higgs trilinear coupling for the HL-LHC, as computed by ATLAS, upon using a log-likelihood CLs hypothesis test for the λ SM BDT optimisation. Upon changing the training to a different value of λ hhh, and also upon choosing a hypothesis different from that of the SM, we obtain stronger upper limits.
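The exclusion procedure itself can be illustrated with a toy one-bin counting version of a CLs test. The actual analysis works with the full BDT-score information, so the single-bin yields, toy counts and seed below are placeholders used purely to show the mechanics of the method.

```python
import numpy as np

def q_llr(n, s, b):
    """Test statistic -2 ln[ L(n|s+b) / L(n|b) ] for a one-bin counting experiment."""
    return -2.0 * (n * np.log((s + b) / b) - s)

def cls_value(n_obs, s, b, n_toys=50000, seed=1):
    """Toy-MC CLs = CL_{s+b} / CL_b; CLs < 0.05 excludes the signal hypothesis at 95% CL."""
    rng = np.random.default_rng(seed)
    q_obs = q_llr(n_obs, s, b)
    q_sb = q_llr(rng.poisson(s + b, n_toys), s, b)   # toys under the signal+background hypothesis
    q_b  = q_llr(rng.poisson(b, n_toys), s, b)       # toys under the background-only hypothesis
    cl_sb = np.mean(q_sb >= q_obs)                   # data at least as background-like, under s+b
    cl_b  = np.mean(q_b >= q_obs)                    # same, under b only
    return cl_sb / max(cl_b, 1e-12)

# Illustrative: a background-like "observation" n_obs = b with an assumed signal yield s.
print(cls_value(n_obs=500, s=50, b=500))   # ~ 0.03 here, i.e. this s would be excluded
```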
In the final chapter of this work, we analysed some new physics scenarios which may potentially contaminate the SM di-Higgs search channels. We used the same multivariate training and the same cut on the BDT variable for the new physics cases as were obtained for the SM non-resonant di-Higgs searches, in order to estimate the contaminations. Three major contamination scenarios were studied, viz., hh(+X), h + X and X, X being a set of objects not ensuing from the SM-like Higgs, and upper limits on the production cross-sections of heavy scalar (H), pseudoscalar (A) and charged Higgs (H ±) bosons were obtained. In particular, we derived upper limits on σ(pp → H → hh), σ(pp → A → Zh), σ(pp → H → tt) and σ(pp → H + tb → tb(τ ν)tb) for the various search channels. The bbγγ channel emerged as the most sensitive search channel, with results indicating that for m H = 500 GeV, a production cross-section of σ(pp → H → hh) ∼ 36 fb would result in a 2σ level of contamination of the SM search. This is closely followed by the bbτ + τ − channel, which puts an upper limit of 104 fb for the same resonance mass. The leptonic decay mode of bbW W * also presents competitive upper limits, with σ(pp → H → hh) attaining values of ∼ 98 fb at m H = 500 GeV for a 2σ-level contamination. The upper limits from the remaining decay channels are ∼ 5 − 10 times weaker. In the resonant A → Zh search, the bbγγ mode presents the strongest upper limit on the cross-section, at 233 fb for m A = 500 GeV. The bbτ + τ − mode closely follows, with a contaminating cross-section of 238 fb for the same pseudoscalar mass. The di-leptonic final state of the bbW W * channel also imposes upper limits of the same order. Next, we derived upper limits on σ(pp → H → tt), which turned out to be significantly weaker than in the previous scenarios. The 2σ upper limits derived for charged Higgs production exhibit similar results, with the semi-leptonic bbW W * channel offering the best sensitivity, requiring cross-sections of the order of 217 fb for m H + = 500 GeV in the H + → tb mode. The epilogue to this story is provided by the contaminations from various SUSY processes. Here, we chose three experimentally viable benchmark points, optimised for squark pair production, chargino-neutralino pair production and stop pair production, with subsequent cascade decay modes mimicking various di-Higgs final states. Of particular interest is the contribution from χ 0 2 χ ± 1 pair production, which may significantly contaminate the SS2ℓ and 3ℓ final states in the hh → 4W channel, as well as the semi-leptonic decay mode of the bbW W * channel, with event yields much higher than the corresponding SM di-Higgs signal. It is logical to argue that the presence of such SUSY signatures would lead to a clear and strong contamination of these di-Higgs final-state searches, paving an interesting and complicated road ahead for the measurement of the Higgs trilinear coupling.
As seen in this work, the prospects of discovering di-Higgs signals in an SM-like scenario are extremely challenging owing to the smallness of the production cross-section and the overwhelmingly large backgrounds. However, many of the search channels considered should motivate the particle physics community either to aim for higher integrated luminosities, beyond 3 ab −1, or to build higher-energy colliders, viz., 28 TeV/33 TeV and ideally 100 TeV machines. Even in our present setup, the sensitivities can in all probability be improved with a better handle on the backgrounds, either by minimising the uncertainties due to the order of the Monte-Carlo computation or by adopting data-driven background estimates. Besides, there might be novel discriminating variables or boosted techniques which could help in reducing the backgrounds further. We also learnt from this study that di-Higgs search channels may, in principle, be masked by new physics effects. For such scenarios, our multivariate optimisation tries its best to separate the SM signal from the new physics effects; however, in certain cases, due to similarities in the kinematic distributions with their SM counterparts or due to a large cross-section yield, we may have considerable contamination effects. The techniques outlined in this paper can easily be extended and optimised as searches for the various new physics effects listed above.
optimised cut-based analysis. For the three modes, we find the following to be the most optimal cut choices: • τ h τ h : p T,bb > 100 GeV, τ h τ ℓ : p T,bb > 115 GeV and τ ℓ τ ℓ : p T,bb > 140 GeV • τ h τ h : m T2 > 110 GeV, τ h τ ℓ : m T2 > 130 GeV and τ ℓ τ ℓ : m T2 > 120 GeV, leaving 49 background events for the τ h τ h , τ h τ ℓ and τ ℓ τ ℓ cases, respectively. We find a considerable reduction in the backgrounds with respect to the cut-based analysis performed earlier with the m vis τ τ variable. However, the signal yield also falls sharply. Finally, we find S/√B values of 0.21, 0.30 and 0.09 for the three aforementioned cases, respectively. We do not use this variable for a detailed study, as the sharpness of this variable is reduced upon including smearing and other detector effects.
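For reference, m T2 is the minimum, over all ways of splitting the missing transverse momentum between the two (here assumed massless) invisible particles, of the larger of the two transverse masses built from each visible τ system and its invisible partner. The sketch below uses a coarse grid scan instead of a proper minimiser, and the toy momenta and scan range are illustrative only.

```python
import math
import numpy as np

def m_transverse(vis, inv):
    """Transverse mass of a massless visible system 'vis' and invisible 'inv',
    both given as (px, py) in GeV."""
    et_vis = math.hypot(*vis)
    et_inv = math.hypot(*inv)
    return math.sqrt(max(2.0 * (et_vis * et_inv - vis[0] * inv[0] - vis[1] * inv[1]), 0.0))

def mt2(vis1, vis2, met, n_grid=200):
    """Brute-force m_T2: minimise over splittings q + (met - q) of the missing pT."""
    mex, mey = met
    best = float("inf")
    scan = np.linspace(-300.0, 300.0, n_grid)   # GeV range of the scan, illustrative
    for qx in scan:
        for qy in scan:
            q1 = (qx, qy)
            q2 = (mex - qx, mey - qy)
            best = min(best, max(m_transverse(vis1, q1), m_transverse(vis2, q2)))
    return best

# toy visible tau momenta and missing pT (GeV), illustrative only
print(round(mt2((60.0, 10.0), (-40.0, -30.0), (-15.0, 25.0)), 1))
```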
"Physics"
] |
Scientific connotation of the compatibility of traditional Chinese medicine from the perspective of the intestinal flora
Revealing the connotation of the compatibility of Chinese medicines (CM) is a requirement for the modernization of traditional Chinese medicine (TCM). However, no consensus exists on the specific mechanism of traditional Chinese medicine compatibility (TCMC). Many studies have shown that the occurrence and development of diseases and the efficacy of CM are closely related to the intestinal flora (IF), which may provide a new perspective from which to understand TCM theory. From the perspective of the effects of CM on the IF and the regulation of microbial metabolites, this study summarizes the relationship between the changes in the IF before and after the combination of different drugs and the synergistic, toxicity-reducing, and incompatibility effects of drug pairs. These studies show that the effect of a drug pair on the composition of the IF is not a simple superposition of the effects of the two single drugs, and that the drug pair also plays a specific role in regulating the production of intestinal bacterial metabolites; it therefore has a different pharmacodynamic effect, which may provide a perspective from which to clarify the compatibility mechanism. However, research interpreting the scientific connotations of TCMC from the perspective of the IF is still in its infancy and has limitations. Therefore, this study summarizes previous research experience and proposes a deep and systematic study from the perspectives of drug-pair dismantling, the IF, intestinal bacterial metabolites, the organism, and disease, to provide a reference for scientific research on the compatibility mechanism of CM.
Introduction
Many Chinese medicines (CM) are effective but toxic to humans. By combining different CM, adjusting their bias, restraining their toxicity, and taking advantage of the strengths of their efficacy, toxicity can be reduced and effectiveness increased. This combination has been widely recognized by ancient and modern physicians and is a feature of the clinical application of traditional Chinese medicine (TCM). In the compatibility theory of TCM, the mutual interactions between drugs are summarized into seven situations, named "Qi Qing": "Dan Xing," "Xiang Xu," "Xiang Shi," "Xiang Wei," "Xiang Sha," "Xiang Wu," and "Xiang Fan." Disclosing the connotation of traditional Chinese medicine compatibility (TCMC) is required to modernize TCM; however, no consensus exists on its specific mechanism. The inability to scientifically clarify the connotations of compatibility has somewhat limited the development of TCM.
Intestinal flora (IF) has become a popular research topic in recent years. Imbalance of the IF is related not only to intestinal diseases but also to hepatic, cardiovascular, and neurological diseases through the intestine-liver, intestine-heart, and intestine-brain axes (Sampson et al., 2016). With the popularization of gene sequencing technology, several studies have shown that CM can promote beneficial bacteria, inhibit harmful bacteria, and regulate bacterial metabolites, such as bile acids (BAs) and short-chain fatty acids (SCFAs), thus exerting a regulatory effect on the organism (Xu et al., 2017).
The IF is a key link between CM and efficacy. TCM theory states that a healthy human body needs not only to maintain harmony and unity with the external environment but also to maintain the balance of the internal environment. Maintaining the stability of the IF conforms to the concept of "holism" in TCM, and the IF can provide a new perspective for understanding TCM theory. From the perspective of the effect of CM on the IF and the regulation of microbial metabolites, this study summarizes the relationship between changes in the IF and the effects of synergism and toxicity reduction after CM combination, with a view to providing ideas for future systematic studies of the mechanism of CM compatibility.
2 Current situation of research on the relationship between the synergism effect of TCMC and IF

The principle of "Qi Qing" in TCM achieves the result of compatibility and synergy. "Xiang Xu" refers to a combination of drugs with similar properties, which act synergistically and enhance the original efficacy. "Xiang Shi" refers to a combination of drugs that have certain similarities in performance and efficacy, with one drug as the main drug and the other as a supplement to improve the efficacy of the main drug. Both "Xiang Xu" and "Xiang Shi" can play a synergistic role. We believe that the increase in therapeutic efficacy is related to the specific regulation of the IF by the combination of the two drugs. The combination of two drugs may have a stronger regulatory effect on a certain bacterium, so that one plus one is greater than two, or the combination may have a specific regulatory effect on a new bacterium that differs from the effect of either single drug. This may be one way to clarify the synergistic effect of compatibility.
Scutellaria baicalensis Georgi (Lamiaceae; Scutellariae radix) (S. baicalensis) and Coptis chinensis Franch (Ranunculaceae; Coptidis rhizoma) (C. chinensis) compose a classical "drug pair" applied in clinical practice to dispel heat, dryness, and dampness. Hyperglycemia, dyslipidemia, inflammation, and insulin resistance in type 2 diabetes mellitus (T2DM) were ameliorated after oral administration of S. baicalensis and C. chinensis, particularly the combined extract, and the effects of the combined extract were more remarkable than those of the single-drug treatments (Cui et al., 2018). The unique efficacy of S. baicalensis-C. chinensis may be related to the regulation of glucose and lipid metabolism and improvement of the IF (Ding et al., 2019). In vitro experiments showed that single or combined use of S. baicalensis and C. chinensis can promote the growth of the beneficial bacteria Bifidobacteria and Lactobacilli in the intestinal tract of normal and T2DM model rats and inhibit the growth of the harmful bacteria Enterococcus and Enterobacter, and the effect of the drug pair is stronger than that of either single drug (Xu, 2014). Acidic metabolites of beneficial intestinal bacteria, such as Bifidobacteria and Lactobacilli, can reduce the local pH of the intestine and produce substances with broad-spectrum antibacterial effects, thereby improving intestinal function by inhibiting the growth of intestinal and conditional pathogens. This indicates that the combination of S. baicalensis and C. chinensis can have a positive effect on the IF. Liu studied the effects of separate and combined application of S. baicalensis and C. chinensis on ulcerative colitis (UC) induced by dextran sulfate sodium (DSS) in mice, as shown in Figure 1. These results revealed that the combined application of S. baicalensis and C. chinensis significantly relieved colon inflammation in mice. Notably, the protective effects of S. baicalensis and C. chinensis against colon inflammation were weakened when the gut microbiota was partially depleted by an antibiotic mixture. A fecal microbiota transplantation experiment further proved that the therapeutic effects of S. baicalensis and C. chinensis on UC were closely related to the IF. The results of 16S rRNA sequencing showed that the group treated with the combined application of S. baicalensis and C. chinensis exhibited higher intestinal microbial diversity and a different IF composition compared with the single-drug groups; the relative abundance of norank_f_Muribaculaceae increased, and the abundances of Bacteroides, Akkermansia, and Lactobacillus also changed, although the differences were not significant. Correlation analysis showed that the bacterial flora regulated by S. baicalensis and C. chinensis was closely related to inflammatory factors in UC treatment. These results indicate that the therapeutic effect of the combination of S. baicalensis and C. chinensis is better than that of a single drug, which is related to the regulation of the IF and inhibition of inflammation.
S. baicalensis and Sophora japonica Linn (Fabaceae; Sophorae flos) (S. japonica) were originally recorded in Renzhai Zhizhi and are clinically applicable to hypertensive patients with hyperactive liver fire. Guan (Guan et al., 2021) established spontaneously hypertensive rat models to explore the renal protective effect of the combination of S. baicalensis and S. japonica against chronic kidney disease. The results showed that the combination of S. baicalensis and S. japonica significantly ameliorated the severity of renal injury induced by hypertension compared with the single drugs. The antihypertensive and renoprotective effects of S. baicalensis and S. japonica were attenuated after the bacterial flora was disturbed by antibiotics, which indicates that the combination of S. baicalensis and S. japonica plays its therapeutic role by acting on the IF. The regulation of the intestinal microecological balance may be one mechanism of action of S. baicalensis and S. japonica in the treatment of hypertension and renal damage. The regulatory effect of the combination of S. baicalensis and S. japonica on the IF was different from that of the single drugs. Compared with the model group, the diversity of the IF in the combination group increased, and the ratio of Firmicutes to Bacteroidetes (F/B) decreased. Compared with the model group, the relative abundances of Prevotella-9 and Akkermansia were higher in the S. baicalensis group, whereas those of Corynebacterium and Prevotella-9 were increased in the S. japonica group. In the S. baicalensis and S. japonica combination group, the relative abundance of Lactobacillus increased and that of Clostridiales decreased.
Prevotella-9, Lactobacillaceae, and Bifidobacteriaceae are beneficial bacteria. Lactobacillus can reduce the serum cholesterol level of hyperlipidemic rat models by improving the balance of intestinal microorganisms and increasing the intestinal transit time (Xie et al., 2011), and is closely associated with metabolic diseases. Clostridiaceae, an indole-positive group of bacteria, is positively correlated with indole, which has negative effects on the kidney (Niwa, 2013). With an increase in the abundance of dominant bacteria, the intestinal barrier improves, and the change in dominant bacteria reduces indole accumulation, further inhibiting oxidative stress activation in the kidneys. Olfr78 regulates renin secretion and increases blood pressure, whereas activated GPR41 relaxes blood vessels and lowers blood pressure. S. baicalensis and S. japonica increased SCFA production, inhibited the release of inflammatory factors, and regulated blood pressure by decreasing the expression of Olfr78 and increasing that of GPR41, thereby alleviating kidney damage. These results indicate that the hypotensive effect of S. baicalensis and S. japonica in rats may be related to the regulation of the IF and the resulting increase in SCFA levels (Pluznick, 2014).
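As an illustration of the kind of 16S-based comparisons described above, the sketch below computes a Shannon alpha-diversity index and the Firmicutes/Bacteroidetes (F/B) ratio from phylum-level count tables; the counts are invented placeholder numbers, not data from the cited studies.

```python
import math

def shannon_index(counts):
    """Shannon alpha diversity H' = -sum(p_i * ln p_i) from raw taxon counts."""
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values() if c > 0)

def fb_ratio(phylum_counts):
    """Firmicutes/Bacteroidetes ratio, a commonly reported summary of the gut flora."""
    return phylum_counts["Firmicutes"] / phylum_counts["Bacteroidetes"]

# Invented example counts for two groups (model vs. combined treatment):
model_group    = {"Firmicutes": 7200, "Bacteroidetes": 1800, "Proteobacteria": 600, "Verrucomicrobia": 50}
combined_group = {"Firmicutes": 5200, "Bacteroidetes": 3400, "Proteobacteria": 300, "Verrucomicrobia": 400}

for name, counts in [("model", model_group), ("S. baicalensis + S. japonica", combined_group)]:
    print(name, "H' =", round(shannon_index(counts), 2), " F/B =", round(fb_ratio(counts), 2))
```

With these invented counts the combined-treatment group shows a higher diversity and a lower F/B ratio than the model group, mirroring the qualitative pattern reported above.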
Gegen Qinlian Decoction (GQD), derived from the Treatise on Febrile Diseases, is a typical prescription for the clinical treatment of acute enteritis; it is composed of Pueraria montana var. lobata (P. montana), S. baicalensis, C. chinensis, and G. uralensis. A study found that GQD can restore the diversity of the IF and significantly increase the relative abundance of bacteria that generate SCFAs, thus increasing the concentrations of acetic acid, propionic acid, and butyric acid in feces. Increased SCFAs can inhibit the HDAC and NF-κB pathways to alleviate inflammatory reactions in the intestinal mucosa. GQD treatment of diarrhea may therefore act by modulating the gut microbiota and increasing SCFA levels. Chen found that GQD and its different compatibilities had different therapeutic effects on acute enteritis; GQD and the whole prescription without G. uralensis had more obvious anti-inflammatory, mucosal reconstruction, and ulcer repair effects on colon tissue. Based on this difference, Chen analyzed the diversity of the IF. Alpha and beta diversity showed that the IF composition of each group was significantly different. Compared with the model group, GQD and its different compatibilities significantly reduced the relative abundance of Clostridium_sensu_stricto_1, which is associated with intestinal inflammatory diseases. From the results of the GQD-without-S. baicalensis-and-C. chinensis group, it can be seen that the combination of P. montana and G. uralensis increases the abundance of Bacteroidales_S24-7_ukn, which is a beneficial bacterium, and Allobaculum, which is an SCFA-producing bacterium, while the abundance of the pathogenic bacterium Parabacteroides decreases; at the same time, however, the abundance of Desulfovibrio, which is toxic to colon cells, increases. The genomes of Bacteroidales_S24-7_ukn and Akkermansia both encode the ability to produce propionate, and an increase in propionate is closely related to the stability of intestinal inflammation (Borton et al., 2017); Allobaculum can rapidly ferment glucose to produce lactic acid and butyric acid; Parabacteroides, as a pathogen in infectious diseases, can induce inflammation and immune disorders (Larsen, 2017); and Desulfovibrio can damage the intestinal barrier by producing lipopolysaccharides (Beerens and Romond, 1977). These results indicate that the combination of P. montana and G. uralensis can inhibit the occurrence of inflammation and metabolic disorders. The results also showed that the combination of S. baicalensis and C. chinensis increased the relative abundance of the beneficial bacterium Akkermansia and decreased that of the pathogenic bacterium Parabacteroides, indicating that the combination of S. baicalensis and C. chinensis plays an important role in regulating the IF, and that this compatibility could play a positive role in acute enteritis. At the same time, Allobaculum abundance decreased in the S. baicalensis and C. chinensis group. Combining the results of the GQD group, the GQD-without-G. uralensis group, and the GQD-without-P. montana group shows that the compatibility of G. uralensis and P. montana also plays a key role in the regulation of the IF. Therefore, it was concluded that S. baicalensis and C. chinensis are the key components of GQD that regulate the balance of the IF, and that the compatibility of G. uralensis and P. montana enhances this regulation. It can be seen that there is a complex network of relationships between disease, flora, and drugs.
The differences at the gene level between the different administration groups and the model group may be the biological basis for the different effects produced by the different compatibilities of GQD.
Banxia Xiexin Decoction (BXD), derived from the Treatise on Febrile Diseases, is widely used to treat digestive system diseases such as gastritis, enteritis, gastric ulcer, and gastrointestinal dysfunction. The whole prescription can be divided into the "Xinkai" compatibility unit, the combination of Pinellia ternata (Thunb.) Ten. ex Breitenb (Araceae; Pinelliae rhizoma) (P. ternata) and Zingiber officinale Roscoe (Zingiberaceae; Zingiberis rhizoma) (Z. officinale); the "Kujiang" compatibility unit, the combination of S. baicalensis and C. chinensis; and the "Ganbu" compatibility unit, the combination of Panax ginseng C. A. Mey (Araliaceae; Ginseng radix et rhizoma) (P. ginseng), Ziziphus jujuba Mill (Rhamnaceae; Jujubae fructus) (Z. jujuba), and G. uralensis. Previous studies have shown that BXD can reduce intestinal inflammation and treat ulcerative colitis by improving IF imbalance. Studies have also shown that the coordination between the IF, tight junction proteins, and the intestinal mucosal barrier plays an important role in maintaining the steady state of the intestinal barrier. Therefore, Zhang (Dai et al., 2022) proposed that antibiotic exposure leads to IF disorder in young rats, thereby damaging the intestinal mucosal barrier, and that BXD and its different disassembled prescriptions can regulate the IF structure, protect the intestinal mucosal barrier from the pathological damage caused by antibiotic exposure, and improve the immune response. After antibiotic interference, the IF of the young rats changed significantly. After treatment, the difference in the IF between the BXD group and the blank group was significantly reduced, and the recovery effect in the BXD group was the best. Studying the flora composition at the genus level shows that, compared with the model group, the BXD group and the different disassembled formula groups significantly reversed the increase in Klebsiella and Enterobacter abundance caused by modeling, the effect of the "Xinkai" group being the most significant. At the same time, the abundance of Bacteroides and Lactobacillus increased in each treatment group; the increase in Lactobacillus abundance was most significant in the BXD group, while the abundance of Bacteroides was highest in the "Xinkai" and "Ganbu" groups. Enterobacter is a common pathogenic bacterium whose colonization is promoted by host inflammatory reactions, further increasing the severity of intestinal inflammation (Li et al., 2020). Klebsiella is a conditional pathogen that causes respiratory and digestive tract infections. Bacteroides play important roles in intestinal mucosal angiogenesis, intestinal microecological balance, and host immunity. Lactobacillus has beneficial effects on intestinal inflammation, oxidative stress, and the symbiosis of the microbiota (El-Baz et al., 2020). In summary, BXD and its different disassembled decoctions can adjust the IF structure of antibiotic-exposed young rats. Among them, the "Ganbu" and "Xinkai" units play a central role: the "Xinkai" group can effectively reduce the abundance of pathogenic bacteria and has more advantages in regulating the balance of the flora, while the "Ganbu" group can effectively increase the abundance of probiotics. Liang (Liang et al., 2021) studied the effects of BXD and its compatibility units on gastrointestinal bacteria using in vitro antibacterial and bacteriostatic activity tests. Helicobacter pylori infection is closely associated with chronic gastritis and gastric mucosal damage.
The results show that the whole-formula group has good bacteriostatic and bactericidal effects on H. pylori, followed by the "Kujiang" group. BXD and its different compatibility units also inhibit two harmful intestinal bacteria, Enterobacter cloacae and Enterococcus faecalis, to varying degrees, in a dose-dependent manner within a certain concentration range; the antibacterial effects of the BXD group and the "Kujiang" group are the strongest. Therefore, it was speculated that the material basis of the action of BXD against harmful bacteria consists mainly of Z. officinale, S. baicalensis, and C. chinensis. When observing the effect of BXD on beneficial bacteria, it was found that the growth of beneficial bacteria was inhibited in the "Kujiang" group, whereas the growth of Bifidobacterium adolescentis and Lactobacillus acidophilus was promoted in the whole-formula, "Ganbu", and "Xinkai" groups within a certain concentration range. Thus, it is speculated that the "Kujiang" unit in BXD can effectively inhibit the growth of pathogenic bacteria in vitro, while the "Ganbu" unit can promote the proliferation of beneficial bacteria.
Furthermore, many studies have reported on the relationship between the synergistic effects of TCMC and IF, as shown in Table 1.
3 Current situation of research on the relationship between the attenuation effect of TCMC and IF
Reasonable compatibility reduces drug toxicity and expands the scope of clinical application. Although the mechanisms of CM toxicity are very complex, current research shows that the IF is an important factor affecting the toxicity of CM. The principles of "Xiang Wei" and "Xiang Sha" in "Qi Qing" achieve the result of toxicity reduction: "Xiang Wei" refers to a situation in which the toxicity or side effects of one drug can be eliminated by another drug, and "Xiang Sha" refers to one drug alleviating or eliminating the toxicity or side effects of another drug. "Xiang Wei" and "Xiang Sha" thus describe the same phenomenon from two perspectives. We believe that the elimination or alleviation of toxic effects is related to the specific regulation of the IF by the combination of the two drugs. A CM with toxicity or side effects may alter the structure of the IF, reducing the abundance of beneficial bacteria and increasing the abundance of harmful bacteria; after compatibility, the negative effects of the toxic CM on the IF are eliminated, which has a positive effect on the body. The combination of Glycine max (Linn.) Merr (Fabaceae; Sojae Semen Praeparatum) (G. max) and Gardenia jasminoides J. Ellis (Rubiaceae; Gardeniae fructus) (G. jasminoides) comes from the Zhizi Chi Decoction (ZCD) in Zhongjing Zhang's Treatise on Febrile Diseases, a classic prescription for treating insomnia caused by heat stagnation in the chest and diaphragm (Shi et al., 2012). The combination of these two drugs reduces the liver toxicity of G. jasminoides. Luo (Luo et al., 2021) suggested that the improvement by G. max of G. jasminoides-induced liver injury is related to the IF. At the same dose, the hepatotoxicity of ZCD was significantly lower than that of G. jasminoides alone. The IF analysis revealed that G. jasminoides affected the IF composition of mice, reducing the abundance of Lactobacillus and Enterococcus and increasing the abundance of Parasutterella. However, the abundance of the beneficial bacteria Akkermansia and Prevotella increased significantly after G. jasminoides was combined with G. max. Prevotella can promote glycogen storage in the mouse liver and maintain glucose homeostasis in the host (Purushe et al., 2010). In addition, G. jasminoides reduced the level of butyrate in feces, and this was reversed after combination with G. max. When the level of butyrate increases, it plays a protective role in the liver by improving the integrity of the colon and promoting the activation of Nrf2. The combination of G. max and G. jasminoides thus cured G. jasminoides-induced liver injury by regulating the microbiota and promoting butyrate production (Figure 4). In an in vitro study, Chen found that ZCD can maintain the relative balance of the IF better than G. max or G. jasminoides alone. Therefore, G. jasminoides has a negative impact on the IF, and the compatibility of G. max and G. jasminoides can not only benefit the IF but also positively reverse the IF disorder caused by G. jasminoides.
Realgar is a mineral, heavy-metal-containing CM with significant therapeutic effects in the treatment of leukemia and various solid tumors. However, it has several adverse reactions, including intestinal, cardiac, and liver toxicities. The compatibility of Realgar and Salvia miltiorrhiza Bunge (Lamiaceae; Salviae miltiorrhizae radix et rhizoma) (S. miltiorrhiza) was derived from the Compound Huangdai Tablet, which was formulated by Professor Shilin Huang. Clinical practice has confirmed that this treatment for acute promyelocytic leukemia is effective, with a high cure rate and mild adverse reactions. Experiments have shown that the combination of Realgar and S. miltiorrhiza can effectively alleviate the adverse reactions caused by Realgar, such as those involving the heart and liver (Wang et al., 2008). Sun (Sun, 2020) found that Realgar affects the IF composition of normal mice in a dose-dependent manner, reducing the abundance of Firmicutes and Bacteroidetes.

4 Current situation of research on the relationship between the incompatibility effect of CM and IF

"Xiang Wu" and "Xiang Fan" are both contraindicated combinations of TCM. "Xiang Wu" refers to one drug acting in combination with another and resulting in reduced or even lost efficacy. For example, the effects of P. ginseng on promoting energy metabolism and regulating immunity and antioxidation in spleen-qi-deficiency rats were weakened after P. ginseng was combined with Veratrum nigrum L (Melanthiaceae; Veratri nigri radix et rhizoma) (V. nigrum). "Xiang Fan" refers to the occurrence of severe toxic reactions or side effects when two drugs are combined. Chen (Chen Y.Y. et al., 2019) conducted a contraindication evaluation of the compatibility of Daphne genkwa Siebold & Zucc (Thymelaeaceae; Genkwa flos) (D. genkwa) and G. uralensis, and found that the combination of D. genkwa and G. uralensis showed severe liver, kidney, and reproductive organ toxicity in rats.
FIGURE 2: Combined use of A. macrocephala oil or P. ginseng saponins decreases chemotherapy-induced diarrhea in mice by affecting intestinal flora.
FIGURE 3: Therapeutic effect of A. membranaceus polysaccharide combined with C. pilosula polysaccharide on acute colitis mice by acting on intestinal flora (Tang et al., 2021).
Euphorbia kansui T. N. Liou ex S. B. Ho (Euphorbiaceae; Kansui radix) (E. kansui) alone has no obvious toxicity, but it can show toxicity when combined with G. uralensis, and the toxicity increases as the proportion of G. uralensis increases (Juan et al., 2015). The "Shiba Fan" (eighteen incompatibilities), obtained by summarizing such rules, is one of the most representative theories of contraindicated combinations in TCM. Although the "Shiba Fan" of TCM has existed for millennia, many studies of its mechanism have been carried out in recent decades, and the Pharmacopoeia even stipulates that "Shiba Fan" pairs cannot be used together, the specific mechanism of "Fan" has not yet been proved.

TABLE (excerpt): Normal rats: After the combination of periplocin and P. notoginseng saponins, there was no significant difference in the diversity of the flora, but the relative abundance of Bacteroides increased significantly while that of Lactobacillus decreased; the increase in the number of total bacteria and dominant bacteria in the combination group reflects the detoxification effect of P. notoginseng saponins and preliminarily reveals the mechanism of the combination of the two drugs from the perspective of regulating the IF. Normal mice: Compared with the normal group, the ethanol extract of R. aucklandiae has little impact on the IF, whereas C. chinensis alkaloids reduce the diversity of the IF; the combination of different doses of the two drugs significantly increases the diversity and dose-dependently increases the abundance of Rikenellaceae RC9 and Lactobacillus while reducing the abundance of Psychrobacter, Bacteroides, and Ruminococcus. The ethanol extract of R. aucklandiae alleviates the adverse reactions caused by C. chinensis alkaloids by regulating gastrointestinal function, intestinal microbiota composition, and metabolic disorders (Figure 5).

After summarizing previous studies, we believe that the "Xiang Wu" or "Xiang Fan" of two drugs is also related to the regulation of the IF. We speculate that a drug plays its therapeutic role by increasing the abundance of beneficial bacteria and decreasing the abundance of harmful bacteria; however, when combined with another drug, the structure of the IF changes, resulting in reduced or even lost efficacy. This may be a possible mechanism for the effect of "Xiang Wu." The possible mechanism of "Xiang Fan" may be that the compatibility of the two drugs specifically increases the abundance of harmful bacteria and decreases the abundance of beneficial bacteria, so that it manifests as a toxic reaction or side effect. The "Fan" drug combination of G. uralensis and D. genkwa is the representative combination in the "Shiba Fan." Yu found that, compared with the use of G. uralensis or D. genkwa alone, the combination of G. uralensis and D. genkwa significantly changed the IF structure in mice. G. uralensis or D. genkwa used alone changed the abundance of 3 and 2 genera, respectively, whereas the combined use significantly changed the abundance of 13 genera. Among them, the combination specifically increased the abundance of Bacillus and increased the abundance of the H2S-producing Desulfovibrio nine-fold, indicating that the combination of G. uralensis and D. genkwa greatly enhances their ability to alter the IF community structure. Metagenomic prediction analysis showed that hydrogen sulfide metabolism-related genes appeared among the first 20 differential chemical reactions caused by G. uralensis or D. genkwa, and the abundance of these 10 genes further increased in the combined G. uralensis and D. genkwa group. Moreover, measurement of hydrogen sulfide levels in mouse feces and serum showed that the combination of G. uralensis and D. genkwa significantly increased the content of hydrogen sulfide in feces and significantly reduced its concentration in serum, indicating that the combination could disrupt the metabolic balance of hydrogen sulfide in the mouse intestine. The combination of G. uralensis and D. genkwa thus showed obvious negative effects in regulating the IF community structure and hydrogen sulfide metabolism, which may be related to its "increasing toxicity" (Figure 6).
Tao studied the toxicity and side effects of a combination of Euphorbia lathyris L. (Euphorbiaceae; Euphorbiae Semen) (E. lathyris) and G. uralensis in normal mice and found that G. uralensis had no significant impact on the gastrointestinal tract. E. lathyris damages the intestinal mucosa, thereby impairing intestinal barrier function and weakening the regulation of gastrointestinal motility. The combination of G. uralensis and E. lathyris significantly enhanced the damage of E. lathyris to the intestinal mucosa. The results of the intestinal microbial analysis showed that G. uralensis, E. lathyris, and their combination all caused changes in the IF structure. The level of the beneficial bacterium Lactobacillus was significantly reduced after E. lathyris administration, reflecting the intestinal toxicity of E. lathyris. The characteristic differences caused by the combination of G. uralensis and E. lathyris involved Enterococcus, S24_7_ukn, Candidatus Arthromitus, Roseburia, and Erysipelotrichaceae_incertae_sedis. Different bacterial populations with increased abundance were associated with toxicity and side effects to varying degrees. Enterococcus is a common opportunistic pathogen, and S24_7_ukn is one of the main lipopolysaccharide synthesizers in animal intestines; an increase in this bacterium will lead to increased intestinal endotoxin production, thereby disrupting intestinal immune function or damaging the intestinal mucosa (Kang et al., 2017). Erysipelotrichaceae is involved in the pathogenesis of chronic heart failure, and this flora is one of the core bacteria missing in patients with chronic heart failure (Luedde et al., 2017). According to the IF analysis, the combination of G. uralensis and E. lathyris probably aggravates intestinal injury through the abnormal regulation of the IF and its function. The results of the metagenomic analysis showed that the combination of G. uralensis and E. lathyris increased the content of genes related to aromatic amino acid degradation and mucus degradation functions, which was significantly different from the single-use groups. This indicated that the combination of G. uralensis and E.
FIGURE 5
The ethanol extract of R. Aucklandiae alleviates adverse reactions caused by Coptidis alkaloids by regulating the composition of the intestinal microflora (Wang T. et al., 2022).
lathyris changed the regulatory effect of a single drug, resulting in new and harmful regulatory effects, which in turn increased the production of intestinal toxins and other toxic substances, causing or aggravating the risk of disease. Furthermore, many studies have been conducted on the relationship between the incompatibility effect of TCMC and the IF, as shown in Table 3.
5 Relationship between CM, the IF, and the Metabolites of the IF

CM can regulate the abundance of beneficial and harmful bacteria in the IF. For example, polysaccharides account for a high proportion of the components in CM; they can not only change the growth environment of the IF but can also be used as substrates by beneficial bacteria to promote their growth. Organic acids, acting as pH buffers, can maintain the stability of the intestinal pH and provide a suitable environment for the proliferation of beneficial bacteria. In addition, the metabolites produced by beneficial bacteria can indirectly inhibit the growth of harmful bacteria. Some CM can directly inhibit the growth of pathogenic microorganisms, thereby regulating the intestinal microecological balance. Heat-clearing CM has a significant inhibitory effect on harmful bacteria (Xiao et al., 2019). Toxic CM, such as Tripterygium wilfordii, can effectively reduce the numbers of harmful bacteria, including Enterobacteriaceae, Enterococcus, and Bacteroides, in the intestines of UC mice and rats with IgA nephropathy (Ren et al., 2020; Wu et al., 2020). Therefore, CM can alter the metabolic products of the IF by adjusting the overall structure of the flora.
As a bridge between the IF and the body, the main metabolites of the IF are SCFAs. SCFAs are composed of 1-6 carbon atoms and are products of fermentation by the IF; they include acetic, propionic, and butyric acids. The production and consumption of SCFAs are dynamic processes, and their content reflects the activity and population size of the bacteria. SCFAs also affect energy metabolism, mucosal growth, and cell differentiation. SCFAs are not only anti-inflammatory but also reduce the intestinal pH to inhibit harmful bacteria and balance the IF, and they can maintain the balance of water and electrolytes and stimulate the secretion of gastrointestinal hormones. Therefore, SCFAs are closely associated with many diseases, including ulcerative colitis, obesity, diabetes, nonalcoholic fatty liver disease, autism, airway allergic inflammation, and hypertension (Shao et al., 2019). The IF is also involved in bile acid (BA) metabolism. In the liver, cholesterol is converted to primary free BAs through a multistage enzymatic reaction. Primary free BAs are conjugated with taurine and glycine in the liver to form conjugated BAs, which are discharged through the biliary tract into the intestinal tract. Through the deconjugating action of the IF, the taurine or glycine is removed, and the conjugated BAs become secondary BAs. Secondary BAs return to the liver through the portal system to be conjugated again; this is known as the enterohepatic circulation. Various BAs form BA pools in different proportions and act on the host through BA receptors such as the farnesoid X receptor and the G-protein-coupled bile acid receptor, thereby affecting host metabolism, glycolipid metabolism, and energy homeostasis (Guo et al., 2022).
Problems and suggestions for research on the relationship between the IF and CM
The occurrence and development of diseases and the efficacy of CM are closely related to the IF. In summary, we found that the effect of a single drug on the regulation of the IF differs from its effect in combination. The composition of the IF regulated by CM combinations is not a simple superposition of the effects of the two individual drugs; the compatibility of drugs also plays a specific role in regulating intestinal metabolites, thus producing different pharmacodynamic effects. This may be the angle from which the compatibility mechanism can be clarified. At present, research on the intestinal microbiota in TCM is still in its infancy. By summarizing previous research results, we provide suggestions for research on the intestinal microbiota from the perspective of compatibility.
FIGURE 6
The combination of G. uralensis and D. genkwa produces toxic effects and side effects by affecting the intestinal flora.
G. uralensis and E. pekinensis (normal mice): G. uralensis can increase the abundance of the beneficial bacterium Lactobacillus, but this effect is eliminated when it is used with E. pekinensis. E. pekinensis alone reduces the abundance of Akkermansia and Butyricimonas, and combined use increases the inhibition of beneficial bacteria. In addition, the combined use of E. pekinensis and G. uralensis significantly increased the abundance of Streptococcus and Prevotella. The "Fan" of E. pekinensis and G. uralensis is related to energy-metabolism-related effects such as inhibiting beneficial bacteria, promoting the growth of conditionally pathogenic bacteria, inhibiting butyric acid production, and weakening the tricarboxylic acid cycle of the IF (Figure 7).

G. uralensis and E. kansui (Yu et al. (2018); normal mice): the single use of G. uralensis or E. kansui changed the abundance of 1 and 2 genera, respectively, while combined use significantly changed the abundance of 7 genera, with a significant reduction in Prevotellaceae-related genera, a 10-fold increase in the abundance of the H2S-producing Desulfovibrio, and a specific increase in the abundance of Mycoplasma. The combination of G. uralensis and E. kansui damages the IF community structure and its related lipid and hydrogen sulfide metabolism balance, which may pose a threat to human health.

G. uralensis and S. fusiforme: the combination plays an adverse role in the body by regulating the IF to disrupt fructose metabolism, fatty acid metabolism, and selenium compound metabolism.
FIGURE 7
The combination of G. uralensis and E. pekinensis produces toxic effects and side effects by inhibiting beneficial bacteria and promoting the growth of conditionally pathogenic bacteria.
First, when studying the relationship between compatibility and the IF, most studies take drug pairs or whole prescriptions as research objects but do not compare the changes in the IF before and after treatment with the individual components. Such a line of research cannot show that the changes in efficacy produced by the combination are related to the IF, nor can it reveal where the characteristics of the combination lie. Therefore, we suggest that, when studying the relationship between compatibility and the flora, drug pairs or groupings should be studied by splitting the prescriptions. By comparing the changes in the composition and abundance of the IF, the specific flora regulated by the drug pair can be identified, and on this basis the role played by the IF in the treatment of diseases by the drug pair can be further analyzed.
Second, some studies only observed changes in the IF after treatment with a drug combination. Such observations show only a correlation between drug compatibility, the flora, and disease; they lack verification of a causal relationship and cannot support the conclusion that the drugs play a therapeutic role through the action of the flora, which makes them less reliable. Therefore, we suggest that pseudo-sterile animal models created by broad-spectrum antibiotic interference, together with fecal transplantation, be used to study the role of the intestinal flora in the efficacy of drug pairs.

Third, 16S rDNA gene sequencing is currently the most widely used technology in IF research. Although this method overcomes the limitations of traditional culture methods and can provide relative abundances from the phylum to the genus level, it cannot identify changes in the IF at the species level; therefore, it cannot identify the strains and related metabolites specifically regulated by drug compatibility, and the relationship between the flora and compatibility cannot be further verified. Therefore, we suggest the use of macrogenome sequencing. This method can not only provide species-level composition information on the IF but also provide information on gene function, and on this basis the role of the flora in the body can be verified through colonization with specific flora.
Therefore, when studying the relationship between the compatibility mechanism of CM and the IF, we should systematically conduct in-depth research from the perspectives of CM, the IF, intestinal metabolites, and disease.
First, the prescription should be decomposed into its different parts, and an appropriate disease model should be established. The effectiveness of the compatibility can be verified by comparing the efficacy of each component drug with that of the whole prescription. High-throughput sequencing can then be used to compare the composition and abundance of the IF of model animals treated with each drug and with the prescription, identifying the specific bacteria regulated by the drugs. Second, the correlation between the changes in efficacy after compatibility and the specificity of the flora should be studied. Sterile or pseudo-sterile animals treated with antibiotics can be used to observe the correlation between the IF and the occurrence and development of disease, and flora transplantation can be used to verify the therapeutic effect of specific flora on diseases and to test whether the therapeutic effect of compatible drugs can be transmitted through feces. Finally, the modes of action of the specific bacteria on the body should be studied. The IF may play a therapeutic role by directly acting on intestinal tissues (Mai and Draganov, 2009); alternatively, the IF may affect body balance by regulating metabolites. SCFAs formed by the IF can affect energy metabolism, mucosal growth, cell differentiation, and other activities (Shao et al., 2019). Intestinal bacteria also affect BA metabolism and regulate host metabolism, glucose metabolism, lipid metabolism, and energy homeostasis (Thomas et al., 2008). By studying the regulatory effect of compatible drugs on various metabolites after they act on the IF, we can observe the influence of the drugs on the body and thereby clarify the mechanism of drug compatibility.
Summary
Research on intestinal microorganisms is developing rapidly. It provides a new perspective for understanding the laws governing the occurrence of diseases and the mechanisms of drug efficacy, as well as a new angle for clarifying the theory of CM compatibility, and is worthy of in-depth study. This paper summarizes the relationship between changes in the IF and its metabolites after CM combination and the synergism, toxicity reduction, and toxicity enhancement produced by CM compatibility. These studies show that the special
| 8,995 | 2023-07-19T00:00:00.000 | [
"Biology"
] |
Multivariate linear regression model-based response prediction under unknown uncorrelated multi-source loads
To predict responses under unknown uncorrelated multi-source loads, a new frequency-domain response prediction method is proposed. The algorithm requires no transfer functions and directly seeks the inner link between the known responses and the unknown responses. In the multivariate linear regression model, the vibration data of the known measuring points are used as input and the vibration data of the unknown measuring points are used as output, and the model parameters are solved from historical training data by the least-squares generalized inverse. Experimental verification with acoustic and vibration sources on a cylindrical shell showed that the proposed approach can predict vibration responses effectively and satisfy industrial requirements.
Introduction
Excessive structural vibration response is one of the main causes of mechanical damage. In the frequency domain, there are two common methods to predict response: the finite element method and transfer-function-based methods, and the use of transfer functions together with loads to predict responses is widespread [1, 2]. Wu Z. et al. [3] proposed a vibration prediction method based on the soil frequency response function in the frequency domain. Combining experimental methods with the finite element method is now very popular; Somashekar V. N. et al. [4] proposed a method based on an experimentally validated finite element model to predict the vibration response of printed circuit boards. However, both of these methods can predict the response only when the multi-source loads are known. To predict unknown responses under unknown uncorrelated multi-source loads, Mao W. et al. [5] used a multiple-input multiple-output SVM model for load identification, and Wang et al. [6] used the least-squares generalized inverse to estimate uncorrelated multi-source loads in the frequency domain. Based on this research, this paper discusses how to use the least-squares generalized inverse to predict unknown responses directly.
The theoretical basis of the dynamics of loads and responses in the frequency domain

2.1. Problem description of response prediction
The question is that only part of the vibration responses at the measurement points are known. The responses can be classified into known responses and unknown responses, so the $n$ response points can be classified into $m$ known response points and $k$ unknown response points ($m + k = n$): the known response points $i = 1, 2, \dots, m$ and the unknown response points $i = m+1, \dots, m+h, \dots, m+k$, where $h$ runs from 1 to $k$. The vibration responses at the unknown measurement points should be predicted from the vibration responses at the known measurement points without knowledge of the uncorrelated multi-source loads.
Linear relationship between responses and uncorrelated multi-source loads
In the frequency domain, when uncorrelated excitations act on a linear time-invariant system and multiple responses of the system are obtained, the relationship between the multiple inputs and multiple outputs is linear [9]. The vibration responses of a linear time-invariant (LTI) system at the $n$ measurement points under $q$ uncorrelated multi-source loads satisfy the following equation [6]:

$$Y(\omega) = H(\omega) F(\omega), \qquad (1)$$

where $Y(\omega)$ is the $n \times 1$ vector of response spectra, $F(\omega)$ is the $q \times 1$ vector of load spectra, and $H(\omega)$ is the $n \times q$ matrix of transfer-function moduli. The vibration responses of the known measurement points must be used to identify the uncorrelated multi-source loads; let $Y_K(\omega)$ collect the $m$ known responses and $H_K(\omega)$ denote the corresponding $m \times q$ block of $H(\omega)$. Like Eq. (1), the matrix form is:

$$Y_K(\omega) = H_K(\omega) F(\omega). \qquad (2)$$
Procedure for response prediction from known responses under unknown uncorrelated multi-source loads, with knowledge of the transfer functions
Step 1): Obtain the transfer-function moduli $H(\omega)$ of the linear time-invariant system from the uncorrelated multi-source loads to the known and unknown measurement points. The transfer functions can be obtained in two ways [7, 8].
Step 2): Use the transfer-function moduli $H_K(\omega)$ from the uncorrelated multi-source loads to the known measurement points, together with the known responses $Y_K(\omega)$, to identify the uncorrelated multi-source loads $F(\omega)$ by the least-squares generalized matrix inverse method. The loads can be identified from the system's transfer functions from all loads to all known measurement points and the vibration responses of the known measurement points [9]:

$$\hat{F}(\omega) = H_K^{+}(\omega)\, Y_K(\omega), \qquad (3)$$

where $H_K^{+}(\omega) \triangleq [H_K^{H}(\omega) H_K(\omega)]^{-1} H_K^{H}(\omega)$ is the least-squares generalized inverse of $H_K(\omega)$.
Step 3): Predict the vibration responses of the unknown measurement points from the transfer functions from the uncorrelated multi-source loads to the unknown measurement points and the identified loads $\hat{F}(\omega)$.
After identifying the uncorrelated multi-source loads, the vibration responses of the unknown measurement points can be predicted by Eq. (4):

$$\hat{Y}_U(\omega) = H_U(\omega)\, \hat{F}(\omega). \qquad (4)$$
Derivation of the linear relationship between the known responses and the unknown responses
After plugging the identified uncorrelated multi-source loads of Eq. (3) into Eq. (4), the vibration responses of the unknown measurement points can be predicted from the known responses:

$$\hat{Y}_U(\omega) = H_U(\omega)\, H_K^{+}(\omega)\, Y_K(\omega). \qquad (5)$$

Let

$$T(\omega) = H_U(\omega)\, H_K^{+}(\omega),$$

so Eq. (6) can be obtained from Eq. (5):

$$\hat{Y}_U(\omega) = T(\omega)\, Y_K(\omega). \qquad (6)$$

At this point, the linear relationship between the known responses and the unknown responses of the LTI system in the frequency domain is clear. Eq. (7) can be obtained from Eq. (6); for row $h$ of Eq. (6), $h = 1, 2, \dots, k$:

$$\hat{y}_{m+h}(\omega) = \sum_{i=1}^{m} T_{h,i}(\omega)\, y_i(\omega). \qquad (7)$$

Through $p$ independent experiments, $j = 1, 2, \dots, p$, a matrix equation is obtained by stacking Eq. (7) over the experiments:

$$b_h(\omega) = A(\omega)\, T_h(\omega), \qquad (8)$$

where $A(\omega)$ is the $p \times m$ matrix of known responses over the training experiments, $b_h(\omega)$ is the corresponding $p \times 1$ vector for unknown point $h$, and $T_h(\omega) = [T_{h,1}(\omega), \dots, T_{h,i}(\omega), \dots, T_{h,m}(\omega)]^{T}$. The coefficients $T_{h,1}(\omega), \dots, T_{h,m}(\omega)$ can be obtained only when $p \ge m$, and the least-squares solution is:

$$T_h(\omega) = [A^{H}(\omega) A(\omega)]^{-1} A^{H}(\omega)\, b_h(\omega). \qquad (9)$$

Likewise, all rows of $T(\omega)$ can be solved in the same way, $h = 1, 2, \dots, k$. Then, the vibration responses of the unknown measurement points can be predicted from the vibrations of the known measurement points through Eq. (10):

$$\hat{Y}_U(\omega) = T(\omega)\, Y_K(\omega). \qquad (10)$$

Since the linear relationship between the known responses and the unknown responses is clear, a multivariate linear regression model can be constructed and solved by the least-squares method without knowledge of the transfer functions.
Procedure for predicting unknown responses directly from known responses
Step 1): Training model construction. In the training model, the responses are classified into two parts: the known responses and the unknown responses.
Step 2): Solve the linear relationship matrix $T(\omega)$ between the known responses and the unknown responses. Since the relationship between the known and unknown responses is linear, the least-squares method is used to obtain the linear relation matrix $T(\omega)$ through Eq. (9).
Step 3): After obtaining the inner-link matrix $T(\omega)$, the unknown responses can be predicted from the known responses using Eq. (10). Thus, although the uncorrelated loads and transfer functions are unknown, this method needs no information about the multi-source loads.
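The following NumPy sketch illustrates Steps 1)-3) under assumed conventions (the function names, array shapes, and the use of real-valued magnitude spectra are our assumptions, not the paper's): the relation matrix is fitted per frequency line from the $p$ training experiments and then applied to new known-point data.

```python
import numpy as np

def fit_response_map(known_train, unknown_train):
    """Fit the linear relation matrix T(w) between known and unknown
    responses, one least-squares problem per frequency line.

    known_train:   (p, m, n_freq) -- p training experiments, m known points.
    unknown_train: (p, k, n_freq) -- k unknown points.
    Requires p >= m so each least-squares problem is determined.
    Returns T with shape (n_freq, k, m).
    """
    p, m, n_freq = known_train.shape
    k = unknown_train.shape[1]
    T = np.empty((n_freq, k, m))
    for f in range(n_freq):
        A = known_train[:, :, f]                 # (p, m)
        B = unknown_train[:, :, f]               # (p, k)
        # Least-squares generalized inverse: solve A X = B for X (m x k).
        T[f] = np.linalg.lstsq(A, B, rcond=None)[0].T
    return T

def predict_unknown(T, known_test):
    """Predict unknown-point responses: known_test (m, n_freq) -> (k, n_freq)."""
    n_freq = known_test.shape[1]
    return np.stack([T[f] @ known_test[:, f] for f in range(n_freq)], axis=1)
```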
Acoustic and vibration experiment devices
The experimental devices are shown in Figs. 1-3. Inside the cylindrical shell, there is a reverberant acoustic excitation, as shown in Fig. 1. Outside, there is a vibration excitation from a vibration table, and sensors record the vibration excitation, the external acoustic excitation, and the vibration responses of the surface and the inner device, as shown in Fig. 2. The arrangement of the measuring points is shown in Fig. 3. In the experiments, there are only two independent excitation sources and 18 sensors measuring the vibration response data, of which some data are known and the rest are unknown; the task is to use the known vibration data to predict the unknown response data. In the experiment, there were 15 combined acoustic and vibration excitations of gradually increasing magnitude. Of the 15 excitation pairs, 14 pairs were chosen for training and one pair was left for testing, so the number of independent experiments $p$ equals 14. To satisfy the constraint of the applicable scope ($p \ge m$), we set $n$ equal to 9 (the 9 points on the upper part of the cylinder), and when predicting multiple responses at a time, we set the number of known points to 7 and the number of unknown points to 2.
Experimental evaluation index
To validate the correctness and precision of this method, the predicted responses must be compared with the real responses; a 3 dB relative error is widely used in engineering practice. Given a predicted response $\hat{y}$ and the real response $y$, the 3 dB relative error condition is:

$$10 \log_{10} \frac{\hat{y}}{y} \le 3 \quad \text{and} \quad 10 \log_{10} \frac{y}{\hat{y}} \le 3. \qquad (11)$$
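A small helper makes the criterion concrete; it assumes pred and real are positive, energy-like quantities (the names are illustrative):

```python
import numpy as np

def within_3db(pred, real):
    """True where the predicted and real responses differ by <= 3 dB."""
    ratio_db = 10.0 * np.log10(np.asarray(pred) / np.asarray(real))
    return np.abs(ratio_db) <= 3.0
```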
Experiment results
To validate the correctness and precision of the proposed method, the chosen unknown responses are compared with the real responses. Because there are 15 experiment pairs in total and 9 response points, for predicting one point there are 15 × 9 = 135 conditions in which to compare the real responses with the predicted responses. For brevity, we present the results of only one pair for comparison. Table 1 shows, when one point's response is unknown, the comparison of the predicted response with the real response; t stands for the number of the unknown point.
When more than one response point is predicted, there are many more conditions than with just one unknown response point. For convenience, just 2 points chosen at random in one pair are used to compare the predicted responses with the real responses. Table 2 shows the results of the comparison, with each pair of points combined into a group. To make it clearer, Fig. 4 shows the energy in dB of the predicted response and the real response.
Experimental results analysis
As shown above, the proposed method is applicable in situations where both the loads and the transfer functions are unknown, and its accuracy can satisfy industrial standards. The experimental results show that the predicted response is very close to the real response. The error arises from the following sources:
1) noise and other sources of data error in the real experiment;
2) the ill-conditioning of the matrix inverse problem, another cause of error between the predicted and real responses [7];
3) the weak nonlinearity of the system, which also causes differences between the predicted and real responses.
Conclusions
A least-squares generalized-inverse-based algorithm is introduced for response prediction in the frequency domain when the uncorrelated multi-source loads are unknown. With no transfer function data, this method can predict the responses effectively. Experimental verification with acoustic and vibration sources on a cylindrical shell showed the effectiveness and high accuracy of the proposed approach. However, this method can only solve response prediction for linear systems; how to predict a nonlinear system's response is a future research direction. If the load sources are correlated with each other in the frequency domain, the phases of the loads and responses become very important; how to characterize the phase of the loads and handle correlated load sources in a frequency-domain multi-source dynamic random response prediction algorithm is also a further research direction.
Table 2. Percentage of cases in which the error between the predicted and real response energies exceeds 3 dB (%) | 2,461.2 | 2017-10-21T00:00:00.000 | [
"Engineering",
"Physics"
] |
Deterministic Non-Autoregressive Neural Sequence Modeling by Iterative Refinement
We propose a conditional non-autoregressive neural sequence model based on iterative refinement. The proposed model is designed based on the principles of latent variable models and denoising autoencoders, and is generally applicable to any sequence generation task. We extensively evaluate the proposed model on machine translation (En-De and En-Ro) and image caption generation, and observe that it significantly speeds up decoding while maintaining the generation quality comparable to the autoregressive counterpart.
Introduction
Conditional neural sequence modeling has become a de facto standard in a variety of tasks (see, e.g., Cho et al., 2015, and references therein). Much of this recent success is built on top of autoregressive sequence models in which the probability of a target sequence is factorized as a product of conditional probabilities of next symbols given all the preceding ones. Despite its success, neural autoregressive modeling has its weakness in decoding, i.e., finding the most likely sequence. Because of intractability, we must resort to suboptimal approximate decoding, and due to its sequential nature, decoding cannot be easily parallelized and results in a large latency (see, e.g., Cho, 2016). This has motivated the recent investigation into non-autoregressive neural sequence modeling by Gu et al. (2017) in the context of machine translation and Oord et al. (2017) in the context of speech synthesis.
In this paper, we propose a non-autoregressive neural sequence model based on iterative refinement, which is generally applicable to any sequence generation task beyond machine translation. The proposed model can be viewed as both a latent variable model and a conditional denoising autoencoder. We thus propose a learning algorithm that is a hybrid of lower-bound maximization and reconstruction error minimization. We further design an iterative inference strategy with an adaptive number of steps to minimize the generation latency without sacrificing the generation quality.
We extensively evaluate the proposed conditional non-autoregressive sequence model and compare it against the autoregressive counterpart, using the state-of-the-art Transformer (Vaswani et al., 2017), on machine translation and image caption generation. In the case of machine translation, the proposed deterministic non-autoregressive models are able to decode approximately 2-3× faster than beam search from the autoregressive counterparts on both GPU and CPU, while maintaining 90-95% of translation quality on IWSLT'16 En↔De, WMT'16 En↔Ro and WMT'14 En↔De. On image caption generation, we observe approximately 3× and 5× faster decoding on GPU and CPU, respectively, while maintaining 85% of caption quality.

Non-Autoregressive Sequence Models

Sequence modeling in deep learning has largely focused on autoregressive modeling. That is, given a sequence $Y = (y_1, \dots, y_T)$, we use some form of a neural network to parametrize the conditional distribution over each variable $y_t$ given all the preceding variables, i.e., $\log p(y_t|y_{<t}) = f_\theta(y_{<t})$, where $f_\theta$ is for instance a recurrent neural network. This approach has become a de facto standard in language modeling (Mikolov et al., 2010). When this is conditioned on an extra variable $X$, it becomes a conditional sequence model $\log p(Y|X)$, which serves as a basis on which many recent advances in, e.g., machine translation (Bahdanau et al., 2014; Sutskever et al., 2014; Kalchbrenner and Blunsom, 2013) and speech recognition (Chorowski et al., 2015; Chiu et al., 2017) have been made.
Despite the recent success, autoregressive sequence modeling has a weakness due to its nature of sequential processing. This weakness shows itself especially when we try to decode the most likely sequence from a trained model, i.e., $\hat{Y} = \arg\max_{Y} \log p(Y|X)$. There is no known polynomial-time algorithm for solving it exactly, and practitioners have relied on approximate decoding algorithms (see, e.g., Cho, 2016; Hoang et al., 2017). Among these, beam search has become the method of choice, due to its superior performance over greedy decoding, which however comes with a substantial computational overhead (Cho, 2016).
As a solution to this issue of slow decoding, two recent works have attempted non-autoregressive sequence modeling. Gu et al. (2017) have modified the Transformer (Vaswani et al., 2017) for non-autoregressive machine translation, and Oord et al. (2017) a convolutional network (Oord et al., 2016) for non-autoregressive modeling of waveform. Non-autoregressive modeling factorizes the distribution over a target sequence given a source into a product of conditionally independent per-step distributions:

$$p(Y|X) = \prod_{t=1}^{T} p(y_t|X),$$

breaking the dependency among the target variables across time. This allows us to trivially find the most likely target sequence by taking $\arg\max_{y_t} p(y_t|X)$ for each $t$, effectively bypassing the computational overhead and suboptimality of decoding from an autoregressive sequence model. This desirable property of exact and parallel decoding however comes at the expense of potential performance degradation (Kaiser and Bengio, 2016). The potential modeling gap, which is the gap between the underlying, true model and the neural sequence model, could be larger with the non-autoregressive model compared to the autoregressive one due to the challenge of modeling the factorized conditional distribution above.
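A toy NumPy sketch (with made-up probabilities) shows why decoding from the factorized model is trivially parallel:

```python
import numpy as np

# log_probs: (T, V) per-position conditionals p(y_t | X) from a
# non-autoregressive model; decoding is one parallel argmax rather
# than a left-to-right loop over positions.
log_probs = np.log(np.random.dirichlet(np.ones(8), size=5))  # toy (T=5, V=8)
decoded = log_probs.argmax(axis=-1)  # exact mode of the factorized model
```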
Iterative Refinement for Deterministic Non-Autoregressive Sequence Models
Latent variable model
Similarly to two recent works (Oord et al., 2017;Gu et al., 2017), we introduce latent variables to implicitly capture the dependencies among target variables. We however remove any stochastic behavior by interpreting this latent variable model, introduced immediately below, as a process of iterative refinement.
Our goal is to capture the dependencies among target symbols given a source sentence without autoregression by introducing $L$ intermediate random variables and marginalizing them out:

$$p(Y|X) = \sum_{Y^{0}, \dots, Y^{L-1}} p(Y|Y^{L-1}, X) \left( \prod_{l=1}^{L-1} p(Y^{l}|Y^{l-1}, X) \right) p(Y^{0}|X). \qquad (1)$$

Each product term inside the summation is modeled by a deep neural network that takes as input a source sentence and outputs the conditional distribution over the target vocabulary $V$ for each $t$.
Deterministic Approximation. The marginalization in Eq. (1) is intractable. In order to avoid this issue, we consider two approximation strategies: deterministic and stochastic approximation. Without loss of generality, let us consider the case of a single intermediate latent variable, that is $L = 1$. In the deterministic case, we set $\hat{y}^{0}_{t}$ to the most likely value according to its distribution $p(y^{0}_{t}|X)$, that is $\hat{y}^{0}_{t} = \arg\max_{y^{0}_{t}} p(y^{0}_{t}|X)$. The entire lower bound can then be written as:

$$\log p(Y|X) \ge \sum_{t=1}^{T} \log p(y_t \,|\, \hat{Y}^{0}, X) + \sum_{t=1}^{T} \log p(\hat{y}^{0}_{t}|X).$$

Stochastic Approximation. In the case of stochastic approximation, we instead sample $\hat{y}^{0}_{t}$ from the distribution $p(y^{0}_{t}|X)$. This results in the unbiased estimate of the marginal log-probability $\log p(Y|X)$. Other than the difference in whether most likely values or samples are used, the remaining steps are identical.
Latent Variables. Although the intermediate random variables could be anonymous, we constrain them to be of the same type as the output $Y$, in order to share an underlying neural network. This constraint allows us to view each conditional $p(Y^{l}|\hat{Y}^{l-1}, X)$ as a single step of refinement of a rough target sequence $\hat{Y}^{l-1}$. The entire chain of $L$ conditionals is then the $L$-step iterative refinement. Furthermore, sharing the parameters across these refinement steps enables us to dynamically adapt the number of iterations per input $X$. This is important as it substantially reduces the amount of time required for decoding, as we see later in the experiments.
Training. For each training pair $(X, Y^{*})$, we first approximate the marginal log-probability. We then minimize

$$J_{\mathrm{LVM}}(\theta) = -\sum_{t=1}^{T} \log p_{\theta}(y^{*}_{t}|X) - \sum_{l=1}^{L} \sum_{t=1}^{T} \log p_{\theta}(y^{*}_{t} \,|\, \hat{Y}^{l-1}, X), \qquad (2)$$

where $\hat{Y}^{l-1} = (\hat{y}^{l-1}_{1}, \dots, \hat{y}^{l-1}_{T})$ and $\theta$ is the set of parameters. We initialize $\hat{y}^{0}_{t}$ (the $t$-th target word in the first iteration) as $x_{t'}$, where $t' = (T'/T) \cdot t$; $T'$ and $T$ are the lengths of the source $X$ and target $Y^{*}$, respectively.
Denoising Autoencoder
The proposed approach can instead be viewed as learning a conditional denoising autoencoder, which is known to capture the gradient of the log-density. That is, we implicitly learn to find a direction in the output space that increases the underlying true, data-generating distribution $\log P(Y|X)$. Because the output space is discrete, much of the theoretical analysis by Alain and Bengio (2014) is not strictly applicable. We however find this view attractive, as it serves as an alternative foundation for designing a learning algorithm.
Training. We start with a corruption process $C(\tilde{Y}|Y^{*})$, whose output $\tilde{Y}$ becomes the input to each conditional in Eq. (1). The goal of learning is then to maximize the log-probability of the original reference $Y^{*}$ given its corrupted version, that is, to minimize

$$J_{\mathrm{DAE}}(\theta) = -\sum_{t=1}^{T} \log p_{\theta}(y^{*}_{t} \,|\, \tilde{Y}, X). \qquad (3)$$

Once this cost $J_{\mathrm{DAE}}$ is minimized, we can recursively perform maximum-a-posteriori inference, i.e., $\hat{Y} = \arg\max_{Y} \log p_{\theta}(Y|X)$, to find $\hat{Y}$ that (approximately) maximizes $\log p(Y|X)$.
Corruption Process C. There is little consensus on the best corruption process for a sequence, especially one of discrete tokens. In this work, we use a corruption process proposed by Hill et al. (2016), which has recently become more widely adopted (see, e.g., Artetxe et al., 2017; Lample et al., 2017). Each $y^{*}_{t}$ in a reference target is corrupted so that we (2) replace $y^{*}_{t}$ with a token uniformly selected at random from a vocabulary of all unique tokens, or (3) swap $y^{*}_{t}$ and $y^{*}_{t+1}$. This is done sequentially from $y^{*}_{1}$ until $y^{*}_{T}$.
Learning
Cost function. Although it is possible to train the proposed non-autoregressive sequence model using either of the cost functions above ($J_{\mathrm{LVM}}$ or $J_{\mathrm{DAE}}$), we propose to stochastically mix these two cost functions. We do so by randomly replacing each term $\hat{Y}^{l-1}$ in Eq. (2) with $\tilde{Y}$ from Eq. (3): the input to the $l$-th term of Eq. (2) is $\tilde{Y}$ when $\alpha^{l} = 1$ and $\hat{Y}^{l-1}$ otherwise, where $\alpha^{l}$ is a sample from a Bernoulli distribution with probability $p_{\mathrm{DAE}}$.
$p_{\mathrm{DAE}}$ is a hyperparameter. As the first conditional $p(Y^{0}|X)$ in Eq. (1) does not take any target $Y$ as input, we always set $\alpha^{0} = 1$.
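A minimal sketch of this training-input selection follows; the corruption keeps only the replacement and swap operations legible in the text above, and the per-position corruption rate and all names are our choices, not the paper's:

```python
import random

def corrupt(y, vocab, p=0.1):
    """Hill et al.-style corruption restricted to the two operations
    named above: random token replacement and adjacent swap, applied
    sequentially from position 1 to T. p is an assumed rate."""
    y = list(y)
    t = 0
    while t < len(y):
        if random.random() < p:
            if random.random() < 0.5 or t + 1 == len(y):
                y[t] = random.choice(vocab)          # replace y_t
            else:
                y[t], y[t + 1] = y[t + 1], y[t]      # swap y_t and y_{t+1}
        t += 1
    return y

def refinement_input(y_prev, y_ref, vocab, p_dae=0.5):
    """Input to refinement step l >= 1 (step 0 conditions only on X):
    with probability p_dae use the corrupted reference (denoising term),
    otherwise use the model's own previous prediction (lower-bound term)."""
    if random.random() < p_dae:
        return corrupt(y_ref, vocab)
    return y_prev
```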
Distillation. Gu et al. (2017), in the context of machine translation, and Oord et al. (2017), in the context of speech generation, have recently discovered that it is important to use knowledge distillation (Hinton et al., 2015; Kim and Rush, 2016) to successfully train a non-autoregressive sequence model. Following Gu et al. (2017), we also use knowledge distillation by replacing the reference target $Y^{*}$ of each training example $(X, Y^{*})$ with a target $Y^{\mathrm{AR}}$ generated from a well-trained autoregressive counterpart. Other than this replacement, the cost function in Eq. (4) and the model architecture remain unchanged.
Target Length Prediction. One difference between the autoregressive and non-autoregressive models is that the former naturally models the length of a target sequence without any arbitrary upper bound, while the latter does not. It is hence necessary to separately model $p(T|X)$, where $T$ is the length of a target sequence, although during training we simply use the length of each reference target sequence.
Inference: Decoding
Inference in the proposed approach is entirely deterministic.
We start from the input $X$ and first predict the length of the target sequence, $\hat{T} = \arg\max_{T} \log p(T|X)$. Then, given $X$ and $\hat{T}$, we generate the initial target sequence by $\hat{y}^{0}_{t} = \arg\max_{y_t} \log p(y^{0}_{t}|X)$ for $t = 1, \dots, \hat{T}$. We continue refining the target sequence by $\hat{y}^{l}_{t} = \arg\max_{y_t} \log p(y^{l}_{t}|\hat{Y}^{l-1}, X)$ for $t = 1, \dots, \hat{T}$.
Because these conditionals, except for the initial one, are modeled by a single, shared neural network, this refinement can be performed for as many iterations as necessary until a predefined stopping criterion is met.
A criterion can be based either on the amount of change in the target sequence after each iteration (i.e., $D(\hat{Y}^{l-1}, \hat{Y}^{l}) \le \epsilon$), on the amount of change in the conditional log-probabilities (i.e., $|\log p(\hat{Y}^{l-1}|X) - \log p(\hat{Y}^{l}|X)| \le \epsilon$), or on the computational budget. In our experiments, we use the first criterion and use the Jaccard distance as our distance function $D$.
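A sketch of the adaptive stopping loop; treating the sequences as token sets for the Jaccard distance, and the values of eps and max_iters, are our assumptions:

```python
def jaccard_distance(a, b):
    """1 - |A intersect B| / |A union B| over the token sets of two sequences."""
    sa, sb = set(a), set(b)
    return 1.0 - len(sa & sb) / max(len(sa | sb), 1)

def adaptive_decode(first_step, refine_step, x, eps=0.0, max_iters=20):
    """Refine until consecutive outputs barely change or the budget runs out.
    first_step(x) -> initial sequence; refine_step(y, x) -> refined sequence."""
    y = first_step(x)
    for _ in range(max_iters):
        y_new = refine_step(y, x)
        if jaccard_distance(y, y_new) <= eps:
            return y_new
        y = y_new
    return y
```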
Related Work
Non-Autoregressive Neural Machine Translation. Schwenk (2012) proposed a continuous-space translation model to estimate the conditional distribution over a target phrase given a source phrase, while dropping the conditional dependencies among target tokens. The evaluation was however limited to reranking and to short phrase pairs (up to 7 words on each side) only. Kaiser and Bengio (2016) investigated the neural GPU (Kaiser and Sutskever, 2015) for machine translation. They evaluated both non-autoregressive and autoregressive approaches, and found that the non-autoregressive approach significantly lags behind the autoregressive variants. Their approach however differs from ours in that each iteration does not output a refined version of the previous iteration's output. The recent paper by Gu et al. (2017) is most relevant to the proposed work. They similarly introduced a sequence of discrete latent variables, but they use supervised learning for inference, relying on a word alignment tool (Dyer et al., 2013). To achieve their best result, Gu et al. (2017) stochastically sample the latent variables and rerank the corresponding target sequences with an external, autoregressive model. This is in contrast to the proposed approach, which is fully deterministic during decoding and does not rely on any extra reranking mechanism.
Parallel WaveNet. Simultaneously with Gu et al. (2017), Oord et al. (2017) presented a non-autoregressive sequence model for speech generation. They use an inverse autoregressive flow (IAF; Kingma et al., 2016) to map a sequence of independent random variables to a target sequence, applying the IAF multiple times, similarly to our iterative refinement strategy. Their approach is however restricted to continuous target variables, while the proposed approach can in principle be applied to both discrete and continuous variables. Novak et al. (2016) proposed a convolutional neural network that iteratively predicts and applies token substitutions given a translation from a phrase-based translation system. Unlike their system, our approach can edit an intermediate translation with a higher degree of freedom. QuickEdit (Grangier and Auli, 2017) and the deliberation network (Xia et al., 2017) incorporate the idea of refinement into neural machine translation. Both systems consist of two autoregressive decoders, with the second decoder taking into account the translation generated by the first. We extend these earlier efforts by incorporating more than one refinement step without necessitating extra annotations. Bordes et al. (2017) proposed an unconditional generative model for images based on iterative refinement, in which, at each step $l$ of iterative refinement, the model is trained to maximize the log-likelihood of the target $Y$ given a weighted mixture of generated samples from the previous iteration $\hat{Y}^{l-1}$ and a corrupted target $\tilde{Y}$; that is, the corrupted version of the target is "infused" into generated samples during training. In the domain of text, however, computing a weighted mixture of two sequences of discrete tokens is not well defined, and we instead propose to stochastically mix the denoising and lower-bound maximization objectives.
Network Architecture
We use three Transformer-based network blocks to implement our model. The first block ("Encoder") encodes the input $X$, the second block ("Decoder 1") models the first conditional $\log p(Y^{0}|X)$, and the final block ("Decoder 2") is shared across iterative refinement steps, modeling $\log p(Y^{l}|\hat{Y}^{l-1}, X)$. These blocks are depicted side-by-side in Fig. 2. The encoder is identical to that of the original Transformer (Vaswani et al., 2017). We however use the decoders from Gu et al. (2017) with additional positional attention, and use the highway layer (Srivastava et al., 2015) instead of the residual layer (He et al., 2016).
The original input $X$ is padded or shortened to fit the length of the reference target sequence before being fed to Decoder 1. At each refinement step $l$, Decoder 2 takes as input the predicted target sequence $\hat{Y}^{l-1}$ and the sequence of final activation vectors from the previous step.
Experimental Setting
We evaluate the proposed approach on two sequence modeling tasks: machine translation and image caption generation. We compare the proposed non-autoregressive model against the autoregressive counterpart both in terms of generation quality, measured in terms of BLEU (Papineni et al., 2002), and generation efficiency, measured in terms of (source) tokens and images per second for translation and image captioning, respectively.
Machine Translation. We choose three tasks of different sizes: IWSLT'16 En↔De (196k pairs), WMT'16 En↔Ro (610k pairs) and WMT'14 En↔De (4.5M pairs). We tokenize each sentence using a script from Moses (Koehn et al., 2007) and segment each word into subword units using BPE (Sennrich et al., 2016). We use 40k tokens from both source and target for all the tasks. For WMT'14 En-De, we use newstest-2013 and newstest-2014 as development and test sets. For WMT'16 En-Ro, we use newsdev-2016 and newstest-2016 as development and test sets. For IWSLT'16 En-De, we use test2013 for validation.
We closely follow the setting by Gu et al. (2017). In the case of IWSLT'16 En-De, we use the small model ($d_{\mathrm{model}} = 278$, $d_{\mathrm{hidden}} = 507$, $p_{\mathrm{dropout}} = 0.1$, $n_{\mathrm{layer}} = 5$ and $n_{\mathrm{head}} = 2$). For WMT'14 En-De and WMT'16 En-Ro, we use the base Transformer by Vaswani et al. (2017) ($d_{\mathrm{model}} = 512$, $d_{\mathrm{hidden}} = 512$, $p_{\mathrm{dropout}} = 0.1$, $n_{\mathrm{layer}} = 6$ and $n_{\mathrm{head}} = 8$). We use the warm-up learning rate schedule (Vaswani et al., 2017) for the WMT tasks, while using linear annealing (from $3 \times 10^{-4}$ to $10^{-5}$) for the IWSLT task. We do not use label smoothing nor average multiple checkpointed models. These decisions were made based on preliminary experiments. We train each model either on a single P40 (WMT'14 En-De and WMT'16 En-Ro) or on a single P100 (IWSLT'16 En-De), with each minibatch consisting of approximately 2k tokens. We use four P100s to train non-autoregressive models on WMT'14 En-De.

Table 1: Generation quality (BLEU↑) and decoding efficiency (tokens/sec↑ for translation, images/sec↑ for image captioning). Decoding efficiency is measured sentence-by-sentence. AR: autoregressive models. b: beam width. i_dec: the number of refinement steps taken during decoding. Adaptive: the adaptive number of refinement steps. NAT: non-autoregressive transformer models (Gu et al., 2017). FT: fertility. NPD: noisy parallel decoding, reranking using 100 samples.
Image Caption Generation: MS COCO. We use MS COCO (Lin et al., 2014) with the publicly available splits (Karpathy and Li, 2015), consisting of 113,287 training images, 5k validation images and 5k test images. We extract 49 512-dimensional feature vectors for each image, using a ResNet-18 (He et al., 2016) pretrained on ImageNet (Deng et al., 2009). The average of these vectors is copied as many times as needed to match the length of the target sentence (the reference during training and the predicted length during evaluation) to form the initial input to Decoder 1. We use the base Transformer (Vaswani et al., 2017) except that $n_{\mathrm{layer}}$ is set to 4. We train each model on a single 1080 Ti with each minibatch consisting of approximately 1,024 tokens.
Target Length Prediction
We formulate target length prediction as classification, predicting the difference between the target and source lengths for translation and the target length itself for image captioning. All the hidden vectors from the $n_{\mathrm{layer}}$ layers of the encoder are summed and fed to a softmax classifier after an affine transformation. We do not, however, tune the encoder's parameters for target length prediction. We use this length predictor only at test time. We find it important to accurately predict the target length for good overall performance; see Appendix A for an analysis of our length prediction model.
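A PyTorch sketch of the length classifier described above; max_delta (an assumed bound on the length difference) and all names are ours, not the paper's:

```python
import torch
import torch.nn as nn

class LengthPredictor(nn.Module):
    """Sum the hidden states of all encoder layers and all source
    positions, then classify the target-minus-source length difference
    in [-max_delta, max_delta] (for translation)."""
    def __init__(self, d_model, max_delta=50):
        super().__init__()
        self.proj = nn.Linear(d_model, 2 * max_delta + 1)

    def forward(self, layer_states):
        # layer_states: list of (batch, src_len, d_model), one per layer;
        # sum over layers (dim 0) and source positions (dim 2).
        h = torch.stack(layer_states, dim=0).sum(dim=(0, 2))
        return self.proj(h)  # logits; argmax minus max_delta gives the delta
```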
Training and Inference
We use Adam (Kingma and Ba, 2014) and use $L = 3$ in Eq. (1) during training ($i_{\mathrm{train}} = 4$ from here on). We use $p_{\mathrm{DAE}} = 0.5$. We use the deterministic strategy for IWSLT'16 En-De, WMT'16 En-Ro and MS COCO, while the stochastic strategy is used for WMT'14 En-De. These decisions were made based on validation set performance. After both the non-autoregressive sequence model and the target length predictor are trained, we decode by first predicting the target length and then running iterative refinement steps until the outputs of consecutive iterations are the same (i.e., the Jaccard distance between consecutive decoded sequences is 0). To assess the effectiveness of this adaptive scheme, we also test a fixed number of steps ($i_{\mathrm{dec}}$). In machine translation, we remove any repetition by collapsing multiple consecutive occurrences of a token.
Results and Analysis
We make some important observations in Table 1. First, the generation quality improves across all the tasks as we run more refinement steps $i_{\mathrm{dec}}$, even beyond the number used in training ($i_{\mathrm{train}} = 4$), which supports our interpretation as a conditional denoising autoencoder in Sec. 3.2. To further verify this, we run decoding on WMT'14 (both directions) for up to 100 iterations. As shown in Fig. 1 (a), the quality improves well beyond the number of refinement steps used during training.
Second, the generation efficiency decreases as more refinements are made. We plot the average seconds per sentence in Fig. 1 (b), measured on GPU while sequentially decoding one sentence at a time. As expected, decoding from the autoregressive model slows down linearly as the sentence length grows, while decoding from the non-autoregressive model with a fixed number of iterations has constant complexity. However, the generation efficiency of the non-autoregressive model decreases as more refinements are made. To make a smooth trade-off between quality and speed, the adaptive decoding scheme allows us to achieve near-best generation quality with a significantly lower computational overhead. Moreover, the adaptive decoding scheme automatically increases the number of refinement steps as the sentence length increases, suggesting that this scheme captures the amount of information in the input well; the increase in latency is however less severe than that of the autoregressive model. We also observe that the speedup in decoding is much clearer on GPU than on CPU. This is a consequence of the highly parallel computation of the proposed non-autoregressive model, which is better suited to GPUs, showcasing the potential of using the non-autoregressive model with specialized hardware for parallel computation, such as Google's TPUs (Jouppi et al., 2017). The results of our model decoded with the adaptive decoding scheme are comparable to the results from Gu et al. (2017), without relying on any external tool. On WMT'14 En-De, the proposed model outperforms the best model from Gu et al. (2017) by two points.
Lastly, it is encouraging to observe that the proposed non-autoregressive model works well on image caption generation. This result confirms the generality of our approach beyond machine translation, unlike that of Gu et al. (2017), which was for machine translation only, or that of Oord et al. (2017), which was for speech synthesis. The ablation results are shown in Table 2. First, we observe that it is beneficial to use multiple iterations of refinement during training. By using four iterations (one step of decoder 1, followed by three steps of decoder 2), the BLEU score improved by approximately 1.5 points in both directions. We also notice that it is necessary to use the proposed hybrid learning strategy to maximize the improvement from more iterations during training ($i_{\mathrm{train}} = 4$ vs. $i_{\mathrm{train}} = 4$, $p_{\mathrm{DAE}} = 1.0$ vs. $i_{\mathrm{train}} = 4$, $p_{\mathrm{DAE}} = 0.5$). Knowledge distillation was crucial to close the gap between the proposed deterministic non-autoregressive sequence model and its autoregressive counterpart, echoing the observations by Gu et al. (2017) and Oord et al. (2017). Finally, we see that removing repeated consecutive symbols improves the quality of the best trained models ($i_{\mathrm{train}} = 4$, $p_{\mathrm{DAE}} = 0.5$) by approximately +1 BLEU. This suggests that the proposed iterative refinement is not enough to remove repetitions on its own. Further investigation is necessary to properly tackle this issue, which we leave as future work.
We then compare the deterministic and stochastic approximation strategies on IWSLT'16 En→De and WMT'14 En→De. According to the results in Table 3, the stochastic strategy is crucial with a large corpus (WMT'14), while the deterministic strategy works as well or better with a small corpus (IWSLT'16). Both strategies benefit from knowledge distillation, but the gap between the two strategies on the large dataset is much more apparent without knowledge distillation.
Qualitative Analysis
Machine Translation In Table 4, we present three sample translations and their iterative refinement steps from the development set of IWSLT'16 (De!En). As expected, the sequence generated from the first iteration is a rough version of translation and is iteratively refined over multiple steps. By inspecting the underlined sub-sequences, we see that each iteration does not monotonically improve the translation, but overall modifies the Src seitdem habe ich sieben Häuser in der Nachbarschaft mit den Lichtern versorgt und sie funktionierenen wirklich gut . Iter 1 and I 've been seven homes since in neighborhood with the lights and they 're really functional . Iter 2 and I 've been seven homes in the neighborhood with the lights , and they 're a really functional . Iter 4 and I 've been seven homes in neighborhood with the lights , and they 're a really functional . Iter 8 and I 've been providing seven homes in the neighborhood with the lights and they 're a really functional . Iter 20 and I 've been providing seven homes in the neighborhood with the lights , and they 're a very good functional . Ref since now , I 've set up seven homes around my community , and they 're really working .
Src er sah sehr glücklich aus , was damals ziemlich ungewöhnlich war , da ihn die Nachrichten meistens deprimierten . Iter 1 he looked very happy , which was pretty unusual the , because the news was were usually depressing . Iter 2 he looked very happy , which was pretty unusual at the , because the news was s depressing . Iter 4 he looked very happy , which was pretty unusual at the , because news was mostly depressing . Iter 8 he looked very happy , which was pretty unusual at the time because the news was mostly depressing . Iter 20 he looked very happy , which was pretty unusual at the time , because the news was mostly depressing .
Ref
there was a big smile on his face which was unusual then , because the news mostly depressed him .
Src furchtlos zu sein heißt für mich , heute ehrlich zu sein . Iter 1 to be , for me , to be honest today . Iter 2 to be fearless , me , is to be honest today . Iter 4 to be fearless for me , is to be honest today . Iter 8 to be fearless for me , me to be honest today . Iter 20 to be fearless for me , is to be honest today . Ref so today , for me , being fearless means being honest . translation towards the reference sentence. Missing words are added, while unnecessary words are dropped. For instance, see the second example. The second iteration removes the unnecessary "were", and the fourth iteration inserts a new word "mostly". The phrase "at the time" is gradually added one word at a time.
Image Caption Generation Table 5 shows two examples of image caption generation. We observe that each iteration captures more and more details of the input image. In the first example (left), the bus was described only as a "yellow bus" in the first iteration, but the subsequent iterations refine it into "yellow and black bus". Similarly, "road" is refined into "lot". We notice this behavior in the second example (right) as well. The first iteration does not specify the place in which "a woman" is "standing on", which is fixed immediately in the second iteration: "standing on a tennis court". In the final and fourth iteration, the proposed model captures the fact that the "woman" is "holding" a racquet.
Conclusion
Following the exciting recent success of non-autoregressive neural sequence modeling by Gu et al. (2017) and Oord et al. (2017), we proposed a deterministic non-autoregressive neural sequence model based on the idea of iterative refinement. We designed a learning algorithm specialized to the proposed approach by interpreting the entire model as a latent variable model and each refinement step as denoising. We implemented our approach using the Transformer and evaluated it on two tasks: machine translation and image caption generation. On both tasks, we were able to show that the proposed non-autoregressive model performs closely to the autoregressive counterpart with a significant speedup in decoding. Qualitative analysis revealed that the iterative refinement indeed refines a target sequence gradually over multiple steps.
Despite these promising results, we observed that the proposed non-autoregressive neural sequence model is outperformed by its autoregressive counterpart in terms of generation quality. The following directions should be pursued in the future to narrow this gap. First, we should investigate better approximations to the marginal log-probability. Second, the impact of the corruption process on the generation quality must be studied. Lastly, further work on sequence-to-sequence model architectures could yield better results in non-autoregressive sequence modeling.
Generated Caption
Iter 1 a yellow bus parked on parked in of parking road . Iter 2 a yellow and black on parked in a parking lot . Iter 3 a yellow and black bus parked in a parking lot . Iter 4 a yellow and black bus parked in a parking lot .
Reference Captions a tour bus is parked on the curb waiting city bus parked on side of hotel in the rain . bus parked under an awning next to brick sidewalk a bus is parked on the curb in front of a building . a double decked bus sits parked under an awning Generated Caption Iter 1 a woman standing on playing tennis on a tennis racquet . Iter 2 a woman standing on a tennis court a tennis racquet . Iter 3 a woman standing on a tennis court a a racquet . Iter 4 a woman standing on a tennis court holding a racquet .
Reference Captions a female tennis player in a black top playing tennis a woman standing on a tennis court holding a racquet . a female tennis player preparing to serve the ball . a woman is holding a tennis racket on a court a woman getting ready to reach for a tennis ball on the ground Table 5: Two sample image captions from the proposed non-autoregressive sequence model. The images are from the development set of MS COCO. The first iteration is from decoder 1, while the subsequent ones are from decoder 2. Subsequences with changes across the refinement steps are underlined. | 7,131.6 | 2018-02-19T00:00:00.000 | [
"Computer Science"
] |
Multi-Label Remote Sensing Image Scene Classification by Combining a Convolutional Neural Network and a Graph Neural Network
As one of the fundamental tasks in remote sensing (RS) image understanding, multi-label remote sensing image scene classification (MLRSSC) is attracting increasing research interest. Human beings can easily perform MLRSSC by examining the visual elements contained in the scene and the spatio-topological relationships of these visual elements. However, most of existing methods are limited by only perceiving visual elements but disregarding the spatio-topological relationships of visual elements. With this consideration, this paper proposes a novel deep learning-based MLRSSC framework by combining convolutional neural network (CNN) and graph neural network (GNN), which is termed the MLRSSC-CNN-GNN. Specifically, the CNN is employed to learn the perception ability of visual elements in the scene and generate the high-level appearance features. Based on the trained CNN, one scene graph for each scene is further constructed, where nodes of the graph are represented by superpixel regions of the scene. To fully mine the spatio-topological relationships of the scene graph, the multi-layer-integration graph attention network (GAT) model is proposed to address MLRSSC, where the GAT is one of the latest developments in GNN. Extensive experiments on two public MLRSSC datasets show that the proposed MLRSSC-CNN-GNN can obtain superior performance compared with the state-of-the-art methods.
Introduction
Single-label remote sensing (RS) image scene classification considers the image scene (i.e., one image block) as the basic interpretation unit and aims to assign one semantic category to the RS image scene according to its visual and contextual content [1][2][3]. Due to its extensive applications in object detection [4][5][6][7], image retrieval [8][9][10], etc., single-label RS image scene classification has attracted extensive attention. To address single-label RS classification, many excellent algorithms have been proposed [11][12][13][14]. At present, single-label RS scene classification has reached saturation accuracy [15]. However, one single label is often insufficient to fully describe the content of a real-world image.
Compared with single-label RS image scene classification, multi-label remote sensing image scene classification (MLRSSC) is a more realistic task. MLRSSC aims to predict multiple semantic labels to describe an RS image scene. Because of its stronger description ability, MLRSSC can be applied in many fields, such as image annotation [15,16] and image retrieval [17,18]. MLRSSC is also a more challenging task, because the classifier must learn discriminative representations that distinguish multiple co-occurring categories. The main contributions of this paper are summarized as follows:
• We propose a novel MLRSSC-CNN-GNN framework that can simultaneously mine the appearances of visual elements in the scene and the spatio-topological relationships among them. The experimental results on two public datasets demonstrate the effectiveness of our framework.
• We design a multi-layer-integration GAT model to mine the spatio-topological relationships of the RS image scene. Compared with the standard GAT, the proposed multi-layer-integration GAT benefits from fusing multiple intermediate topological representations and can further improve the classification performance.
The remainder of this paper is organized as follows: Section 2 reviews the related works. Section 3 introduces the details of our proposed framework. Section 4 describes the setup of the experiments and reports the experimental results. Section 5 discusses the important factors of our framework. Section 6 presents the conclusions of this paper.
Related Work
In the following section, we specifically discuss the related works from two aspects: MLRSSC and GNN-based applications.
MLRSSC
In early research on MLRSSC, handcrafted features were often employed to describe image scenes [44,45]. However, handcrafted features have limited generalization ability and cannot achieve an optimal balance between discriminability and robustness. Recently, deep learning methods have achieved impressive results in MLRSSC [32,46]. For instance, the standard CNN method can complete feature extraction and classification end-to-end within a deep network framework. Moreover, Zeggada et al. designed a multi-label classification layer to address multi-label classification via a customized thresholding operation [33]. To exploit the co-occurrence dependency of multiple labels, Hua et al. combined the CNN and the RNN to sequentially predict labels [34]. However, due to the accumulation of misclassification information during the generation of label sequences, the use of the RNN may cause an error-propagation problem [47]. Hua et al. also considered the label dependency and proposed a relation network for MLRSSC using the attention mechanism [48]. These methods are limited by only considering visual elements in the image scene while disregarding the spatio-topological relationships of visual elements. In addition, Kang et al. proposed a graph relation network to model the relationships between image scenes for MLRSSC [49]. However, it mainly focused on leveraging the relationships between image scenes and still did not model the spatial relationships between visual elements within each image scene.
GNN-Based Applications
The GNN is a novel model with great potential that can extend the ability of deep learning to process non-Euclidean data. The GNN is extensively applied in fields such as social networks [50], recommender systems [51], and knowledge graphs [52]. In recent years, some GNNs, such as the GCN, have been employed to solve image understanding problems. Yang et al. constructed scene graphs for images and completed image captioning via the GCN [53]. Chaudhuri et al. used the Siamese GCN to assess the similarity of scene graphs for image retrieval [54]. Chen et al. proposed a GCN-based multi-label natural image classification model, where the GCN is employed to learn the label dependency [43]. However, the GCN is limited in exploring complex node relationships because it only uses a fixed or learnable polynomial of the adjacency matrix to aggregate node features. Compared with the GCN, the GAT is a more advanced model that can learn the aggregation weights of nodes using the attention mechanism. The adaptability of the GAT makes it more effective at fusing information from graph topological structures and node features [55]. However, due to the differences between image data and graph-structured data, mining the spatio-topological relationships of images via the GAT remains an open problem.
Method
To facilitate understanding, our proposed MLRSSC-CNN-GNN framework is visually shown in Figure 1. Generally, we propose a way to map an image into graph-structured data and transform the MLRSSC task into the graph classification task. Specifically, we consider the superpixel regions of the image scene as the nodes of the graph to construct the scene graph, where the node features are represented by the deep feature maps from the CNN. According to the proximity and similarity between superpixel regions, we define the adjacency of nodes, which can be easily employed by the GNN to optimize feature learning. With the scene graph as input, the multi-layer-integration GAT is designed to complete multi-label classification by fusing information from the node features and spatio-topological relationships of the graph.
Using CNN to Generate Appearance Features
Generating visual representations of the image scene is crucial in our framework. In particular, we use the CNN as a feature extractor to obtain deep feature maps from intermediate convolutional layers as the representations of high-level appearance features. To improve the perception ability of the CNN and make it effective in the RS image, we retrain the CNN by transfer learning [56].
Considering Θ as the parameters of the convolutional layers and Φ as the parameters of the fully connected layers, the loss function during the training phase can be represented by Equation (1):

$$\mathcal{L}(\Theta,\Phi) = -\sum_{c=1}^{C}\Big[y^{(c)}\log \hat{y}^{(c)} + \big(1-y^{(c)}\big)\log\big(1-\hat{y}^{(c)}\big)\Big], \qquad \hat{y} = f_{\mathrm{CNN}}(I;\Theta,\Phi) \quad (1)$$

where f_CNN(·) represents the nonlinear mapping process of the whole CNN network, I indicates an RS image, y^(c) indicates the ground-truth binary label of class c, and C is the number of categories. The process of feature extraction can be represented by Equation (2):

$$M = f_{\mathrm{FR}}(I;\Theta) \quad (2)$$

where f_FR(·) represents the feature representation process of the trained CNN, and M indicates the deep feature maps of image I. Note that the CNN can also be trained from scratch using the RS image dataset. However, considering that the size of the experimental dataset is small, we choose to fine-tune the weights of the deep convolutional layers so that the model converges quickly.
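As an illustration of the feature-extraction step f_FR(·), the sketch below pulls intermediate feature maps from a pretrained VGG16 with tf.keras. The layer names follow the "block4_conv3"/"block5_conv3" choice stated later in the experimental settings; the random input image and the omission of fine-tuning and input preprocessing are simplifications for the sketch, not the authors' pipeline.

# A minimal sketch of f_FR(.): deep feature maps from intermediate VGG16
# layers, upsampled to the image size and concatenated (512 + 512 = 1024-dim).
import numpy as np
import tensorflow as tf

base = tf.keras.applications.VGG16(weights="imagenet", include_top=False)
feature_extractor = tf.keras.Model(
    inputs=base.input,
    outputs=[base.get_layer("block4_conv3").output,
             base.get_layer("block5_conv3").output])

image = np.random.rand(1, 256, 256, 3).astype("float32")  # one UCM-sized image
m4, m5 = feature_extractor(image)
m4 = tf.image.resize(m4, (256, 256))   # upsample both maps to the image size
m5 = tf.image.resize(m5, (256, 256))
M = tf.concat([m4, m5], axis=-1)       # deep feature maps M: (1, 256, 256, 1024)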
Constructing Scene Graph
We construct the scene graph for each image to map the image into graph-structured data. Graph-structured data are mainly composed of the node feature matrix X ∈ R^{N×D} and the adjacency matrix A ∈ R^{N×N}, where N is the number of nodes and D is the dimension of the features. In our framework, X is constructed based on appearance features from the CNN, and A is constructed according to the topological structure of the superpixel regions.
We use the simple linear iterative clustering (SLIC) superpixel algorithm [57] to segment the image and obtain N nonoverlapping regions to represent the nodes of the graph. SLIC is an unsupervised image segmentation method that uses k-means to locally cluster image pixels, generating compact and nearly uniform superpixels. Because a superpixel consists of homogeneous pixels, it can be regarded as an approximate representation of a local visual element. We apply the high-level appearance features as the initial node features to construct X. Specifically, we combine the deep feature maps M and the segmentation results by upsampling M to the size of the original image. To capture the main visual features, we take the maximum value of each feature-map channel within each superpixel region boundary as the corresponding node feature. Repeating this extraction for each channel of M yields multidimensional node features.
We construct A considering the proximity and similarity between superpixel regions. We measure the spatial proximity of nodes by the adjacency of the superpixel regions and quantify the similarity of nodes by calculating the distance between superpixel regions in the color space, which agrees with human perception. In addition, we use a threshold on the color distance to filter noisy links. When regions i and j have a common boundary, the adjacency value A_ij is defined by Equation (3):

$$A_{ij} = \begin{cases} 1, & \lVert v_i - v_j \rVert \le t \\ 0, & \text{otherwise} \end{cases} \quad (3)$$

where v_i and v_j represent the mean values of regions i and j in the HSV color space, and the threshold t is empirically set to 0.2 according to the common color aberration of different categories. Note that A is a symmetric binary matrix with self-loops that defines whether nodes are connected. The specific adjacency weights are adaptively learned in the GNN module to represent the relationships among nodes. The detailed process of constructing the scene graph is shown in Algorithm 1.
Learning GNN to Mine Spatio-Topological Relationship
Benefiting from the mechanism of node message passing, the GNN can integrate the spatio-topological structure into node feature learning. Thus, we treat the MLRSSC task as the graph classification task to mine the spatial relationships of the scene graph via the GNN. For graph classification, the GNN is composed of graph convolution layers, graph pooling layers and fully connected layers. Specifically, we adopt the GAT model [40] as the backbone of the graph convolution layer and design the multi-layer-integration GAT structure to better learn the complex spatial relationship and topological representations of the graph.
Algorithm 1 Algorithm to construct the scene graph of an RS image
Input: RS image I.
Output: Node feature matrix X and adjacency matrix A.
1: for each I do
2:   Extract deep feature maps M from image I;
3:   Segment I into N superpixel regions R;
4:   for each r ∈ R do
5:     Obtain the max values of M according to the boundary of r in D channels, and update the vector X_r ∈ R^D of the matrix X;
6:     Calculate the mean value v_r of r in the HSV color space;
7:     Obtain the adjacent regions list R' of r;
8:   end for
9:   for each r ∈ R do
10:    A_rr = 1;
11:    Calculate the color distance ||v_r − v_r'|| between r and each r' ∈ R';
12:    if ||v_r − v_r'|| ≤ t then
13:      A_rr' = 1;
14:    end if
15:  end for
16: end for
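The following Python sketch is one possible reading of Algorithm 1, assuming scikit-image's SLIC implementation, max-pooling of an already upsampled feature map M per region, and the HSV color-distance threshold t = 0.2; parameter choices such as compactness=10 are illustrative, not taken from the paper.

# A sketch of Algorithm 1: SLIC superpixels as nodes, max-pooled deep features
# as node attributes, adjacency from shared boundaries plus an HSV threshold.
import numpy as np
from skimage.segmentation import slic
from skimage.color import rgb2hsv

def build_scene_graph(image, M, n_segments=70, t=0.2):
    """image: (H, W, 3) float RGB in [0, 1]; M: (H, W, D) upsampled features."""
    labels = slic(image, n_segments=n_segments, compactness=10, start_label=0)
    n = labels.max() + 1
    hsv = rgb2hsv(image)
    X = np.zeros((n, M.shape[-1]))
    v = np.zeros((n, 3))
    for r in range(n):
        mask = labels == r
        X[r] = M[mask].max(axis=0)      # max value per channel inside region r
        v[r] = hsv[mask].mean(axis=0)   # mean HSV color of region r
    A = np.eye(n)                        # self-loops
    # Regions sharing a horizontal or vertical pixel border are candidates.
    for a, b in zip(labels[:, :-1].ravel(), labels[:, 1:].ravel()):
        if a != b and np.linalg.norm(v[a] - v[b]) <= t:
            A[a, b] = A[b, a] = 1
    for a, b in zip(labels[:-1, :].ravel(), labels[1:, :].ravel()):
        if a != b and np.linalg.norm(v[a] - v[b]) <= t:
            A[a, b] = A[b, a] = 1
    return X, A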
Graph Attention Convolution Layer
We construct the graph convolution layer following the GAT model to constantly update the node features and adjacency weights. With the attention mechanism, the adjacency weights are adaptively learned according to the node features, which can represent the complex relationships among nodes. Considering X_i ∈ R^D as the features of node i, the attention weight e_ij between node i and node j is calculated with a learnable linear transformation, which can be represented by Equation (4):

$$e_{ij} = H^{\top}\big[\,W X_i \,\Vert\, W X_j\,\big] \quad (4)$$

where ∥ is the concatenation operation, W ∈ R^{D′×D} and H ∈ R^{2D′} are the learnable parameters, and D′ indicates the dimension of the output features. The topological structure is injected into the mechanism by a mask operation: only the e_ij for nodes j ∈ η_i are employed in the network, where η_i is the neighborhood of node i, generated according to A. Subsequently, e is nonlinearly activated via the LeakyReLU function and normalized by Equation (5):

$$\alpha_{ij} = \frac{\exp\big(\mathrm{LeakyReLU}(e_{ij})\big)}{\sum_{k\in\eta_i}\exp\big(\mathrm{LeakyReLU}(e_{ik})\big)} \quad (5)$$

We can fuse information from the graph topological structures and node features by matrix multiplication between α and X. In addition, we adopt multi-head attention to stabilize the learning process. Considering X_in ∈ R^{N×D} as the input node features, the output node features X_GAT ∈ R^{N×KD′} of a graph attention convolution layer can be computed by Equation (6):

$$X_{\mathrm{GAT}} = \big\Vert_{k=1}^{K}\, \alpha^{(k)} X_{\mathrm{in}} W^{(k)\top} \quad (6)$$

where ∥ represents the concatenation operation, α^(k) is the normalized attention matrix of the k-th attention mechanism, and W^(k) is the corresponding weight matrix. Equation (6) represents the concatenation of the output node features from K independent attention mechanisms.
To synthesize the advantage of each graph attention convolution layer and obtain a comprehensive representation of the graph, we design the multi-layer-integration GAT structure shown in Figure 1. After multiple graph attention convolution layers, the hierarchical features of the same node are summarized as the new node features X_mGAT, which can be computed by Equation (7):

$$X_{\mathrm{mGAT}} = \sum_{l=1}^{L} X_{\mathrm{GAT}}^{(l)} \quad (7)$$

where X_GAT^(l) represents the output node features of the l-th graph attention convolution layer, and L is the total number of graph attention convolution layers.
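A minimal numpy sketch of Equations (4)-(7) for a single attention head is given below; multi-head concatenation (Equation (6)) is omitted for brevity, the summation in Equation (7) follows the reconstruction above, and all weights are random stand-ins for learned parameters.

# A numpy sketch of a single-head GAT layer (Eqs. (4)-(5)) stacked L times and
# integrated by summation (Eq. (7)). LeakyReLU slope 0.2 follows the GAT paper.
import numpy as np

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def gat_layer(X, A, W, H):
    """X: (N, D) node features; A: (N, N) binary adjacency with self-loops."""
    Z = X @ W.T                                   # (N, D') transformed features
    N = X.shape[0]
    e = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            e[i, j] = H @ np.concatenate([Z[i], Z[j]])   # Eq. (4)
    e = leaky_relu(e)
    e = np.where(A > 0, e, -np.inf)               # mask: only neighbors j in eta_i
    alpha = np.exp(e) / np.exp(e).sum(axis=1, keepdims=True)   # Eq. (5)
    return alpha @ Z                              # fuse topology and features

rng = np.random.default_rng(0)
N, D, Dp, L = 6, 8, 4, 2
X = rng.normal(size=(N, D))
A = np.eye(N); A[0, 1] = A[1, 0] = A[1, 2] = A[2, 1] = 1
layer_outputs = []
for _ in range(L):
    W = rng.normal(size=(Dp, X.shape[1])); H = rng.normal(size=2 * Dp)
    X = gat_layer(X, A, W, H)
    layer_outputs.append(X)
X_mGAT = np.sum(layer_outputs, axis=0)            # multi-layer integration, Eq. (7)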
Graph Pooling Layer
For graph classification, we use a graph pooling layer to convert a graph of any size into a fixed-size output. Specifically, we adopt the differentiable pooling proposed in [58] to construct the graph pooling layer. The idea of differentiable pooling is to transform the original graph into a coarsened graph by learning an embedding (assignment) matrix. Considering X_in ∈ R^{N×D} as the input node features and N′ as the new number of nodes, the embedding matrix S ∈ R^{N×N′} can be learned by Equation (8):

$$S = \mathrm{softmax}\big(X_{\mathrm{in}} W_{\mathrm{emb}} + b_{\mathrm{emb}}\big) \quad (8)$$

where W_emb ∈ R^{D×N′} represents the learnable weight and b_emb is the bias; the softmax function is applied row-wise. The output node feature matrix X_GP ∈ R^{N′×D} of a graph pooling layer can be calculated by Equation (9):

$$X_{\mathrm{GP}} = S^{\top} X_{\mathrm{in}} \quad (9)$$

Because the graph pooling operation is learnable, the output graph is an optimized result that represents the reduced-dimension input graph.
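The pooling step can be sketched in a few lines under the reconstruction above: a learned linear map plus a row-wise softmax produces the assignment matrix S, which coarsens the graph to N′ = 32 nodes as in the experimental settings. The weights below are random stand-ins.

# A sketch of the graph pooling step (Eqs. (8)-(9)).
import numpy as np

def graph_pool(X_in, W_emb, b_emb):
    S = X_in @ W_emb + b_emb                       # (N, N') assignment logits
    S = np.exp(S - S.max(axis=1, keepdims=True))
    S = S / S.sum(axis=1, keepdims=True)           # row-wise softmax, Eq. (8)
    return S.T @ X_in                              # coarsened features, Eq. (9)

rng = np.random.default_rng(1)
N, D, N_out = 70, 512, 32                          # 32 nodes as in Section 4
X_GP = graph_pool(rng.normal(size=(N, D)),
                  rng.normal(size=(D, N_out)), np.zeros(N_out))
print(X_GP.shape)                                  # (32, 512)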
Classification Layer
After graph pooling, we flatten the node feature matrix to obtain a finite-dimensional vector that represents the global representation of the graph. Taking X_in as the input node features, the flatten operation can be represented by Equation (10):

$$x = \mathrm{flatten}(X_{\mathrm{in}}) \quad (10)$$

where x is a feature vector. At the end of the network, we add fully connected layers followed by the sigmoid activation function as the classifier to complete the graph classification. The classification probability output ŷ of the last fully connected layer can be computed by Equation (11):

$$\hat{y} = \sigma\big(W_{fc}\, x + b_{fc}\big) \quad (11)$$

where σ(·) is the sigmoid function, W_fc represents the learnable weight, and b_fc is the bias. Furthermore, we apply the binary cross-entropy as the loss function, which can be defined by Equation (12):

$$\mathcal{L}(\Lambda) = -\sum_{c=1}^{C}\Big[y^{(c)}\log \hat{y}^{(c)} + \big(1-y^{(c)}\big)\log\big(1-\hat{y}^{(c)}\big)\Big] \quad (12)$$

where Λ represents the parameters of the whole GNN network and y^(c) indicates the ground-truth binary label of class c. Via back-propagation, Λ can be optimized based on the gradient of the loss. Thus, we can use the GNN to complete the multi-label classification in an end-to-end manner. The training process of the whole MLRSSC-CNN-GNN framework is shown in Algorithm 2.
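A small sketch of the classification head (Equations (10)-(12)) follows, with stand-in weights; the epsilon term is a numerical safeguard added for the sketch, not part of Equation (12).

# Flatten, fully connected layer with sigmoid, and binary cross-entropy loss.
import numpy as np

def classify(X_GP, W_fc, b_fc):
    x = X_GP.reshape(-1)                      # Eq. (10): flatten to a vector
    return 1.0 / (1.0 + np.exp(-(W_fc @ x + b_fc)))   # Eq. (11): sigmoid output

def bce_loss(y_hat, y):
    eps = 1e-12                               # numerical safety, not in Eq. (12)
    return -np.sum(y * np.log(y_hat + eps) + (1 - y) * np.log(1 - y_hat + eps))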
Experiments
In this section, the data description is presented at first. Afterwards, the evaluation metrics and details of the experimental setting are shown. The experimental results and analysis are given at the end.
Algorithm 2 Training process of the proposed MLRSSC-CNN-GNN framework
Input: RS images I and ground-truth multi-labels y in the training set.
Output: Model parameters Θ and Λ.
Step 1: Learning CNN
1: Take I and y as input, and train the CNN to optimize Θ according to Equation (1);
2: Extract deep feature maps M of I according to Equation (2);
Step 2: Constructing scene graph
3: Construct the node feature matrix X and adjacency matrix A of I according to Algorithm 1;
Step 3: Learning GNN
4: for iter = 1, 2, . . . do
5:   Initialize the parameters Λ of the network in the first iteration;
6:   Update X using L graph attention convolution layers according to Equations (4)-(6);
7:   Fuse X_GAT from the L graph attention convolution layers according to Equation (7);
8:   Convert X_mGAT to a fixed-size output via the graph pooling layer according to Equations (8)-(9);
9:   Flatten X_GP and generate the classification probability after the classification layer according to Equations (10)-(11);
10:  Calculate the loss based on the output ŷ of the network and y according to Equation (12);
11:  Update Λ by back-propagation;
12: end for
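Composing the sketches above yields a hypothetical forward pass mirroring the data flow of Algorithm 2; extract_feature_maps is an undefined stand-in for the CNN step, parameter initialization and the back-propagation update are elided, and none of this is the authors' code.

# Hypothetical end-to-end forward pass composed from the earlier sketches.
import numpy as np

def forward_pass(image, params):
    M = extract_feature_maps(image)            # Step 1: f_FR, cf. Eq. (2)
    X, A = build_scene_graph(image, M)         # Step 2: Algorithm 1
    layer_outputs = []
    for W, H in params["gat_layers"]:          # Step 3: Eqs. (4)-(6)
        X = gat_layer(X, A, W, H)
        layer_outputs.append(X)
    X_mGAT = np.sum(layer_outputs, axis=0)     # Eq. (7)
    X_GP = graph_pool(X_mGAT, params["W_emb"], params["b_emb"])   # Eqs. (8)-(9)
    return classify(X_GP, params["W_fc"], params["b_fc"])         # Eqs. (10)-(11)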
Dataset Description
We perform experiments on the UCM multi-label dataset and the AID multi-label dataset, which are described here. The UCM multi-label dataset contains 2100 RS images with 0.3 m/pixel spatial resolution, and the image size is 256 × 256 pixels. For MLRSSC, the dataset is divided into the following 17 categories based on the DLRSD dataset [59]: airplane, bare soil, buildings, cars, chaparral, court, dock, field, grass, mobile home, pavement, sand, sea, ship, tanks, trees, and water. Some example images and their labels are shown in Figure 2. The AID multi-label dataset [48] contains 3000 RS images from the AID dataset [60]. For MLRSSC, the dataset is assigned 17 categories, which are the same as those in the UCM multi-label dataset. The spatial resolutions of the images vary from 0.5 m/pixel to 0.8 m/pixel, and the size of each image is 600 × 600 pixels. Some example images and their labels are shown in Figure 3.
Figure 3. Samples in the AID multi-label dataset.
Evaluation Metrics
We calculate Precision, Recall, F1-Score and F2-Score to evaluate the multi-label classification performance [61]. The evaluation indicators are computed based on the number of true positives (TP), false positives (FP), true negatives (TN) and false negatives (FN) in an example (i.e., an image with multiple labels). The evaluation indicators can be calculated using Equations (13) and (14):

$$\mathrm{Precision} = \frac{TP}{TP+FP}, \qquad \mathrm{Recall} = \frac{TP}{TP+FN} \quad (13)$$

$$F_{\beta}\text{-}\mathrm{Score} = \big(1+\beta^{2}\big)\,\frac{\mathrm{Precision}\cdot\mathrm{Recall}}{\beta^{2}\cdot\mathrm{Precision} + \mathrm{Recall}} \quad (14)$$

Note that all the evaluation indicators are example-based indices that are formed by averaging the scores of each individual sample [62]. Generally, F1-Score and F2-Score are relatively more important for performance evaluation.
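Since the indicators are example-based, a direct implementation averages per-sample scores; the sketch below follows the reconstructed Equations (13)-(14), with small guards against empty label sets added for the sketch.

# Example-based multi-label metrics: per-sample scores averaged over examples.
import numpy as np

def example_based_scores(y_true, y_pred, beta=1.0):
    tp = np.sum(y_true * y_pred, axis=1).astype(float)
    precision = tp / np.maximum(np.sum(y_pred, axis=1), 1)   # guard: no preds
    recall = tp / np.maximum(np.sum(y_true, axis=1), 1)      # guard: no labels
    fbeta = ((1 + beta**2) * precision * recall
             / np.maximum(beta**2 * precision + recall, 1e-12))
    return precision.mean(), recall.mean(), fbeta.mean()

y_true = np.array([[1, 0, 1], [0, 1, 1]])
y_pred = np.array([[1, 0, 0], [0, 1, 1]])
print(example_based_scores(y_true, y_pred, beta=2.0))  # F2 weights recall higher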
Experimental Settings
In our experiments, we adopt VGG16 [63] as the CNN backbone. The network is initialized with the weights trained on ImageNet [64], and we fine-tune it with the experimental datasets. In addition, we use fusion features by combining feature maps from the "block4_conv3" and "block5_conv3" layers in VGG16 as the node features of the scene graph. Thus, the total dimension of the initial node features is 1024.
Our recommended GNN architecture contains two graph attention convolution layers with the output dimensions of 512 and multi-head attention with K = 3. The multi-layer-integration GAT structure is applied to construct the graph attention convolution layers. Subsequently, we set up one graph pooling layer that fixes the size of the graph to 32 nodes and two fully connected layers with the output dimensions of 256 and 17 (number of categories). Moreover, the dropout layer is set in the middle of each layer, and batch normalization is employed for all layers but the last layer. The network is trained with the Adagrad optimizer [65], and the learning rate is initially set to 0.01, which decays during the training process.
To pursue a fair comparison, the UCM and AID multi-label datasets are split into 72% for training, 8% for validation and 20% for testing, following the partition in [48]. Note that instead of a random division, this partition is pre-set, and the training and testing samples have obvious style differences; it is therefore more challenging for the classification methods. In the training phase, we only use the training images and their ground-truth labels to train the CNN and the GNN. Specifically, we learn the CNN to extract deep feature maps of the images and then construct a scene graph for each image, which is the input of the GNN. In the testing phase, the testing images are fed into the trained CNN and GNN models to predict multi-labels.
Comparison with the State-of-the-Art Methods
We compare our proposed methods with several recent methods, including the standard CNN [63], CNN-RBFNN [33], CA-CNN-BiLSTM [34] and AL-RN-CNN [48]. For a fair comparison, all compared methods adopt the same VGG16 structure as the CNN backbone. We implement the standard CNN method as the baseline of MLRSSC and report the mean and standard deviation [66] of the evaluation results. Because the other methods adopt the same dataset partition, we take the evaluation results reported in their corresponding publications as the comparison reference in this paper. It is noted that the existing methods do not report the standard deviations of their evaluation results; as they do not release their source code, it is difficult to recover these values. However, we find that the variance across repeated experiments is very small, which helps to show the superiority of our proposed method. For the proposed methods, we report results for the MLRSSC-CNN-GNN via the standard GAT and the MLRSSC-CNN-GNN via the multi-layer-integration GAT, respectively.
Results on the UCM Multi-Label Dataset
The quantitative results on the UCM multi-label dataset are shown in Table 1. We can observe that our proposed MLRSSC-CNN-GNN via the multi-layer-integration GAT achieves the highest scores for Recall, F1-Score and F2-Score. In general, the proposed method achieves the best performance; even its lower bound exceeds the performances of the existing methods. We can also observe that our methods with the GNN show significant improvement compared with the method that only uses the CNN. Compared with the standard CNN, the proposed method gains an improvement of 7.4% for F1-Score and an improvement of 7.09% for F2-Score, which demonstrates that learning the spatial relationships of visual elements via the GNN plays an important role in advancing the classification performance. Moreover, the MLRSSC-CNN-GNN via the multi-layer-integration GAT performs better than the MLRSSC-CNN-GNN via the standard GAT, which shows the effectiveness of the proposed multi-layer-integration GAT. Some samples of the predicted results on the UCM multi-label dataset are exhibited in Figure 4. It can be seen that the proposed method successfully captures the main categories of the scene. However, our method is still insufficient in the details, such as the prediction of cars, grass, and bare soil, which may be inconsistent with the ground truths.

Results on the AID Multi-Label Dataset

Table 2 shows the experimental results on the AID multi-label dataset. We can also observe that our proposed MLRSSC-CNN-GNN via the multi-layer-integration GAT achieves the best performance with the highest scores of Recall, F1-Score and F2-Score. Compared to the standard CNN, the proposed method increases F1-Score and F2-Score by 3.33% and 3.82%, respectively. Compared to AL-RN-CNN, the proposed method gains an improvement of 0.55% for F1-Score and an improvement of 0.87% for F2-Score. Compared to the MLRSSC-CNN-GNN via the GAT, the proposed method gains an improvement of 0.32% for F1-Score and an improvement of 0.52% for F2-Score.
Table 2. Performances of different methods on the AID multi-label dataset (%). (Columns: Precision, Recall, F1-Score, F2-Score; only a fragment of the table is recoverable: the CNN [63] row begins 87.62 ± 0.14, 86.13; the remaining entries are not recovered.)

Some samples of the predicted results on the AID multi-label dataset are exhibited in Figure 5. Consistent with the results on the UCM multi-label dataset, our method successfully captures the main categories of the scene. The superior performances on both the UCM and AID multi-label datasets show the robustness and effectiveness of our method.
Discussion
In this section, we analyze the influence of some important factors in the proposed framework, including the number of superpixel regions in the scene graph, the value K of multi-head attention in the GNN, and the depth of the GNN.
Effect on the Number of Superpixel Regions
When constructing the scene graph, the number of superpixel regions N is a vital parameter that determines the scale and granularity of the initial graph. Therefore, it is necessary to set an appropriate N. Considering the tradeoff between efficiency and performance, we set the step size of N to 20 and study the effects of N by varying it from 30 to 110. The results on the UCM and AID multi-label datasets are shown in Figure 6. It can be seen that when N is set between 50 and 90, our model achieves better performance.
Sensitivity Analysis of the Multi-Head Attention
In the graph attention convolution layer of the GNN, we adopt multi-head attention to stabilize the learning process. However, a larger value of K in multi-head attention increases the number of parameters and the computation of the model. Thus, we study the effects of K by setting it to values from 1 to 5. The experimental results on the UCM and AID multi-label datasets are shown in Figure 7. Clearly, the use of multi-head attention can improve the classification performance because it can learn more abundant feature representations. It can be seen that when the value of K reaches 3, the performance of the model begins to saturate. However, when the value of K continues to increase, the model may face an overfitting problem.
Discussion on the Depth of GNN
The graph attention convolution layer in the GNN is the key part to learning the classification features of the graph. To explore the performance of the GNN in our framework, we build the GNN with a different number of graph attention convolution layers. Figure 8 shows the performance of our MLRSSC-CNN-GNN with one, two, and three graph attention convolution layers. The output dimensions of these layers are 512, and the remaining structures in GNN are the same. It can be seen that the MLRSSC-CNN-GNN with two graph attention convolution layers achieves the best performance with the highest F1-Score and F2-Score. However, when the number of graph attention convolution layers reaches three, both the F1-Score and F2-Score begin to drop. The possible reason for the performance drop of the deep GNN may be that the node features are oversmoothed when a larger number of graph attention convolution layers are utilized.
Conclusions
MLRSSC remains a challenging task because it is difficult to learn the discriminative semantic representations needed to distinguish multiple categories. Although many deep learning-based methods have been proposed to address MLRSSC and have achieved a certain degree of success, the existing methods are limited by only perceiving visual elements in the scene while disregarding the spatial relationships of visual elements. With this consideration, this paper proposes a novel MLRSSC-CNN-GNN framework to address MLRSSC. Different from the existing methods, the proposed method can comprehensively utilize the visual and spatial information in the scene by combining the CNN and the GNN. Specifically, we encode the visual content and spatial structure of the RS image scene by constructing a scene graph. The CNN and the GNN are used to mine the appearance features and spatio-topological relationships, respectively. In addition, we design the multi-layer-integration GAT model to further mine the topological representations of the scene graph for classification. The proposed framework is verified on two public MLRSSC datasets. As the experimental results show, the proposed method can improve both the F1-Score and F2-Score by more than 3%, which demonstrates the importance of learning spatio-topological relationships via the GNN. Moreover, the proposed method obtains superior performance compared with the state-of-the-art methods. As a general framework, the proposed MLRSSC-CNN-GNN is highly flexible: it can be easily and dynamically enhanced by replacing the corresponding modules with advanced algorithms. In future work, we will consider the adoption of more advanced CNN and GNN models to explore the potential of our framework. Moreover, our proposed method does not explicitly model label dependency, which is also important in MLRSSC; we will focus on integrating this consideration into our method to further improve the performance.
Conflicts of Interest:
The authors declare no conflict of interest.
"Computer Science"
] |
Cited references and Medical Subject Headings (MeSH) as two different knowledge representations: clustering and mappings at the paper level
For the biomedical sciences, the Medical Subject Headings (MeSH) make available a rich feature which cannot currently be merged properly with widely used citing/cited data. Here, we provide methods and routines that make MeSH terms amenable to broader usage in the study of science indicators: using Web-of-Science (WoS) data, one can generate the matrix of citing versus cited documents; using PubMed/MEDLINE data, a matrix of the citing documents versus MeSH terms can be generated analogously. The two matrices can also be reorganized into a 2-mode matrix of MeSH terms versus cited references. Using the abbreviated journal names in the references, one can, for example, address the question whether MeSH terms can be used as an alternative to WoS Subject Categories for the purpose of normalizing citation data. We explore the applicability of the routines in the case of a research program about the amyloid cascade hypothesis in Alzheimer’s disease. One conclusion is that referenced journals provide archival structures, whereas MeSH terms indicate mainly variation (including novelty) at the research front. Furthermore, we explore the option of using the citing/cited matrix for main-path analysis as a by-product of the software.
Introduction
The ability to define research fields is one of several great challenges in information science (Chen, 2016). Early efforts relied on classifying publication sources, such as journals, to define research fields. In addition to disciplinary journals, however, the literature databases Web of Science (WoS, Thomson Reuters) and Scopus (Elsevier) contain multi-disciplinary journals such as Science and Nature. In recent years, new journals which are not organized along disciplinary lines have been added to the databases. PLoS ONE, for example, tends to disturb the existing classifications of journals (Leydesdorff & De Nooy, in press). In response to these changes, bibliometricians have begun to cluster the database at the level of documents instead of journals (e.g., Waltman & van Eck, 2012; cf. Hutchins, Yuan, Anderson, & Santangelo, 2016).
An alternative to clustering documents on the basis of direct citations could be to use databases that are more specialized than WoS and Scopus, but with professional indexing at the document level. The National Library of Medicine, for example, makes a huge investment to maintain a classification system of Medical Subject Headings (MeSH) as tags to the PubMed/MEDLINE database (which is publicly available at http://www.ncbi.nlm.nih.gov/pubmed/advanced). The classification at the article level is elaborated in great detail (Agarwal & Searls, 2009), with a hierarchical tree covering sixteen separate branches that can reach up to twelve levels of depth.
Diseases, for example, are classified under C. The National Library of Medicine of the United States (NLM) has constantly received substantial funding to maintain and update its biomedical and health information services; for example, the 2015 budget for these services was $117 million (National Library of Medicine, 2015). This has enabled a relatively uniform application of the MeSH classification to publications by indexers over many years (Hicks & Wang, 2011, at p. 292; Petersen et al., 2016). "Alzheimer's disease" (AD), for example, is classified as C10.228.140.380.100 under "Dementia," as C10.574.945.249 under "Neurodegenerative diseases," and as F03.615.400.100 under "Neurocognitive disorders" in the F-branch covering "Psychiatry and psychology." Unlike other disciplinarily specialized databases such as Chemical Abstracts (Bornmann et al., 2009), the multiple tree-structure of the Index Medicus allows for mapping documents differently across heterogeneous domains (Leydesdorff, Rotolo, & Rafols, 2012; Rotolo, Rafols, Hopkins, & Leydesdorff, 2016). Unlike WoS or Scopus, MEDLINE does not cover the full range of disciplines, but a large part of the scholarly literature in the life sciences is included even more exhaustively than in the more comprehensive databases (Lundberg et al., 2006).
A version of MEDLINE is integrated in the databases of Thomson Reuters. The advantage of this installation is that the "times cited" of each record (if the document is also available in the WoS Core Collection of the Citation Indices) is available on screen; but this field is not integrated when the records are downloaded. Rotolo & Leydesdorff (2015) provide software for integrating the "times cited" from the citation indices at WoS into the MEDLINE data. One technical advantage of the installation at PubMed is that retrieval is not restricted: using WoS, one can download only 500 records at a time, and Scopus has a maximum of 2,000 records.
The MeSH terms attributed to a paper can be considered as references to a body of knowledge stored as documents in a database. Whereas the cited references are provided by the authors themselves, the MeSH categories are attributed by professional indexers. Using MeSH terms as references, one can envisage a matrix of documents referencing MeSH comparable to the cited/citing matrix at the article level. Both cited references and MeSH terms can be considered as attributes of articles, and thus be combined and compared using various forms of multivariate analysis. The two matrices can also be integrated into a 2-mode matrix of MeSH terms versus cited references. In this brief communication, we explore these options computationally and describe software that has been developed and made available for this purpose on the internet.
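As a toy illustration of these matrices (with invented documents, references, and MeSH terms), the 2-mode matrix of MeSH terms versus cited references can be obtained from the two document-level matrices by multiplication over the shared citing documents; this co-occurrence construction is one plausible reading of the integration described above.

# Toy citing-documents x cited-references and citing-documents x MeSH matrices,
# combined into a 2-mode MeSH-by-references matrix via the shared documents.
import numpy as np

docs = ["doc1", "doc2", "doc3"]
refs = ["HARDY J, 1992, SCIENCE", "SELKOE DJ, 1991, NEURON"]
mesh = ["Alzheimer Disease", "Amyloid beta-Peptides", "Mice, Transgenic"]

C = np.array([[1, 1],          # documents (rows) x cited references (columns)
              [1, 0],
              [0, 1]])
M = np.array([[1, 1, 0],       # documents (rows) x MeSH terms (columns)
              [1, 0, 1],
              [0, 1, 1]])

mesh_by_refs = M.T @ C         # co-occurrence over shared citing documents
print(mesh_by_refs.shape)      # (3 MeSH terms, 2 cited references)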
We discuss the opportunities and the pros and cons of various approaches.
Data
At the professional suggestion of one of us (AS, the scientometrics editor of the Journal of Alzheimer's Disease), we selected the amyloid cascade hypothesis in Alzheimer's disease (AD) as a test case to develop software and routines to merge and analyze citation information from the Web of Science and MeSH. The amyloid cascade hypothesis in AD was formulated by Hardy and Allsop in 1991 (cf. Hardy and Higgins, 1992; Selkoe, 1991). Reitz (2012: 1) summarized this hypothesis as follows: "Since 1992, the amyloid cascade hypothesis has played a prominent role in explaining the etiology and pathogenesis of Alzheimer's disease (AD). It proposes that the deposition of β-amyloid (Aβ) is the initial pathological event in AD leading to the formation of senile plaques (SPs) and then to neurofibrillary tangles (NFTs), neuronal cell death, and ultimately dementia.
While there is substantial evidence supporting the hypothesis, there are also limitations: (1) SP and NFT may develop independently, and (2) SPs and NFTs may be the products rather than the causes of neurodegeneration in AD. In addition, randomized clinical trials that tested drugs or antibodies targeting components of the amyloid pathway have been inconclusive." For the purpose of this study, the search string '(("Alzheimer disease"[MeSH Terms] AND "amyloid beta-protein precursor"[MeSH Terms]) AND "mice, transgenic"[MeSH Terms])' was proposed to encompass the relevant literature. This string provided us (on March 6, 2016) with a retrieval of 3,558 records in both PubMed/MEDLINE and the MEDLINE version in WoS. Using PubMed Identifiers (PMID numbers), 3,416 of these records could be retrieved in the WoS Core Collection. As noted, not all journals covered by PubMed/MEDLINE are also covered in the WoS Core Collection.
Methods
Two dedicated programs, MHNetw.exe and CitNetw.exe, have been developed to generate reference matrices using the PubMed/MEDLINE and the WoS data, respectively. The matrices are provided in the Pajek format. CitNetw.exe generates the cited/citing matrix with the citing documents as units of analysis in the rows and the cited references as variables in the columns; MHNetw.exe generates a similar matrix, but with the MeSH in the columns. The number of citing documents is determined by the retrieval from PubMed/MEDLINE or MEDLINE in WoS, respectively. Instructions for how to use the databases and routines are provided in Appendix I.
The routine MHNetw.exe presumes that the data from WoS with the citation information is already organized (by CitNetw.exe) in the same folder so that the citation information can be retrieved locally and attributed to the MeSH categories. If this data is not yet present, the user is first prompted with a search string in the file "string.wos" that can be used at the advanced search interface of WoS. Both MHNetw.exe and CitNetw.exe provide the following files: 1. "Mtrx.net" contains the reference matrix in the Pajek format; the Pajek format allows for virtually unlimited file sizes.
2. The SPSS syntax file "mtrx.sps" reads the reference matrix ("mtrx.txt") into SPSS and saves this file as an SPSS systems file ("mtrx.sav"). MeSH terms are included as variable labels in the case of MHNetw.exe; in the case of CitNetw.exe, the cited references are the variable labels. The user can combine the two matrices using, for example, Excel.
MHNetw.exe additionally provides: a) Cr_mh.net, which contains the 2-mode matrix of cited references (CR) in the rows and MeSH terms in the columns; b) Jcr_mh.net, which simplifies cr_mh.net by using only the abbreviated journal names in the cited references in the rows and MeSH terms in the columns; c) the file jcr_mh_a.net, which contains the same information (abbreviated journal names and MeSH categories), but organized differently: both CR and MeSH are attributed as variables to the documents under study as the cases (in the rows). Within Pajek, one can convert this matrix into an affiliations matrix (using Network > 2-Mode Network > 2-Mode to 1-Mode > Columns). One can also export this file (e.g., to SPSS) for cosine-normalization of the matrix.
CitNetw.exe, furthermore, provides a file "lcs.net" containing the cited/citing matrix for the bounded citation network of the citing documents under study. The bounded citation network corresponds with what was defined as the "local citation environment" in HistCite™ (Garfield, Pudovkin, & Istomin, 2003; Garfield, Sher, & Torpie, 1964). The cited references are matched against a string composed from the meta-data of the citing document using the standard WoS format of the cited references: "Name Initial, publication year, abbreviated journal title, volume number, and page number" (e.g., "Zhang CL, 2002, CLIN CANCER RES, V8, P1234"). The matrix may be somewhat different from the one obtained using HistCite™ because of different matching and disambiguation rules.
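The matching key can be reproduced directly from the example given above; the sketch below simply assembles the standard WoS cited-reference string from the meta-data fields.

# Build the WoS-style cited-reference string used for matching.
def wos_reference_string(name, initial, year, journal_abbrev, volume, page):
    return f"{name} {initial}, {year}, {journal_abbrev}, V{volume}, P{page}"

print(wos_reference_string("Zhang", "CL", 2002, "CLIN CANCER RES", 8, 1234))
# -> "Zhang CL, 2002, CLIN CANCER RES, V8, P1234"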
In order to proceed with main-path analysis in Pajek, the network has to be acyclic (de Nooy et al., 2011, pp. 244f.). If needed, one can make the network acyclic within Pajek by using the steps in the order specified in Table 1. The choice of "Main Path > Global Search > Standard", for example, leads to the extraction of the subnetwork with the main path; this subnetwork is selected as the active network. The main path can then be drawn and/or further analyzed.
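Outside Pajek, the core of main-path analysis can be sketched with networkx: the SPC (search path count) weight of each edge in an acyclic citation network is the number of source-to-sink paths traversing it, and a simple greedy walk then follows the heaviest edges. This approximates a local main path, not Pajek's key-route global search, and the toy graph is invented.

# SPC edge weights on a DAG, followed by a greedy heaviest-edge walk.
import networkx as nx

def spc_weights(G):
    order = list(nx.topological_sort(G))
    n_from_source = {v: 1 if G.in_degree(v) == 0 else 0 for v in G}
    for v in order:                                # paths from any source to v
        for u in G.predecessors(v):
            n_from_source[v] += n_from_source[u]
    n_to_sink = {v: 1 if G.out_degree(v) == 0 else 0 for v in G}
    for v in reversed(order):                      # paths from v to any sink
        for w in G.successors(v):
            n_to_sink[v] += n_to_sink[w]
    return {(u, v): n_from_source[u] * n_to_sink[v] for u, v in G.edges}

G = nx.DiGraph([("A", "B"), ("A", "C"), ("B", "D"), ("C", "D"), ("D", "E")])
spc = spc_weights(G)
node, path = "A", ["A"]
while G.out_degree(node) > 0:                      # greedy heaviest-edge walk
    node = max(G.successors(node), key=lambda w: spc[(node, w)])
    path.append(node)
print(path)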
Note that the cited references are not disambiguated by these routines, but are used as they appear on the input file. The user may wish to disambiguate the references before entering this routine; for example, by using CRExplorer.EXE at http://www.crexplorer.net (Thor, Marx, Leydesdorff, & Bornmann, 2016).
Descriptive
Figure 1 shows the number of documents in the set over time and the development of the ratio of citations per publication (c/p). As noted, the research program under study was triggered by a paper in 1992 (Hardy & Higgins, 1992). However, there are 11 papers in the set with publication dates in 1991 predating this formulation. In the first decade, the number of publications shows exponential growth, but over the full time span linear growth prevails. In other words, this line of research is no longer booming; since around 2000, it can be considered "normal science." The c/p ratio declines linearly with the subsequently shorter citation windows for more recent papers. However, the decline in this ratio may also indicate a diminishing attractiveness of this line of research (Hardy & Selkoe, 2002). The sharp decline in the number of publications in the most recent years confirms this inference (Selkoe & Hardy, 2016). Recently, Herrup (2015) concluded "that the time has come to face our fears and reject the amyloid cascade hypothesis," albeit at the moment without an alternative explanation of Alzheimer's disease. Although there are more unique references to journals than to MeSH, their concentration indicates that the red-colored journals form a backbone structure with the MeSH terms spreading out as variations. This is the dominant structure in this data: the journals provide a core structure and the MeSH terms the variation. The journals are more concentrated than the MeSH terms (Table 3): the Gini coefficient of the journal distribution is 0.937, while it is 0.852 for the distribution of MeSH. This map can be web-started at http://www.vosviewer.com/vosviewer.php?map=http://www.leydesdorff.net/software/mhnetw/jcr_mh_map.txt&label_size_variation=0.3&zoom_level=1&scale=0.9
Analysis and decomposition
Whereas multivariate analysis (e.g., factor analysis) is limited by systems and software limitations, the new decomposition algorithms enable us to decompose large and even very large matrices. The above matrix (Figure 2), for example, can robustly be decomposed into five clusters using the algorithm of Blondel et al. (2008); the modularity of the network is low (Q = 0.066). Figure 3, for example, shows the fourth component consisting of 598 cited journals versus 326 MeSH terms focusing on techniques such as neuro-imaging. This cluster can be further subdivided into nine components (Q = 0.375).
[Footnote 6] While "Mice, transgenic" and "Amyloid beta-Protein Precursor" were both part of the original search string, the search also retrieves records with MeSH subsumed under these categories: these are "Mice, knockout" (333 times) and "Amyloid beta-Peptides" (2,492 times), respectively. [Footnote 7] The decomposition algorithm of VOSviewer distinguishes more than one hundred clusters after symmetrizing the asymmetrical matrix internally by summing the cells (i,j) and (j,i).

[Figure caption, fragment] Decomposition using the algorithm of Blondel et al. (2008); layout using Fruchterman & Reingold (1991) and visualization in VOSviewer. This map can be web-started at http://www.vosviewer.com/vosviewer.php?map=http://www.leydesdorff.net/software/mhnetw/comp4map.txt&network=http://www.leydesdorff.net/software/mhnetw/comp4net.txt&label_size_variation=0.2&zoom_level=1&scale=1.20&colored_lines&n_lines=10000&curved_lines

The file jcr_mh_a.net organizes the same information as a matrix with the 3,558 documents under study as the cases and both the MeSH terms and abbreviated journal titles as variables in the columns. Using this file, one can normalize the variables or proceed to multivariate analysis.
After normalization using the Jaccard index (available in UCInet), the highly centralized structure has indeed disappeared. The resulting 1-mode similarity matrix can be decomposed into approximately 70 components by the algorithm of Blondel et al. (2008) and into 61 by the algorithm of VOSviewer (Waltman & van Eck, 2012). The modularity is an order of magnitude larger than in the previous case (Q = 0.577). After this normalization, however, journal names come even more to the fore on the map (Figure 3), indicating their structural role in this information. This suggests that the MeSH classification, which operates at the paper level, may be less suited for the normalization of citations than journals or journal categories, which can reveal archival structures.
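For binary attribute profiles, the Jaccard normalization used here reduces to intersection over union; a minimal sketch with invented profiles:

# Jaccard similarity of two binary attribute vectors (intersection / union).
import numpy as np

def jaccard(a, b):
    inter = np.sum((a > 0) & (b > 0))
    union = np.sum((a > 0) | (b > 0))
    return inter / union if union else 0.0

mesh_profile = np.array([1, 0, 1, 1, 0])     # documents tagged with a MeSH term
journal_profile = np.array([1, 1, 1, 0, 0])  # documents citing a journal
print(jaccard(mesh_profile, journal_profile))  # 2 / 4 = 0.5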
Main path analysis
As noted, CitNetw.exe also generates a file "lcs.net" containing the bounded network of the papers under study with "local citation scores" (Garfield et al., 2003). Using the instructions provided in Section 2, one can generate a main path using Pajek. Figure 5, for example, shows the so-called "key-route main path" as the most recommended option for this analysis (Liu & Lu, 2012). Forty of the 3,416 documents downloaded from WoS (or slightly more than 1%) are located on this main path. It is beyond the scope of this paper to compare these results with other options for main-path or critical-path analysis (Batagelj, 2003; Hummon & Doreian, 1989). A review of the various options is provided by Liu & Lu (2012), who suggest that a combination of the results of several algorithms into an integrated model can improve the quality of the main-path analysis (cf. Lucio-Arias & Leydesdorff, 2008). The resulting main path can be further analyzed as a Pajek file; for example, the colors in Figure 5 show the results of decomposition using the algorithm of Blondel et al. (2008). The generation of a main path of forty articles for a line of investigation encompassing approximately 3,500 papers is appealing due to the reduction by two orders of magnitude in the amount one would need to read to obtain an understanding of this subfield. However, a main path remains an algorithmic construct that one can use heuristically but that otherwise requires further interpretation. The main path (as depicted in Figure 5) includes one or more papers from twelve of these authors.
Conclusions
We have developed two routines that enable the researcher to generate matrices of citing versus cited documents and/or citing documents versus MeSH terms. The data from WoS and PubMed/Medline were integrated using the PubMed Identifier (PMID). Since the number of citing documents is (almost) the same in both cases, the two matrices can also be juxtaposed and then merged so that combinations of citations and MeSH terms can be analyzed. These combinations can perhaps be considered as hybrid indicators (e.g., Braam, Moed, & van Raan, 1991).
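A minimal sketch of this juxtaposition step is given below. The file names and column layout are hypothetical stand-ins for the routines' actual output; the only assumption taken from the text is that both matrices are keyed by PMID.

```python
import pandas as pd

# Hypothetical exports: one row per citing document (keyed by PMID), with
# cited-reference counts and MeSH attributions as the columns, respectively.
cit = pd.read_csv("citations_by_pmid.csv", index_col="PMID")   # docs x cited refs
mesh = pd.read_csv("mesh_by_pmid.csv", index_col="PMID")       # docs x MeSH terms

# Juxtapose the two matrices on the shared PMID key; documents missing from
# one source are dropped (inner join), so both blocks describe the same cases.
hybrid = cit.join(mesh, how="inner", lsuffix="_cr", rsuffix="_mh")
print(hybrid.shape)   # rows = citing documents, columns = citations + MeSH
```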
Aggregation of the cited references at the journal level reduces the number of variables by orders of magnitude; the resulting numbers are comparable to the numbers of MeSH categories attributed. Further analysis leads to the conclusion that the abbreviated journal names in the cited references indicate a core structure of the set,1 whereas the MeSH are attributed according to their relevance to current research options. The MeSH classification therefore seems less suited than journals or journal groups for carrying the normalization of citations.
In the context of this study, main-path analysis provides another example of the research potential of organizing the data into primary matrices extracted from downloads of PubMed and WoS. As a perspective for further research, Hellsten & Leydesdorff (2016), for example, analyze translational research in medicine in terms of combinations of MeSH terms, institutional addresses, and journal names. By considering these and other (meta-)data as attributes of documents, one can merge matrices and combine dimensions in the data as we have done above for cited references and MeSH terms, but also beyond two dimensions in terms of n-mode arrays and therefore heterogeneous networks (Callon & Latour, 1981; Law, 1986).
On the basis of a download of Web-of-Science data, CitNetw.EXE can generate the citation matrix with the citing papers in the rows and cited references in the columns in the following formats: (i) mtrx.net in the Pajek format and (ii) mtrx.sps + mtrx.txt for SPSS. The matrix is binary, asymmetrical, 2-mode, and directed. (If so wished, one can transpose this matrix in Pajek or SPSS.) One can process the file "mtrx.net" further in Pajek, UCInet, Gephi, etc. The file lcs.net (output of CitNetw.Exe) contains the bounded network of citations among the documents under study. This file can be used, for example, for main path analysis (see Appendix II).
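As an aside, the Pajek files written by the routine can also be inspected outside Pajek. The short sketch below assumes mtrx.net is a valid Pajek network file sitting in the working directory; whether a directed or undirected multigraph is returned depends on whether the file stores arcs or edges.

```python
import networkx as nx

# Read the 2-mode citation matrix written by CitNetw.EXE (path assumed);
# read_pajek returns a multigraph whose nodes carry the row/column labels.
G = nx.read_pajek("mtrx.net")
print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges")

# Export a plain edge list so the same data can be inspected in other tools.
nx.write_edgelist(G, "mtrx_edges.txt", data=False)
```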
Input to both routines is a file "data.txt" containing downloads from WoS and Medline, respectively, in the "plain text" or "Medline" format (tagged). This file is first processed into a format for relational database management. (One is prompted to skip this reorganization if it was already done in a previous round.) If one wishes to combine the outputs of the two routines, the files mtrx.* should first be saved and stored elsewhere, since these files are overwritten in subsequent runs.
The objective of using MHNetw.EXE is to combine Medical Subject Headings (MeSH) and citation information at the article level. The MeSH are first retrieved from the PubMed database and can be organized into relational data using the routine pubmed.exe at http://www.leydesdorff.net/pubmed . Note that one also needs the file <pubmed.dbf> to be present in the same folder as the data and pubmed.exe. Alternatively, one can retrieve the data from Medline in WoS. The advantage of retrieval from PubMed over retrieval from WoS is that there is no limitation of 500 records at a time. The data from either source first have to be organized in the same folder using PubMed.Exe; the program prompts with a question about which source was used. Input data always have to be named "data.txt".
Output of MHNetw.exe is: mtrx.net (Pajek) and mtrx.sps (for SPSS) containing the citing papers as rows and the MeSH as variables in the columns (analogous to CitNetw.exe). A file called "string.wos" contains the search string for obtaining citation information at Web of Science (advanced search). The citation scores are written into the file with article descriptors ti.dbf in a field "tc"; citation scores are summed for MeSH into mh1.dbf. The file "string.wos" can be used to generate the corresponding file in the Science Citation indices of WoS; the file "string.pubmed" analogously contains the search string if one has worked from the WoS interface.
The file cr_mh.net contains the citation information (cited references, CR) in the rows and the medical subject headings (MH) in the columns. The cell values provide the number of documents in which cited references and MeSH co-occur. The file jcr_mh.net contains the abbreviated journal names in the cited references (CR) in the rows and the medical subject headings (MH) in the columns. The cell values provide the number of documents in which the cited journals and MeSH co-occur. The file jcr_mh_a.net contains the same information (abbreviated journal names and MeSH categories), but differently organized: both are attributed as variables to the documents under study as the cases. Within Pajek, one can convert this matrix into an affiliations matrix (using Network > 2-Mode Network > 2-Mode to 1-Mode > Columns).
One can also export this file to SPSS for cosine-normalization of the matrix.
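The two steps just mentioned, collapsing the 2-mode matrix over its columns and cosine-normalizing the result, can be written out compactly. The sketch below uses a hypothetical toy matrix; it is meant only to illustrate the arithmetic behind the Pajek menu option and the SPSS normalization, not to replace either tool.

```python
import numpy as np

# Hypothetical documents x variables matrix (variables = cited journals and
# MeSH terms attributed to the documents in jcr_mh_a.net).
A = np.array([[1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 1, 0]], dtype=float)

# 2-mode to 1-mode over the columns: the co-occurrence (affiliation) matrix.
C = A.T @ A

# Cosine normalization of the columns of A: c_ij / sqrt(c_ii * c_jj).
norms = np.sqrt(np.diag(C))
cosine = C / np.outer(norms, norms)
print(np.round(cosine, 2))
```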
The asterisks in MeSH terms are discarded in this version. All files operate only on files present in the same folder. Note that mtrx.net, mtrx.txt, and mtrx.sps are overwritten in each run of MHNetw.exe or CitNetw.exe. One is advised to save all files mtrx.* elsewhere or to rename them for this reason.
We suggest the following order of the routines:
1. Download data at PubMed from the user interface at http://www.ncbi.nlm.nih.gov/pubmed/advanced . At the results page thereafter, select under "Send to" the format option MEDLINE and download to a file which has to be (re)named "data.txt".
2. Run pubmed.exe (with this file data.txt as input) in the presence of pubmed.dbf; both files are available at http://www.leydesdorff.net/pubmed/index.htm .
3. Use the resulting string "string.wos" at the advanced user interface of WoS; save the retrieval via "Marked list" in portions of 500 records. Combine the data into a file data.txt.
4. Run CitNetw.EXE; save the citation matrices in the files mtrx.* elsewhere.
5. Run MHNetw.EXE; save the matrices that one wishes to use for further analysis. This analysis may take long.
Figure 2
Figure 2 provides a map which can be generated using the 2-mode matrix of 5,345 abbreviated journal names in the references (red) versus 3,482 MeSH terms (green).5 (To generate this figure,
Figure 4 :
Figure 4: First component of the Jaccard-normalized matrix: 1083 cited journals and 900 MeSH
Figure 5 :
Figure 5: Forty papers on the so-called "key route global main path" in the citations among the 3,416 WoS documents under study. Decomposition using the Louvain algorithm in Pajek (Blondel et al., 2008; Q = 0.757); layout using Kamada & Kawai (1989).
For example, the paper by Kawabata et al. (1991), published in December 1991 in Nature, was retracted on March 19, 1992. This paper received 16 citations from other papers on the main path, thirteen of them in the years after the retraction. From an intellectual perspective, one might consider removing this article from the pool of candidate nodes before regenerating the main path. The two main scientific awards within the field of AD research are the "Potamkin Prize for Research in Pick's, Alzheimer's, and Related Diseases" and the "MetLife Foundation Award for Medical Research in Alzheimer's Disease." Both prizes have been awarded since the late 1980s, thus capturing in full the time period of our analysis. Forty investigators have won both awards; the main path (as depicted in Figure 5) includes one or more papers from twelve of these authors.
Table 1 .
Main or critical path analysis using lcs.net
Table 2
tells us that the number of cited references in the papers under study (176,670) is almost three times that of the MeSH terms attributed (62,648). In terms of unique cited references (67,831) versus unique MeSH terms (3,532), the ratio is even more skewed. On a map, the citations would completely overshadow the MeSH terms. However, the number of referenced journals (5,345) is of the same order as the number of unique MeSH terms.
Table 2 :
Some descriptive statistics of the data under study.
Table 3 :
Ten most frequently cited journals and ten most frequently referenced MeSH. | 5,345.2 | 2016-07-21T00:00:00.000 | [
"Computer Science"
] |
Antioxidant Supplementation in Oxidative Stress-Related Diseases: What Have We Learned from Studies on Alpha-Tocopherol?
Oxidative stress has been proposed as a key contributor to lifestyle- and age-related diseases. Because free radicals play an important role in various processes such as immune responses and cellular signaling, the body possesses an arsenal of different enzymatic and non-enzymatic antioxidant defense mechanisms. Oxidative stress is, among others, the result of an imbalance between the production of various reactive oxygen species (ROS) and antioxidant defense mechanisms, including vitamin E (α-tocopherol) as a non-enzymatic antioxidant. Dietary vitamins, such as vitamins C and E, can also be taken in as supplements. It has been postulated that increasing antioxidant levels through supplementation may delay and/or ameliorate outcomes of lifestyle- and age-related diseases that have been linked to oxidative stress. Although supported by many animal experiments and observational studies, randomized clinical trials in humans have failed to demonstrate any clinical benefit from antioxidant supplementation. Nevertheless, possible explanations for this discrepancy remain underreported. This review aims to provide an overview of recent developments and novel research techniques used to clarify the existing controversy on the benefits of antioxidant supplementation in health and disease, focusing on α-tocopherol as an antioxidant. Based on the currently available literature, we propose that examining the difference between antioxidant activity and capacity, by considering the catabolism of antioxidants, will provide crucial knowledge on the preventative and therapeutic use of antioxidant supplementation in oxidative stress-related diseases.
Introduction
Nutrition and other lifestyle factors have been shown to have an important impact on the incidence and outcomes of most of the common non-communicable diseases that have been associated with aging, such as neurodegenerative and cardiovascular diseases, type 2 diabetes and cancer [1]. Aging is a biological process of progressive decline in physiological functions with advancing chronological age, leading to an increased vulnerability to disease and, subsequently, death [2]. The characteristic functional changes that precede these diseases, such as physical impairment and cognitive decline, are driven by multiple biomolecular mechanisms, including the accumulation of cellular damage and epigenetic alterations, which collectively result in altered functioning at the cellular, tissue and organism levels [3,4]. These characteristic mechanisms have collectively been described as the "hallmarks of ageing" [5] and might comprise effective targets for preventive and curative treatments of multiple age-related disease conditions. Age-related diseases, such as neurodegenerative and cardiovascular diseases, type 2 diabetes and cancer, are affected by the hallmarks of aging [2]. Besides well-known pharmacological therapies such as statins, management of body weight and physical exercise have been shown to be preventive (lifestyle) strategies [6,7]. However, effective regulation of the age-associated cellular damage described through the hallmarks has not been accomplished yet.
One of the processes contributing to age- and adverse-lifestyle-related disease is mitochondrial dysfunction, of which oxidative damage may be an important cause and consequence [8]. The process of oxidative phosphorylation in the mitochondria produces reactive oxygen species (ROS). ROS encompass a group of molecules, either free radical or non-radical species, derived from molecular oxygen (O2) formed during reduction-oxidation (redox) reactions or by electronic excitation [9]. Free radicals have an unpaired electron, making them less stable and thus more reactive with various organic substrates than non-radical species. Non-radical species can, however, easily lead to free radical reactions in living organisms in the presence of transition metals such as iron or copper [10]. Sources of ROS include endogenous sources (e.g., mitochondria, peroxisomes and NADPH oxidases) and exogenous sources (e.g., ultraviolet light, pollutants and ionizing radiation). These ROS can cause damage to macromolecules and mitochondria when the balance between ROS compounds and antioxidant defense mechanisms is disrupted. In turn, mitochondrial dysfunction will promote further free radical and non-radical ROS generation [9,11], for example, via the decreased expression of crucial proteins for electron transport due to damaged mitochondrial DNA (mtDNA). Oxidative stress refers to an "imbalance between oxidants and antioxidants in favor of the oxidants, leading to a disruption of redox signaling and control and/or molecular damage" [11]. Importantly, redox signaling by ROS compounds is required for normal cellular functioning and host defense mechanisms. When ROS generation is deficient or excessive, this may lead to a broad range of phenotypic changes including altered gene expression, cellular senescence and inhibited growth [9].
To prevent cellular damage and maintain ROS homeostasis, a complex system of different antioxidants exists. For example, antioxidant enzymes are involved in the neutralization of ROS in the mitochondria, including superoxide dismutase (SOD), catalase (CAT) and glutathione peroxidase (GPX). Non-enzymatic antioxidants comprise dietary vitamins such as vitamin C and vitamin E (α-tocopherol), which intercept free radical chain reactions. Alteration in acting antioxidant levels could result in a disruption of ROS production and removal, leading to disruption of ROS signaling or oxidative stress-induced damage. Antioxidants have therefore been hypothesized to play an important role in the development of multiple diseases. In line with this hypothesis, a promising antioxidant in observational studies is α-tocopherol [12]. However, although many prospective cohort studies have observed associations between higher α-tocopherol levels and a lower risk of overall and chronic disease mortality [13][14][15], randomized clinical trials comparing α-tocopherol supplementation with placebo have failed to demonstrate any beneficial clinical effect of higher α-tocopherol levels on the onset and development of disease, particularly cardiovascular diseases [16][17][18][19].
To date, it remains difficult to make causal inferences about oxidative stress and the use of antioxidant supplementation in nutrition, and the implications in human health and disease. In the present review, we focus on the paradox of the therapeutic role of (dietary) antioxidants in disease with regard to the rapidly evolving field of nutrition and medical sciences, integrating important recent studies that used novel research techniques such as Mendelian randomization. Accordingly, we first provide a brief overview of the chemical processes resulting in oxidative damage and the role of (anti)oxidants, focusing on the non-enzymatic antioxidant α-tocopherol. We then summarize the pertinent evidence on antioxidant supplementation in both the general and disease population. The final part of the review addresses the controversy between the circulating levels and capacity of antioxidants and discusses directions for future research.
Generation of Reactive Oxygen Species (ROS)
The presence of ROS was first recognized in biological systems several decades ago [20]. ROS do not relate to a single species; rather, the term covers a range of small, short-lived molecules containing unpaired electrons formed by the partial reduction of O2 [20]. Of the ROS molecules, non-radical hydrogen peroxide (H2O2) and the typical free radicals hydroxyl radical (•OH) and superoxide anion radical (O2•−) have been well studied and are considered among the key players in cellular damage [19].
The major endogenous enzymatic sources of ROS are transmembrane NADPH oxidases (NOXs) and the mitochondrial electron transport chain (ETC), as well as several other intracellular pathways involving cytosolic and membranal enzymes (e.g., cytochrome P450 enzymes, superoxide dismutase and monoamine oxidase) [9,21,22]. It is worth noting that the oxidation of polyunsaturated fatty acids generates lipid hydroperoxides and related radicals, alkoxyl and peroxyl, which impact redox signaling [23]. In addition to these endogenous sources, ROS are also produced from cumulative exposure to environmental factors such as nutrients, drugs, toxicants and physical or psychological stressors, albeit these exposures are highly variable [9,24]. O2•−, a free-radical ROS, dismutates either spontaneously or via catalysis by superoxide dismutase to H2O2 and O2 [25,26]. Hence, O2•− serves as a major source of H2O2. This two-electron (non-radical) H2O2 is produced mainly by NOXs along with superoxide dismutases, as well as the mitochondrial ETC and many other enzymes [9]. H2O2 is a strong oxidant, but only reacts with a few biological targets including CO2/bicarbonate, which leads to peroxymonocarbonate (HCO4−) [9]. In turn, the most reactive ROS, the free radical •OH, is formed by reduction of H2O2 in metal-catalyzed Fenton chemistry involving free iron (Fe2+) [25,26]. •OH reacts directly with the nearest neighboring biomolecule at the site of its generation, making the location of Fe2+ a strong determinant of the site of •OH toxicity. In summary, a key process of •OH generation can be schematically described as O2•− → H2O2 → •OH (dismutation followed by the Fenton reaction). O2•− also reacts efficiently with other radicals, including nitric oxide (NO), by which peroxynitrite (ONOO−) is formed. Peroxynitrite can, in turn, modify proteins by the oxidation or nitration of amino acids such as tyrosine, leading to altered physical and chemical properties [27]. For decades, research has mainly focused on the damaging effects of ROS due to their close association with age-related diseases [26]. However, ROS are also important components at a low to modest range in many redox-dependent processes to maintain cellular functions, considered as their "physiological range." To define a certain physiological ROS range of such chemically diverse and transient species, both the beneficial and adverse effects of ROS should be taken into consideration. Nevertheless, the high rate of ROS generation and neutralization forms a challenge in determining this range. The chemical reactivity of the various ROS molecules is vastly different, extending up to 11 orders of magnitude in their respective second-order rate constants with particular targets [9]. Moreover, the range of physiological ROS may vary substantially between humans depending on numerous factors, including sex, age, nutritional and health status [28,29], and this range may vary between different time points even within one homogenous group, making it difficult to measure ROS population-wide or compare a certain ROS range directly between individuals.
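For reference, the reactions sketched in this paragraph can be written out explicitly. These are standard textbook formulations of superoxide dismutation, Fenton chemistry and peroxynitrite formation, provided here only as a summary and not reproduced from the cited sources:

```latex
% Standard reactions summarizing the paragraph above (requires amsmath):
\begin{align*}
  2\,\mathrm{O_2^{\bullet-}} + 2\,\mathrm{H^+} &\xrightarrow{\text{SOD or spontaneous}} \mathrm{H_2O_2} + \mathrm{O_2}\\
  \mathrm{H_2O_2} + \mathrm{Fe^{2+}} &\longrightarrow \mathrm{HO^{\bullet}} + \mathrm{OH^{-}} + \mathrm{Fe^{3+}} \quad \text{(Fenton reaction)}\\
  \mathrm{O_2^{\bullet-}} + \mathrm{NO^{\bullet}} &\longrightarrow \mathrm{ONOO^{-}} \quad \text{(peroxynitrite formation)}
\end{align*}
```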
Implications of the ROS
The beneficial effects of ROS on cellular function and homeostasis are achieved via several signaling pathways [30]. For example, ROS may affect the activation of the nuclear factor kappa B (NF-κB) pathway via the inhibition of IκBα phosphorylation, inactivation of IκB kinase (IKK) or upstream kinases and interruption of the ubiquitination and degradation of IκB by inactivating Ubc12 [31][32][33]. The mitogen-activated protein kinase (MAPK) cascade is also influenced by ROS compounds. Here, different ROS may activate members of the MAPK family by influencing their receptor or abolishing their inhibition, leading to intracellular signaling transduction essential to cell proliferation, differentiation, development, cell survival and apoptosis [30,34]. There are also strong links between oxidants and p53, which regulates cell cycle progression in response to a variety of stressors [35]. ROS may be implicated in the regulation and responsiveness of stress sensors by enhancing the antioxidant defense via p53 to maintain cellular redox balance and by indirectly modulating selective transactivation of p53 target genes [35].
Given the signaling effects of various ROS molecules, they play a pivotal role as second messengers in the maintenance of many cellular processes. Therefore, a small (albeit transient) increase in ROS levels within the physiological range may optimize cellular signaling and function, and thereby be beneficial for health [36]. As defense systems cannot eliminate all ROS before they react with macromolecules due to their extremely high reactiveness to specific molecules, even in a healthy situation [37], some oxidative damage to cells is always produced. The constantly changing dynamics in ROS exposure of the human body can be best described via an optimal curve, as discussed in detail previously [22]. When ROS production and removal remain within the physiological range, this will have beneficial effects on the functioning of the human body. However, when the body cannot adapt to the decrease or increase in ROS leading to dysregulated ROS homeostasis, this will result in adverse effects ( Figure 1).
Figure 1.
Optimal curve of the effect of exposure to ROS on physiological function of the human body. ROS within the physiological range have beneficial effects on physiological function. When the exposure to ROS in the human body goes beyond the physiological range, either too low or too high, this may lead to adverse effects and thus reduced physiological functioning. Abbreviations: ROS, reactive oxygen species.
Pushing the Boundaries of the ROS Balance
ROS beyond the physiological range may irreversibly react with macromolecules, including DNA, proteins and lipids, causing them to lose their function or gain inappropriate functionalities [20]. In turn, these damaged macromolecules may accumulate intracellularly and accelerate age-related diseases.
As is clear from previous research, deficient or excessive levels of ROS molecules are associated with a wide range of diseases [37][38][39][40]. However, an unidentified grey area remains in which various ROS may contribute to accelerated aging and diseases without exhibiting a clear disease phenotype. Individuals with a small (subclinical) increase in ROS over a longer period of time, either by overproduction, inadequate counter mechanisms or a combination of the two, may experience a constant, moderate level of oxidative stress on their tissues. In line with this hypothesis, oxidative stress has been suggested to play a role in multiple diseases [37,41], including cardiovascular disease, neurodegenerative disease and cancer.
The Role of Excessive ROS in Cardiovascular Diseases
The vascular endothelium is crucial in preserving vascular function, making endothelial dysfunction a major initial cause of cardiovascular disease (CVD). Considering that endothelial function is partly regulated by redox components, excessive ROS have been associated with increased risks of cardiac hypertrophy, heart failure and atherosclerosis [41][42][43][44][45]. From a physiological point of view, cardiac myocytes are more susceptible to high ROS production than other, less energy-demanding cells due to their relatively high number of mitochondria [46]. Although physiologically low levels of H2O2 produced by NOX4 are required for vasodilation, normal endothelial function and vascular remodeling, supraphysiological H2O2 levels have the opposite, adverse effect: vasoconstriction, endothelial dysfunction, hypertension and increased inflammation [9].
Lipid peroxidation, a process involved in oxidative stress, contributes to the development of atherosclerosis and other CVDs. For example, malondialdehyde, a lipid peroxidation-derived aldehyde, can induce proinflammatory responses and contribute to the activation of the complement system in atherosclerosis [47]. Furthermore, 4-hydroxynonenal (4-HNE), generated by the decomposition of arachidonic acid and larger polyunsaturated fatty acids, has been implicated in the regulation of autophagy during myocardial ischemia and reperfusion. Accordingly, suppression of 4-HNE-stimulated autophagy in mice transfected with aldehyde dehydrogenase 2, a major enzyme involved in neutralization of 4-HNE, has been reported to reduce myocardial dysfunction [48]. In addition, ROS molecules contribute to endothelial damage and the consequential transformation of recruited macrophages into atherosclerotic plaque-forming foam cells by promoting the oxidation of low-density lipoproteins (LDL) [49]. ROS also induce the release of matrix metalloproteinases (MMPs), which promotes physical disruption of the atherosclerotic plaque and thereby exacerbates atherosclerosis development [50].
Similar to other unsaturated lipids, cholesterol is also susceptible to oxidative modification [51]. These oxygenated derivatives of cholesterol (oxysterols) present a remarkably diverse profile of biological activities, including apoptosis and platelet aggregation. The accumulation of oxysterols has been implicated in oxidative stress-related pathophysiology. For example, oxysterols are found enriched in pathologic structures such as macrophage foam cells and atherosclerotic lesions [52]. Notably, oxysterols have been shown to enhance MMP-9 levels and activity in human cells of the macrophage lineage through the induction of NOX2 activity, hence contributing to atherosclerotic plaque erosion and rupture, as well as ROS production [53]. However, despite their harmful proinflammatory features, oxysterols are currently emerging as fine regulators of physiological processes, including those involved in aging [54]. For example, at submicromolar concentrations, oxysterols have been reported to have anti-inflammatory activity. Oxysterols may also regulate cell death and protein homeostasis. Nevertheless, the impact of oxysterols on biological processes under physiological circumstances remains to be explored in more detail.
Mitochondrial dysfunction has also been directly linked to CVD. For example, a lower mtDNA copy number in lymphocytes, as a rough proxy of mitochondrial dysfunction, has been associated with higher CVD risk in large prospective studies, and the association between low mtDNA copy number and coronary artery disease is likely to be causal [50].
The Role of Excessive ROS in Neurodegenerative Diseases
Neurodegenerative diseases (NDDs) are characterized by the progressive loss of neurons [55]. Neuronal cells are particularly vulnerable to oxidative stress due to a combination of high energy and oxygen demand, low antioxidant activity, a high number of cells in the post-mitotic state, abundant lipid content and a limited capacity of cell renewal [56]. Misfolded proteins aggregate and accumulate in the brain and contribute to neurodegeneration [55], for example, via the upregulation of NOX activity and oxidant generation [9]. In fact, several of these proteins are connected to mitochondrial (dys)function and associated with the production of ROS compounds. For example, Alzheimer's disease (AD) may originate from deregulation of the redox balance [57]. In AD, lipid peroxidation, where lipids (e.g., in the myelin sheets) are oxidized by ROS, is greatly enhanced, especially in the amygdala and hippocampus [57]. The products of lipid peroxidation often cause crosslinked molecules (e.g., collagen) that are able to resist intracellular degradation and cause altered cellular communication. In addition, increased levels of sporadic (unique) mutations have been found in the mtDNA of AD patients [57,58]. Of specific interest, several of these mutations cause decreased transcription levels of essential mitochondrial proteins in AD. In the case of Parkinson's disease (PD), studies have demonstrated a reduced activity of mitochondrial complex I in the dopaminergic neurons of the substantia nigra of PD patients, presumably contributing to excessive ROS generation accounting for the apoptosis observed in this part of the brain [55,59].
The Role of ROS in Cancer Pathogenesis
Oxidant generation is strongly linked to initiation, progression and bystander effects in the tumor microenvironment, as well as to the biology of metastasis [21,60]. The role of ROS in cancer pathogenesis appears to be dependent on the stage of the tumor. In the early stages of cancer, ROS have been considered to have a pro-oncogenic role. As previously mentioned, ROS may modulate the selective transactivation of target genes of the tumor suppressor p53 [35]. Moreover, loss-of-function mutations in p53 may induce a further increase in intracellular ROS, provoke abnormal mitosis and promote cancer development. The increased production of ROS by cancer cells was shown to eventually support proliferation and allow cancer cells to adapt to stress due to a lack of nutrition or hypoxic environment [9,[61][62][63].
On the other hand, ROS may exhibit a tumor-suppressor role during the later stages of cancer. It was shown that the expression and activity of antioxidant enzymes were increased in malignant tumors compared to adjacent normal tissue [64]. However, this enhanced activity of antioxidant systems in tumors has been associated with chemotherapy resistance [62,65]. Considering that the antioxidant activity increases in later cancer stages, the excessive intratumor oxidative damage is limited, which, in the end, aids the cancer cells to escape apoptosis. Accordingly, studies have investigated the effect of ROS-scavenging antioxidant supplementation, such as high-dose (pharmacological) ascorbate, on cancer development [66,67]. Nevertheless, the results of these studies on the benefits and adverse effects of antioxidant supplementation in tumor progression remain inconsistent and require further investigation.
Working Mechanisms of Antioxidants
A complex defense mechanism to compensate for ROS generation consists, among other mechanisms, of multiple antioxidants. Antioxidants are compounds that inhibit oxidation, thereby delaying or inhibiting cellular damage [68]. The main antioxidants are either formed endogenously (glutathione, reduced coenzyme Q, uric acid, bilirubin) or are diet-derived, for example, from plant oils, nuts, and seeds (α-tocopherol (vitamin E)), (citrus) fruits and vegetables (ascorbate (vitamin C), carotenoids) [68,69]. Although it should be noted that antioxidants may not outcompete the dedicated enzymes that can catalytically deplete ROS (e.g., SOD, CAT and GPX), the mechanisms of antioxidants, such as α-tocopherol, have been researched [69,70]. Antioxidants may also be classified based on their activity, which includes enzymatic or non-enzymatic antioxidant activity. Enzymatic antioxidants catalyze the conversion of oxidized metabolic products to stable, nontoxic molecules, whereas non-enzymatic antioxidants intercept free radical chain reactions [71]. Although the individual roles of antioxidants in the human defense system are divergent, antioxidants act in a cooperative and synergistic manner, involving a complex network of interacting compounds [68,69].
The protecting actions of antioxidants can be described as two principal mechanisms that act simultaneously [68,69]. First, antioxidants prevent the formation of ROS via quenching oxygen molecules or sequestering active metal ions, including iron (Fe; II/III) and copper (Cu; I/II). In addition, antioxidant enzymes work to catalytically deplete ROS. For example, SOD catalyzes the dismutation of two molecules of O2•− to H2O2 and molecular oxygen, and GPX prevents the harmful accumulation of H2O2 by catalyzing the conversion of H2O2 to water [71,72]. The activities of these antioxidant enzymes may, however, change during aging. For example, an age-related reduction in SOD and CAT gene expression was observed in the granulosa cells from periovulatory follicles in women [73], and a progressive decrease in SOD, CAT and GPX activity was observed in erythrocytes of older individuals when compared to younger individuals (55-59 y/o) [74].
The second protective mechanism of antioxidants concerns the chain-breaking antioxidants. These compounds contribute to the elimination of ROS compounds before they may irreversibly react with and impair biological macromolecules, for example, in lipid peroxidation. Chain-breaking antioxidants can either receive an electron from a radical or donate one in order to terminate the chain reaction, resulting in the formation of stable by-products [68,69]. When these two protective mechanisms appear insufficient to prevent oxidative damage by ROS, antioxidants and enzymes can repair the resultant damage and reconstitute the harmed tissues. The repair systems' intervention includes restoring oxidatively damaged nucleic acids, removing oxidized proteins via intra-and extracellular proteasomal systems and repairing oxidized lipids. Together, antioxidants provide a complex safety net to cope with the constant generation of various ROS molecules. As α-tocopherol is one of the most well-studied antioxidants, this review mainly focuses on the role of α-tocopherol in health and disease.
Antioxidant Supplementation in Age-Related Diseases
Hypothetically, increasing antioxidant levels in individuals with excessive ROS should alleviate the associated development of diseases by supporting the restoration of the ROS balance within the optimal physiological range. One way to effectively enhance functioning antioxidant levels is via dietary supplementation. Most epidemiological cohort studies have found that increased dietary or circulating levels of antioxidants are associated with lower disease incidence [75]. For instance, several epidemiological cohort studies have shown that higher intake of antioxidants, either via regular diet or as oral (over the counter) supplements, was associated with a lower risk of incident CVD [76,77]. In addition to CVD, higher intake of antioxidants or supplements has been associated with a lower risk of incident Alzheimer's disease [78,79], Parkinson's disease [80] and amyotrophic lateral sclerosis (ALS) [81,82] in a number of prospective cohort studies.
The results from prospective cohort studies led to the concept of antioxidant supplementation in the general population, as it may ameliorate or even prevent several age-related diseases. However, evidence from clinical trials supporting the clinical benefit of the use of antioxidant supplements in the general population is still lacking. An example is the Women's Health Study, in which approximately 40,000 healthy US women aged 45 and older were randomly assigned to receive α-tocopherol or placebo and were followed for more than 10 years [18]. Based on the results, the authors concluded that daily intake of α-tocopherol did not provide an overall clinical benefit for major CVD events or cancer. Moreover, the group taking α-tocopherol supplements did not show a lower risk of (cardiovascular) mortality. A similar result was seen in the Physicians' Health Study II and HOPE study, which examined a combination of α-tocopherol and vitamin C supplementation; no reduced risks of major incident cardiovascular events [17] or cancer [83] were observed.
In addition to the conventional study designs, we previously implemented a Mendelian randomization (MR) framework to investigate the relationship between dietary-derived circulating antioxidants and CVD [84,85]. In MR studies, genetic variants are used as instrumental variables to infer the causal effect of lifelong exposure to certain risk factors on disease outcomes, as illustrated in Figure 2. As the genetic information is fixed at conception, MR is not affected by most confounding factors and reverse causation, which are the main limitations of prospective cohort studies. In our recent work comprising over 700,000 participants with more than 93,000 coronary heart disease cases, genetically predicted circulating dietary-derived antioxidants were unlikely to be causal determinants of primary CHD risk [84]. Similarly, in over one million individuals, no evidence was found for a causal association between dietary-derived circulating antioxidants and ischemic stroke [85]. In the context of neurodegenerative diseases, similar null findings were obtained for vitamin A, vitamin C, β-carotene and urate in relation to the risk of AD [86]. Taken together, these genetic studies do not support a beneficial role of dietary-derived antioxidants on disease risk in the general population.
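To illustrate how such genetic instruments are combined, the sketch below computes the inverse-variance-weighted (IVW) estimate that is commonly used in two-sample MR. The SNP effect sizes are made-up numbers for illustration only and are not data from the cited studies, nor necessarily the exact estimator those studies applied.

```python
import numpy as np

# Hypothetical summary statistics for k genetic variants used as instruments:
# beta_x = SNP effect on the antioxidant exposure, beta_y = SNP effect on the
# disease outcome (e.g., log odds), se_y = standard error of beta_y.
beta_x = np.array([0.12, 0.08, 0.15, 0.10])
beta_y = np.array([0.010, 0.004, 0.011, 0.006])
se_y = np.array([0.008, 0.006, 0.009, 0.007])

# Per-variant Wald ratios and inverse-variance weights.
wald = beta_y / beta_x
weights = (beta_x / se_y) ** 2

# Fixed-effect IVW causal estimate and its standard error.
ivw = np.sum(wald * weights) / np.sum(weights)
se_ivw = 1.0 / np.sqrt(np.sum(weights))
print(f"IVW estimate = {ivw:.3f} (SE {se_ivw:.3f})")
```

In practice, sensitivity analyses (e.g., for pleiotropy) accompany the IVW estimate, which is one reason MR results are interpreted alongside, rather than instead of, trial evidence.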
Antioxidant Supplementation in α-Tocopherol-Deficiency
The discrepancy in study results between the prospective studies on one hand and the randomized clinical trials and MR studies on the other hand may be related to differences in the study populations. Notably, the beneficial effects of antioxidants were mostly demonstrated in patients with extreme local concentrations of ROS or a deficiency in their antioxidant production and/or metabolism [79,82,87]. An example of an antioxidant deficiency disease in humans where supplements may provide health benefits is ataxia with isolated vitamin E deficiency (AVED), a rare inherited neurodegenerative disorder that affects fewer than one in one million individuals [87,88]. AVED is induced by mutations in the gene coding for α-tocopherol transfer protein (α-TTP), which is required for α-tocopherol retention [69,89]. α-Tocopherol deficiency can also develop secondary to disorders that cause an impaired absorption of α-tocopherol from adipose tissue. AVED is characterized by low plasma α-tocopherol levels, which can be increased to normal levels through α-tocopherol supplements [87]. Accordingly, a study investigating the effect of α-tocopherol supplementation on AVED disease status observed reduced disease progression after a 12-month treatment [88].
Apart from the observed results on AVED disease status, it is worth mentioning that, beyond the antioxidant effect of α-tocopherol, there may be biological effects unrelated to its chain-breaking antioxidant actions that contribute to the AVED phenotype. Considering that ascorbic acid deficiency causes the clinical syndrome scurvy through its role in collagen synthesis [90], and that scurvy is treated with supplemental vitamin C, the beneficial effects of supplementing α-tocopherol in α-tocopherol-deficient individuals may likewise not be solely due to its antioxidant effects. For example, α-tocopherol was shown to inhibit protein kinase C (PKC) and has been implicated in a number of cellular events that are related to non-antioxidant properties of α-tocopherol, including cell proliferation, cell adhesion, enhancement of the immune response and gene expression [70].
Antioxidant Supplementation in the General, Healthy Population
Most of the aforementioned randomized clinical trials investigating antioxidants included healthy individuals from the general population. Importantly, although healthy individuals in the general population may also occasionally experience lower antioxidant levels, these levels may overall still be sufficient to cope with the constant production of oxidants, causing the antioxidant supplements to have no effect on lowering disease risk. Generally, only very few individuals included in these studies had excessively high antioxidant levels. For this reason, supplementing antioxidants might not induce a sufficient clinical effect that can be detected in the statistical analyses. This hypothesis is supported by our recent work performed in the Netherlands Epidemiology of Obesity (NEO) study [91]. This population-based, prospective cohort study included individuals between 45 and 65 years of age living in the greater area of Leiden, the Netherlands (N = 6671). We were particularly interested in the associations between observed levels of α-tocopherol in serum and its metabolites in urine in relation to behavioral and (subclinical) disease outcomes in a random subsample of 520 individuals.
In several studies, the associations between α-tocopherol serum levels and lifestyle factors (such as smoking and alcohol use [92]), measures of glucose homeostasis [93], measures of body fat [94] and lipoprotein (sub)particles [95] were investigated. Overall, these studies found no associations, or even trends, between circulating α-tocopherol in serum and the different study outcomes. This could be due to the relatively small study population included from the NEO study. However, since only a few cases of obesity-related disease or mortality had been documented over the course of a 10-year follow-up, it is plausible that the included participants were relatively healthy. This supports the hypothesis that increasing α-tocopherol levels, particularly via the intake of supplements, does not have an effect on the health status of the general population. As long as an individual's α-tocopherol level at baseline can adequately lower ROS generation and eliminate produced ROS, exceeding baseline levels with supplements may have little clinical effect.
In addition, the associations between different α-tocopherol urinary metabolites and serum α-tocopherol and lifestyle factors have been investigated previously. Since the metabolism of α-tocopherol can follow two pathways, it forms either the oxidized metabolite α-tocopheronolactone hydroquinone (α-TLHQ) or the enzymatic metabolite α-carboxyethyl-hydroxychroman (α-CEHC), which is measured in urine [94]. α-TLHQ is the oxidized metabolite generated when lipid peroxidation is successfully inhibited by α-tocopherol, representing ROS scavenging-dependent reactions, whereas α-CEHC is the product of enzymatic conversion of α-tocopherol in the liver. These metabolites were measured as sulfated and glucuronidated conjugates of α-tocopherol, the main forms of vitamin E, by mass spectrometry analyses of NEO urine samples. In the NEO study, circulating α-tocopherol correlated not with its oxidized but with its enzymatic metabolite in urine [93]. This may suggest that the circulating α-tocopherol level was not a rate-limiting step for the conversion to its oxidized metabolites. Therefore, α-TLHQ is depicted as a marker of oxidative stress, while α-CEHC represents α-tocopherol status [94]. It was hypothesized that higher levels of α-TLHQ would be associated with higher disease risk and adverse lifestyle. Indeed, an association between current smokers and higher α-TLHQ levels compared with non-smokers was observed [92]. However, these studies also showed some contradictory results: urinary oxidized α-tocopherol metabolites were moderately associated with reduced insulin resistance [93] and marginally associated with lower body mass index, total body fat and visceral adipose tissue [94]. These findings provide remarkable insights on the role of α-tocopherol in health and disease and may suggest that the urinary metabolite levels could instead reflect antioxidant capacity (e.g., lower levels of urinary metabolites as a marker of lower oxidant scavenger capacity).
Antioxidant Circulating Levels Versus Antioxidative Capacity
Given that serum α-tocopherol and its metabolites were not correlated in the previously described studies [93], the observed circulating levels of α-tocopherol (particularly those attained with its synthetic forms, which have a lower bioavailability than natural α-tocopherol [96]) may not reflect the actual α-tocopherol activity. It should be emphasized that although the terms "antioxidant activity" and "antioxidant capacity" are often used interchangeably, they have different implications [68]. Notably, the antioxidant bioactivity of circulating levels refers to antioxidant kinetics, in which a characteristic of a specific antioxidant is expressed as a value of the reaction rate times the reaction volume. The antioxidant capacity is rather defined as the measure of the total amount of oxidants scavenged via antioxidant mechanisms, which indicates the sum of antioxidant activity of the human body [97]. The bioactivity of α-tocopherol and other antioxidants can be influenced by several factors, including the intake of competing nutritional factors, absorption and metabolism, as well as genetics, age and lifestyle [98]. Therefore, it is plausible that only measuring (unmetabolized) antioxidants in blood is not sufficient to make inferences about antioxidant status. This hypothesis could explain why targeting antioxidant capacity by solely increasing circulating antioxidant levels, for example, via oral supplements, does not yield any clinically significant reductions in disease risks. Targeting the metabolism of antioxidants to oxidized or enzymatically converted metabolites may provide essential knowledge on antioxidant working mechanisms in the body, which may serve as a marker in future trials to monitor antioxidant utility after supplementation. This hypothesis should be examined in greater detail, preferably in larger study samples.
Antioxidant Supplements: Is There Really Any Benefit?
To date, there is an ongoing controversy about the use of antioxidant supplements for the prevention and treatment of multiple diseases. There is ample molecular evidence: an imbalance in ROS production and elimination can lead to oxidative damage, which triggers a cascade of the hallmarks of ageing and may contribute to the onset and development of numerous diseases [5,20,37,99]. Rationally, research has subsequently focused on enhancing the system that can effectively eliminate ROS: the complex network of antioxidants. Although it may seem only reasonable that increasing antioxidant levels to eradicate excessive ROS molecules should alleviate the burden caused by the overproduction of various ROS compounds, randomized clinical trials and MR studies to date have failed to provide evidence supporting this rationale [17,18,[83][84][85][86]. A large discrepancy exists between the molecular indication and clinical outcomes for antioxidant supplementation. Therefore, the question is whether antioxidant supplementation truly provides considerable benefits to health status. Notably, the intake of antioxidant supplements as a therapy for low antioxidant status, due to, e.g., antioxidant deficiency diseases, may improve the patients' health status and quality of life. However, this category of exceptionally low antioxidant levels only covers a small part of the dynamic and transient range of ROS. The greater part of the range of ROS, where defense mechanisms are sufficient for efficient ROS elimination, can be identified in the general population. These individuals with adequate antioxidant levels at baseline may only increase the circulating levels of antioxidants through the intake of supplements, but not the actual antioxidant capacity to eliminate part of the produced ROS. In other words, the network of antioxidant compounds may not become more effective by augmenting the pool of individual antioxidants with supplements in the general population (Figure 3).
The balance between ROS and antioxidants can also tilt toward excessive antioxidant levels (Figure 3, left panel). Through increased endogenous production, enhanced daily food intake or a combination of the two, antioxidant levels could theoretically exceed their healthy boundaries and cause adverse effects. Although little is known about the possible detrimental effects of antioxidant supplementation, non-enzymatic antioxidants, including vitamin C and α-tocopherol, have been shown to have pro-oxidant effects at high concentrations, leading to ROS generation and contributing to a state of oxidative stress [100,101]. It has also been shown that α-tocopherol may interact with other vitamins to enhance or interfere with their function [102]. Accordingly, α-tocopherol can interfere with the blood clotting capacity of vitamin K [102], resulting in reduced blood clot formation. Although this aspect may be beneficial in certain patients, including in women with recurrent abortion due to impaired uterine blood flow [103], it may also increase the risk of bleeding in healthy individuals. However, it is important to consider that these adverse clinical effects of α-tocopherol antioxidant use could also be observed due to chance or possible flaws in the study design and/or selection of the study population. Taken together, these results indicate that antioxidant supplementation, particularly α-tocopherol, should be used with caution given possible adverse effects. It is therefore important to determine whether an individual genuinely requires antioxidant supplementation before intake.
To this end, it is essential to measure oxidative damage markers and ROS turnover in the human body. However, measuring these endpoints remains a challenge in research. No single parameter has been recommended as a gold standard for measuring redox status in clinical studies thus far [104]. A major limitation is the identification of reliable biomarkers [105]. Some biomarkers have been identified in experimental and population-based epidemiological studies. Examples of current biomarkers for lipid peroxidation include plasma malondialdehyde, 4-hydroxynonenal and isoprostanes; for nucleic acid oxidation, examples include 8-oxo-7,8-dihydro-2'-deoxyguanosine (8-oxodG) and 8-oxo-7,8-dihydroguanosine (8-oxoG) for DNA and RNA, respectively [106]; and protein carbonyl can be used as a biomarker of protein oxidation [107]. Despite the potential of measuring these biomarkers of oxidative stress, the measured oxidative damage is often the result of a complex, interacting mechanism of numerous endogenous and exogenous antioxidants. Furthermore, these biomarkers cannot reflect the complete oxidative damage that has been inflicted on the body since they are mostly exclusive to certain macromolecule damage [22]. In addition to biomarkers, measuring ROS as a representation of oxidative stress has its limitations [104]. Some ROS molecules are highly reactive (particularly hydroxyl radicals) and therefore have a relatively short half-life, which makes their measurement in biological systems a complex task. Since accurate measurements of pro- and antioxidant levels are crucial to make inferences about the use of antioxidant supplementation, it is important to define an integrative yet clinically applicable approach to determine an individual's redox status.
Final Remarks and Conclusions
Regarding the key role of oxidative damage in ageing and the onset and development of several diseases, research on decreasing oxidative damage with antioxidants has emerged in the last few decades. However, since clinical trials to date have not supported the use of antioxidant supplementation in oxidative stress-related diseases, a paradox exists: does supplementation of antioxidants delay aging and/or treat oxidative stress-related diseases?
In summary, there are three critical points to consider when examining the use of antioxidant supplementation. First, identifying reliable biomarkers for antioxidant capacity and levels of oxidative species that reflect the overall redox status in vivo, as well as transient redox status in specific tissues or cells, is crucial for further research. To date, there is still little consensus about the gold standard for measurements of oxidative stress in vivo. An optimal biomarker should be easily accessible, simple to detect accurately in human tissue and/or body fluid and reasonably stable. Second, the difference between antioxidant activity and capacity should be recognized in further research. Supplementation of antioxidants may increase their circulating levels and bioactivity, but this does not imply that the capacity of antioxidants is enhanced. Furthermore, several mechanisms may contribute to the difference between antioxidant activity and capacity, including its metabolism. Third, regarding the physiological importance of ROS signaling, it is necessary to develop strategies in redox studies that selectively address disease-associated mechanisms without disrupting the signaling pathways of ROS compounds. Future research should therefore focus on exploring novel markers for measuring oxidative stress and antioxidant status in vivo. Reliable yet simple measurements can facilitate in-depth studies examining the effects of antioxidant supplementation in aging and the development and progression of oxidative stress-related diseases, as well as in the general population, providing crucial knowledge that is indispensable to make inferences about the use of antioxidant supplements by healthy and diseased individuals. | 9,867.2 | 2022-11-24T00:00:00.000 | [
"Biology"
] |
Drivers of diversification in fungal pathogen populations
To manage and treat chronic fungal diseases effectively, we require an improved understanding of their complexity. There is an increasing appreciation that chronic infection populations are often heterogeneous due to diversification and drift, even within a single microbial species. Genetically diverse populations can contribute to persistence and resistance to treatment by maintaining cells with different phenotypes capable of thriving in these dynamic environments. In chronic infections, fungal pathogens undergo prolonged challenges that can drive trait selection to convergent adapted states through restricted access to critical nutrients, assault by immune effectors, competition with other species, and antifungal drugs. This review first highlights the various genetic and epigenetic mechanisms that promote diversity in pathogenic fungal populations and provide an additional barrier to assessing the actual heterogeneity of fungal infections. We then review existing studies of evolution and genetic heterogeneity in fungal populations from lung infections associated with the genetic disease cystic fibrosis. We conclude with a discussion of open research questions that, once answered, may aid in diagnosing and treating chronic fungal infections.
Introduction
Chronic or long-term fungal infections are either not readily cleared by the host or never eliminated, often despite antifungal therapy. Most fungal infections are caused by opportunistic pathogens that establish infections in individuals with reduced host defenses, including primary or secondary immunodeficiencies due to AIDS, diabetes mellitus, Coronavirus Disease 2019 (COVID-19), chronic granulomatous disease, immunosuppressive therapies or prolonged antibiotic treatment, or defects in microbial clearance such as that associated with the genetic disease cystic fibrosis (CF) [1][2][3][4][5][6][7][8][9]. Candida spp., Cryptococcus neoformans, and Aspergillus fumigatus are the most common agents of fungal infections, though the specific incidence of chronic infections caused by these pathogens is less clear. Chronic fungal infections associated with endemic mycoses [10][11][12][13] include those caused by Histoplasma capsulatum, Blastomyces dermatitidis, and Coccidioides immitis, which can endure for long periods as asymptomatic or mild presentations before advancing to more damaging disease states.
Improvements in microbial profiling technologies have led to the recognition that microbial diversity exists over space and time and at multiple scales, from polymicrobial communities to within a population and even between genetically identical cells. This review will first focus on mechanisms by which genetic diversity increases within fungal populations [14] and examples of how each mechanism has been shown to contribute to diverse fungal infections. Ecological theory has long recognized the intrinsic value of biodiversity on the function and stability of an ecosystem, referred to as "biological insurance theory" [15]. Therefore, the stability of a diverse microbial ecosystem presents a dilemma when treating bacterial or fungal infections comprised of heterogeneous subpopulations. An inability to evaluate diversity within an infection may lead to the use of therapeutic strategies that allow for the survival of treatment-resistant lineages that can reestablish the infection [16]. Thus, there is a growing need to apply ecological theory to population-level genomic data to predict cases in which problematic subpopulations may be present.
An additional limiting factor in the development of this field of research is that genetic heterogeneity in the context of chronic fungal infection is generally not currently evaluated diagnostically. With the increased capacity for next-generation sequencing of multiple isolates or populations from clinical samples, we now have the potential to catalog population heterogeneity and microbial succession comprehensively. Analyzing population-level heterogeneity in chronic infections of various fungi and the common themes apparent in infections in multiple individuals can provide insight into selective pressures that drive evolution in vivo and may indicate pathways to target for therapy development. The second half of the review will give special attention to fungal diversification and evolution studies in the context of chronic CF infections. We will discuss how the analysis of these populations has informed us about factors within the host environment that drive selection, such as drug treatment, host immune factors, nutrient restriction, and reactive compounds. We will conclude with a discussion of open questions regarding the causes and consequences of heterogeneity in chronic infections. In the future, our understanding of the fungal subpopulations present in chronic infections may allow for a more precise approach to their treatment [17].
Mechanisms that contribute to the generation of biologically relevant heterogeneity in fungal populations
It is essential to understand the mechanisms and frequencies at which population heterogeneity can arise and, therefore, adaptation after selection [18]. In this review, we will primarily focus on genome-based changes that contribute to fungal diversity [14], which include single nucleotide polymorphisms (SNPs) due to mutation and insertions or deletions (indels) (Fig 1A); loss of heterozygosity (LOH) (Fig 1B); altered gene dosage through aneuploidy (Fig 1C) and copy number variation (Fig 1D); and parasexual recombination (Fig 1E), in which heterokaryon formation is followed by the merging of nuclei to form a heterozygous diploid and recombination before a return to haploidy. The movement of mobile elements or loss of a mycophage is another way that populations can become heterogeneous (Fig 1F) [19,20]. Other ways in which fungal populations increase diversity include morphological differentiation, stable phenotypic switches, and changes to chromatin state, and these nongenetic changes have been shown to be influenced by mutations that arise and appear to be under selection in vivo, as discussed below.
Evolution through mutation
SNPs and indels in the genome arise from errors in replication. Increases in mutated alleles within a population can occur through direct selection or "hitchhiking" with beneficial mutations. High frequencies of mutated alleles can also arise by genetic drift, which might occur upon the migration of cells to a new site of infection or a substantial reduction in the population size, perhaps due to antibiotic treatment. Mutation rates vary between fungal species and may not be consistent across the genome [21]. Mutations have been predicted to occur at a rate of 1.2 × 10⁻¹⁰ per base pair per generation in Candida albicans and 0.33 × 10⁻⁹ per base pair per generation in Saccharomyces cerevisiae [14,22]. It has also been posited that diploidy may accelerate the acquisition of adaptive mutations due to the ability to tolerate mutations in 1 allele of 2 [23,24]. Over weeks, months, or years, the divergence of fungal populations can occur through the accumulation of mutations over successive generations, spatial isolation (e.g., different lung lobes), and both constant and variable selective pressures imposed by the host immune response or therapeutic interventions. Several microevolution studies demonstrate polymorphisms that increase fitness and phenotypic diversity in Candida, Aspergillus, and Cryptococcus [21,25,26]. For example, examination of the genomes from 20 clinical isolates of Clavispora (Candida) lusitaniae from respiratory samples from a single individual found between 24 and 131 SNPs between any 2 isolates and significant phenotypic heterogeneity [27]. A study of oral C. albicans isolates from 8 healthy individuals identified an average of approximately 300 to approximately 1,400 SNP differences between isolates from the same sample, with most of the SNP differences due to LOH events at the gene, region, or chromosome levels [28]. Elevated mutation rates can enable rapid adaptation to local environments. In prominent human fungal pathogens, such as C. neoformans, A. fumigatus, C. albicans, and Candida glabrata, hypermutators have been shown to emerge during infection and contribute to treatment failure [29][30][31][32]. The hypermutator phenotype often arises from defects in DNA proofreading and repair. For example, defects in Msh2, a core component of the DNA mismatch repair pathway, can increase the rate of acquired resistance to 5-fluoroorotic acid (5-FOA), which selects for uracil auxotrophs, by up to 6.6-fold [33,34]. One study using whole genome sequencing found hypermutators with defective Msh2 in recurrent C. neoformans infections of people with HIV/AIDS in sub-Saharan Africa [35]. In one lineage, increased diversity resulted from mutations in the genes MSH2, MSH5, and RAD5, which encode mismatch repair proteins, conferring a hypermutator phenotype through defects in the repair of base-base and single insertion/deletion mismatches. Isolates in the Msh2 hypermutator lineage accumulated over 300 mutations daily, while members of nonmutator lineages of recurrent isolates acquired approximately 12 mutations per day [35]. In clinical bloodstream isolates of C. glabrata, mutations in MSH2 were found in more than 50% of isolates, indicating that a hypermutation phenotype is advantageous in vivo [30]. Mutations in the DNA polymerase Pol3 can also increase the mutation rate in C. neoformans [36]. The isolates observed in recurrent C. neoformans infection and in C. glabrata highlight the impact of hypermutation on the genetic course of infection, showing how elevated mutation rates speed the acquisition of secondary mutations that can improve fitness. The presence of hypermutators accelerates the development of genetic diversity and can significantly contribute to the emergence of drug-resistant strains, demonstrating the dynamic landscape of chronic fungal infections shaped by the accumulation of mutations.
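A quick numerical sketch helps put these rates in perspective. The per-generation and per-day rates below come from the studies cited above, but the genome size (roughly 14.3 Mb haploid for C. albicans) is an assumption introduced for illustration only:

```python
# Back-of-the-envelope mutation accumulation, using rates quoted in the text.
# The genome size is an assumed value, not taken from the source.

MU = 1.2e-10          # mutations per base pair per generation (C. albicans, text)
GENOME_BP = 14.3e6    # assumed haploid genome size in base pairs
PLOIDY = 2            # C. albicans is diploid

per_generation = MU * GENOME_BP * PLOIDY
print(f"Expected new mutations per cell division: {per_generation:.2e}")

# Hypermutator vs. non-mutator lineages (daily rates quoted in the text)
hyper, normal = 300, 12
print(f"Hypermutator fold increase: {hyper / normal:.0f}x")

# Average number of divisions before a lineage acquires one new mutation
print(f"Divisions per expected mutation: {1 / per_generation:.0f}")
```

Under these assumptions a single non-mutator lineage acquires a new mutation only once every few hundred divisions, which makes the roughly 25-fold acceleration in hypermutator lineages easy to appreciate.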
Phenotypic changes resulting from the loss of heterozygosity
In diploid fungi, genetic variation in populations can be generated through LOH due to diverse processes, including double strand break-induced repair mechanisms, chromosome nondisjunction, and gene conversion (see A. Gusa and S. Jinks-Robertson [37] for review). Newly homozygous strains with the resultant phenotypic changes may be more fit in specific environments. In response to infection-relevant stress conditions such as increased temperature, oxidative stresses, and antifungals, C. albicans increases the rate at which it undergoes mitotic recombination, resulting in more LOH [38]. Consistent with the idea that physiological signals promote recombination rates, phenotypic and genotypic diversity was found to arise as much as 3 orders of magnitude more rapidly in vivo compared to rates observed in vitro [39,40]. Furthermore, in these studies, rates of recombination were similar in oral and disseminated candidiasis models. In a study of 9 C. albicans clinical isolates collected from a bone marrow transplant patient over 35 days, fluconazole resistance rapidly developed in multiple isolates after 17 days of treatment with fluconazole and amphotericin B [41]. One mechanism for the increased azole resistance was increased CDR1 expression through a LOH event that resulted in 2 alleles encoding a hyperactive variant of the CDR1-regulator Tac1. Notably, these mutations were not fixed within the population, and different genotypes predominated in various sites in the host. Other studies have also reported azole-resistant clinical isolates acquired through homozygosity of TAC1 hyperactive alleles, which conferred an intermediate, codominant phenotype when expressed alongside the nonhyperactive allele in a paired azole-sensitive strain [42].
Similar gain-of-function mutations followed by LOH were reported in a study by Dunkel and colleagues [43], where mutations in the gene encoding the drug efflux regulator Mrr1 were identified by comparing fluconazole-resistant isolates of C. albicans to related susceptible isolates from the same patients. Five of 7 resistant isolates had become homozygous for the mutated allele, which increased resistance to azole antifungals to a level greater than observed in strains that only had a single copy. The acquisition of the Mrr1 heterozygous mutation facilitated the modulation of the variant allele's gene dosage through LOH in conditions where it was selectively advantageous. The retention of alleles with different levels of activity may circumvent significant tradeoffs for Mrr1 hyperactivity [44,45], which we will discuss further below, such as fitness costs in the absence of a drug [46].
Aneuploidy and copy number variation rapidly diversify a population through gene dosage
Aneuploidy and CNV are 2 fascinating phenomena that contribute to diversity within fungal populations by modifying the dosage of genes through the gain or loss of whole chromosomes or specific genome regions. Gene redundancy can also facilitate evolution by allowing cells to harbor native and functional alleles. Alterations in ploidy and copy number can increase in response to stress, and this plasticity can significantly affect fitness and treatment outcomes [47]. In particular, the duplication of genes contributing to azole drug resistance is common and challenging for the clinical management of fungal infections [48][49][50]. Aneuploidies in C. albicans confer fitness benefits despite associated costs in various physiologically relevant conditions [51], and studies of recurrent Candida infections have identified aneuploidy events that alter antifungal resistance [41,[52][53][54][55][56]. Similarly, heteroresistant populations of C. neoformans are known to have acquired disomic chromosomes in response to fluconazole [57]. Evolution studies of Candida parapsilosis have identified karyotypes that promote cross-tolerance to multiple drugs [58], and azole resistance has evolved through novel mechanisms such as segmental chromosome duplication in Candida auris [59]. Whole chromosome duplications, such as trisomy of chromosomes 5 and 6 in C. albicans, significantly alter host interaction by conferring commensal-like phenotypes with less stimulation of inflammation and weight loss in the mouse oropharyngeal candidiasis model and reduced adhesion and invasion [60].
In the study by Rhodes and colleagues [35], aneuploidies were detected in 7 of 17 pairs of recurrent C. neoformans isolates recovered from meningitis patients, in addition to hypermutator lineages. Chromosome 12 was frequently aneuploid, potentially increasing the expression of SFB2, a proposed member of the sterol regulatory element binding protein pathway, and the alcohol dehydrogenase-encoding GNO1, both genes potentially promoting C. neoformans virulence. ERG11, which encodes a component of the sterol synthesis pathway and the target for azole drugs, also had increased copy numbers in 7 pairs of isolates [35], and this was observed even in the absence of drug treatment. These observations underscore the potential for aneuploidies to impact multiple phenotypes.
Heterokaryon formation and other changes in ploidy
Haploid organisms such as Aspergillus nidulans and A. fumigatus can also undergo recombination through diploid heterokaryon formation and the parasexual cycle. The parasexual cycle and accompanying variation in ploidy in A. nidulans produce haploid recombinants with improved fitness, measured by mycelial growth rate [61]. A screen of A. fumigatus isolates from CF, chronic pulmonary lung disease, and chronic aspergillosis found evidence for diploid formation in vivo but notably did not find any diploids in 368 acute infection or environmental isolates, suggesting that chronic infections provide a niche environment for genetic diversity to develop through parasexual recombination [62]. Further study is required to understand what stresses in chronic infection stimulate parasex and whether the parasexual process is relevant for other fungi in chronic illness.
Massively polyploid C. neoformans cells, which become enlarged in the initial stages of pulmonary infection, are known as "titan cells" and provide an extreme example of changes in ploidy associated with a morphological transition, leading to the variegation of function during infection [63]. Titan cells can be 5 to 10 times larger than normal haploid C. neoformans cells, can make up to 20% of the cell population, and increase relative to normal yeast cells throughout infection [64,65]. Exposure to reactive nitrogen species, bacterial cell wall components, and other stimuli can trigger the formation of polyploid titan cell populations in C. neoformans, which are highly heterogeneous in size and nuclear content [66]. Titanization increases resistance to reactive oxygen species and azoles and induces a strong Th2 response while being resistant to phagocytosis [63]. In clinical isolates obtained from HIV/AIDS patients, these diverse C. neoformans cells are also accompanied by "seed" cells of diminished cell wall thickness and volume [67]. Cryptococcus seed cells are induced by phosphate supplementation and excel at dissemination and invasion of extrapulmonary organs, which is partially dependent on size [67]. The specific role of seed cells compared to other cryptococcal microcell variants in host interaction is unclear. Still, their negative correlation with acute symptoms indicates that they are a relevant morphology in persistence and chronic infection, and further highlights the importance of phenotypic heterogeneity in infection.
Nongenetic mechanisms for the generation of population heterogeneity: C. albicans as an example
As detailed above, phenotypic switching, observed in many fungi such as C. neoformans, enables the coexistence of variable cell states, maximizing a fungus' ability to occupy multiple metabolic and immunological niches. C. albicans utilizes particularly well-studied mechanisms of genetic regulation and epigenetic stochasticity to switch between myriad cell states, thoroughly reviewed by Noble and colleagues [68]. First, yeast-to-hyphae transitions contribute to niche specification in C. albicans. Yeast and hyphal forms of C. albicans are frequently observed to coexist in biofilms formed in infection [69], and the inability to interconvert between morphologies renders C. albicans avirulent [70,71]. While filamentation is regulated by a network of transcription factors that respond to factors in the infection environment, genetic changes such as aneuploidy [72] and mutation [73,74] can modulate this response. Second, C. albicans can stably grow in a variety of distinct cell states named after the appearance of colonies formed by each type (Fig 2). The white-opaque switch is a reversible and stochastic transition between 2 cellular states with distinct genetic programming. White and opaque cell types are phenotypically distinct, with a metabolic bias towards glycolysis in white cells and increased beta-oxidation and respiration in opaque cells [75]. The impact of these metabolic differences on C. albicans fitness is extensively dissected in a 2016 paper by Ene and colleagues [76], which establishes that while white cells have competitive fitness advantages in most tested growth conditions, there are specific substrates (e.g., glucose and triglycine) where opaque cells have improved growth and biofilm formation relative to white cells. More recent work has revealed additional plasticity in the white-grey-opaque and white-GUT switches [77,78]. Homozygous deletion of the white-opaque master regulator Wor1 and the transcriptional regulator Efg1 results in the production of semi-mating competent grey cells, while Wor1 overexpression produces the GUT morphology. These various morphotypes are proposed to be an essential component of host-Candida interactions in the gut. The central transcriptional regulator of C. albicans morphology, Efg1, is observed to vary between high and low expression in gut isolates [79,80]. Several recent studies have reinforced the fitness trade-offs of invasive growth in exchange for commensalism, such as increased colonization fitness in C. albicans strains lacking SAP6 or UME6 [81], and loss-of-function mutations in FLO8 and EFG1, which encode hyphal growth regulating transcription factors; these loss-of-function mutations followed by LOH resulted in commensal phenotypes [82,83]. Anderson and colleagues [84] observed extensive diversity within a single host and phenotypic variation between closely related isolates in a study of C. albicans commensal gut isolates from 35 healthy donors. These studies highlight that during a shift to pathogenesis due to a change in host protective mechanisms, there is diversity in the population, which will likely undergo selection for those strains with maximal fitness. The dynamic nature of fungal morphology and cell state suggests infectious populations exist in heterogeneous cell states, which appears to maximize the occupation of multiple metabolic and immunological niches.
Assessment of the environmental drivers of selection in chronic, heterogeneous cystic fibrosis fungal populations
The study of CF lower airway infections has provided unique insight into fungal heterogeneity within chronic infections. CF respiratory infections occur in a dynamic neutrophilic environment where the host's immune defenses continuously interact with colonizing fungal and bacterial pathogens. The study of intraspecies heterogeneity in CF has primarily focused on the most common infections caused by bacteria, such as Pseudomonas aeruginosa and Staphylococcus aureus [85][86][87]. In P. aeruginosa, mutations that lead to amino acid auxotrophies, overproduction of the exopolysaccharide alginate, loss of function of the LasR quorum sensing regulator, and drug resistance are common. In S. aureus, mutations in agr, which encodes a quorum sensing regulator, heme auxotrophy, and the small colony variant morphology are common.
In contrast, less is known about how fungi evolve in CF airways. The prevalence of fungi in CF respiratory samples has risen in recent years, and greater diversity in fungal species has been detected [88]. Multiple species of Candida, as well as A. fumigatus and Exophiala dermatitidis, are among the most common fungi associated with chronic CF lung infections [89]. While the pathogenicity of these organisms in CF is still an active area of research, there is evidence that fungal colonization contributes to worsened patient outcomes, as highlighted in several papers and reviews [88,[90][91][92][93][94]. Below, we will discuss insights into the evolution of fungal populations over time derived from studying the heterogeneity of fungi in CF. These findings contribute to a better understanding of factors influencing infecting microbes, such as host immune responses, antimicrobial treatments, metabolite production, nutrient limitation, and oxygen availability (Fig 3).
C. albicans mutations in polymicrobial CF lung infections
In work by Kim and colleagues [74] that assessed mycobiome heterogeneity in 28 study participants with CF (pwCF), significant heterogeneity in antifungal drug resistance was observed in both Candida spp. and A. fumigatus isolates collected from CF sputum. In 5 pwCF colonized with C. albicans and 1 colonized by C. parapsilosis, isolates that had a wrinkled morphology, which is associated with filamentation, were found. Genetic analysis found evidence that the Candida populations in each of these 6 individuals had at least 1 lineage with an independent loss-of-function mutation in the gene encoding Nrg1, a well-characterized repressor of filamentation. The repeated loss of Nrg1 function across different individuals and in 2 species suggests that Nrg1 loss-of-function confers increased fitness in the CF lung. Recently, Gnaien and colleagues [73] also used whole genome sequence data to show that C. albicans isolates from individuals with CF were clonal and that there was clear divergence among isolates from the same patient. Their work found a gain-of-function mutation in the gene encoding Rob1, a positive regulator of filamentation [73]. Because the mutations in NRG1 and ROB1 found in CF C. albicans isolates lead to increased growth as hyphae rather than as yeast, it is reasonable to hypothesize that either the hyphal morphology or genes coregulated with this morphological change (e.g., increased expression of adhesins and host-damaging toxins such as candidalysin or increased acquisition of micronutrients such as iron [95,96]) may promote fitness in chronic lung infections. C. albicans is frequently found in coinfections with the bacterium P. aeruginosa [97], and all but one of the hyperfilamentous isolates were recovered from patients coinfected with P. aeruginosa. One of the C. albicans nrg1 mutants was isolated from a coinfection with Burkholderia multivorans [74]. Interestingly, both the nrg1 and rob1 alleles found in CF isolates better resisted the repression of filamentation and antagonism by P. aeruginosa in in vitro cocultures [73,74,98].
Dynamic selection for and against high Mrr1 activity in CF C. lusitaniae infections
Analysis of regional microbial populations by bronchoalveolar lavage found 3 individuals with CF with infections dominated by C. lusitaniae with no evidence of coinfecting bacteria [99,100]. C. lusitaniae is a non-albicans Candida species closely related to C. auris and has been observed to rapidly evolve antifungal resistance through ERG3 and FKS1 mutations in response to clinical antifungal treatment [101,102]. C. lusitaniae has been previously identified in CF lung infections [103][104][105], but its presence in the CF lung is uncommon. In the C. lusitaniae population from 1 patient, multiple independent mutations in the MRR1 gene that encodes the multidrug resistance transcription factor were identified, with 13 alleles found within the genomes of the 20 sequenced isolates. Genetic heterogeneity in MRR1 led to phenotypic variation in azole resistance [27], and the presence of fluconazole-resistant strains was surprising as there was no evidence of prior exposure to antifungals. Thus, it was proposed that the increased Mrr1 activity was under selection for different reasons, such as improved resistance to antimicrobial peptides such as histatin 5 or to P. aeruginosa-produced phenazines through elevated levels of the Mdr1 efflux pump [27,106]. Studies of the Mrr1 regulon also found that it regulates 2 genes encoding methylglyoxal (MG) reductases, Mgd1 and Mgd2, which detoxify MG, an electrophilic, toxic metabolite [107]. Given the impaired detoxification of MG in the CF lung [108,109], it is possible that elevated MG levels contributed to the selection of strains with high Mrr1 activity. The retrospective detection of fluconazole-resistant strains with hyperactive Mrr1 alleles explained the observed azole treatment failure [44]. Interestingly, isolates with MRR1 alleles with activating mutations frequently acquired secondary nonsense or missense mutations in the 3′ end of the MRR1 gene, which reduced the constitutive activity of Mrr1 and thus lowered fluconazole and MG resistance [44]. Phenotype analysis of strains with different MRR1 alleles found that while high Mrr1 activity conferred increased protection against MG, it rendered cells more sensitive to oxidative stress [44]. Furthermore, longitudinal analyses of respiratory samples revealed an inverse correlation between high Mrr1 activity (fluconazole and MG resistance) and hydrogen peroxide resistance at the population level over time, suggesting that there could be opposing selective pressures in the lung [44]. This complex scenario highlights the potential need for combinatorial treatments in addressing drug resistance in CF-associated fungal infections, informed by trade-offs in fitness that commonly accompany specific mutations and phenotypes.
Mutations in MRS4 across fungal species in CF lung infections
In all 3 chronic CF C. lusitaniae infections, at least 1 lineage with loss-of-function mutations in the gene encoding the mitochondrial iron importer Mrs4 was detected [100]. Allelic exchange of the mutated mrs4 alleles into a common background demonstrated that they all produced a hyperactive iron-scavenging response [100]. Expression of genes involved in iron uptake through heme, siderophores, and reductive mechanisms was significantly increased, and isolates with mutated mrs4 alleles had significantly greater intracellular iron content than those with fully functional Mrs4. Pooled sequencing of bronchoalveolar lavage fluid from each patient with C. lusitaniae revealed that different genotypes of MRS4 were spatially biased to different lobes of the lung, indicating the role of spatial separation in the generation of genetic diversity and the value of studying populations from various locations [100]. Importantly, analysis of isolates of the black yeast E. dermatitidis from an independent chronic CF infection also showed diversification over time with the persistence of 2 major subpopulations, with evidence for the evolution of a loss-of-function mrs4 allele in 1 clade [110]. It is speculated that the driving force for the selection of mrs4 loss-of-function mutations is to overcome nutritional immunity, wherein vital micronutrients such as iron are sequestered by calprotectin and other iron-binding proteins like ferritin and lactoferrin [111][112][113][114][115]. There is evidence for metal restriction in the CF environment, likely due to nutritional immunity factors, even though some studies have presented evidence for increased iron content in sputum and BAL fluid [116,117]. While the loss of Mrs4 function in CF isolates may aid in iron acquisition, an mrs4Δ/Δ mutant results in loss of virulence in a C. albicans murine model of systemic candidiasis [118]. This contradiction highlights that acute and chronic infections require different fitness traits. Further, these studies suggest that changes in iron acquisition, and perhaps storage, increase fitness in chronic CF-related lung infections in diverse species (e.g., C. lusitaniae and E. dermatitidis).
A. fumigatus heterogeneity in the CF lung: Changes in Hog1
A. fumigatus is a ubiquitous mold and a critical causal agent of invasive and chronic infections. The detection of Aspergillus spp. in CF clinical samples is associated with worse clinical outcomes and increased lung damage [119]. Ross and colleagues [120] highlight the striking genotypic and phenotypic diversity of A. fumigatus isolates collected over four and a half years from a single individual with CF who had not received antifungal treatment. Whole genome sequencing analysis of these longitudinal A. fumigatus CF isolates identified 2 persistent lineages with differences in phenotypes relevant to CF lung infections, including their ability to grow in low oxygen, their adaptation to osmotic and oxidative stress, and their sensitivity to voriconazole. One lineage of persistent isolates in this individual contained a novel allele encoding the high-osmolarity glycerol (HOG) pathway protein kinase kinase Pbs2 [120]. Allelic exchange experiments demonstrated that the novel pbs2 allele encoded a variant that hyperactivates the HOG MAP kinase ortholog SakA and is sufficient to confer changed phenotypes in low oxygen, in the presence of voriconazole, and in high osmolarity conditions. While the novel pbs2 allele was beneficial in these CF-relevant conditions, it also led to significant differences in morphotype (conidia versus hyphae)-specific growth and to disadvantages in growth in normoxic environments compared to isolates containing the more common pbs2 allele. Interestingly, the introduction of the novel pbs2 allele into more genetically distant backgrounds of A. fumigatus was not sufficient to promote a growth advantage in CF-relevant environments, highlighting that the benefit of some adaptive mutations can be highly dependent on specific genetic backgrounds and may involve complex genetic interactions.
Outstanding questions and concluding remarks
Due to their complex and heterogeneous nature, chronic fungal infections are challenging to manage and treat. Genetic heterogeneity in chronic fungal infections is generally not evaluated diagnostically, and the lack of such information may limit the use of therapeutic strategies suited to combat complex populations and communities, such as combination drug therapy [16]. With the increased availability of technology for next-generation sequencing of multiple isolates or populations from clinical samples, it is now possible to describe fungal populations within samples and how they change over time. Current knowledge of the diversity of fungal infection populations in other types of mycoses that occur in diverse body sites, such as the sinuses, mouth, lungs, groin, and feet, is limited. There is evidence that vulvovaginal candidiasis isolates and isolates from aspergillomas can be closely related but genetically distinct, which may be due to genetic adaptation to the host environment [121][122][123].
The mechanisms driving fungal heterogeneity are varied and often include mutations, mobile elements, LOH, and changes in the copy number of genomic regions or, in some cases, whole chromosomes. Emerging research on the prevalence, retention, and effects of mycophages on human fungal pathogens may add to our understanding of fungal population dynamics. Cross-sectional and longitudinal characterization of fungal populations from CF lung infections found evidence of selection for increased growth in the filamentous morphology, efflux pump activity, iron acquisition, and hypoxia tolerance, among other phenotypes. These examples highlight how studying intraspecies heterogeneity can elucidate relevant host stresses and pathoadaptive mechanisms for clinically important fungi. However, several crucial questions remain unanswered, including the following:
• Are there in vivo or in vitro models that best represent the selective pressures present in chronic infections?
• Are there selective pressures that are specific to chronic infections as opposed to acute infections?
• Are there mutations or levels of population heterogeneity predictive of virulence, potential for clearance by immune responses, or the likelihood of antifungal therapy failure?
• Do genetically diverse populations have emergent properties due to interactions between evolved lineages that may influence host damage or response to antifungal action?
• Are body sites distinct in adaptation, or are there convergent phenotypes that repeatedly arise?
• Do other microbes influence the evolution of fungi in complex multispecies infections?
• Are similar pathways undergoing selection across different types of microbial pathogens?
As studies on fungal populations in chronic infections advance, ecological and evolutionary models incorporating parameters such as mutation rate, growth rate, population size, and interactions may be helpful in assessing the strength of different selective pressures. In the future, we hope these insights will aid in developing therapeutic strategies that address the diverse and evolving nature of chronic fungal infections and may ultimately improve outcomes for individuals grappling with these persistent health challenges.
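As a minimal illustration of the kind of ecological-evolutionary model invoked here, the following sketch simulates a single resistant lineage under a Wright-Fisher model with selection, drift, and a treatment bottleneck. Every parameter value (population size, selection coefficient, bottleneck timing) is an assumed placeholder for illustration, not an estimate from any study above:

```python
# A minimal Wright-Fisher sketch of one resistant lineage in a fungal
# population under drift, selection, and a treatment bottleneck.
import numpy as np

rng = np.random.default_rng(0)

N = 10_000        # effective population size (assumed)
s = 0.05          # selective advantage of the resistant lineage (assumed)
p = 1.0 / N       # start from a single resistant cell
generations = 500
bottleneck_at, bottleneck_N = 250, 100   # drug-treatment population crash (assumed)

trajectory = []
for g in range(generations):
    n = bottleneck_N if g == bottleneck_at else N
    # Selection shifts the expected frequency; binomial sampling adds drift
    p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))
    p = rng.binomial(n, p_sel) / n
    trajectory.append(p)
    if p in (0.0, 1.0):   # lineage lost or fixed
        break

print(f"Final resistant-lineage frequency: {p:.3f} after {len(trajectory)} generations")
```

Running such a model repeatedly shows how often a rare beneficial lineage is lost to drift versus swept to fixation by a bottleneck, which is the intuition behind using population parameters to gauge selective-pressure strength.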
Fig 1. Genetic mechanisms leading to phenotype diversification in fungal pathogens. Functional variation can arise through genome sequence changes such as (A) single nucleotide mutations, insertions, and deletions that occur during replication (in red); hypermutators have an increased rate of mutation due to defects in mismatch repair. (B) Loss of heterozygosity (LOH) in diploid species. (C) Altered gene dosage of relevant alleles can occur through aneuploidy and (D) copy number variation (CNV). (E) Fungi are also capable of parasexual recombination through the formation of a heterokaryon [1], the merging of nuclei to form a heterozygous diploid [2], followed by recombination and a return to haploidy [3]. (F) Mobile genetic elements and the presence of a mycophage can also alter phenotype. Figure prepared using BioRender.
Fig 3. Factors contributing to genetic and phenotypic heterogeneity in fungal populations in CF-associated lung infections. Initial colonizers generate genotypic and phenotypic diversity (variation) through the mechanisms described in Fig 1. Selective pressures include (1) hypoxia, (2) damaging toxic metabolites including hydrogen peroxide (H2O2) and methylglyoxal (MG), (3) products from other coinfecting microbes, (4) interactions with immune effectors, (5) nutritional immunity factors that, for example, restrict access to transition metals such as iron (Fe) or zinc, and (6) antifungals or other therapeutic agents. Spatial separation across lungs and lobes of the lung and the dynamic infection environment, with periods of disease stability and exacerbation, increase the heterogeneity of the population. Figure prepared using BioRender.
"Biology",
"Environmental Science",
"Medicine"
] |
Three-dimensional total internal reflection fluorescence nanoscopy with sub-10 nm resolution
Here, we present a single-molecule localization microscopy (SMLM) analysis method that delivers sub-10 nm z-resolution when combined with 2D total internal reflection (TIR) fluorescence imaging via DNA point accumulation for imaging nanoscale topography (DNA-PAINT). Axial resolution is obtained from a precise measurement of the emission intensity of single molecules under evanescent field excitation. This method can be implemented on any conventional TIR wide-field microscope without modifications. We validate this approach by resolving the periodicity of alpha-tubulin assembly in microtubules, demonstrating isotropic resolution below 8 nm.
Recently, two techniques have stood out for reaching this level of resolution in two dimensions: DNA-PAINT 1,2 and MINFLUX 3. Although both methods provide lateral resolutions well below 10 nm, the issue is not yet solved for the axial counterpart. Axial resolution of fluorescence nanoscopy using a single objective lens lies in the range of 35 to 120 nm for both coordinate-targeted and coordinate-stochastic methods 4,5, including recent intensity-based approaches that rely on supercritical angle fluorescence or accurate photometry determination 6,7. By exploiting the 4Pi configuration 8 it is possible to reach axial resolution below 35 nm, but at the cost of increased technical complexity. Isotropic STED (isoSTED) has been shown to deliver nearly isotropic resolution in the range of 30 to 40 nm 9,10, whereas 4Pi PALM/STORM has reached 10 to 20 nm resolution in 3D 11-13. To date, sub-10 nm axial resolution has only been achieved by decoding the z-position of fluorophores through lifetime imaging, making use of the distance-dependent energy transfer from excited fluorophores to a metal film 14 or a graphene sheet 15. However, combining these ns time-resolved methods with other nanoscopy methods in order to obtain 3D imaging with sub-10 nm resolution is not straightforward 16.
Here, we present Supercritical Illumination Microscopy by Photometric z-Localization Encoding (SIMPLE), an easy-to-implement photometric method to determine the axial position of molecules near a dielectric interface under total internal reflection excitation. Under this condition, fluorescent molecules that are closer to the interface appear brighter due to two factors. First, they are excited more efficiently because the TIR illumination field decays exponentially from the interface. Second, molecules closer to the interface emit more photons into the glass semi-space and into the collection solid angle. SIMPLE consists of calibrating the detected fluorescence signal considering these two effects in order to retrieve the axial position of single molecules from a direct measurement of their detected fluorescence intensity. SIMPLE can be combined with any fluorescence nanoscopy method based on localization of single molecules. In combination with DNA-PAINT, SIMPLE delivers sub-10 nm resolution in all three dimensions, enabling the direct recognition of protein assemblies at the molecular level. Figure 1 illustrates the concept of SIMPLE. TIR occurs when light is incident from a medium with refractive index $n_1$ on an interface with another medium of smaller refractive index $n_2 < n_1$. If the angle of incidence $\theta$ is larger than the critical angle $\theta_c = \arcsin(n_2/n_1)$, light is fully reflected at the interface and an evanescent field appears, penetrating the medium of low refractive index with an intensity that decays exponentially. In a fluorescence microscope, TIR illumination can be generated by controlling the angle of incidence of the excitation light using an immersion objective lens, as schematically shown in the inset of Figure 1a. In practice, the excitation field also contains a non-evanescent component due to scattering, which decays on a much longer scale 17. Near the interface, the non-evanescent component can be considered constant, and the overall illumination field is represented by a linear superposition of both contributions, $I(z) = \alpha I_0 e^{-z/d} + (1-\alpha) I_0$, with $I_0$ the intensity at the interface, $d = (\lambda_0/4\pi)\,(n_1^2 \sin^2\theta - n_2^2)^{-1/2}$ the penetration depth, $\lambda_0$ the vacuum wavelength, and $1-\alpha$ the scattering contribution fraction. Figure 1a shows $I(z)$ for our configuration ($\lambda_0 = 642$ nm, $n_1 = 1.517$, $n_2 = 1.33$ for water, $\theta = 69.5°$, $\alpha = 0.9$), which decays with $d = 102$ nm.
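As a quick sanity check on the numbers quoted above, the following sketch recomputes the critical angle and penetration depth from the stated optical parameters; nothing here introduces new data, it only re-evaluates the formula for $d$:

```python
# Reproduce the quoted evanescent-field penetration depth (d ~ 102 nm)
# from the optical parameters given in the text.
import math

lam0 = 642e-9         # vacuum wavelength (m)
n1, n2 = 1.517, 1.33  # glass / water refractive indices
theta = math.radians(69.5)

theta_c = math.degrees(math.asin(n2 / n1))
d = lam0 / (4 * math.pi) / math.sqrt(n1**2 * math.sin(theta)**2 - n2**2)
print(f"critical angle = {theta_c:.1f} deg")    # ~61.2 deg
print(f"penetration depth d = {d*1e9:.0f} nm")  # ~102 nm

# Illumination profile including the non-evanescent background (alpha = 0.9)
alpha, I0 = 0.9, 1.0
I = lambda z: alpha * I0 * math.exp(-z / d) + (1 - alpha) * I0
print(f"I(100 nm)/I0 = {I(100e-9):.2f}")
```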
The excitation rate of a fluorophore (under linear excitation) depends on the axial position according to $I(z)$. The fraction of the fluorescence emission collected by the solid angle of the microscope objective also depends on the axial position of the fluorophore, as well as on the relative orientation of its emission dipole to the interface 18. Figure 1b shows the resulting axial dependence of the detected photon count, $N(z)$, where $N_0$ is the number of photons emitted by a fluorophore at $z = 0$.
Using the exponential expression of $I(z)$, an estimation of the axial position of a molecule ($\hat z$) can be obtained from a measurement of the photon count emitted in a camera frame time ($\hat N$), as follows:

$$\hat z = -d \, \ln\!\left[\frac{\hat N / N_0 - (1-\alpha)}{\alpha}\right]. \qquad (2)$$

Then, the standard error of $\hat z$, which ultimately determines the axial resolution, is given by:

$$\sigma_{\hat z} = \sqrt{\left(\frac{d\,\sigma_{\hat N}}{\hat N - (1-\alpha) N_0}\right)^{2} + \left(\frac{\hat z}{d}\,\sigma_d\right)^{2}}. \qquad (3)$$

In this expression, we have considered $\sigma_{\hat N} = \sqrt{\hat N}$, which arises from the fact that $\hat N$ is Poisson distributed and that in typical stochastic-coordinate nanoscopy the number of emitted photons of each fluorophore is determined in one single measurement. Instead, $N_0$ is a reference parameter that depends on the nature of the fluorophore and the experimental conditions. Since it can be measured an arbitrary number of times, its error can be made negligibly small; we have therefore considered $\sigma_{N_0} = 0$ for the computation of the theoretical lower bound for the resolution. Figure 1d shows $\sigma_{\hat z}$ as a function of the axial position for experimentally accessible values of $N_0$ and $\sigma_d$. Clearly, this method is able to deliver an axial resolution below 10 nm under usual experimental conditions. The range of sub-10 nm resolution depends strongly on the uncertainty of $d$. For $\sigma_d = 1$ nm, a resolution well below 10 nm is expected up to $z = 250$ nm for $N_0 > 10{,}000$. If $\sigma_d = 5$ nm, the resolution becomes fairly independent of the photon count for $N_0 > 30{,}000$, but the range of sub-10 nm resolution is limited to $z < 170$ nm.
It is interesting to note that up to 100 nm, $N(z)$ can be approximated fairly well by a single exponential with no background ($\alpha = 1$). Under these conditions, and if $d$ could be determined with negligible error, then $\sigma_{\hat z} = d/\sqrt{\hat N}$. This bound to the resolution is analogous to the one for lateral resolution in single molecule localization, with the difference that the numerator is not the lateral size of the point-spread function, but the much smaller decay constant of the detected fluorescence signal under TIR conditions.
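A minimal sketch of this photometric z-estimation is given below, using the expressions for $\hat z$ and $\sigma_{\hat z}$ as reconstructed above from the stated exponential-plus-background model; the photon counts and the default values of $d$, $\alpha$, and $\sigma_d$ are illustrative assumptions only:

```python
import math

def z_hat(N, N0, d=102.0, alpha=0.9):
    """Axial position (nm) from a detected photon count, inverting
    N(z) = N0 * (alpha*exp(-z/d) + (1 - alpha))  -- Eq. (2) as reconstructed."""
    return -d * math.log((N / N0 - (1 - alpha)) / alpha)

def sigma_z(N, N0, d=102.0, alpha=0.9, sigma_d=1.0):
    """Localization precision (nm), propagating shot noise sqrt(N) and the
    uncertainty in the penetration depth d -- Eq. (3) as reconstructed."""
    z = z_hat(N, N0, d, alpha)
    shot = d * math.sqrt(N) / (N - (1 - alpha) * N0)
    return math.sqrt(shot**2 + (z / d * sigma_d)**2)

N0 = 30_000  # reference count at z = 0 (illustrative)
for N in (25_000, 15_000, 8_000):
    print(f"N = {N:6d}: z = {z_hat(N, N0):6.1f} nm, sigma_z = {sigma_z(N, N0):4.1f} nm")
```

Evaluating these expressions at, for example, $N_0 = 10{,}000$ and $z = 250$ nm gives a precision of about 6 nm, consistent with the sub-10 nm range quoted in the discussion of Figure 1d.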
In practice, data is acquired and analyzed as in any other coordinate-stochastic fluorescence nanoscopy method, with the addition that the detected number of photons per frame ($\hat N$) is used to determine the z-coordinate through equations (2) and (3). We applied this approach to the supramolecular assembly of structural proteins. Due to its simplicity and power, we believe SIMPLE will enable a new wave of discoveries about the structure and pathways of sub-cellular structures and protein-protein interactions.
Single molecule emission
The emission pattern of single molecules was simulated as a small dipole using a finite-difference time-domain approach; the resulting axial dependence of the collected signal is summarized in Supplementary Table 1.
Super-resolution microscopy setup
The microscope used for TIR fluorescence SMLM was built around a commercial inverted microscope. Typically, we acquired sequences of 50,000-70,000 frames at a 4 Hz acquisition rate with a laser power density of ~2.5 kW/cm².
Samples were then used immediately for DNA-PAINT imaging.
Data acquisition, analysis and 3D image rendering
Lateral (x, y) molecular coordinates and photon counts ($\hat N$) were obtained using the Localize module of the Picasso software 2, selecting a threshold net gradient of 3000 for microtubules. For each image, a photon count $N_0$ was assigned to z = 0. We set it using biological considerations (i.e., the estimated distance of a structure to the cover-glass). For example, spectrin rings are attached to the plasma membrane, hence we set $N_0$ so that the lower bound of the rings sits at z = 5-10 nm from the coverslip (Fig. 2a; Supplementary Figs. 2 and 4). A similar approach was used to set $N_0$ for microtubule images (Figs. 2b, 2d, 3b, and 4). For each localization, the z-localization precision ($\sigma_z$) could be calculated from Eq. (3) using a $\sigma_d$ value of 1 nm, based on an error of 0.5° in the determination of the incident angle.
Finally, z-color-coded image rendering was done using the ImageJ plug-in ThunderSTORM 21, importing the list of (x, y, z) coordinates. A Gaussian filter with $\sigma$ = 2 nm was used for all three dimensions. A lenient density filter was applied to discard localizations with fewer than 100 neighbours in a 67 nm radius, to enhance contrast by suppressing some of the non-specific localizations of the background.
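A minimal sketch of such a density filter is shown below. The radius and neighbour threshold follow the values stated above, while the function name and the synthetic test data are illustrative assumptions:

```python
# Sketch of the lenient density filter described above: discard
# localizations with fewer than 100 neighbours within a 67 nm radius.
# Coordinates are assumed to be an (n, 3) array in nm.
import numpy as np
from scipy.spatial import cKDTree

def density_filter(xyz, radius=67.0, min_neighbours=100):
    tree = cKDTree(xyz)
    # For each point, count neighbours inside the radius (excluding itself)
    counts = np.array([len(tree.query_ball_point(p, radius)) - 1 for p in xyz])
    return xyz[counts >= min_neighbours]

# Example with synthetic localizations
rng = np.random.default_rng(1)
cluster = rng.normal(0, 20, size=(500, 3))           # dense structure
background = rng.uniform(-500, 500, size=(200, 3))   # sparse non-specific events
filtered = density_filter(np.vstack([cluster, background]))
print(f"kept {len(filtered)} of 700 localizations")
```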
α-tubulin structure analysis
The first-5-neighbours distance analysis for α-tubulin was made as follows. First, the list of (x, y) coordinates was multiplied by a rotation matrix in order to align the microtubule with the image x-axis; the resulting neighbour distances give rise to the histograms shown in Fig. 2e.
Data availability.
The data sets generated and analyzed in this study are available from the corresponding author upon reasonable request.
Supplementary Figure 2. Influence of the first and last frame filtering step on image quality.
Supplementary Figure 3. Quantification of the differences in z values obtained using the exact solution of the fluorescence signal or the exponential approximation, for varying d and N0.
Supplementary Figure 4. Comparison of side-view reconstructions by SIMPLE using different computation methods.
Supplementary Figure 5. x/y and x/z images of a single microtubule immunolabeled with α-tubulin.
Supplementary Figure 6. Calibration of the TIRF excitation angle.
Supplementary Table 1. Axial dependence of the collected fluorescence signal.
The first and last frames of a localization are dismissed during the optional frame filtering step because it cannot be assured that those molecules were emitting during the whole frame duration. Computing those frames could lead to a photon count lower than expected. The effect of this frame filtering is illustrated in Supplementary Fig. 2.
Supplementary Figure 2. Influence of the first and last frame filtering step on image quality.
Overview image of β2-spectrin rings in neurons and magnified side-view reconstructions, i.e., z-y projections, of the boxed regions in the x-y view, where the rendering was done with (a) and without (b) the frame filtering step of the localizations (described in Methods and Supplementary Fig. 1). In the x-y view, the filter's action resembles that of a density filter, improving contrast by suppressing isolated or unspecific events. In the z-y projections, we see that the filter suppresses localizations that are wrongly assigned a higher z coordinate due to an incorrectly determined, lower photon count. Scale bars represent 1 μm (top view) and 100 nm (side view). Variations of d in this range do not introduce distortions greater than 5 nm for z < 150 nm.
"Physics"
] |
Microscopic Theory of Multipole Ordering in f-Electron Systems
A microscopic framework to determine multipole ordering in f-electron systems is provided on the basis of standard quantum field theory. For the construction of the framework, a seven-orbital Hubbard Hamiltonian with strong spin-orbit coupling is adopted as a prototype model. A type of multipole and ordering vector is determined from the divergence of the multipole susceptibility, which is evaluated in a random phase approximation. As an example of the application of the present framework, a multipole phase diagram on a three-dimensional simple cubic lattice is discussed for the case of n = 2, where n denotes the average f-electron number per site. Finally, future problems concerning multipole ordering and fluctuations are briefly discussed.
Introduction
Recently, complex magnetism in rare-earth and actinide compounds has attracted much attention in the research field of condensed matter physics [1][2][3]. Since, in general, the spin-orbit coupling between electrons in 4f and 5f orbitals is strong, spin and orbital degrees of freedom are tightly coupled in f-electron materials. Thus, when we attempt to discuss magnetic ordering in f-electron systems, it is necessary to consider the ordering of the spin-orbital complex degree of freedom, that is, multipole. In fact, the ordering of higher-rank multipoles has been actively investigated from both experimental and theoretical sides in the research field of strongly correlated f-electron systems [2,3]. Moreover, due to recent remarkable developments in experimental techniques and measurements, it has now become possible to detect directly and/or indirectly the multipole ordering. Note, however, that only the spin degree of freedom often remains when the orbital degeneracy is lifted, for instance, due to the effect of a crystal structure with low symmetry. In order to promote the research of multipole phenomena, f-electron compounds crystallizing in a cubic structure with high symmetry are quite important. For instance, octupole ordering has been discussed in phase IV of Ce0.7La0.3B6 [4] and in NpO2 [3,[5][6][7][8] with cubic structure. As for NpO2, recently, a possibility of dotriacontapole ordering has also been pointed out [9,10].
Here we emphasize that the study of multipole phenomena has been activated by the focused research on filled skutterudite compounds LnT4X12, with lanthanide Ln, transition metal atom T, and pnictogen X [11]. Since these compounds crystallize in the cubic structure of the Th point group, they have provided an ideal stage for the research of multipole physics. Furthermore, many isostructural materials with different kinds of rare-earth and actinide ions have been successfully synthesized, leading to the development of systematic research on multipole ordering. In fact, recent experiments in close cooperation with phenomenological theory have revealed that multipole ordering frequently appears in filled skutterudites. For instance, a rich phase diagram of PrOs4Sb12 with field-induced quadrupole order has been unveiled experimentally and theoretically [12][13][14]. Furthermore, antiferro-Γ1-type higher multipole order [2] has been discussed for PrRu4P12 [15,16] and PrFe4P12 [17][18][19].
Now we turn our attention to theoretical research on multipole order. Thus far, the theory of multipole ordering has been developed mainly from a phenomenological viewpoint on the basis of an LS coupling scheme for the multi-f-electron state. It is true that several experimental results have been explained by those theoretical studies, but we believe that it is also important to promote a microscopic approach for the understanding of multipole phenomena in parallel with phenomenological research. Based on this belief, the present author has developed a microscopic theory for multipole-related phenomena with the use of a j-j coupling scheme [1,[20][21][22]. In particular, octupole ordering in NpO2 has been clarified by the evaluation of multipole interactions with the use of the standard perturbation method in terms of electron hopping [6][7][8]23]. We have also discussed possible multipole states of filled skutterudites by analyzing the multipole susceptibility of a multiorbital Anderson model based on the j-j coupling scheme [24][25][26][27][28][29].
On the other hand, it is still difficult to understand intuitively the physical meaning of the multipole degree of freedom due to the mathematically complicated form of the multipole operator defined by using total angular momentum. As mentioned above, multipole is considered to be a spin-orbital complex degree of freedom. In this sense, it seems natural to regard multipole as anisotropic spin-charge density. This point has been emphasized in the visualization of multipole order [6][7][8]23]. Then, we have defined multipole as spin-charge density in the form of a one-body operator from the viewpoint of the multipole expansion of the electromagnetic potential from a charge distribution in electromagnetism [30,31]. Due to the definition of multipole in the form of a one-electron spin-charge density operator, it has been possible to discuss the multipole state unambiguously by evaluating the multipole susceptibility even for heavy rare-earth compounds with large total angular momentum [30].
As for the determination of the multipole state, we have proposed to use the optimization of the multipole susceptibility on the basis of the standard linear response theory. We have analyzed an impurity Anderson model including seven f orbitals with the use of the numerical renormalization group technique and checked the effectiveness of the microscopic model on the basis of the j-j coupling scheme for the description of multipoles. We have also shown the results for the multipole susceptibility of several kinds of filled skutterudite compounds. With the use of the seven-orbital Anderson model, we have discussed field-induced multipole phenomena in Sm-based filled skutterudites [32], the multipole Kondo effect [33], and the multipole state of Yb- and Tm-based filled skutterudites [34]. We have also discussed the possible multipole state in transuranium systems such as AmO2 [35] and the magnetic behavior of CmO2 [36].
From our previous investigations on the basis of the multiorbital Anderson model, it has been clarified that the multipole can be treated as a spin-orbital complex degree of freedom in one-electron operator form. However, in order to discuss the ordering of multipoles, it is necessary to consider a periodic system including seven $f$ orbitals per atomic site with strong spin-orbit coupling. The validity of the model based on the $j$-$j$ coupling scheme can also be checked by such a consideration. Namely, for the steady promotion of multipole physics, it is highly desirable to treat multipole ordering in a seven-orbital periodic model, overcoming the heavy task of solving a model that includes 14 states per atomic site.
In this paper, we define a seven-orbital Hubbard model with strong spin-orbit coupling and explain a procedure to identify multipole ordering, from a microscopic viewpoint, through the divergence of the multipole susceptibility. For the evaluation of the multipole susceptibility, we introduce a random phase approximation. In principle, we can treat all the cases $n = 1 \sim 13$ on the same footing, but here we focus on the case of $n = 2$, corresponding to Pr and U compounds. As a typical example of the present procedure, we show a phase diagram including quadrupole ordering on a three-dimensional simple cubic lattice. Finally, we also discuss some future problems, such as superconductivity induced by multipole fluctuations near the multipole phase.
The organization of this paper is as follows. In Section 2, we explain each part of the seven-orbital Hubbard model with strong spin-orbit coupling. For the reference of readers, we show the list of hopping integrals among $f$ orbitals along the $x$-, $y$-, and $z$-axes through $\sigma$, $\pi$, $\delta$, and $\phi$ bonds. In Section 3, we define the multipole operator as a complex spin-charge degree of freedom in one-electron form. Then, we explain a scheme to determine the multipole ordering from the multipole susceptibility. Here we use a random phase approximation for the evaluation of the multipole susceptibility. In Section 4, we show the results for the case of $n = 2$ on a three-dimensional simple cubic lattice and discuss the phase diagram of the multipole ordering. In Section 5, we discuss some future problems and summarize this paper. Throughout this paper, we use units in which $\hbar = k_{\rm B} = 1$.
2. Model Hamiltonian
The model Hamiltonian $H$ is split into two parts as
$$H = H_{\rm kin} + H_{\rm loc},$$
where $H_{\rm kin}$ denotes a kinetic term and $H_{\rm loc}$ is a local part for potential and interaction. The latter term is further given by
$$H_{\rm loc} = H_{\rm so} + H_{\rm CEF} + H_{\rm C},$$
where $H_{\rm so}$ is a spin-orbit coupling term, $H_{\rm CEF}$ indicates the crystalline electric field (CEF) potential term, and $H_{\rm C}$ denotes the Coulomb interaction term. We explain each term in the following.
2.1. Local $f$-Electron Term. Among the three terms of $H_{\rm loc}$, the spin-orbit coupling part is given by
$$H_{\rm so} = \lambda \sum_{i,m,\sigma,m',\sigma'} \zeta_{m,\sigma;m',\sigma'}\, f_{im\sigma}^{\dagger} f_{im'\sigma'},$$
where $f_{im\sigma}$ is an annihilation operator of an $f$ electron at site $i$, $\sigma = +1$ ($-1$) for up (down) spin, $m$ is the $z$-component of angular momentum $\ell = 3$, and $\lambda$ is the spin-orbit interaction. The matrix elements are expressed by
$$\zeta_{m,\sigma;m,\sigma} = \frac{m\sigma}{2}, \qquad \zeta_{m+\sigma,-\sigma;m,\sigma} = \frac{\sqrt{12 - m(m+\sigma)}}{2},$$
and zero for other cases.
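As an illustration (a sketch written for this text, not code from the original paper), the following snippet builds the $14\times 14$ spin-orbit matrix for $\ell = 3$, which is equivalent to $\lambda\, \boldsymbol{l}\cdot\boldsymbol{s}$, and checks that it splits into the $j = 5/2$ sextet (energy $-2\lambda$) and the $j = 7/2$ octet (energy $+3\lambda/2$); the value of $\lambda$ is illustrative.

```python
import numpy as np

l = 3
m = np.arange(l, -l - 1, -1).astype(float)   # m = 3, 2, ..., -3
Lz = np.diag(m)
Lp = np.zeros((7, 7))
for i in range(1, 7):
    # <m+1| L+ |m> = sqrt(l(l+1) - m(m+1))
    Lp[i - 1, i] = np.sqrt(l * (l + 1) - m[i] * (m[i] + 1))
Lm = Lp.T
# spin-1/2 operators in the (up, down) basis
Sz = np.diag([0.5, -0.5])
Sp = np.array([[0.0, 1.0], [0.0, 0.0]])
Sm = Sp.T
# l.s = Lz Sz + (L+ S- + L- S+)/2 on the combined |m> x |sigma> basis
LS = np.kron(Lz, Sz) + 0.5 * (np.kron(Lp, Sm) + np.kron(Lm, Sp))
lam = 0.1
eig = np.sort(np.linalg.eigvalsh(lam * LS))
print(np.round(eig, 6))   # six states at -2*lam, eight states at +1.5*lam
```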
Next we consider the CEF term, which is expressed as
$$H_{\rm CEF} = \sum_{i,m,m',\sigma} B_{m,m'}\, f_{im\sigma}^{\dagger} f_{im'\sigma},$$
where $B_{m,m'}$ is the CEF potential for $f$ electrons from the ligand ions, determined from the table of Hutchings for angular momentum $\ell = 3$ [37]. For the cubic structure with $O_h$ symmetry, $B_{m,m'}$ is expressed by using two CEF parameters, $B_4^0$ and $B_6^0$. Note the relation $B_{m,m'} = B_{m',m}$. Following the traditional notation [38], we define
$$B_4^0 = \frac{Wx}{F(4)}, \qquad B_6^0 = \frac{W(1-|x|)}{F(6)},$$
where $W$ determines an energy scale for the CEF potential, $x$ specifies the CEF scheme for the $O_h$ point group, and $F(4) = 15$ and $F(6) = 180$ for $\ell = 3$. Finally, the Coulomb interaction term $H_{\rm C}$ is given by
$$H_{\rm C} = \frac{1}{2} \sum_{i} \sum_{m_1 \sim m_4} \sum_{\sigma,\sigma'} I_{m_1 m_2, m_3 m_4}\, f_{im_1\sigma}^{\dagger} f_{im_2\sigma'}^{\dagger} f_{im_3\sigma'} f_{im_4\sigma},$$
where the Coulomb integral $I_{m_1 m_2, m_3 m_4}$ is expressed by
$$I_{m_1 m_2, m_3 m_4} = \sum_{k} F^k\, c_k(m_1, m_4)\, c_k(m_3, m_2).$$
Here $F^k$ is the Slater-Condon parameter and $c_k$ is the Gaunt coefficient, which is tabulated in the standard textbooks of quantum mechanics [39]. Note that the sum is limited by the Wigner-Eckart theorem to $k = 0$, 2, 4, and 6. The Slater-Condon parameters should be determined for each material from experimental results, but in this paper, for a purely theoretical purpose, we set the ratios among the Slater-Condon parameters to physically reasonable values, given by
$$F^0 = 10F^6, \qquad F^2 = 5F^6, \qquad F^4 = 3F^6.$$
Note that $F^6$ is considered to indicate the scale of the Hund's rule interaction among $f$ orbitals.
2.2. Kinetic Term. Next we consider the kinetic term of the $f$ electrons. When we discuss magnetic properties of $f$-electron materials as well as the formation of heavy quasiparticles, it is necessary to include simultaneously both conduction electrons with a wide bandwidth and $f$ electrons with a narrow bandwidth, since the hybridization is essential for the formation of heavy quasiparticles. In this sense, it is more realistic to construct an orbital-degenerate periodic Anderson model for the theory of multipole ordering in heavy-electron systems. However, if we take the periodic Anderson model as the starting point of the discussion, the calculation of the multipole susceptibility becomes very complicated. Thus, we choose to split the problem into two steps: first we treat the formation of heavy quasiparticles, and then we discuss the effective model for such heavy quasiparticles. If we correctly include the symmetry of the $f$-electron orbitals, we believe that it is possible to grasp the qualitatively correct physics of the multipole ordering by using an effective kinetic term for $f$ electrons.
Based on the above belief, we consider the effective kinetic term in a tight-binding approximation for $f$ electrons. Then, $H_{\rm kin}$ is expressed as
$$H_{\rm kin} = \sum_{i,\boldsymbol{a},m,m',\sigma} t^{\boldsymbol{a}}_{m,m'}\, f_{im\sigma}^{\dagger} f_{i+\boldsymbol{a}\,m'\sigma},$$
where $t^{\boldsymbol{a}}_{m,m'}$ indicates the $f$-electron hopping between $m$- and $m'$-orbitals of adjacent atoms along the $\boldsymbol{a}$ direction. The hopping amplitudes are obtained from the table of Slater-Koster integrals [40-42], but, for convenience, here we show $t^{\boldsymbol{a}}_{m,m'}$ explicitly on the three-dimensional cubic lattice. The hopping integrals along the $z$-axis take quite simple forms,
$$t^{z}_{0,0} = (ff\sigma), \quad t^{z}_{\pm 1,\pm 1} = (ff\pi), \quad t^{z}_{\pm 2,\pm 2} = (ff\delta), \quad t^{z}_{\pm 3,\pm 3} = (ff\phi),$$
and zeros for other cases. Here $(ff\ell)$ denotes the Slater-Koster integral through the $\ell$ bond between nearest neighbor sites; the above equations thus directly define $(ff\sigma)$, $(ff\pi)$, $(ff\delta)$, and $(ff\phi)$.
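To make the diagonal structure of the $z$-axis hoppings concrete, here is a minimal sketch of the resulting one-dimensional $f$ bands; the numerical values of $(ff\sigma)$, $(ff\pi)$, $(ff\delta)$, and $(ff\phi)$ are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

# z-axis hoppings t^z_{m,m}, labeled by |m|:
# m=0 -> (ff sigma), |m|=1 -> (ff pi), |m|=2 -> (ff delta), |m|=3 -> (ff phi)
ffs, ffp, ffd, fff = 1.0, -0.4, 0.1, -0.02   # illustrative values (assumption)
t_z = {0: ffs, 1: ffp, 2: ffd, 3: fff}
m_vals = np.arange(-3, 4)

kz = np.linspace(-np.pi, np.pi, 200)
# For a chain along z, each m-orbital gives epsilon_m(kz) = 2 t^z_{m,m} cos(kz)
bands = np.array([[2.0 * t_z[abs(m)] * np.cos(k) for m in m_vals] for k in kz])
print(bands.shape)   # (200, 7): seven f bands, each doubly degenerate in spin
```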
On the other hand, the hopping integrals along the $x$- and $y$-axes are given by linear combinations of $(ff\sigma)$, $(ff\pi)$, $(ff\delta)$, and $(ff\phi)$ as
$$t^{\boldsymbol{a}}_{m,m'} = \sum_{\ell} E^{\boldsymbol{a}}_{m,m'}(\ell)\,(ff\ell),$$
where the coefficient $E^{\boldsymbol{a}}_{m,m'}$ indicates the two-center integral along the $\boldsymbol{a}$ direction between the $m$ and $m'$ orbitals, and $\ell$ runs among $\sigma$, $\pi$, $\delta$, and $\phi$. In Table 1, we show the values of $E^{\boldsymbol{a}}_{m,m'}$. Other components are zero unless they are obtained with the use of the relations $E^{\boldsymbol{a}}_{m,m'} = E^{\boldsymbol{a}}_{m',m} = E^{\boldsymbol{a}}_{-m,-m'}$. By using the experimental results concerning the Fermi-surface sheets of actual materials, it is possible to determine the Slater-Koster parameters $(ff\sigma)$, $(ff\pi)$, $(ff\delta)$, and $(ff\phi)$ so as to reproduce the experimental results. Namely, the hopping integrals should be effective ones for quasiparticles, as mentioned above. Here it is important to include correctly the symmetry of the local $f$ orbitals in the evaluation of the hopping amplitudes, although the overall energy scale is to be adjusted against experimental results and band-structure calculations.
Table 1: Coefficients $E^{\boldsymbol{a}}_{m,m'}$ along the $x$- and $y$-axes between $f$ orbitals of nearest neighbor sites. Note that in double signs, the upper and lower signs correspond to the values along the $x$- and $y$-axes, respectively.
3. Multipole Ordering
In order to discuss the multipole ordered phase from the itinerant side, we evaluate the multipole susceptibility $\chi$ by following standard quantum field theory. The multipole susceptibility is defined by
$$\chi(\boldsymbol{q}, i\nu) = \int_0^{1/T} d\tau\, e^{i\nu\tau}\, \langle X_{\boldsymbol{q}}(\tau) X_{-\boldsymbol{q}}(0) \rangle,$$
where $X_{\boldsymbol{q}}$ denotes the multipole operator with momentum $\boldsymbol{q}$, $\nu = 2\pi T n$ is the boson Matsubara frequency with an integer $n$, $T$ is the temperature, $X_{\boldsymbol{q}}(\tau) = e^{H\tau} X_{\boldsymbol{q}} e^{-H\tau}$, and $\langle \cdots \rangle$ indicates the thermal average with respect to $H$. In the following, we introduce the multipole operator and explain a method to evaluate the susceptibility.
3.1. Multipole Operator. In any case, first it is necessary to define the multipole. For the definition, readers should consult [30,31], but here we briefly explain it in order to make this paper self-contained. We define $X$ in one-electron density-operator form as
$$X_{\boldsymbol{q}} = \sum_{k,\gamma} p_{k,\gamma}(\boldsymbol{q})\, T^{(k)}_{\gamma}(\boldsymbol{q}),$$
where $k$ denotes the rank of the multipole, $\gamma$ indicates the irreducible representation of the cubic point group, and $T^{(k)}_{\gamma}(\boldsymbol{q})$ is the cubic tensor operator, expressed in second-quantized form as
$$T^{(k)}_{\gamma}(\boldsymbol{q}) = \sum_{\boldsymbol{p},\mu,\mu'} T^{(k,\gamma)}_{\mu\mu'}\, f_{\boldsymbol{p}+\boldsymbol{q}\,\mu}^{\dagger} f_{\boldsymbol{p}\mu'}.$$
Here the matrix elements of the coefficient $T^{(k,\gamma)}$ are calculated from the Wigner-Eckart theorem as [43]
$$T^{(k,\gamma)}_{j\mu,\,j\mu'} = \sum_{q} G^{(k)}_{\gamma,q}\, \langle j\mu';\, kq \,|\, j\mu \rangle\, \langle j \| T^{(k)} \| j \rangle,$$
where $\ell = 3$, $s = 1/2$, $j = \ell \pm s$, $\mu$ runs between $-j$ and $j$, $q$ runs between $-k$ and $k$, $G^{(k)}_{\gamma,q}$ is the transformation matrix between spherical and cubic harmonics, $\langle j\mu'; kq | j\mu \rangle$ denotes the Clebsch-Gordan coefficient, and $\langle j \| T^{(k)} \| j \rangle$ is the reduced matrix element of the spherical tensor operator. Note that $k \le 2j$, so the highest rank is $2j$. When we define multipoles as tensor operators in the space of total angular momentum $J$ on the basis of the LS coupling scheme, there appear multipoles with $k \ge 8$ for the cases of $J \ge 4$, that is, for $2 \le n \le 4$ and $8 \le n \le 12$, where $n$ is the local $f$-electron number. If we need such higher-rank multipoles with $k \ge 8$, it is necessary to consider many-body operators beyond the present one-body definition.
Note that when we express the multipole moment as above, we normalize each multipole operator so as to satisfy the orthonormal condition [44]
$$\mathrm{Tr}\!\left[ T^{(k)}_{\gamma}\, T^{(k')\dagger}_{\gamma'} \right] = \delta_{kk'}\,\delta_{\gamma\gamma'},$$
where $\delta_{kk'}$ denotes the Kronecker delta.
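A short sketch of this construction (an illustration written for this text, not the paper's code): the spherical tensor matrix elements follow from Clebsch-Gordan coefficients with the reduced matrix element set to one, and the trace condition above fixes the normalization. Only the $q = 0$ rank-2 component in the $j = 5/2$ sector (the $u$ component of $\Gamma_3$) is shown.

```python
import numpy as np
from sympy import S
from sympy.physics.quantum.cg import CG

j = S(5) / 2
mu = [S(5)/2, S(3)/2, S(1)/2, -S(1)/2, -S(3)/2, -S(5)/2]

def spherical_tensor(k, q):
    """<j mu| T^(k)_q |j mu'> ~ CG(j mu'; k q | j mu), reduced matrix element = 1."""
    T = np.zeros((6, 6))
    for a, ma in enumerate(mu):
        for b, mb in enumerate(mu):
            T[a, b] = float(CG(j, mb, S(k), q, j, ma).doit())
    return T

# Gamma_3 (u) quadrupole: the q=0 component of rank 2,
# normalized so that Tr[T T^dagger] = 1
O20 = spherical_tensor(2, 0)
O20 /= np.sqrt(np.trace(O20 @ O20.T))
print(np.round(np.diag(O20), 4))   # diagonal follows the 3 mu^2 - j(j+1) pattern
```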
3.2. Multipole Susceptibility. Now we move to the evaluation of the multipole susceptibility. In order to determine the coefficients $p_{k,\gamma}(\boldsymbol{q})$ in the expansion of $X_{\boldsymbol{q}}$, it is necessary to calculate the multipole susceptibility within linear response theory. The multipole susceptibility is expressed in terms of a susceptibility matrix in the multipole basis,
$$\chi_{k\gamma,k'\gamma'}(\boldsymbol{q}, i\nu) = \int_0^{1/T} d\tau\, e^{i\nu\tau}\, \langle T^{(k)}_{\gamma}(\boldsymbol{q},\tau)\, T^{(k')\dagger}_{\gamma'}(\boldsymbol{q},0) \rangle. \tag{21}$$
Then, $\chi$ and $p_{k,\gamma}$ are determined by the maximum eigenvalue and the corresponding normalized eigenstate of the susceptibility matrix, equation (21).
In order to actually calculate the multipole susceptibility, it is necessary to introduce an appropriate approximation. In this paper, we use the random phase approximation (RPA) for the evaluation of the multipole susceptibility. For this purpose, we redivide the Hamiltonian $H$ into two parts as
$$H = H_0 + H_1,$$
where $H_0$ indicates the one-electron part, given by $H_0 = H_{\rm kin} + H_{\rm so} + H_{\rm CEF}$, and $H_1$ is the interaction part, which is just equal to $H_{\rm C}$ in the present case. Then, we consider the perturbation expansion in terms of the Coulomb interaction. The susceptibility diagrams are shown in Figure 1, and they are expressed in a compact matrix form as
$$\hat\chi = \left[\hat 1 - \hat\chi^{(0)}(\hat U - \hat J)\right]^{-1} \hat\chi^{(0)},$$
where $\hat U$ is given by $U_{m_1\sigma_1 m_2\sigma_2, m_3\sigma_3 m_4\sigma_4} = I_{m_1 m_2, m_3 m_4}\, \delta_{\sigma_1\sigma_4}\delta_{\sigma_2\sigma_3}$, $\hat J$ is the corresponding exchange combination of the Coulomb integrals, and the dynamical susceptibility $\hat\chi^{(0)}$ is the bare bubble,
$$\chi^{(0)}_{\mu_1\mu_2,\mu_3\mu_4}(\boldsymbol{q}, i\nu) = -\frac{T}{N}\sum_{\boldsymbol{p},n} G^{(0)}_{\mu_1\mu_3}(\boldsymbol{p}+\boldsymbol{q}, i\omega_n + i\nu)\, G^{(0)}_{\mu_4\mu_2}(\boldsymbol{p}, i\omega_n). \tag{25}$$
Here $G^{(0)}$ is the one-electron Green's function defined by the noninteracting part $H_0$, $\omega_n = \pi T(2n+1)$ is the fermion Matsubara frequency, and $N$ is the number of lattice sites.
In order to determine the multipole ordering, it is necessary to detect the divergence of $\chi$ at $\nu_n = 0$. We cannot evaluate the susceptibility just at a diverging point, but we can find such a critical point by extrapolating $1/\chi_{\rm max}$ as a function of $U$, where $U$ indicates the energy scale of the Slater-Condon parameters and $\chi_{\rm max}$ denotes the maximum eigenvalue of the susceptibility matrix, equation (21), for $\nu_n = 0$. When we increase the magnitude of $U$, $1/\chi_{\rm max}$ gradually decreases from its value in the weak-coupling limit. In actual calculations, we terminate the calculation when $1/\chi_{\rm max}$ reaches a value of the order of unity. By using the calculated values of $1/\chi_{\rm max}$, we extrapolate $1/\chi_{\rm max}$ as a function of $U$ and find the critical value of $U$ at which $1/\chi_{\rm max}$ becomes zero. As for the type of multipole and the ordering vector in the ordered phase, we extract this information from the eigenvectors of the susceptibility matrix corresponding to the maximum eigenvalue. By performing the above calculations, it is in principle possible to find the multipole ordered phase from a microscopic viewpoint.
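The extrapolation procedure can be illustrated with a toy scalar RPA in place of the full $14^4$-component matrix problem (a simplification made purely for illustration), where $1/\chi_{\rm max}$ is exactly linear in $U$:

```python
import numpy as np

# Toy Stoner-like RPA enhancement: chi_max(U) = chi0 / (1 - U*chi0),
# so 1/chi_max = 1/chi0 - U is linear and vanishes at U_c = 1/chi0.
chi0 = 0.8
U_grid = np.linspace(0.0, 1.0, 6)      # stop while 1/chi_max is still O(1)
inv_chi = 1.0 / chi0 - U_grid          # values of 1/chi_max from "calculations"

# linear extrapolation of 1/chi_max vs U to locate the critical point
slope, intercept = np.polyfit(U_grid, inv_chi, 1)
U_c = -intercept / slope
print(f"U_c = {U_c:.3f}  (exact: {1.0/chi0:.3f})")
```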
4. Results
In the previous sections, we have explained the model Hamiltonian and the procedure to determine the type of multipole ordering. We believe that the present procedure can be applied to actual materials, but there are many kinds of materials and multipole phenomena. Here we show the calculated results for the case of $n = 2$ concerning $\Gamma_3$ non-Kramers quadrupole ordering, in order to see how the present procedure works. The results for actual materials will be discussed elsewhere.
4.1. CEF States. First we discuss the local CEF states in order to determine the CEF parameters. We consider the case of $n = 2$, corresponding to Pr$^{3+}$ and U$^{4+}$ ions. Since we discuss the local electron state, the energy unit is taken as $F^6$. As for the spin-orbit coupling, here we take $\lambda/F^6 = 0.1$. Concerning the value of $W$, it should be smaller than $\lambda$, and we set $W/F^6 = 0.001$.
In Figure 2, we show the CEF energies as functions of $x$. As easily understood from the discussion in the LS coupling scheme, the ground-state multiplet for $n = 2$ is characterized by $J = 4$, where $J$ is the total angular momentum given by $J = |L - S|$, with angular momentum $L$ and spin momentum $S$. For $n = 2$, we find $L = 5$ and $S = 1$ from Hund's rules and thus obtain $J = 4$. Due to the effect of the cubic CEF, the nonet of $J = 4$ is split into four groups: a $\Gamma_1$ singlet, a $\Gamma_3$ non-Kramers doublet, a $\Gamma_4$ triplet, and a $\Gamma_5$ triplet. In the present diagonalization of $H_{\rm loc}$, we find such CEF states, as shown in Figure 2. When we compare this CEF energy diagram with that of the LS coupling scheme [38], we find that the shapes of the curves and the magnitudes of the excitation energies are different from each other. However, from the viewpoint of symmetry, the structure of the low-energy states is not changed between the LS and $j$-$j$ coupling schemes [1]. Since we are interested in the possibility of $\Gamma_3$ quadrupole ordering, we choose the value of $x$ as $x = 0.0$ in the following.
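For illustration, the following sketch diagonalizes the standard cubic CEF Hamiltonian for a $J = 4$ multiplet in the Lea-Leask-Wolf parametrization (with $F(4) = 60$ and $F(6) = 1260$, the values appropriate for $J = 4$ rather than the one-electron values quoted above), confirming the splitting into $\Gamma_1$ (1), $\Gamma_3$ (2), $\Gamma_4$ (3), and $\Gamma_5$ (3). This illustrates the LS-scheme splitting of the $J = 4$ nonet, not the paper's full diagonalization of $H_{\rm loc}$:

```python
import numpy as np

J = 4
m = np.arange(J, -J - 1, -1).astype(float)
Jz = np.diag(m)
Jp = np.zeros((9, 9))
for i in range(1, 9):
    Jp[i - 1, i] = np.sqrt(J * (J + 1) - m[i] * (m[i] + 1))
X = J * (J + 1)
I9 = np.eye(9)

# Standard Stevens operators for a cubic CEF
O40 = (35 * np.linalg.matrix_power(Jz, 4)
       - (30 * X - 25) * Jz @ Jz + (3 * X**2 - 6 * X) * I9)
Jp4, Jm4 = np.linalg.matrix_power(Jp, 4), np.linalg.matrix_power(Jp.T, 4)
O44 = 0.5 * (Jp4 + Jm4)
O60 = (231 * np.linalg.matrix_power(Jz, 6)
       - (315 * X - 735) * np.linalg.matrix_power(Jz, 4)
       + (105 * X**2 - 525 * X + 294) * Jz @ Jz
       - (5 * X**3 - 40 * X**2 + 60 * X) * I9)
A = 11 * Jz @ Jz - (X + 38) * I9
O64 = 0.25 * ((Jp4 + Jm4) @ A + A @ (Jp4 + Jm4))

W, F4, F6 = 1.0, 60.0, 1260.0     # Lea-Leask-Wolf factors for J = 4
for x in (-0.8, 0.0, 0.8):
    H = W * (x * (O40 + 5 * O44) / F4 + (1 - abs(x)) * (O60 - 21 * O64) / F6)
    E = np.round(np.linalg.eigvalsh(H), 6)
    degs = [int(np.sum(np.isclose(E, e))) for e in sorted(set(E))]
    print(x, degs)   # degeneracies 1, 2, 3, 3: Gamma_1, Gamma_3, Gamma_4, Gamma_5
```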
4.2. Energy Bands. Next we consider the band structure obtained by the diagonalization of $H_0 = H_{\rm kin} + H_{\rm CEF} + H_{\rm so}$. As for the Slater-Koster integrals, one way is to determine them so as to reproduce the Fermi-surface sheets of actual materials, but here we fix them from a purely theoretical viewpoint, parametrizing them by a single hopping amplitude $t$. The size of $t$ should be determined by the quasiparticle bandwidth, but here we simply treat it as the energy unit.
In Figure 3, we depict the eigenenergies of $H_0$ along the lines connecting symmetric points in the first Brillouin zone. As for the spin-orbit coupling and CEF parameters, we set $\lambda/t = 0.1$ and $W/t = 0.001$. First we note that there exist seven bands, each doubly degenerate due to time-reversal symmetry, with the two members distinguished by a pseudospin. Since the magnitude of $\lambda$ is not so large, we do not observe a clear splitting between the $j = 7/2$ octet and $j = 5/2$ sextet bands. Around the $\Gamma$ point, we find that the $j = 5/2$ sextet is split into two groups, a $\Gamma_7$ doublet and a $\Gamma_8$ quartet; the energy of the $\Gamma_8$ quartet is lower than that of the $\Gamma_7$. Since the $\Gamma_8$ has orbital degeneracy, it becomes the origin of the formation of the $\Gamma_3$ non-Kramers doublet when we accommodate two electrons per site.
Note that the Fermi level is denoted by a horizontal line, which is determined by the condition $n = 2$, where $n$ is the average electron number per site. When we pay attention to the bands near the Fermi level, we find that orbital degeneracy exists in the bands on the Fermi surface. For instance, we see the degenerate bands on the Fermi surface around the $\Gamma$ point. Such orbital degeneracy in momentum space is considered to be a possible source of $\Gamma_3$ quadrupole ordering, which will be discussed in the next subsection. Finally, in the present case, we expect the appearance of a large-volume Fermi surface as well as small pocket-like Fermi surfaces. Such a mixture of Fermi-surface sheets with different topologies may be an important issue for the appearance of higher-rank multipole ordering.

Figure 2: CEF energy levels obtained by the diagonalization of $H_{\rm loc}$ for $\lambda/F^6 = 0.1$ and $W/F^6 = 0.001$, with $F^0 = 10F^6$, $F^2 = 5F^6$, and $F^4 = 3F^6$.
4.3. Phase Diagram. Now we show the phase diagram of the multipole state. First it is necessary to calculate the susceptibility, equation (25), at $\nu_n = 0$. As for the momentum $\boldsymbol{q}$, we divide the first Brillouin zone into $16 \times 16 \times 16$ meshes. Concerning the momentum integration in (25), we exploit the Gauss-Legendre quadrature with due care: at low temperatures such as $T/t = 0.01$, it is enough to divide the range between $-\pi$ and $\pi$ into 60 segments along each axis. As found in (25), $\chi^{(0)}$ has $14^4$ components in the spin-orbital space, but it is not necessary to calculate all of them owing to symmetry; we have checked that it is enough to evaluate 1586 components of $\chi^{(0)}$. We set the parameters as $\lambda/t = 0.1$, $x = 0.0$, and $W/t = 0.001$; the ratios among the Slater-Condon parameters are the same as those in Figure 2. We also note that the hopping amplitude $t$ is relatively large compared with the local potential and interactions, since we consider the multipole ordering from the itinerant side. Here we emphasize that our framework actually works for a microscopic discussion of the multipole ordering. A way to determine more realistic parameters in the model will be discussed elsewhere.
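A minimal sketch of the Gauss-Legendre momentum integration follows (illustrative: the actual integrand is the Green's-function bubble, whereas here a test function with a known integral is used):

```python
import numpy as np

# Gauss-Legendre quadrature over the cube [-pi, pi]^3, 60 points per axis,
# as used for the momentum integration in chi^(0).
npts = 60
x, w = np.polynomial.legendre.leggauss(npts)   # nodes/weights on [-1, 1]
k, wk = np.pi * x, np.pi * w                    # rescale to [-pi, pi]

# Test on a tight-binding-like integrand with a known result:
# integral of cos^2(kx) cos^2(ky) cos^2(kz) over the cube equals pi^3
KX, KY, KZ = np.meshgrid(k, k, k, indexing="ij")
W3 = wk[:, None, None] * wk[None, :, None] * wk[None, None, :]
val = np.sum(W3 * np.cos(KX)**2 * np.cos(KY)**2 * np.cos(KZ)**2)
print(val, np.pi**3)   # agreement to machine precision
```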
By changing the temperature $T/t$, we construct the phase diagram in the plane of $t/F^0$ and $T/t$. Note that $t^2/F^0$ corresponds to the typical magnitude of the multipole-multipole interaction between nearest neighbor sites. As naively understood, when the temperature is increased, a larger value of $U$ is needed to obtain the ordered state. The resulting phase diagram is shown in Figure 4: we evaluate the maximum eigenvalue of the multipole susceptibility while increasing $F^0/t$.

Figure 3: Energy bands of $H_0$ (energy in units of $t$) for $\lambda/t = 0.1$ and $W/t = 0.001$, shown along the lines connecting symmetric points in the first Brillouin zone.

One may think that the magnitude of $t/F^0$ in Figure 4 is too small to obtain reasonable results in the RPA calculations. Here we note that the total bandwidth of the seven-orbital system is of the order of $10t$, as shown in Figure 3. Namely, the critical value of the interaction, $F^0_{\rm c}$, at low enough temperatures is of the order of the total bandwidth. In this sense, we consider that the value of $t/F^0$ in Figure 4 is not small for the RPA calculations. Note also that when the temperature is increased, the magnitude of the noninteracting susceptibility is suppressed overall, leading to an enhancement of $F^0_{\rm c}$. Thus, $t/F^0$ decreases when $T$ is increased, as observed in Figure 4.
At low temperatures, $T/t < 0.3$, we find that the maximum eigenvalue of the susceptibility matrix is characterized by the multipole with $\Gamma_3$ symmetry and the ordering vector $\boldsymbol{Q} = (\pi, \pi, \pi)$. The composition of the multipole depends on the temperature, but about 90% of the optimized multipole is of rank 2 (quadrupole); the rank 4 (hexadecapole) and rank 6 (tetrahexacontapole) components together account for about 10%. Note again that multipoles with the same symmetry are mixed in general, even if the rank of the multipole differs: the quadrupole is the main component, while the hexadecapole and tetrahexacontapole are admixed in significant amounts. Note also that the phase diagram is shown only in the region $T/t < 1$, where the boundary curve approaches the line $t/F^0 = 0$. Since the case of very large $F^0$ is unrealistic, we do not pursue the phase for $T > t$, although the calculation can be continued into such a higher-temperature region.
When we increase the temperature, a magnetic phase is observed for $T/t > 0.3$. The main component is the $\Gamma_4$ dipole and the ordering vector is $\boldsymbol{Q} = (0, 0, 0)$. Note that the susceptibility for the $\Gamma_4$ multipole moment is not the magnetic susceptibility, which is evaluated from the response of the magnetic moment $\boldsymbol{L} + 2\boldsymbol{S}$, that is, $\boldsymbol{J} + \boldsymbol{S}$. At $T/t = 0.4$, the admixture of the multipole is as follows: rank 1 (dipole) 90.7%, rank 3 (octupole) 6.5%, rank 5 (dotriacontapole) 2.1%, and rank 7 (octacosahectapole) 0.7%. The proportions change with temperature, but the main component is always the dipole.
We have thus found a low-temperature antiferro-quadrupole state and a high-temperature ferromagnetic phase. Such a combination of nonmagnetic and magnetic phases is also observed for other parameter sets that include quadrupole ordering.
5. Discussion and Summary
We have constructed a microscopic framework to discuss multipole ordering through the evaluation of the multipole susceptibility in $f$-electron systems, on the basis of the seven-orbital Hubbard model with strong spin-orbit coupling. For the evaluation of the multipole susceptibility, we have used the RPA and found the critical point from the extrapolation of $1/\chi_{\rm max}$. As an example of the present scheme, we have shown the results for the case of $n = 2$ concerning quadrupole ordering on the three-dimensional simple cubic lattice. If we specify the lattice structure and determine the hopping parameters from comparison with the experimental results on the Fermi-surface sheets, it is in principle possible to determine the type of multipole ordering with the use of appropriate local CEF parameters and Coulomb interactions.
Although a microscopic theory of multipole ordering has been proposed, it is necessary to elaborate the present scheme from both theoretical and experimental viewpoints. In order to enhance the effectiveness of the present procedure, we should increase the applicability of the theory. For instance, we have not considered the sublattice structure at all in this paper, but in actuality, staggered-type multipole ordering has been observed. In order to reproduce such a structure, it is necessary to maximize the multipole susceptibility while taking the sublattice structure into account. This is one of the future problems from a theoretical viewpoint.
It is also highly desirable that the present scheme be applied to actual materials in order to explain the origin of multipole ordering. For instance, it is interesting to search for the origin of the peculiar incommensurate quadrupole ordering observed in PrPb$_3$ [45]. At first glance, it seems quite difficult to explain the origin of the $\Gamma_3$ quadrupole ordering with the ordering vector $\boldsymbol{Q} = (\pi/2 \pm \delta, \pi/2 \pm \delta, 0)$ with $\delta = \pi/8$. However, if we use the present scheme, it may be possible to find a solution in a systematic way. Another issue is a revisit of the octupole and higher-rank multipole ordering in NpO$_2$; the significant dotriacontapole component may be understood naturally in the present scheme.
Another interesting future problem is the emergence of superconductivity near the multipole ordered phase. It has been widely accepted that anisotropic $d$-wave superconductivity appears in the vicinity of the antiferromagnetic phase, as observed in several kinds of strongly correlated electron materials. In general, near a quantum phase transition, anisotropic superconducting pairs are formed due to quantum critical fluctuations. Thus, also in the vicinity of multipole ordering, superconductivity is generally expected to occur. Even from purely theoretical interest, it is worthwhile to investigate superconductivity near the antiferro-quadrupole phase of Figure 4. Turning to actual materials, superconductivity has been observed in PrIr$_2$Zn$_{20}$, and quadrupole fluctuations have been considered to play some role [46]. Within the RPA, it is possible to discuss the appearance of superconductivity in the vicinity of quadrupole ordering in the present scheme; this is another future problem.
In summary, we have proposed a prescription to determine the type of multipole ordering from a microscopic viewpoint on the basis of the seven-orbital Hubbard model. The multipole susceptibility has been obtained in the RPA, and quadrupole ordering has actually been discussed in a way similar to that for spin ordering in the single-orbital Hubbard model. The application to actual $f$-electron materials will be discussed elsewhere, but we believe that the present scheme is useful for considering the origin of multipole ordering. In addition, the possibility of superconductivity near the multipole ordering is an interesting future problem.
Figure 1: Feynman diagrams for the multipole susceptibility in the RPA. The solid curve and the broken line denote the noninteracting Green's function $G^{(0)}$ and the Coulomb interaction, respectively.
Figure 4: Phase diagram of the multipole ordering for $n = 2$ on the three-dimensional simple cubic lattice.

| 7,047.6 | 2012-05-13T00:00:00.000 | ["Physics"] |
Detection for Power Line Inspection
Power line inspection is very important for electric companies to maintain the power line infrastructure and ensure reliable electric power distribution. Research efforts focus on automating the inspection process by looking for strategies that satisfy the various requirements. Following this direction, this paper proposes a learning approach for a range of detection problems, in which aggregate channel features are used to train a boosted classifier. Adopting the sliding-window paradigm, electric towers, insulators, and nests can be located very quickly. The main advantage of this approach is its efficiency and accuracy when processing huge quantities of image data. The highly encouraging results show that it is a promising technique.
1. Introduction
Electric power companies invest significantly in power line inspection to ensure reliable electric power distribution. Currently, the common strategy is aerial inspection by manned helicopter equipped with multiple sensors, such as visual, infrared, and ultra-violet cameras. Data captured or recorded by an expert crew with these cameras are later examined manually to detect potential faults and damage on different power line components. This process is not only extremely time consuming, but also very expensive and prone to human error. With these problems in mind, the power industry is actively seeking solutions to automate different aspects of power line inspection.
After data acquisition, the main task of power line inspection is fault identification, where computer vision can help. It involves the automatic detection and localization of electric devices such as wires, towers, insulators, conductors, etc. The state of the art has focused on tower detection, and the detection of towers can then be used to find various defects/faults of the power line infrastructure. This paper presents a learning-based solution for several kinds of detection and localization, including towers, insulators, and nests on towers. Consequently, the background or other parts of the image can be cropped quickly, and the focus of attention is concentrated on the image region containing the objects of interest. This process can also be used for image filtration: when millions of images are obtained by unmanned aerial vehicles, many of them may not include any power line devices.
Those images, being of no use for power line inspection, should be deleted. Running this detection process as a preprocessing stage can automatically delete images that contain no region of interest.
Some researchers have also focused on the detection and segmentation of electric towers in images [1-5]. Several authors apply straight line segment extraction for tower detection or localization [1,2,4,5]. Other authors then apply different segmentation approaches to extract the complete tower from the image: e.g. a template matching approach is used in [1]; graph-cut [6] based segmentation is used in [4]; rule-based as well as watershed segmentation [7] is used in [2]. Golightly and Jones [3] presented an approach different from the state of the art where, instead of lines, corners were considered the key identifying features of a tower. They used a modified corner detector [8] to detect and track tower tops. Although different approaches to tower detection and segmentation have reported promising results, most of the results have been reported on just one type of tower. However, electric towers are extremely diverse in shape, appearance, and size. Therefore, most of the state-of-the-art results cannot be generalized to several different tower types.
To achieve the goal of complete autonomy, researchers must aim towards developing more general approaches. Following this direction, this paper considers tower detection as a learning problem. The sliding-window paradigm is adopted for tower detection with a boosted detector, which is based on boosted decision trees computed over multiple feature channels such as color, gradient magnitude, and histogram of gradient orientation. Our solution for automatic tower detection attains top performance at very fast speed, regardless of whether the image size is large or small, since a multi-scale pyramid strategy is considered and carefully implemented. With the same method used for tower detection, other power line devices such as insulators can also be detected. Nests, one type of power line defect, can be detected as well.
The rest of the paper is organized as follows: Section 2 states the problem addressed in this paper and describes several challenges which need to be addressed; Section 3 presents our approach to detection for power line inspection; the results are reported and discussed in Section 4; the final section presents our conclusions and future research directions.
2. Problem statement
Currently, different projects are looking to automate either the acquisition process or the analysis process, or both, with the main objective of being able to detect and diagnose different defects of the power line infrastructure by using new sensors or new inspection platforms (e.g. robots, UAVs). In all these new possible approaches, computer vision plays an important role, both for automatically moving the camera in order to keep the electric tower inside the field of view, and for identifying and categorizing the different defects and failures of the power line infrastructure.
Nonetheless, computer vision is in fact a very challenging task in this setting. There are all kinds of complicated situations the visual system has to deal with, such as viewpoint variation, illumination, and background change. Because of the high variability of the background, it is difficult to find a unique feature that works in all possible scenarios.
Illumination changes also play an important role: they directly cause algorithms that segment power lines from the background to fail on some low-contrast images. Other problems, such as constant viewpoint changes (especially when cameras are moved manually) and scale changes of the electric tower and its components, add further complexity to the idea of applying computer vision to this problem, which, depending on the adopted strategy, could require a system that automatically decides which frame is best for detecting defects.
Currently, there is no complete solution that satisfies the different requirements of automated power line inspection: simultaneously detecting electric towers, checking for defects, and analyzing security distances. Therefore, in terms of cost-benefit, it is important for energy companies to solve this problem and to find a system that can deal with the different requirements of inspections at high speed. In this paper we explore the electric tower detection problem by applying a machine learning approach to low-quality images. The system will thus help reduce the maintenance cost of the electric system by coping with one of the consequences of increasing the vehicle speed (reduced image quality).
3. Tower detection strategy
Boosting is a simple yet powerful tool for classification that can model complex non-linear functions [9,10]. The general idea is to train and combine a number of weak learners into a more powerful strong classifier. Decision trees are frequently used as the weak learner in conjunction with boosting, and in particular orthogonal decision trees, that is, trees in which every split is a threshold on a single feature, are especially popular due to their speed and simplicity [11-13].
Decision trees with oblique splits can more effectively model data with correlated features as the topology of the resulting classifier can better match the natural topology of the data [14].
For training and detection, channel features are used. Given an input image, Aggregate Channel Features (ACF) computes several feature channels, where each channel is a per-pixel feature map such that output pixels are computed from corresponding patches of input pixels (thus preserving image layout). We use the same channels as [12]: normalized gradient magnitude (1 channel), histogram of oriented gradients (6 channels), and LUV color channels (3 channels), for a total of 10 channels. We downsample the channels by 2x, and features are single-pixel lookups in the aggregated channels.
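A rough sketch of the channel computation is given below. It is an illustration rather than the reference ACF implementation; in particular, the LUV conversion is replaced by raw RGB as a stand-in, and gradient normalization is omitted.

```python
import numpy as np

def acf_channels(img, n_orient=6, shrink=2):
    """Sketch of the 10 ACF channels: gradient magnitude (1), orientation
    histogram (6), and 3 color channels (RGB used in place of LUV)."""
    gray = img.mean(axis=2)
    gy, gx = np.gradient(gray)
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)            # orientation in [0, pi)
    hog = np.zeros(gray.shape + (n_orient,))
    bins = np.minimum((ang / np.pi * n_orient).astype(int), n_orient - 1)
    for o in range(n_orient):
        hog[..., o] = mag * (bins == o)                # magnitude-weighted bins
    chans = np.dstack([mag[..., None], hog, img])      # H x W x 10
    # aggregate: 2x downsampling by block averaging
    h = (gray.shape[0] // shrink) * shrink
    w = (gray.shape[1] // shrink) * shrink
    c = chans[:h, :w].reshape(h // shrink, shrink, w // shrink, shrink, -1)
    return c.mean(axis=(1, 3))

feat = acf_channels(np.random.rand(64, 64, 3))
print(feat.shape)   # (32, 32, 10): single-pixel lookups in aggregated channels
```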
Thus, given an h×w detection window, there are (h/2)×(w/2)×10 candidate features (channel pixel lookups). We use RealBoost [10] with multiple rounds of bootstrapping to train and combine 2048 depth-3 decision trees over these features to distinguish object from background. Soft cascades [14] and an efficient multiscale sliding-window approach are employed. Our implementation uses slightly altered parameters from [12] (RealBoost, deeper trees, and less downsampling); this increases model capacity and benefits our final approach.
The boosting algorithm for learning can be described as below: in each round, a weak classifier is selected to minimize the weighted training error ε, it is assigned a weight proportional to log((1 − ε)/ε), and the example weights are updated to emphasize the misclassified examples.
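A minimal sketch of such a boosting loop is shown below; it uses discrete AdaBoost with depth-1 decision stumps as a simplified stand-in for the RealBoost/depth-3-tree learner actually used, and the toy data are assumptions made for illustration.

```python
import numpy as np

def adaboost_stumps(X, y, n_rounds=20):
    """Discrete AdaBoost with decision stumps; y must be in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)                    # example weights
    ensemble = []
    for _ in range(n_rounds):
        best = None
        for j in range(d):                     # pick the error-minimizing stump
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, j] > thr, 1, -1)
                    err = np.sum(w * (pred != y))
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign)
        err, j, thr, sign = best
        err = max(err, 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)  # classifier weight (the "log" step)
        pred = sign * np.where(X[:, j] > thr, 1, -1)
        w *= np.exp(-alpha * y * pred)         # reweight: emphasize mistakes
        w /= w.sum()
        ensemble.append((alpha, j, thr, sign))
    return ensemble

def predict(ensemble, X):
    s = sum(a * sg * np.where(X[:, j] > t, 1, -1) for a, j, t, sg in ensemble)
    return np.sign(s)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.sign(X[:, 0] + 0.5 * X[:, 2])
model = adaboost_stumps(X, y)
print((predict(model, X) == y).mean())         # training accuracy
```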
The algorithm described above [14] is used to select key weak classifiers from the set of possible weak classifiers.
While the AdaBoost process is quite efficient, the set of weak classifiers is extraordinarily large, since there is one weak classifier for each distinct feature. The wrapper method can be used to learn a perceptron which utilizes M weak classifiers. The wrapper method also proceeds incrementally, adding one weak classifier to the perceptron in each round. The weak classifier added is the one which, when added to the current set, yields a perceptron with the lowest error. Each round takes at least O(NKN), the time to enumerate all binary features and evaluate each example using that feature; this neglects the time to learn the perceptron weights. Even so, the final work to learn a 200-feature classifier would be of order O(MNKN).
The key advantage of AdaBoost over the wrapper method is the speed of learning. Using AdaBoost a feature classifier can be learned in O(MNK). In each round the entire dependence on previously selected features is efficiently and compactly encoded using the example weights. These weights can then be used to evaluate a given weak classifier in constant time.
We use AdaBoost for learning and training, with ACF for feature computation. Furthermore, we add a multi-scale mechanism in our implementation. Given an image, the ten channel features at the original scale are computed as shown in Figure 1.
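The sliding-window and pyramid mechanism can be sketched as follows (illustrative only: a real ACF detector approximates channels at nearby scales instead of recomputing them, the window scorer here is a placeholder, and `acf_channels` refers to the sketch above):

```python
import numpy as np

def sliding_windows(feat, win=(8, 8), stride=4):
    """Yield (row, col, window) over an H x W x C aggregated-feature map."""
    H, W, _ = feat.shape
    for r in range(0, H - win[0] + 1, stride):
        for c in range(0, W - win[1] + 1, stride):
            yield r, c, feat[r:r + win[0], c:c + win[1]]

def detect_multiscale(img, score_fn, scales=(1.0, 0.75, 0.5), thresh=0.0):
    """Run a window scorer over a coarse image pyramid."""
    hits = []
    for s in scales:
        h, w = int(img.shape[0] * s), int(img.shape[1] * s)
        # nearest-neighbor resize keeps the sketch dependency-free
        ri = (np.arange(h) / s).astype(int)
        ci = (np.arange(w) / s).astype(int)
        feat = acf_channels(img[ri][:, ci])        # channels at this scale
        for r, c, winf in sliding_windows(feat):
            score = score_fn(winf)
            if score > thresh:
                hits.append((s, r, c, score))
    return hits

hits = detect_multiscale(np.random.rand(128, 128, 3), lambda w: w.mean() - 0.45)
print(len(hits))
```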
4. Experiments and results
In order to train and evaluate the ACF boosted detector, 1400 images for each target were divided into three sets: training, cross validation, and test. 300 images (target and background) were used for training, while 200 images were used for cross validation and 200 for the test set. A total test error of 3.25% is attained. A false positive rate of 1.5% was achieved, which means that only 3 of the 200 background test images were incorrectly classified as target. On the other hand, we obtain a false negative rate of 2%, which indicates that 4 target images, out of the 200 used for testing, were predicted as background. These results suggest that, although the overall performance of the classifier is good, tower images are predicted as background more often than background images are predicted as targets. See Figure 2.
5. Conclusions
Power line infrastructures are heterogeneous and complex, making automatic power line inspection a difficult problem.
Therefore, there is currently considerable interest in this area of research. To achieve the goal of autonomous inspection, research efforts must aim towards developing general approaches that satisfy several requirements: e.g. simultaneous detection of power lines and electric towers, checking for defects in several power line components, and analyzing security distances, among others. The current paper is an effort in this direction, with emphasis on electric tower detection in aerial inspection data. We believe this is a key stage in developing more complex tasks such as defect analysis, especially when the main source of information is poor-quality images.
The key novelty of this paper is the investigation of a learning framework providing a complete solution for detection, which may be the fastest paradigm since we use a boosted detector. Another main reason for the good performance is the use of aggregate channel features, including color, normalized gradient magnitude, and histogram of oriented gradients. However, such features may not be an ideal representation for all types of power line targets. This problem was more visible during the evaluation of the complete system, especially with a higher number of misclassifications. Therefore, immediate future work is aimed at exploring other feature spaces. Finally, we hope to significantly enhance the results of power line target detection.

| 2,793 | 2017-01-01T00:00:00.000 | ["Engineering", "Computer Science"] |
The physics program of the NA60+ experiment
The NA60+ experiment, which has been proposed as a fixed-target experiment at the CERN SPS, is designed to study the phase diagram of strongly interacting matter at high baryochemical potential by performing precision studies of thermal dimuon, heavy quark, and strangeness production in Pb-Pb collisions at center-of-mass energies ranging from 5 to 17 GeV. The progress of the R&D and the key points of the NA60+ physics program are described.
Introduction
One of the key points in the understanding of QCD under extreme conditions is the exploration of its phase diagram. Lattice QCD calculations predict a cross-over transition from hadronic matter to the Quark-Gluon Plasma (QGP) at vanishing baryochemical potential µ_B, around a critical temperature T_c ≈ 155 MeV, while at large µ_B values a first-order phase transition is expected. A strong experimental program has been carried out for two decades at the CERN SPS, BNL RHIC, and CERN LHC at top energies to characterize the region around µ_B ∼ 0, showing that a deconfined state of matter is formed, with properties consistent with the predictions of lattice QCD. More recently, interest in the experimental study of the large-µ_B region has increased. The search for a critical point in the phase diagram, the order of the phase transition, the properties of the medium at large µ_B, chiral symmetry restoration effects, and the temperature at which the onset of deconfinement takes place constitute fundamental issues in the understanding of the phase diagram.
The goal of the NA60+ experiment is to study electromagnetic and hard probes in the range 200 < µ_B < 400 MeV via an energy scan at the SPS in the range 6 < √s_NN < 17 GeV. Electromagnetic probes, and in particular dileptons, will allow the temperature of the system to be determined via a measurement of the thermal dimuon mass spectrum. Chiral symmetry restoration effects, related to the modification of the spectral function of the ρ and its chiral partner a₁, can also be investigated. Hard probes bring information on the onset of deconfinement via the measurement of J/ψ suppression versus center-of-mass energy, while the production of open charm hadrons can provide information on the transport properties of the QGP.
The experimental apparatus
The detector concept of NA60+ is based on the design of NA60. The layout of the experiment is shown in Figure 1. The two main components of the apparatus are a muon spectrometer (MS) and a vertex spectrometer (VS), separated by a thick hadron absorber made of BeO and graphite, which acts as a muon filter.
The MS is composed of a set of six tracking stations (plotted as green boxes in the figure) and a large-aperture toroidal magnet. Two tracking stations are located upstream of the magnet, which is followed by two more tracking stations. A thick graphite wall (brown rectangle in the figure) filters out possible residual hadrons. Finally, a set of two more tracking stations is placed downstream of the graphite wall. The relatively low rates after the hadron absorber, ∼2 kHz for a Pb beam rate of 10⁶ s⁻¹, allow the adoption of GEM or MWPC technologies for the tracking stations. A prototype trapezoidal MWPC is currently being constructed and will be tested at the CERN SPS in autumn 2022. The prototype detector constitutes a module that would be replicated several times and arranged so as to cover the designed geometry of each tracking station. As an alternative option, triple-GEM modules will also be studied. The toroidal magnet, which is being designed for NA60+, is foreseen to provide a magnetic field of 0.5 T over a volume of 120 m³. A small-scale (1:5) prototype was constructed and tested; the simulations of the magnetic field were found to agree with the test results within 3%.
The MS can be moved on rails to cover the midrapidity region at different collision energies. At top SPS energies, the absorber thickness will be increased.
The hadron absorber, while providing muon identification, deteriorates the muon momentum resolution due to multiple scattering and fluctuations in the energy loss. This loss in resolution is recovered by matching the tracks measured in the MS with those reconstructed in the VS, both in coordinate and momentum space.
The VS (Figure 2) consists of a set of five to ten planes of ultra-thin, large-area Monolithic Active Pixel Sensors (MAPS) for tracking, embedded in the gap of a dipole magnet. The MAPS sensors are composed of 25 mm long units, which are replicated through a stitching technique to cover an area of 15 × 15 cm²; each plane contains four MAPS sensors. The sensor thickness is about 0.1% of X₀, reducing the effect of multiple scattering in the VS. The spatial resolution is expected to be 5 µm or better, an improvement of a factor of two with respect to the hybrid technology. The magnet considered for the VS is the CERN MEP48 dipole, which provides a field up to B = 1.47 T.
Physics performances
The physics performance of NA60+ was studied by performing fast simulations of the signals with semi-analytical tracking based on the Kalman filter, while the background was simulated with FLUKA. The resulting opposite-sign dimuon mass spectrum is shown in Figure 3 (left). The combinatorial background and the fake-match contribution can be evaluated using event-mixing techniques and subtracted from the opposite-sign mass spectrum. The signal mass spectrum obtained after background subtraction is dominated by the hadronic cocktail for M < 1.5 GeV/c². In this region, a precision measurement of the ρ spectral function can be performed, complementing the NA60 measurement in In-In at top SPS energy with results at lower energies.
In the region 1 < M < 1.5 GeV/c², a dimuon enhancement due to the chiral mixing between the ρ and its chiral partner a₁ via 4π states can be observed [1]. With the foreseen accuracy of the measurement, a 20-30% enhancement, expected in the case of full mixing, can be detected by NA60+.
A study of the thermal dimuon mass spectrum can be performed for M > 2 GeV/c². A fit with the form dN/dM ∝ M^{3/2} exp(−M/T_S) allows the extraction of the parameter T_S, which represents a space-time average of the thermal temperature over the fireball evolution and can be determined with a precision of the order of 10 MeV. The determination of the evolution of T_S vs the center-of-mass energy for √s_NN < 10 GeV may allow the discovery of a plateau in the caloric curve (Figure 3, right) that would be present in the case of a first-order phase transition [2].

Charmonium suppression in Pb-Pb collisions was extensively studied at the top SPS energy by the NA50 collaboration [3]. NA60+ aims to extend the measurements to lower energies, down to E_lab/A = 40 GeV, in order to search for the onset of deconfinement. A statistics ranging from 10⁴ to 10⁵ reconstructed J/ψ, depending on the beam energy, can be collected at the foreseen Pb beam intensity, allowing for a precise measurement of the suppression. Data taking with p-A collisions is also foreseen, in order to precisely determine cold nuclear matter effects.

Open charm measurements will be performed using the vertex telescope as a stand-alone detector, reconstructing the decays of charmed hadrons into two or three charged hadrons. The huge combinatorial background can be reduced by applying geometrical selections on the displaced decay vertex topology. The MAPS technology can provide a signal-to-background ratio ∼10 times higher than the corresponding value obtainable with hybrid pixel sensors. No open charm measurements are currently available below the top SPS energy. In one month of data taking, a measurement of the D⁰ yield in central Pb-Pb collisions with a statistical precision much better than 1% can be obtained, allowing for a precise determination of the yield and v₂ as functions of p_T, rapidity, and centrality. At √s_NN = 10.6 GeV, due to the lower production cross section, a reduction of the statistics by an order of magnitude is expected; still, the measurement will be feasible with a statistical precision at the percent level.

Finally, strangeness measurements in the hadronic decay channels will allow the exploration of the low-multiplicity region, giving a complete view of strangeness enhancement with multiplicity. A large statistics is expected, allowing for a high-p_T reach, studies in narrow centrality bins, and an extension of the elliptic flow measurements to multi-strange hadrons.
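As an illustration of how T_S would be extracted, the following sketch fits a toy spectrum generated with T_S = 170 MeV; all numbers (normalization, binning, mass range) are assumptions made for this example, not NA60+ simulation output.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit of the thermal dimuon mass spectrum for M > 2 GeV/c^2 with
# dN/dM ~ M^(3/2) exp(-M/T_S)
def thermal(M, A, Ts):
    return A * M**1.5 * np.exp(-M / Ts)

rng = np.random.default_rng(1)
M = np.linspace(2.0, 3.0, 20)                    # GeV/c^2
y_true = thermal(M, 1.0e8, 0.170)
y_obs = rng.poisson(y_true).astype(float)        # counting fluctuations
popt, pcov = curve_fit(thermal, M, y_obs, p0=(5e7, 0.2),
                       sigma=np.sqrt(np.maximum(y_obs, 1.0)))
print(f"T_S = {popt[1]*1e3:.1f} +/- {np.sqrt(pcov[1, 1])*1e3:.1f} MeV")
```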
Figure 1: Conceptual design of the NA60+ experimental apparatus. The figure represents the set-up adapted to low-energy collisions, with a thinner hadron absorber and the muon spectrometer relatively closer to the target.
Figure 2: Schematic layout of the silicon planes inside MEP48.
Figure 3: Left: simulated dimuon mass spectrum in central Pb-Pb collisions at E = 40 GeV per nucleon. Right: medium temperature evolution vs √s_NN in central Pb-Pb collisions.

| 2,097.2 | 2023-01-01T00:00:00.000 | ["Physics"] |
Mimetic gravity: mimicking the dynamics of the primeval universe in the context of loop quantum cosmology
Mimetic gravity can be described as a formulation capable of mimicking different evolutionary scenarios for the universe dynamics. Notwithstanding its initial aim of producing an evolution similar to the one expected from the dark components of standard cosmology, a recent association with loop quantum cosmology can also provide interesting results. In this work, we reinterpret the physics behind the curvature potential of the mimetic gravity description of loop quantum cosmology. Furthermore, we test the compatibility of our formulation for a Higgs-type field, proving that the mimetic curvature potential can mimic the dynamics of a Higgs inflationary model. Additionally, we discuss possible scenarios that emerge from the relationship between matter and mimetic curvature and, within certain limits, recover results for the primeval universe dynamics obtained by other authors.
1 Introduction
Loop quantum gravity (LQG) is an attempt to quantize gravity by performing a nonperturbative quantization of general relativity (GR) [1] at the kinematic level, and it has been showing progress during the last few years, mainly through its cosmological description, called loop quantum cosmology (LQC); see [2-4] for dedicated reviews. LQC overcomes the kinematic character of LQG through cosmological dynamics. Moreover, it naturally solves the initial singularity problem by replacing the singularity with a bounce for, at least, the most common cosmological models [5].
Effective LQC is a compelling proposal because it yields regular solutions: whether the primordial universe evolution is analyzed from the matter, curvature, or scale-factor perspective, the solutions do not diverge [6]. In order to reproduce LQC results, many approaches have been tested, including ones with massless fields, different potential shapes, and so on. Among them, a recent work by Langlois et al. [7] showed how to recover the effective LQC dynamics through a class of scalar-tensor theories which includes the mimetic gravity (MG) of Chamseddine and Mukhanov (see, in particular, [8,9]). The remarkable feature of this description is the treatment of curved space-times; the strategy employed to incorporate curvature provides a new window that we intend to explore.
The MG description of the LQC dynamics can be interpreted in such a way that it enables us to follow the evolution of a scalar field whose potential is intrinsically coupled to a curved background. Because of its nature, and because the Higgs field is the only scalar field currently observed, a Higgs-type field is a perfect candidate to test our approach. Furthermore, the possibility of relating the mimetic field to the Higgs mechanism, presented in [10] and [11], strengthens our idea.
In this work, we aim to emphasize how powerful and versatile the mimetic formulation of LQC is. First, the MG curvature potential is interpreted as the geometric response to the presence of matter in spacetime, which allows the study of the universe evolution without considering the field potential directly. Next, we analyze the implications of the different interpretations of the role of curvature in the universe dynamics. Furthermore, once the general solution for the Hubble parameter is obtained, we show that the dynamics of the universe during the inflationary period can be described within the framework of the mimetic representation of LQC. Although it is possible through MG to mimic any scalar field, here we show that the curvature potential in the mimetic description of LQC can produce the same evolution of the Hubble parameter as that derived from the scenario known as Higgs inflation (HI).
The paper is organized as follows. In Sect. 2, we summarize the essential aspects of LQC and MG, emphasizing the universe evolution at early times. In Sect. 3, we expose our interpretation of the result presented in Langlois et al. [7] and how we construct an alternative evolutionary scenario by applying our formalism. We also show how the mimetic curvature potential must behave to reproduce the HI dynamics; therefore, from the viewpoint of the dynamics described by the Hubble parameter, the inflationary Higgs phase can be perfectly mimicked by the curvature potential introduced by Langlois et al. [7]. Besides, we use this potential to adjust the HI and effective mimetic LQC energy scales, displaying the compatibility between them. We conclude Sect. 3 with two subsections (3.3.1 and 3.3.2) that discuss possible interpretations of the curvature potential in the MG representation of LQC; in fact, we show that it is possible to recover, within certain limits, previous analyses of [12] (for k = 0) and of [13] (for k = 1). Finally, we highlight the most relevant implications of our proposal in Sect. 4.
2 Overview
The following computations are carried out in natural units, in which the velocity of light $c$ and the reduced Planck constant $\hbar$ are set to unity ($c = \hbar = 1$). Besides, the Newtonian gravitational constant $G$, the Planck length $\ell_{Pl}$, the Planck mass $m_{Pl}$, and the reduced Planck mass $M_{Pl}$ are related by $m_{Pl} = \ell_{Pl}^{-1} = G^{-1/2}$ and $M_{Pl} = (8\pi G)^{-1/2}$.
2.1 Loop Quantum Cosmology
The key element that distinguishes LQG from other approaches to quantum gravity is the introduction of holonomies. The Ashtekar connection $A^i_a$ and its conjugate momentum $E^a_i$ are the canonical variables of LQG [7,14], where $x^i$ refers to space coordinates and $\gamma \approx 0.2375$ represents the Barbero-Immirzi parameter [15,16]. The variables $p$ and $c$ are defined with respect to the scale factor $a$ and its time derivative $\dot a$ as $|p| = a^2$ and $c = \gamma \dot a/N$, with $N$ being the lapse function. However, instead of trying to implement $A^i_a$ or $c$ as a quantum operator, the holonomy (as a function of $A^i_a$) is the one defined as the fundamental operator [7], resulting in the so-called holonomy corrections.
LQC incorporates the quantization scheme and techniques of LQG and applies them to homogeneous and isotropic space-times [7,17]. Hence, while LQG can be considered a canonical quantization of gravity, LQC corresponds to a canonical quantization of homogeneous and isotropic space-times [17]. In LQC, the holonomy is considered around a square-shaped loop, due to the symmetries of Friedmann-Robertson-Walker (FRW) space-times. Because $p$ is the variable related to $E^a_i$, it is the one promoted to the area operator, presenting a discrete spectrum [7]. Thus, there is a minimum area value, usually denoted $\Delta = 2\sqrt{3}\pi\gamma\ell_{Pl}^2$, that limits the size of the loop as a fundamental structure of space-time. The relation between $\Delta$ and the physical face area $|p|$ is described by $\bar\mu^2 = \Delta/|p|$ [18]. This procedure determines the loop area and is named the $\bar\mu$-scheme [2,14].
In this quantum cosmological scenario, the universe dynamics is determined by the effective LQC version of the Friedmann and continuity equations. The LQC effective Friedmann equation can be obtained from the evolution of the observable $p$, which corresponds to its equation of motion,
$$\dot p = \{p, C_H\}, \tag{3}$$
where $C_H = \int d^3x\, N\mathcal{H}$ is the Hamiltonian constraint. First, the Hamiltonian receives an effective treatment in which it acquires the form [7]
$$\mathcal{H}_{\rm eff} = -\frac{3p^{3/2}}{8\pi G \Delta \gamma^2}\sin^2(\bar\mu c) + \frac{\pi_\phi^2}{2p^{3/2}} + p^{3/2}V(\phi), \tag{4}$$
with $\pi_\phi$ and $V(\phi)$ representing the momentum and the potential of the matter content defined by the scalar field $\phi$, respectively. Second, as the Hamiltonian constraint is weakly equal to zero, plugging (4) into (3) results in
$$\dot p = \frac{2p}{\gamma\sqrt{\Delta}}\sin(\bar\mu c)\cos(\bar\mu c). \tag{5}$$
At this point, it is clear how the sine function, through $\sin^2(\bar\mu c) = \rho/\rho_c$, restricts the relation between the matter energy density $\rho$ and its critical value $\rho_c$ to the range $0 \le \rho/\rho_c \le 1$ [19]. Here, $\rho$ has the classical form
$$\rho = \frac{\pi_\phi^2}{2p^3} + V(\phi), \tag{6}$$
and
$$\rho_c = \frac{3}{8\pi G \Delta \gamma^2} \sim \rho_{Pl}, \tag{7}$$
where $\rho_{Pl}$ is the matter energy density at the Planck scale. Finally, from (5), it is straightforward to obtain the Hubble parameter,
$$H^2 = \frac{8\pi G}{3}\,\rho\left(1 - \frac{\rho}{\rho_c}\right). \tag{8}$$
Equation (8) has strong implications for the LQC evolutionary scenario at very early times. As we go back in the universe's history, the matter energy density grows larger and larger. In the Hot Big Bang classical solution, $\rho$ goes to infinity due to its inverse relationship with time, resulting in the so-called initial singularity. However, from the LQC perspective, the universe undergoes a bounce phase in which the matter content is compressed until $\rho$ reaches a value close to $\rho_{Pl}$ [18]. This can be observed directly from equation (8), where the minimum point of the function $H$ ($H = 0$) occurs for $\rho = \rho_c$. Thereupon, the relation $\rho/\rho_c = 1$ defines the turning point in the scale factor evolution ($\dot a = 0$), which means a change in the evolution of the universe itself. Consequently, instead of a singularity characterized by an infinite energy density, in LQC there is a big bounce when the energy density reaches a range close to the Planck scale, determined by $\rho_c$ [12,18,19].
Another key point to remember is that all physical fields are considered regular at the LQC bounce for strong curvature singularities in FRW models. As a result, any matter field used in the LQC context must obey the usual equation of state (EoS) and Klein-Gordon equation,
$$P = w\rho, \tag{9}$$
$$\ddot\phi + 3H\dot\phi + \frac{dV}{d\phi} = 0, \tag{10}$$
where $P$ is the matter pressure and $w$ represents the state parameter [6,12]. Moreover, regardless of the theory, the continuity equation must be satisfied, which means the matter energy density obeys
$$\dot\rho + 3H(\rho + P) = 0.$$
In principle, a bounce phase could enable the field potential to climb the potential well [13], which would naturally provide the initial condition for the field to start rolling down, as expected from the standard inflationary scenario. Indeed, super-inflation should end with the universe in a suitable state for the beginning of inflation. However, the ratio of the kinetic and potential energies seems to determine the qualitative features of the dynamical evolution when the initial data are established: the greater the kinetic energy relative to the potential energy, the shorter the super-inflation phase. Moreover, this rapid growth implies a large friction term in (10), which takes time to slow down until a point where the potential energy is capable of dominating the universe's dynamics and producing a slow-roll-type evolution [6].
The standard Effective LQC Hamiltonian constraint (see equation (4)) implies that quantum gravitational effects are negligible for values of ρ much smaller than ρ_Pl [18], which allows one to recover the standard Friedmann equation for a flat FRW space-time, H² = (8πG/3)ρ (12). Once both (8) and (11) are determined, it is straightforward to obtain the acceleration equation for standard LQC: computing the time derivative of (8) gives Ḣ = −4πG(ρ + P)(1 − 2ρ/ρ_c) (13), and summing this with H² from (8) itself gives the acceleration equation ä/a = Ḣ + H² (14). In summary, within the Effective LQC approach, the effective Hamiltonian results in a modified Friedmann equation that differs from the classical one only by a term quadratic in the energy density, ρ²; moreover, the universe undergoes a bounce phase followed by an inflationary period regardless of the matter content assumed. Notwithstanding, between these two stages a super-inflation period is expected to take place [17]. During super-inflation the universe is in a super-accelerated stage, Ḣ > 0, whereas along the inflationary epoch H obeys the relation Ḣ < 0 [12]. In this scenario, gravity presents a repulsive behavior in the deep Planck regime due to quantum geometry [18], whose effects are negligible for sufficiently small values of ρ (ρ/ρ_c → 0), recovering the classical dynamics of standard cosmology.
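The step from (8) and (11) to the Ḣ equation quoted above can be checked symbolically. The following minimal sympy sketch (an illustration added here, not code from the original paper; it assumes P = wρ) differentiates the effective Friedmann equation and substitutes the continuity equation.

import sympy as sp

t = sp.symbols('t')
G, rho_c, w = sp.symbols('G rho_c w', positive=True)
rho = sp.Function('rho', positive=True)(t)

H = sp.sqrt(sp.Rational(8, 3) * sp.pi * G * rho * (1 - rho / rho_c))      # effective Friedmann equation (8)
Hdot = sp.diff(H, t).subs(sp.Derivative(rho, t), -3 * H * (1 + w) * rho)  # continuity equation (11) with P = w*rho
expected = -4 * sp.pi * G * (1 + w) * rho * (1 - 2 * rho / rho_c)         # the quoted form of (13)
print(sp.simplify(Hdot - expected))                                       # prints 0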
Mimetic gravity description for loop quantum cosmology
The mimetic gravity provides a unified geometric description of the universe evolution without any extra dark component [20]. Despite being recently proposed by Chamseddine and Mukhanov in [21] as a way to simulate the dark matter behavior, MG can also overcome cosmological singularities issues through the limiting curvature concept [8,9]. Furthermore, the mimetic representation has been extended to reproduce a plethora of different frameworks (see [20,22,23] for further discussions).
Basically, the MG formulation was built on the concept of disformal transformations, as a consequence of the invariance of GR under diffeomorphism transformations [24,25]. This kind of transformation makes it possible to parameterize g_µν as a function of an auxiliary metric g̃_µν and a scalar field φ, the mimetic field, as in equation (15) [20,26]. From (15), two fundamental features emerge. First, the invariance of g_µν under a conformal transformation of the auxiliary metric, g̃_µν → Ω(t,x)² g̃_µν. Second, the consistency condition (16) that φ must satisfy. These properties can be directly related to the two equivalent formulations of MG: the Lagrange multiplier and the singular disformal transformation [27]. The first one incorporates the condition (16) at the level of the action through a Lagrange multiplier. Meanwhile, the second formulation treats the mapping (g̃_µν, φ) → g_µν as a singular disformal transformation in which φ corresponds to a new degree of freedom in the gravitational sector [20,25].
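For reference, a parameterization and consistency condition of the type referred to in (15) and (16) read, in the standard Chamseddine-Mukhanov form (quoted here from the general MG literature rather than copied from the source; the overall sign depends on the metric signature convention),

g_{\mu\nu} = -\left(\tilde g^{\alpha\beta}\,\partial_\alpha\phi\,\partial_\beta\phi\right)\tilde g_{\mu\nu}, \qquad g^{\mu\nu}\,\partial_\mu\phi\,\partial_\nu\phi = -1 .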
Here, we consider MG as a different way of writing the effective terms of the LQC dynamics, as implemented in [7], which has been the theme of many works in the literature (for example [28-32]). The LQC effective Friedmann equation (8) is reproduced by constructing an action whose dynamical variables are a, N and φ. This action must satisfy the requirement of invariance under time reparametrization, which, for a flat FRW space-time, can be achieved through expression (17) [7]. The MG procedure consists in setting L as a function of the Hubble parameter, F(H), whose form is defined as an ansatz in order to obtain the Hamiltonian density (18), where N remains a Lagrange multiplier as in LQC, α is a constant, and π_φ = a³φ̇/N and π_a = αa² arcsin(−3H/(4πGα)) (19) are the momenta of the scalar field and of the scale factor, respectively, satisfying the canonical relations (20). Note that, instead of p and c, in the MG description a and π_a correspond to the pair of non-trivial canonically conjugate variables, together with the pair φ and π_φ [7]. Next, a procedure similar to the one presented in Sect. 2.1 is applied to obtain (8). However, in this case, the energy density and the critical energy density are given by (21). Regarding the equivalence with the Effective LQC dynamics, the critical energy densities from (21) and (7) are equivalent only if α obeys relation (22). In [7], the generalization to curved space-times is performed by adding a term related to the curvature parameter k to the action and extending the flat-case definition of L by introducing the curvature dependence in the Lagrangian as a potential term V_k(a). Consequently, the Hamiltonian density changes and acquires a correspondingly modified form. It is important to note that the formulation of curvature mimetic gravity developed in [7] was analyzed in [32] in terms of whether the curvature can be identified with a multiple of the Planck scale. To answer this question, the authors analyzed whether such a relationship can hold in the context of Bianchi I models. Their conclusion is that, in the case of Bianchi I space-times, the Hamiltonian of curvature mimetic gravity cannot be interpreted as an effective Hamiltonian arising from loop quantization. However, as emphasized in [34], it is unclear whether such a limitation persists for a curvature potential that reproduces a cosmological background dynamics similar to that derived in the group field theory approach to quantum gravity.
Another key point to consider is the instability issue afflicting higher-derivative mimetic models due to the presence of gradient and ghost modes. This makes it difficult to obtain a stable model capable of reproducing the LQC equations (see, e.g., [35-37]). Notwithstanding, it is interesting to evaluate whether healthy features can emerge from MG in order to reproduce the universe dynamics within the scope of LQC. It is in this direction that we discuss the reinterpretation of the MG curvature potential in the next section.
Reinterpreting the potential term of mimetic gravity
To begin with, from equation (21), the absence of a potential term in the matter density comes from the assumption of a massless scalar field and/or the simplicity argument of setting V(φ) = 0, which makes it easy to perform the quantization process [7]. On the other hand, if we consider a fundamental field non-minimally coupled to gravity, then a mixing between the kinetic term of the scalar field and the metric field would naturally be induced (here represented as a curvature potential in the MG description of LQC).
In the following computations, we will replace V k (a) by V k (ϕ) in order to make clear our interpretation of the MG curvature potential as a direct response of the matter presence curving the space-time. Moreover, the reverse idea can also be applied, the matter content adapting itself according to the space-time curvature, emphasizing their intrinsic relation. To put this in another way, V k (ϕ) could correspond to the signature of the non-minimal coupling of a fundamental field to the curvature represented by V k (a) at the level of LQC.
This way of describing the primordial universe seems to be a natural interpretation within the scope of LQG since, in this theory, space-time and quantum fields are not distinct components. That is, the space-time we perceive on a large scale is an image generated by quantum fields that 'live on themselves'. Thus, we proceed by reinterpreting equation (26). The strategy applied in [7] was to introduce the curvature by adding a potential term that only depends on a and k in the gravitational part of the Lagrangian. Meanwhile, the field potential was neglected. Accordingly, we propose to change the field potential from matter sector to gravitational sector as a different way of interpreting the curvature role. First, (26) is rewritten as where the kinetic term is and V k (ϕ) is the field potential related with curvature. After, we define an effective energy density as returning to the primary form of the effective Friedmann equation Splitting the kinetic contribution from the potential one could be a strange arrangement at first look. However, this setup enables to treat the field as technically massless from the matter Hamiltonian point of view, once its "effective mass" contribution could be interpreted as an effect of the non-minimal coupling to gravity. About holonomy corrections that characterize space-time deformations, it will not have any actual difference because they are computed from both gravitational and matter sectors.
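Schematically, the structure described by equations (27)-(30) can be summarized as follows (a hedged paraphrase in standard notation, not necessarily the authors' exact expressions): a kinetic part ρ_kin = φ̇²/2, an effective energy density

\rho_{\rm eff} = \rho_{\rm kin} + 3M_{\rm Pl}^2\left(V_k(\phi) - \frac{k}{a^2}\right), \qquad
H^2 = \frac{\rho_{\rm eff}}{3M_{\rm Pl}^2}\left(1 - \frac{\rho_{\rm eff}}{\rho_c}\right),

so that the curvature potential is absorbed into an effective energy density and the flat-case form of the effective Friedmann equation is recovered.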
The balance among the contributions of the components of ρ_eff during the universe evolution needs to be adjusted. This is done by respecting the changes in the LQC energy range during the primordial universe evolution and the usual requirements for the occurrence of an inflationary period. From (25), by isolating the term with the sine function, we conclude that ρ_eff is the quantity to be compared with ρ_c. During the bounce, (8) is recovered for ρ_eff = ρ_kin, just as in the flat case. However, an equilibrium could have occurred between the two terms related to curvature, V_k(φ) = ka⁻², which seems to be a reasonable assumption since the motion of the universe must stop at the bounce point.
In [38], the energy range between (1/2)ρ_c and 0 was pointed out as the most suitable period in which to explore the slow-roll approximation. A similar statement can be found in [12], since the onset of the usual inflationary evolution is considered after the universe has reached (1/2)ρ_c. Likewise, we consider the outset of the inflationary period around ρ_eff ≈ (1/2)ρ_c, where we have replaced the energy density of LQC by the mimetic form ρ_eff. In analogy with the standard slow-roll approach, the kinetic energy will be much smaller than the potential term associated with curvature, which reduces (27) accordingly. Analyzing the resulting effective energy density ρ_eff ≈ 3M_Pl²(V_k(φ) − k/a²), we note how it decreases with the expansion, as expected.
As the universe evolves, the increasing scale factor makes the geometry of the universe flat by diluting any signal of curvature, in agreement with current data from the Planck satellite [39]. Therefore, there must have been a moment at which the field potential reached an energy range comparable with the kinetic one, bringing the inflationary epoch to a close. At this point, quantum corrections should be negligible, turning (27) into equation (33). Once we obtain equation (33), which shows the role of the mimetic potential in the universe dynamics as described by the Hubble parameter, we can choose any inflationary field to be described through the mimetic potential V_k(φ) − k/a². In other words, since inflation in LQC occurs in the interval 0 ≲ ρ_eff/ρ_c ≤ 1/2, it is enough to choose a scalar field capable of producing inflation and to verify whether the energy scales of the inflationary field can be adequately mimicked by LQC in terms of the formulation described in [7] for the curvature mimetic potential.
Although it is possible to do this analysis with the inflaton, as the scalar field responsible for producing inflation, our choice will fall on the model called Higgs Inflation. There are two reasons for this choice: (1) the only fundamental scalar field with experimental counterpart is the Higgs field, (2) the inflationary version of the Higgs field corresponds to a field not minimally coupled with gravity, a characteristic that seems interesting within the scope of MG. It is this scenario that we will analyze in Sect. 3.2.
Curvature potential mimicking the dynamics of Higgs inflation
The Higgs field playing the role of inflaton, the usual scalar field associated with the standard slow-roll inflation, is an idea that has been discussed since the first inflationary models were developed, as can be seen in [40]. Notwithstanding, HI describes inflation as a chaotic scenario in which a Higgs field is coupled with the curvature through large values of self-coupling λ and non-minimal coupling ξ parameters [41][42][43]. Basically, HI reproduces the successful flat potential of slow-roll approximation by coupling a primordial version of the current Higgs field with the Ricci scalar. Besides, ξ , λ , and the relation between them are only determined by cosmological observations [43][44][45][46].
The HI universe dynamics can be expressed by the action (34) (see [43]), where the subscript J denotes the Jordan frame and V(h) is the potential of the Higgs field background h, given by equation (35), which is the usual Higgs potential of the Standard Model of Particle Physics in the unitary gauge (2H†H = h²). Meanwhile, the term ξh²R corresponds to the non-minimal coupling of the scalar field to the curvature. After electroweak symmetry breaking, the scalar field acquires a non-zero vacuum expectation value (VEV), v = 246 GeV, and M and ξ are then related by M_Pl² = M² + ξv². Moreover, as discussed in [43], the ξv² term is negligible compared to M² for most situations covered by the inflationary Higgs scenario. Therefore, although the parameters M and M_Pl differ because of the non-zero VEV h = v, we can consider M ≃ M_Pl.
Due to the complexity of working with the mixing terms in action (34), the usual procedure is to get rid of the non-minimal coupling to gravity by changing variables through a conformal transformation from the Jordan frame (the standard one) to the Einstein frame. This transformation has the form given in equations (36) and (37), which allows the action to be written in the Einstein frame as (38). The conformal transformation produces a non-canonical kinetic term for the Higgs field. Nevertheless, it is possible to obtain a canonically normalized kinetic term through a new field χ satisfying relation (39) (see [43,47]), where the prime denotes the derivative with respect to h. It is important to note that h does not change under the conformal transformation; the redefinition (39) is just a way to recover the standard form of the slow-roll action (40), where the potential described in terms of χ, V(χ) = V(h)/Ω⁴, leads to a Friedmann equation of the form (41). On the other hand, equation (39) can be integrated, resulting in (42) (see [47]). Since HI is built under the requirement ξ ≫ 1, if the non-minimal coupling is chosen in the range 1 ≪ ξ ≪ M_Pl²/v², then equation (42) corresponds to the conformal factor Ω² ≈ e^{2χ/√6 M_Pl}. Thus, the potential V(χ) is given by (43), with V_0 = λM_Pl⁴/4ξ². Note that the potential (43) is exponentially flat for large values of χ, which makes it possible to reproduce an evolution analogous to standard slow-roll inflation in the Einstein frame. Therefore, the Friedmann equation takes the form (44), where it was assumed that χ ≫ √(3/2) M_Pl. Furthermore, it has been shown (see, e.g., [48-50]) that HI is also in agreement with the most recent estimates obtained by the WMAP and Planck satellites from the cosmic microwave background (CMB) radiation. In particular, CMB normalization requires ξ ≈ 50000√λ. Moreover, as discussed in [49], for ξ ∼ 10³, HI relaxes gracefully to the Standard Model vacuum.
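For the reader's convenience, the standard Higgs-inflation relations that play the roles of (36), (39) and (43) are, in the form given in the HI literature cited above (e.g., [43]) and quoted here as reference forms rather than copies of the source equations,

\Omega^2 = 1 + \frac{\xi h^2}{M_{\rm Pl}^2}, \qquad
\frac{d\chi}{dh} = \frac{\sqrt{\Omega^2 + 6\xi^2 h^2/M_{\rm Pl}^2}}{\Omega^2}, \qquad
V(\chi) \simeq \frac{\lambda M_{\rm Pl}^4}{4\xi^2}\left(1 - e^{-2\chi/\sqrt{6}\,M_{\rm Pl}}\right)^2 .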
In [7], they establish a link between LQC and MG. Meanwhile, the works [10] and [11] open the possibility to explore MG with the Higgs mechanism. Here, we intend to close this triangle by using MG as the bridge between Effective LQC and HI. We are going to emphasize the intermediary character of the mimetic approach as the one capable of mimicking Effective LQC dynamics besides incorporating the matter-curvature relation from HI. In other words, we intend to answer the following questions: (a) Could curvature mimetic gravity be used to describe the same evolution that HI provides? (b) Are the energy scales of LQC, within MG framework, compatible with the energy scales of HI? (c) If we get affirmative answers to the two previous questions, what form should the curvature potential take?
First of all, considering the equivalence between these two approaches, we can match (12) with (44), obtaining equation (45); that is, we can map the behavior of V_k(φ) − ka⁻² so as to mimic the inflationary phase. In terms of energy scale, inflation occurs in the interval 0 ≲ ρ_eff/ρ_c ≤ 1/2; thus, (45) represents the equality at the onset of inflation. Due to its structural construction, we are considering MG as a description of LQC in the Einstein frame. For χ_end ≈ 0.94 M_Pl, the potential V(χ) given in (43) drops by nearly seventy percent of its initial value. Hence, instead of V(χ) ≈ V_0, we compute the Friedmann equation with V(χ) ≈ 0.287V_0 and compare it with (46), since the quantum gravitational effects should be negligible at this point. As a result, the effective energy density for this period is determined by equation (47). Then, we plug (45) into (47) and obtain (48), which is in agreement with the LQC requirement ρ_eff ≪ ρ_c needed to recover the classical FRW evolution at the end of the inflationary period.
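The numbers quoted above can be reproduced with a short numerical check (a sketch added here, under the assumptions that the end of inflation is defined by the slow-roll parameter ε_V = 1 and that V(χ) has the standard Higgs-inflation form quoted earlier; these are not the authors' own computations).

import numpy as np
from scipy.optimize import brentq

M_Pl = 1.0                                                                 # work in Planck units
V = lambda chi: (1.0 - np.exp(-2.0 * chi / (np.sqrt(6.0) * M_Pl))) ** 2    # V(chi)/V_0
dV = lambda chi: (4.0 / (np.sqrt(6.0) * M_Pl)) * (1.0 - np.exp(-2.0 * chi / (np.sqrt(6.0) * M_Pl))) * np.exp(-2.0 * chi / (np.sqrt(6.0) * M_Pl))
eps = lambda chi: 0.5 * (M_Pl * dV(chi) / V(chi)) ** 2                     # slow-roll parameter eps_V

chi_end = brentq(lambda chi: eps(chi) - 1.0, 0.1, 5.0)                     # end of inflation: eps_V = 1
print("chi_end =", round(chi_end, 3), "M_Pl")                              # ~0.94 M_Pl
print("V(chi_end)/V_0 =", round(V(chi_end), 3))                            # ~0.287, as used in the text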
It is important to note that in [10] a massive graviton was obtained through a Brout-Englert-Higgs (BEH) mechanism in which one of the four scalar fields used was the mimetic gravity field. Notwithstanding, this procedure was performed to avoid the appearance of a ghost mode. Note that there is a direct relation between the mimetic field and the Higgs field once the BEH mechanism is involved. Furthermore, the strong coupling of the mimetic field to the graviton at scales close to the Planck scale was also highlighted, which implies a non-minimal coupling between matter and curvature of the kind we have been exploring throughout this work.
On the other hand, the relationships derived above show that it is possible to describe the primordial universe evolution in a unified LQC-HI scenario, using the MG formalism as the connection. In this case, the inflationary Higgs field could be 'mimicked' by the curvature potential. For that reason, the answers to questions (a) and (b) placed above are 'yes' to both. To answer question (c), we should first establish the relation with HI. Note that, for a perfect mapping to exist between the curvature potential and V(χ) during the inflationary phase, equation (49) must be satisfied as a tracking condition. To put it another way, since relation (49) comes from the equality between (30) and (44), it corresponds to a requirement on the 'validity of the mimicry' of the HI scenario through the curvature potential of MG. Note that V_k(φ) can be adjusted so as to satisfy equation (49) throughout an inflationary phase characterized by V(χ), while keeping ρ_eff within the usual LQC values. Because (49) is a second-degree equation, it has two solutions, whose evolution with respect to χ is presented in Figure 1. The physical solution is shown in red, while the black line describes the non-physical evolution, in which the effective potential grows as the value of χ decreases. Once the relation ρ_eff/ρ_c versus χ is obtained, it is possible to see how the mimetic potential must behave to produce a dynamics similar to that produced by V(χ). The vertical green line corresponds to the end of the inflationary epoch, defined at χ_end ≈ 0.94 M_Pl. Figure 2 exposes the mimetic character of V_k(φ) − ka⁻² with respect to the behavior of the Hubble parameter given by HI. The evolution of the potential V(χ) as a function of the field χ is presented on the y1-x1 axes (in red). The behavior that the mimetic potential must have in order to produce the same H(t) as the HI scenario is presented on the x2-y2 axes (in blue). The dynamical evolution of the universe is the same in both cases, so that V_k(φ) − ka⁻² can adequately mimic the HI scenario. During the Higgs phase, the scale factor of the universe grows by a number of e-folds N ≈ 60 [50].
Fig. 2 The evolution of the HI potential and the MG curvature potential. The behavior of V_k(φ) − ka⁻² mimics the same dynamics, represented by the evolution of the Hubble parameter, as that obtained in the HI scenario. The evolution of the potential V(χ) as a function of the field χ is presented on the y1-x1 axes (in red). The y2-x2 axes (in blue) show the evolution of V_k(φ) − ka⁻² as a function of ρ_eff/ρ_c. The vertical green line indicates χ_end ≈ 0.94 M_Pl, which represents the end of the inflationary era within the HI approach (see, for example, [41,43]). Note that χ_end ≈ 0.94 M_Pl corresponds to ρ_eff/ρ_c ≈ 0.072, a value in agreement with the LQC requirements for the end of inflation (see, for example, [16]).
Therefore, the term k/a² is diluted and the mimetic potential naturally tends to V(χ), i.e., V_k(φ) → V(χ), after the end of inflation.
Possible relations between matter and curvature in the mimetic gravity representation of loop quantum cosmology
In standard LQC, the relation between matter and curvature is not directly explored, once the sectors are linked but they are not analyzed together as a pair. Since holonomy corrections arise with the area discretization of space-time in area gaps ∆ due to the discrete curvature of Ashtekar connection, the changes affect only the gravitational part of the Hamiltonian. A similar statement can be applied regarding the scalar field whose possible self-interaction may not influence the gravitational sector at all [13]. Therefore, in [13], they concluded that the evolution during quantum regime is not affected by the introduction of curvature.
Here, we demonstrate the fundamental role played by the curvature as an essential dynamical element of the MG description of LQC. Considering the definition (29), we are going to show that, depending on how the energy density is interpreted, the results can change considerably: although ρ_eff is the quantity that follows the LQC energy range evolution, it may or may not be the quantity chosen to satisfy the continuity equation (11). To clarify this, it is important to keep in mind that (11) refers to the matter content. However, we need to specify which quantity is playing this role, since assuming only the kinetic term from the start leaves no potential term to drive inflation later. Below, the first case presented is a direct analogy with the definition (6); nevertheless, instead of V(φ), we have V_k(φ) as the matter component in the total energy density. In the second case, we use ρ_eff directly, assuming the curvature term ka⁻², related to the universe geometry, also as part of the total energy density. In Appendix A we provide more details about the validity of the usual continuity equation when terms related to the curvature are included. First of all, we define a new variable ρ through equation (50), from which the Effective Friedmann equation (27) can be written in the form (51). Here, ρ represents the total matter energy density. We are considering that the curvature mimetic potential term 3M_Pl²V_k(φ) mimics the dynamics of the matter field potential. Therefore, ρ must satisfy the continuity equation (11). Then, we repeat the process presented in Sect. 2.1 for (51), which results in equation (52). Summing (51) with (52) gives expression (53), and finally, from (29) and (50), we can also write (53) as (54). Despite the fact that the works [7] and [13] considered different versions of the LQC Hamiltonian to describe a curved FRW space-time, they provide similar expressions. Indeed, starting from the Effective Friedmann equation of [13], if we replace the approximations ρ_1(p) ≈ 3/(8πGa²) and ρ_2(p) ≈ ρ_c + 3/(8πGa²) by their analogous quantities according to (26), ρ_1 = 3M_Pl²k/a² and ρ_2 = ρ_c + 3M_Pl²k/a², and again assume (50) as the matter energy density, we recover (51). Furthermore, equation (52) can be seen as a simplified version of its analogue obtained in [13], whose expression contains more terms and includes all the respective terms of (52), except for 6M_Pl²ka⁻². Thus, if we consider MG as a convenient tool to deal with the dynamics involved in LQC, it becomes possible to reanalyze different scenarios presented in the literature within the scope of LQC, obtaining their results by means of a mimetic potential.
As previously stated, the conceptual elegance of LQG does not come only from the way it constructs a quantum theory of gravity from general relativity and quantum mechanics, but also from the simplicity of considering that the universe was initially composed only of quantum fields. However, these fields do not live in space-time; they live on one another, so that the space-time we perceive today is a blurred and approximate image of one of these fields: the gravitational field. These aspects can be assessed in some way through the mimetic formalism if we consider it as a tool that, through the mimetic potential, allows one to explore the quantum effects on the dynamics of the primeval universe.
3.3.2 Case II: V_k(φ) − ka⁻² as part of the total matter energy density
In this case, the matter energy density corresponds to equation (29). Therefore, the term 3M_Pl²(V_k(φ) − ka⁻²) is the one that describes the behavior of the matter potential. As we maintain the structure presented in (29), the Effective Friedmann equation is given by (30). With this in mind, the procedure is similar to the one performed in Sect. 2.1; however, ρ_eff becomes the variable that must obey the continuity equation, as well as the equation of state P_eff = wρ_eff. As a result, the time derivative of H is given by equation (57). An interesting aspect of this equation is that the super-inflation regime depends only on ρ_eff, in particular within the range ρ_c/2 < ρ_eff < ρ_c. Note that ρ_eff has two components according to equation (29): ρ_kin and the curvature mimetic potential. However, in this interval, the super-inflation evolution can happen for any value w > −1, with Ḣ > 0. These values cover a wide range of possible fields (or combinations of fields), from some Galileon fields (w > 1), scalar fields without potential (w = 1), and dust-like behavior (w = 0), down to fields with w ≈ −1, similar to the cosmological constant. Thus, to some extent, the super-inflation phase lies in the range ρ_c/2 < ρ_eff < ρ_c in the MG description of LQC, just as in usual LQC.
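For orientation, equation (57) should have the standard holonomy-corrected structure (a hedged reconstruction following from (30), the continuity equation and P_eff = wρ_eff, rather than a copy of the original expression),

\dot H = -\frac{(1+w)\,\rho_{\rm eff}}{2M_{\rm Pl}^2}\left(1 - \frac{2\rho_{\rm eff}}{\rho_c}\right),

which is indeed positive (super-inflation) for any w > −1 whenever ρ_c/2 < ρ_eff < ρ_c, exactly as stated above.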
On the other hand, from the sum of (30) with (57), we obtain equation (58). Standard inflation occurs when ä > 0, which is equivalent to Ḣ + H² > 0. When ρ_eff = ρ_c/2 we have Ḣ = 0, which sets the end of the super-inflationary phase (or transition time). In the interval 0 < ρ_eff < ρ_c/2 we have Ḣ < 0 and Ḣ + H² > 0, and so the universe lies in the normal inflationary phase.
Through equation (58) it is possible to verify that inflation occurs if condition (59) is respected. As pointed out in [12], in usual LQC even fields with non-negative constant state parameters are capable of driving the universe into an inflationary phase; for example, radiation can satisfy the condition given by equation (59). However, in our case, the main regulator of inflation is the mimetic potential embedded in ρ_eff. That is, the phase called normal inflation is dominated by the mimetic potential term. If it has the form given in Figure 2, then the value of w will be adjusted to that specific field, causing inflation to occur in the usual way. Another point to note is that (54) and (58) are identical for k = 0. Furthermore, although equations (14) and (58) share the same structure, the latter contains a richer physics to be explored. With the effective energy density, the original form of the equations of motion of the flat case is recovered. However, the curvature remains intrinsically interwoven as a fundamental element of the MG version of Effective LQC.
Final remarks
Recently, [7] presented an interesting formulation of mimetic gravity under fundamental aspects of loop quantum cosmology. One of their contributions was the introduction of a mimetic curvature potential that preserves all the healthy properties of LQC. In this work, we discuss alternative ways of using this mimetic potential. At first, we demonstrate that the mimetic potential can produce the same dynamics of the so-called Higgs inflation field. The energy scales of HI scenario are properly mimicked and connected to the LQC energy scales during inflation.
Second, we evaluate what form the effective mimetic potential (V_k(φ) − k/a²) should take to produce the same evolution as HI. Next, we show possible scenarios that may emerge from the relationship between matter and the curvature potential within the MG framework, obtaining results similar to those derived by other authors within the scope of LQC.
It is important to mention that a recent paper [51] analyzes the cosmology of the primordial universe through the Standard Model of Particle Physics perspective. The authors present a bounce model with the standard Higgs boson whose contraction phase is characterized by an EoS with w > 1. At the bounce, w reaches large negative values (w −1), followed by an inflationary phase for w = −1 with nearly 60 e-folds, the same number of e-folds of the HI studied here.
The MG representation offers a simpler alternative scenario for the bounce phase since it preserves all the healthy properties of the usual LQC. On the other hand, the original formulation of MG provided an interesting alternative to evaluate the dark matter content of the universe, since the dark components are treated as geometric effects [20].
Our main motivation for exploring MG's curvature potential is that it does not introduce major modifications to the usual LQC structure, as discussed above, just as the mimetic dark matter-gravity model can be considered a minimal extension of GR [25]. Moreover, within certain limits, it reproduces the formulation studied in [12] (k = 0) and the scenario presented in [13] (k = 1), depending on how the mimetic potential is handled in the dynamical equations. This is in agreement with the known statement about usual Mimetic Gravity that different cosmological solutions can be obtained through a suitable choice of the mimetic potential (see [20] and [26] for more details).
In a certain respect, the effective mimetic potential V k (ϕ)− k/a 2 can be grouped in different ways into the LQC equation, somewhat similar to the cosmological constant originally introduced by Einstein on the geometric side of the general relativity field equations and reinterpreted in the 1980s as a fluid (and therefore moved to the side of the energymomentum tensor) able to produce the current acceleration of the universe. | 9,692.2 | 2019-04-01T00:00:00.000 | [
"Physics"
] |
Modification of Critical Current Density Anisotropy in High-Tc Superconductors by Using Heavy-Ion Irradiations
The critical current density Jc, the maximum current density at which zero resistivity is maintained, is required to exhibit not only a larger value but also a lower anisotropy in a magnetic field B for applications of high-Tc superconductors. Heavy-ion irradiation introduces nanometer-scale irradiation tracks, i.e., columnar defects (CDs), into high-Tc superconducting materials, which can modify both the absolute value and the anisotropy of Jc in a controlled manner: the unique structures of CDs, which significantly affect the Jc properties, are engineered by adjusting the irradiation conditions such as the irradiation energy and the incident direction. This paper reviews the modification of the Jc anisotropy in high-Tc superconductors using CDs installed by heavy-ion irradiation. The direction-dispersion of CDs, which is tuned by combining several irradiation directions, can provide a variety of magnetic field angular variations of Jc in high-Tc superconductors: CDs crossing at ±θi relative to the c-axis of YBa2Cu3Oy films induce a broad peak of Jc centered at B || c for θi < ±45°, whereas crossing angles of θi ≥ ±45° cause not a Jc peak centered at B || c but two Jc peaks at the irradiation angles. The anisotropy of Jc can also be modified by tuning the continuity of CDs: short, segmented CDs formed by heavy-ion irradiation with relatively low energy are more effective in improving Jc over a wide magnetic field angular region. The modifications of the Jc anisotropy are discussed on the basis of both the structures of CDs and the flux line structures depending on the magnetic field direction.
Introduction
High-T c superconductors have attracted considerable research activity, especially for electric power applications at high magnetic fields and temperatures, because the zero-resistive current and the high superconducting transition temperature T c enable us to operate zero-resistance devices at liquid-nitrogen temperature. Nowadays, coated conductors based on biaxially textured REBa 2 Cu 3 O y (REBCO, RE: rare earth elements) thin films have been significantly developed as second generation high-T c superconducting tapes and have become commercially available now [1,2].
The critical current density J c in magnetic field (in-field J c ), which is a maximum current density with zero-resistivity, is the most important parameter in REBCO-coated conductors for the practical applications. The absolute values of J c for REBCO-coated conductors, however, have still remained below the practical level for high magnetic field applications [3]. In addition, the electronic mass anisotropy in the layered structure of CuO 2 planes for high-T c superconductors induces a large anisotropy of J c against a magnetic field orientation [4], which gives rise to obstacles to the superconducting magnet applications: a minimum in the magnetic field angular variation of J c , which is usually located at the magnetic field B parallel to the c-axis, limits the operation current [5,6].
The in-field Jc can be controlled by immobilization of nano-sized quantized-magnetic-flux-lines (flux lines) penetrating into superconductors in a magnetic field. The motion of flux lines is suppressed by crystalline defects and impurities in the specimen, which are called pinning centers (PCs). Thus, artificially embedding crystalline defects as effective PCs is just a key strategy to improve the in-field performance of superconductors [1,3,7]. For the last fifteen years or so, doping of non-superconducting secondary phases such as BaMO3 (M = Zr, Sn, Hf, etc.) and RE2O3 has been attempted to form those into effective PCs in REBCO thin films [8][9][10][11][12].
The flux pinning effect depends on the shape (dimensionality), orientation, size, and distribution of PCs. In particular, the dimensionality of PCs significantly affects the feature of flux pinning, as shown in Figure 1. For example, one-dimensional PCs such as columnar defects (CDs) exhibit a preferential direction for the flux pinning: the strong flux pinning occurs in the magnetic field direction along their long axis. Three-dimensional PCs such as nano-particles, on the other hand, have the morphology with no correlated orientation for flux pinning, resulting in the isotropic pinning force against any direction of magnetic field. These features of PCs play an important role in the modification of the Jc properties in REBCO films: those parameters of PCs such as their shape and size, should be designed to meet the requirements for each application.
Swift-heavy-ion irradiation to high-Tc superconductors produces amorphous CDs of damaged material parallel to the projectile direction through the electron excitation process rather than the nuclear collision process. The CDs produced by the irradiation effectively work as one-dimensional PCs [13][14][15]. The orientation of one-dimensional PCs determines the preferential direction of flux pinning [13,16]. Therefore, heavy-ion irradiation can be expected to modify the anisotropy of Jc in high-Tc superconductors by tuning the irradiation direction. In addition, the size and shape of CDs strongly depend on the electronic stopping power Se, which is defined as the energy loss of the incident ion per unit length via electronic excitation in the target material [17]: continuous CDs with thick diameter are formed at Se higher than a certain value, and discontinuous CDs with thin diameter are located at intervals along the ion path at lower Se [18][19][20]. In particular, discontinuous CDs may provide more effective flux pinning in a wide magnetic field angular range, because the ends of discontinuous CDs can act as PCs even in magnetic field directions tilted from their long axis [21,22]. Thus, the discontinuity of CDs is also one of the important factors for the modification of the Jc anisotropy in high-Tc superconductors, as well as the direction-dispersion of CDs. A major advantage of using heavy-ion irradiation for the formation of CDs is that any CD configuration can be prepared by tuning the irradiation energy and the incident direction [23,24], independently from the fabrication process of the samples (see Figure 2): the pinning structure can be efficiently designed to meet the requirements for different applications, which would be valuable for the development of high-performance coated conductors. In addition, unique pinning structures architected by the irradiations may enable us to find new physics of flux line dynamics. Therefore, heavy-ion irradiation to high-Tc superconductors can provide the design criteria for the supreme pinning landscape making the most of the potential for flux pinning, which leads to Jc close to the theoretical limit of critical current density, i.e., the pair-breaking critical current density. In this paper, we describe the results of the modification of the Jc properties in REBCO thin films and coated conductors, which were obtained by our studies through heavy-ion irradiation under various irradiation conditions. Most previous works of other researchers using heavy-ion irradiation have focused on the improvement of Jc at B || c, where Jc usually shows the minimum [13-15,18,19].
On the other hand, heavy-ion irradiation effects over a wide magnetic field angular range have not been well studied so far. By contrast, we focus especially on modification of the Jc anisotropy in high-Tc superconductors by using heavy-ion irradiation: our aim in this review is to improve Jc in all magnetic field angular range from B || c to B || ab by using CDs and to explore breakthroughs for strong and isotropic pinning landscape in REBCO coated conductors. To meet the aim in this paper, we selected Xe ions as the irradiation ion species: the Xe-ion irradiation to REBCO thin films can provide large increase of Jc without heavily damaging crystallinity even at a large amount of doses, 5.0 × 10 11 ions/cm 2 [24] and easily enables us to tune the morphology of CDs through the adjustment of the irradiation energy at a tandem accelerator of Japan Atomic Energy Agency (JAEA) used in our works. Firstly, we present the reduction of the Jc anisotropy by using the direction-dispersed CDs, which are introduced by controlling the irradiation direction. Secondly, we report the influence of CDs tilted at small angle(s) relative to the ab-plane on the Jc properties near B || ab, which is one of key factors to improve Jc in all magnetic field directions. In particular, we show the influence of CDs along the ab-plane on Jc at B || ab by preparing an in-plane aligned a-axis-oriented YBCO film. Finally, we clarify the potential of discontinuous CDs for flux pinning in comparison with continuous CDs, where the morphology of CDs is controlled by the irradiation energy.
Experimental
The samples used in our works were mostly c-axis oriented YBCO thin films and GdBCO coated conductors. The c-axis oriented YBCO thin films were fabricated by a pulsed laser deposition (PLD) technique on (100) surface of SrTiO3 single crystal substrates. The thickness of the films was about 300 nm. The GdBCO coated conductor, on the other hand, was fabricated on an ion-beam-assisted deposition (IBAD) substrate by a PLD method (Fujikura Ltd., Tokyo, Japan). The thickness of GdBCO layer is 2.2 µm and the self-field critical current Ic of this tape with 5 mm width is about 280 A. The samples were cut from the tape of the GdBCO coated conductor. The Ag stabilizer layer on the superconducting layer was removed by a chemical process. The YBCO thin films and the samples cut from the GdBCO coated conductor were patterned into a shape of about 40 µm wide and 1 mm long micro-bridge before the irradiation.
The heavy-ion irradiations with Xe ions were performed using the tandem accelerator of JAEA in Tokai, Japan. Tuning of the discontinuity of CDs along the c-axis can be controlled by the irradiation energy. The values of S e for the Xe-ion irradiation energies above 200 MeV are above 2.9 keV/Å, which is above the threshold value of S e = 20 keV/nm to create continuous CDs along the c-axis over the whole sample thickness for YBCO [17].
Thus, the irradiation with 200 MeV Xe ions was performed to install continuous CDs into YBCO thin films. In addition, the Xe-ion irradiation with 270 MeV was applied in order to create continuous CDs for GdBCO coated conductors, where the projectile length was longer than the thickness of 2.2 µm. Discontinuous CDs, on the other hand, were formed into YBCO thin films and GdBCO coated conductors by the irradiation with 80 MeV Xe ions, where the value of S e is below 20 keV/nm: the radius of CDs strongly fluctuates along the ion path and CDs are shortly segmented at intervals in their longitudinal direction when the S e is lower than the threshold value, as shown in Figure 3 [19,20,25]. All of the irradiation energies used in our works are enough for the projectile ranges to exceed the thickness of the samples: the incident ions pass through the superconducting layer completely.
The direction of CDs was adjusted by controlling the incident ion beam direction tilted off the c-axis by θi, which was always directed perpendicular to the bridge direction of the sample (see Figure 4). When the irradiation directions are dispersed, the fluence in each irradiation direction is calculated by dividing the total fluence by the number of the irradiation directions. The fluence of the irradiation is often represented as a matching field Bφ: Bφ is the magnetic field where the density of flux lines is equal to that of CDs, e.g., the fluence of 4.84 × 10 10 ions/cm 2 corresponds to Bφ = 1 T.
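As a quick sanity check of the dose-to-field conversion quoted above (a simple illustration added here, not code from the original work), the matching field follows directly from the flux quantum Φ0 ≈ 2.07 × 10⁻¹⁵ Wb:

PHI0 = 2.067833848e-15          # flux quantum in Wb (T * m^2)

def matching_field(fluence_per_cm2):
    """Matching field B_phi (T) at which the flux-line density equals the CD density."""
    density_per_m2 = fluence_per_cm2 * 1.0e4   # ions/cm^2 -> ions/m^2
    return density_per_m2 * PHI0

print(matching_field(4.84e10))   # ~1.0 T, as stated in the text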
It should be noted that the introduction of irradiation defects causes a lattice distortion of the host matrix, which affects the superconducting properties such as the critical temperature (Tc). The strain induces oxygen vacancies [26], resulting in the reduction of Tc: the value of Tc decreases when the fluence of the irradiation increases [24]. The strain also affects the Jc properties through its influence on Tc: Jc decreases largely when the influence of the strain increases excessively. Therefore, the irradiation fluences were adjusted to avoid heavy damage to the crystallinity in our works. The cross sections of the irradiated samples were observed by conventional transmission electron microscopy (TEM) with a JEM-2000 EX instrument (JEOL, Tokyo, Japan) operating at 200 kV. The thin TEM specimens were prepared by a focused ion beam method using a Quanta 3D system (FEI, Hillsboro, Oregon, USA). The Jc properties were measured through the transport properties by using a four-probe method. The Jc was defined by a criterion of electric field, 1 µV/cm. The transport current was always perpendicular to the magnetic field and the c-axis (maximum Lorentz force configuration). The magnetic field angular dependences of Jc were evaluated as a function of the angle θ between the magnetic field and the c-axis of the samples (see Figure 4).
Modification of J c Around B || c by Controlling Heavy-Ion Irradiation Angles
Heavy-ion irradiation can introduce CDs in any direction in a controlled manner, so we can install CDs at the magnetic field angles where the Jc shows a minimum, one by one: the material processing with heavy ions is one of the effective ways to modify the Jc anisotropy in high-Tc superconductors, which enables us to push up the overall Jc, as shown in Figure 5.
We first examined the influence of a bimodal angular distribution of CDs, consisting of CDs crossing at ±θi relative to the c-axis, on the Jc properties over a wide magnetic field angular range [27,28]. Figure 6 shows the magnetic-field angular dependence of Jc normalized by the self-magnetic-field critical current density Jc0 for YBCO thin films with the crossed CDs, which were installed by 200 MeV Xe ion irradiation with Bφ = 2 T (c10-2: θi = ±10°, c25-2: θi = ±25°, c45-2: θi = ±45°, p06-2: parallel CD configuration of θi = 6°, and Pure: unirradiated samples). The magnetic field was rotated in the splay plane where the two parallel CD families are crossing each other, as shown in Figure 4. All the irradiated samples show an additional peak of the normalized Jc around B || c (θ = 0°) for lower magnetic fields: the values of the normalized Jc are enhanced around B || c compared to the unirradiated one. This indicates that CDs with any crossing angle work as effective PCs, pushing up the Jc around B || c. The influence of the crossing angle of CDs is evident in the shape of the additional peak around B || c: the width of the normalized Jc peak becomes broader when the crossing angle is larger. Therefore, the bimodal angular distribution of CDs can expand the magnetic field angular range where the normalized Jc increases, by controlling the crossing angle.
Figure 6. Magnetic field angular dependence of Jc normalized by the self-magnetic-field critical current density Jc0 for YBCO thin films with the crossed CDs (c10-2: θi = ±10°, c25-2: θi = ±25°, c45-2: θi = ±45°, p06-2: parallel CD configuration of θi = 6°, and Pure: unirradiated samples). Reprinted with permission from [28], copyright 2016 by IOP.
It is noteworthy that a crossover from the broad-plateau-like behavior to a double peak emerges in the normalized Jc around B || c for c45-2 when the magnetic field increases across the matching field of Bφ = 2 T: the normalized Jc decreases more rapidly at B || c with increasing magnetic field, which results in a dip structure at B || c for c45-2 at 2 T, as shown in Figure 6. In general, the Jc peak in the magnetic field angular dependence of Jc is a sign of long-axis correlated flux pinning by CDs. This long-axis correlated flux pinning is maintained up to higher magnetic fields [29,30]. For the crossing angle of θi = ±45°, by contrast, the influence of the long-axis correlated flux pinning is weakened at B || c, since the directions of CDs are far from the c-axis direction. Thus, the dip behavior at B || c is a sign of the disappearance of their long-axis correlated flux pinning at B || c.
The effective magnetic field angular region for flux pinning by CDs is described by a trapping angle φt, at which flux lines begin to be partially trapped by CDs [4]. The general formula for φt is given by equation (1), where εp is the pinning energy of the CDs and εl is the line tension of the flux lines. The line tension of flux lines in anisotropic superconductors is given by equation (2), where Θ is the angle between the magnetic field and the ab-plane, ε0 is a basic energy scale, γ is the mass anisotropy, and ε(Θ) = (sin²Θ + γ⁻²cos²Θ)^{1/2} [4]. The trapping angle φt is experimentally estimated as the difference in angle between the peak value and the minimum one in the magnetic field angular dependence of Jc [31]. For p06-2, the value of φt is ~55° at B < Bφ, as estimated from Figure 6a. Using this value approximately as the trapping angle of CDs parallel to the c-axis and γ = 5, together with equations (1) and (2), the value of φt for CDs tilted at θi = 45° is about 37°. Therefore, CDs tilted at θi = 45° hardly contribute to trapping flux lines at B || c: CDs tilted at θi = 45° do not work as long-axis correlated PCs for B || c.
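As an orientation for readers, estimates of this type usually follow from an energy balance between the pinning energy gained along the defect and the line-tension cost of tilting the flux line, of the schematic form

\tan\varphi_t \sim \sqrt{\frac{2\,\varepsilon_p}{\varepsilon_l(\Theta)}},

so that a smaller line tension (or a larger pinning energy) widens the angular window over which flux lines remain trapped. This is a standard sketch quoted from the vortex-pinning literature, not necessarily the exact expressions (1) and (2) used here; the precise prefactors and the anisotropy factor ε(Θ) entering εl should be taken from [4,31].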
The bimodal angular distribution of CDs for θi = ±45° gives rise to the drop in Jc at the mid-direction of the crossing angle. Secondly, we investigated the flux pinning properties for a trimodal angular distribution of CDs consisting of CDs crossing at θi = 0° and ±45° (referred to as the "standard" trimodal-configuration), in order to obtain high Jc with no drop over a wide magnetic field angular region [32]. In addition, another geometry for the trimodal configuration was prepared, where the splay plane defined by the three irradiation angles is parallel to the transport current direction (referred to as the "another" trimodal-configuration), as shown in Figure 7: the two trimodal configurations enable us to elucidate the influence of the splay plane direction on the Jc properties directly. Figure 8 shows the magnetic field angular dependence of Jc normalized by Jc0 (= jc) at several magnetic fields from 1 T up to 5 T for YBCO thin films with the trimodal angular configurations of CDs. A large enhancement of jc centered at B || c can be seen for all the irradiated samples. In particular, both trimodal angular configurations show a much broader peak with larger jc than that of the parallel CD configuration. It should be noted that there is no drop of jc at B || c for either trimodal configuration. This result indicates that the three parallel CD families tilted at θi = 0° and ±45° effectively work as strong PCs in each irradiation direction: flux pinning at B || c, where CDs tilted at θi = ±45° contribute only slightly to trapping flux lines, is reinforced by the CDs along the c-axis.
Figure 7. Sketch of CDs dispersed in the geometry of the "another" trimodal-configuration, where the splay plane defined by the three irradiation angles is parallel to the transport current direction. Reprinted with permission from [32], copyright 2016 by IOP.
Interestingly, the behaviour of jc around B || c strongly depends on the direction of the splay plane for the trimodal configuration of CDs: the jc of the "another" trimodal configuration shows a peak at B || c, whereas the standard one exhibits not so much a peak as a plateau-shaped curve. In addition, the height of the jc peak for the "another" trimodal configuration is higher than the value of jc at B || c for the standard one. For the standard trimodal configuration, sliding motion of flux lines occurs along the tilted CDs at B || c because the splay plane is parallel to the Lorentz force, resulting in a reduction of the pinning efficiency [33]. The crossed CDs of the "another" trimodal configuration, by contrast, suppress the motion of flux lines efficiently, since flux lines driven by the Lorentz force must move across the crossed CDs. Thus, a splay plane parallel to the transport current direction provides stronger flux pinning at B || c, like planar PCs. Furthermore, the jc of the "another" trimodal configuration is the highest even when the magnetic field is tilted from the c-axis. This is probably due to the entanglement of flux lines induced in the mesh of the splay plane tilted from the magnetic field, where the motion of flux lines is suppressed [34]. These results suggest that the direction of the splay plane is one of the key factors for flux pinning of direction-dispersed CDs, as well as the degree of the direction-dispersion [32,35]. Figure 8. Magnetic-field angular dependence of Jc normalized by the self-magnetic-field critical current density Jc0 for YBCO thin films with various CD configurations (Pure: unirradiated sample, Para: parallel CD configuration with θi = 0°, Standard: standard trimodal configuration, and Another: "another" trimodal configuration). Reprinted with permission from [32], copyright 2016 by IOP.
Modification of Jc Anisotropy by Controlling Number of Heavy-Ion Irradiation Directions
We further increased the number of directions of the CDs by controlling the irradiation directions (see Figure 9), in order to spread the strong pinning effect of CDs over a wider magnetic field angular range. Figure 10 shows the magnetic field angular dependence of Jc and of the n-value for YBCO thin films with direction-dispersed CDs, where the number of CD directions was varied from one to five, spaced every 30 degrees [36]. The n-value is estimated from a linear fit of the empirical relation between the electric field (E) and the current density (J), E ∝ J^n, in the range of 1 to 10 µV/cm. The n-value is equivalent to U0/kBT (U0: pinning potential energy) [37,38] and represents the thermal activation for flux motion. When the number of CD directions is increased, the angular region with high Jc expands. Note that the height of the Jc peak at B || c declines, since the density of CDs in each direction decreases with an increasing number of irradiation directions in this work. Thus, a large direction-dispersion of CDs is effective for the enhancement of Jc over a wider magnetic field angular region centered at B || c.
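As a concrete illustration of the n-value estimation described above, the following sketch fits E ∝ J^n on synthetic data restricted to the 1-10 µV/cm window; the data and parameter values are invented for the example.

import numpy as np

# Synthetic E-J curve: E = Ec * (J/Jc)^n with Ec = 1 uV/cm (hypothetical values).
n_true, Jc = 25.0, 2.0e10                 # A/m^2
J = np.linspace(1.6e10, 2.4e10, 40)
E = 1.0 * (J / Jc)**n_true                # uV/cm

mask = (E >= 1.0) & (E <= 10.0)           # the 1-10 uV/cm fit window from the text
slope, _ = np.polyfit(np.log(J[mask]), np.log(E[mask]), 1)
print(f"fitted n-value: {slope:.1f}")     # recovers ~25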
The Jc around B || ab, on the other hand, does not seem to be affected by flux pinning of the direction-dispersed CDs: both the Jc and the n-value at θ = 90° rather tend to decrease with an increasing degree of direction-dispersion of the CDs. One of the reasons for the reduction of Jc at B || ab by the introduction of CDs is the damage to the superconductivity and/or to the ab-plane-correlated PCs [16]. It should be noted that the Quintmodal sample contains CDs crossing at ±30° relative to the ab-plane (i.e., θi = ±60°); nevertheless, these crossed CDs do not seem to contribute to the pinning interaction around B || ab. Figure 11 represents the magnetic field angular dependence of Jc for YBCO thin films including bimodal angular configurations of CDs with θi = ±30° and ±60° relative to the c-axis, respectively [39]. The crossing angle of ±30° relative to the c-axis induces an enhancement of Jc over a wide angular region centered at B || c. The crossing of CDs at ±30° relative to the ab-plane, i.e., θi = ±60°, by contrast, is ineffective in pushing up the Jc at the mid-direction of the crossing angle, i.e., at B || ab, whereas Jc peaks emerge at θ = ±60°. These results indicate that the flux pinning around B || ab is hardly affected even by CDs tilted toward the ab-plane, which significantly differs from the flux pinning of CDs at B || c. Thus, the flux pinning of CDs around B || ab is a new issue for the complete reduction of the Jc anisotropy.
Figure 10. Magnetic-field angular dependence of Jc (upper, (a)) and n-value (lower, (b)) for YBCO thin films with various CD configurations (Unimodal: parallel CD configuration with θi = 0°, Trimodal: trimodal configuration with θi = 0° and ±30°, and Quintmodal: quintmodal configuration with θi = 0°, ±30°, and ±60°). The arrows indicate the peaks or the shoulder on the n(θ) curve for Quintmodal. Reprinted with permission from [37], copyright 2013 by IEEE. Figure 11. Magnetic-field angular dependence of Jc at a temperature of 60 K and magnetic fields of 3 T to 7 T for YBCO thin films with CDs crossing at (a) θi = ±30° and (b) ±60° relative to the c-axis, respectively. Reprinted with permission from [36], copyright 2018 by IOP.
Modification of Jc Around B || ab by Controlling Heavy-Ion Irradiation Directions
A significant enhancement of Jc at B || c has been achieved by the introduction of artificial PCs, so that Jc at B || c can now much exceed Jc at B || ab [40]. Thus, the improvement of Jc at B || ab is required as the next step in order to increase the overall Jc. The influence of CDs on the flux pinning at B || ab, however, has not been well studied so far, because Jc at B || ab is innately the highest due to the electronic mass anisotropy in high-Tc superconductors [4], and the introduction of CDs in directions close to the ab-plane is generally difficult. In contrast, heavy-ion irradiation is an effective tool even for exploring the flux pinning effect of CDs at B || ab, because CDs can be installed in any direction by adjusting the irradiation direction.
GdBCO-coated conductors were irradiated with 270 MeV Xe ions, where the irradiation angle Θi relative to the ab-plane was controlled in the range from ±5° to ±15° in order to install crossed CDs around the ab-plane [41]. The cross-sectional TEM image of the GdBCO-coated conductor irradiated at Θi = ±10°, Figure 12, shows the formation of continuous CDs along the irradiation directions. At the bottom part of the GdBCO layer, by contrast, some CDs become thinner and show angular dispersion about the irradiation directions. This is because Se falls below the threshold value of 20 keV/nm for the formation of continuous CDs [17,42]: Se changes from 29.1 to 7.40 keV/nm through the GdBCO layer for the oblique irradiation at Θi = 10°. Figure 13 shows the magnetic field angular dependence of Jc for the irradiated samples with Θi = ±5°, ±10°, and ±15°, respectively. The CD crossing angles of Θi ≤ ±15° significantly affect the magnetic field angular variation of Jc around B || ab. The introduction of crossed CDs at Θi = ±15° produces a triple peak of Jc centered at B || ab, where a large Jc peak exists at B || ab and two further Jc peaks emerge around θ = 75° and 105°, independently of each other. This behavior is in contrast to the case of CDs crossing at θi ≤ ±30° relative to the c-axis, which shows a single peak of Jc centered at B || c, as represented in Figures 6 and 11. As the crossing angle Θi decreases, the two divided peaks of Jc at ±Θi overlap with the central Jc peak at B || ab: a single peak centered at θ = 90° occurs for crossing angles of Θi ≤ ±10°. In particular, the crossing angle of Θi = ±5° provides a large and sharp Jc peak at B || ab, with the highest value of all the samples at B || ab. To our knowledge, this is the first confirmation that CDs contribute to the improvement of Jc at B || ab.
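A rough consistency check of the continuous/discontinuous boundary within the layer can be made from the quoted stopping powers. The sketch below assumes, purely for illustration, a linear decrease of Se along the oblique ion path from 29.1 to 7.40 keV/nm (real SRIM depth profiles are not linear) and a layer thickness of 2.2 µm (the value quoted elsewhere in the text for a GdBCO layer), and locates where Se crosses the ~20 keV/nm threshold.

import numpy as np

t_layer = 2.2e3                 # assumed layer thickness in nm
theta_i = 10.0                  # irradiation angle relative to the ab-plane (deg)
path = t_layer / np.sin(np.radians(theta_i))   # oblique path length in the layer

Se_in, Se_out, Se_th = 29.1, 7.40, 20.0        # keV/nm
frac = (Se_in - Se_th) / (Se_in - Se_out)      # fraction of the path above threshold
depth = frac * t_layer                         # corresponding depth below the surface
print(f"oblique path length: {path/1e3:.1f} um; "
      f"continuous CDs over ~{frac:.0%} of the path (top ~{depth:.0f} nm)")

On these crude assumptions, continuous CDs would form only in the upper part of the layer, consistent with the thinner, dispersed CDs observed near the bottom in Figure 12.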
The trapping angle φt for CDs tilted at a small angle Θi from the ab-plane, on the other hand, can be evaluated by substituting the value of φt ~ 65° for Θi = 90° (i.e., for CDs parallel to the c-axis [25]) and γ = 5 into equations (1) and (2): φt ~ 6.6° for Θi = 5°, φt ~ 8.7° for Θi = 10°, and φt ~ 11.9° for Θi = 15°. Thus, the trapping angles of CDs tilted toward the ab-plane become very small: flux lines are hardly trapped along the CDs when the magnetic field direction is displaced even slightly from the direction of the CDs. In particular, the trapping angle for CDs tilted at Θi ≥ 10° is smaller than the CD tilt angle Θi, suggesting that these tilted CDs hardly affect the flux pinning at B || ab. Therefore, CDs tilted at Θi ≥ 10° and the ab-plane correlated PCs provide flux pinning independently. The CDs tilted at Θi = 5°, on the other hand, can fully contribute to the improvement of Jc at B || ab, because the trapping angle exceeds the value of Θi.
Modification of Jc at B || ab by Heavy-Ion Irradiation along the a-Axis
An in-plane aligned a-axis-oriented YBCO film offers an excellent opportunity for further exploration of the influence of CDs on the flux pinning at B || ab, since CDs can easily be installed along the ab-plane with the ion beam normal to the film [43]. We prepared the in-plane aligned a-axis-oriented YBCO film by a PLD technique with an ArF excimer laser, where a (100) SrLaGaO4 substrate with a Gd2CuO4 buffer layer was used to promote the in-plane orientation of the YBCO thin film [44]. The film was patterned into the shape of a microbridge with the bridge direction parallel to the b-axis, so that the transport current can be applied along the ab-plane (see Figure 14). Both the in-plane-aligned texture of the film and the experimental arrangement enable us to remove extraneous effects such as the interlayer Josephson current and the channel flow of flux lines along the CuO2 planes, providing deeper insight into the nature of flux pinning of CDs along the ab-plane. Figure 14. Sketch of the experimental arrangement using the in-plane aligned a-axis-oriented YBCO film. Reprinted with permission from [43], copyright 2019 by IEEE.
The in-plane aligned a-axis-oriented YBCO thin film showed good a-axis orientation, with no other orientations, in the X-ray θ-2θ diffraction pattern, as shown in Figure 15. In addition, X-ray diffraction φ scanning using the (102) plane of the YBCO film before the irradiation indicated two-fold symmetry, since strong peaks stood out at around 90° and 270° in the inset of Figure 15. Therefore, the in-plane aligned a-axis-oriented microstructure is confirmed for the film used in this work.
A cross-sectional TEM image of the in-plane aligned a-axis oriented YBCO film after the irradiation with 200 MeV Xe ions is shown in Figure 16a. The straight CDs along the a-axis extend through the thickness of the YBCO film. Figure 16b shows the plan-view TEM image of the a-axis oriented YBCO thin film after the irradiation. The CDs formed by the ion beam along the a-axis are roughly elliptical in shape, whereas CDs parallel to the c-axis are usually circular [17,45]. In general, the shape of CDs depends on the direction of the incident ions relative to the crystallographic axes in high-Tc superconductors, because the anisotropy of thermal diffusivity causes more severe irradiation damage for the creation of CDs along the a- and/or b-axis [17].
Figure 15. X-ray diffraction θ-2θ scan of the in-plane aligned a-axis oriented YBCO thin film before the irradiation. Inset: X-ray φ scan using the (102) plane of the YBCO thin film before the irradiation. Reprinted with permission from [43], copyright 2019 by IEEE.
Figure 17 represents the magnetic field dependence of Jc at 72 K for the a-axis oriented YBCO film before and after the irradiation. The Jc at B || c is reduced by the introduction of CDs along the a-axis, especially at high magnetic fields. The CDs along the a-axis hardly interact with flux lines at B || c, since the CDs are perpendicular to the magnetic field direction. Moreover, CDs perpendicular to the magnetic field direction create easy channels for flux lines to creep along the length of the CDs [46]. In addition to these deterioration effects, the irradiation damage to the host matrix causes the pronounced reduction of Jc at B || c.
The introduction of CDs along the a-axis, on the other hand, hardly reduces the absolute value of Jc at B || a, even though this Jc is affected by the local irradiation damage to the CuO2 planes, as is the Jc at B || c. It should be noted that the Jc normalized by Jc0 increases after the irradiation, especially at high magnetic fields (see the inset of Figure 17). This behavior suggests that CDs contribute to the flux pinning at B || ab. At low magnetic fields, by contrast, the pinning effect of CDs along the a-axis is hardly visible even in the normalized Jc. This is attributed to the presence of naturally grown defects, such as stacking faults, in the film: such pre-existing defects act as ab-plane correlated PCs both before and after the irradiation, which obscures the pinning effect of CDs, especially at low magnetic fields.
Figure 17. Magnetic field dependence of Jc at B || c and at B || a in the a-axis oriented YBCO thin film before and after the irradiation. Inset: Jc normalized by the self-field critical current density Jc0 as a function of magnetic field along the a-axis. Reprinted with permission from [43], copyright 2019 by IEEE.
Modification of the Jc Anisotropy by Controlling the Heavy-Ion Irradiation Energy
The modification of the Jc anisotropy in high-Tc superconductors is sensitive to the direction-dispersion of CDs, as mentioned in the previous sections. Another way to modify the Jc properties by CDs is to tune the morphology of the CDs. Especially for the morphology of short segmented (i.e., discontinuous) CDs, the ends of the discontinuous CDs can provide a variety of additional pinning effects: the ends of the segmented CDs can trap flux lines in a magnetic field tilted from their long axis [21,22], and the gaps in the segmented CDs can suppress the thermal motion of flux lines, as shown in Figure 18. Furthermore, the volume fraction of CDs relative to the superconducting area can be minimized for discontinuous CDs, since the CDs are shortly segmented: the reduction of the volume fraction of crystalline defects suppresses the degradation of the superconductivity associated with the introduction of PCs, leading to an improvement of the absolute value of Jc over the whole magnetic field angular region [19,20]. For iron-based superconductors, the morphology of CDs formed by heavy-ion irradiation tends to be discontinuous, which induces a remarkable improvement of Jc [47-49]. The morphology of CDs in high-Tc superconductors can be tuned by adjusting the irradiation energy for heavy-ion irradiation. In addition, the pinning effect of discontinuous CDs can be compared directly with that of continuous ones under the same irradiation conditions except for the irradiation energy: heavy-ion irradiations with different irradiation energies enable us to clarify the superiority of discontinuous CDs over continuous CDs in the flux pinning effect.
We first compared the flux pinning properties of discontinuous CDs with those of continuous ones when their long axis is parallel to the c-axis: GdBCO-coated conductors were irradiated with 80 MeV and 270 MeV Xe ions along the c-axis, respectively [25]. For the sample irradiated with 270 MeV Xe ions, straight and continuous CDs with diameters of 4-11 nm penetrate the superconducting layer along the c-axis, as shown in Figure 3a. The value of Se calculated using the SRIM code varies from 3.0 to 2.8 keV/Å through the superconducting layer of thickness 2.2 µm for the 270 MeV Xe-ion irradiation, so that continuous CDs are formed throughout the whole sample. The 80 MeV Xe-ion irradiation, by contrast, produces CDs segmented into short pieces along the c-axis, as shown in Figure 3b: the length of the segmented CDs, whose diameters are 5-10 nm, varies from 15 to 50 nm, while the gaps between the segments are also variable, ranging between 15 and 35 nm. The formation of discontinuous CDs is attributed to the value of Se changing from 2.0 to 1.4 keV/Å for 80 MeV Xe ions in REBCO thin films [17,19,20]. Figure 19 shows the magnetic field angular dependences of Jc at 70 K and 84 K in GdBCO-coated conductors irradiated with 80 MeV and 270 MeV Xe ions, respectively. The 80 MeV irradiation gives higher Jc in all magnetic field directions compared to the 270 MeV irradiation, and the difference becomes more pronounced at the lower temperature of 70 K. The high Jc at B || c for the 80 MeV irradiation is attributed to the gaps in the segmented CDs, which suppress the thermal motion of flux lines (see Figure 18b). In addition, the ends of discontinuous CDs can trap flux lines in a magnetic field tilted from their long axis, as shown in Figure 18a. These flux pinning effects of discontinuous CDs become more remarkable at lower temperatures, where the core size of a flux line approaches the small diameter of the discontinuous CDs. Moreover, discontinuous CDs minimize the degradation of the superconductivity associated with the introduction of PCs more than continuous CDs do. Thus, the discontinuity of CDs can contribute to a further enhancement of Jc.
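The volume-fraction argument for discontinuous CDs can be made semi-quantitative with the geometry quoted above. In the sketch below, the matching field Bφ = 2 T and the representative diameters are assumptions chosen for illustration; the areal density of columns follows from n = Bφ/Φ0.

import numpy as np

Phi0 = 2.07e-15                  # flux quantum (Wb)
B_phi = 2.0                      # assumed matching field (T)
n_cd = B_phi / Phi0              # areal density of columns (m^-2)

d = 7.5e-9                       # representative CD diameter (m), within 4-11 nm
seg_len, gap = 32.5e-9, 25e-9    # representative segment length and gap (m)

f_cont = n_cd * np.pi * (d / 2)**2            # volume fraction, continuous CDs
f_seg = f_cont * seg_len / (seg_len + gap)    # reduced by the longitudinal fill factor
print(f"continuous: {f_cont:.1%}, segmented: {f_seg:.1%}")

The segmented morphology removes roughly the gap fraction of the damaged volume, which is the sense in which discontinuous CDs limit the degradation of the superconducting matrix.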
The superior flux pinning effect of discontinuous CDs can be further modified by tuning the direction-dispersion. We irradiated GdBCO-coated conductors with 80 MeV Xe ions, where the incident ion beams were tilted from the c-axis by θi to introduce various kinds of direction-dispersed CDs: a parallel configuration composed of CDs parallel to the c-axis, a bimodal angular configuration composed of CDs tilted at θi = ±45° relative to the c-axis, and a trimodal angular configuration composed of CDs tilted at θi = 0° and ±45° [50]. Figure 20a shows a cross-sectional TEM image of the GdBaCuO-coated conductor irradiated with 80 MeV Xe ions at θi = 0° and ±45°. The morphologies of the CDs are schematically emphasized in Figure 20b. Interestingly, the 80 MeV Xe-ion beams create CDs with different morphologies depending on the irradiation angle θi: thick and elongated CDs are formed along the ion path at θi = 45°, whereas the 80 MeV ions at θi = 0° create short segmented CDs along their length. In general, the morphology of CDs is determined by the value of Se, which is the energy transferred from the incident ions to the electronic excitation. A thermal spike model [51,52], one of the models used to interpret the formation of irradiation defects through electronic excitation, can describe the direction-dependent morphologies of CDs in high-Tc superconductors by taking into account the anisotropy of the thermal diffusivity [17,50]. According to the thermal spike model, the energy of the electronic excitation is converted into thermal energy of the lattice, which is the source for the formation of irradiation defects. In high-Tc superconductors, the thermal diffusivity along the c-axis is smaller than that along the other crystallographic axes, which suppresses the temperature spread in the planes containing the c-axis. Thus, an incident ion beam tilted from the c-axis causes more severe structural damage, resulting in the formation of elongated CDs with a thicker diameter.
Figure 21 shows the magnetic field angular dependence of Jc for GdBCO-coated conductors irradiated with 80 MeV and 270 MeV Xe ions, where the irradiation angles are θi = 0° for the parallel CD configurations and θi = 0°, ±45° for the trimodal angular configurations, respectively. The trimodal angular distribution shows higher Jc values than the parallel CD configuration at 70 K under the same irradiation energy. This suggests that the direction-dispersion of CDs is more effective in enhancing the flux pinning over a wide magnetic field angular region, as mentioned in Section 3.2. It is noteworthy that the trimodal angular configuration produced by 80 MeV Xe ions shows the highest Jc of all the CD configurations over the whole magnetic field angular region at 70 K. The 80 MeV trimodal configuration consists of short segmented CDs along the c-axis and elongated CDs crossing at θi = ±45°, as shown in Figure 20a. For B || c, the motion of double kinks of flux lines, peculiar to one-dimensional PCs, is suppressed by the gaps between the segmented CDs, as shown in Figure 18b. Furthermore, the continuous CDs crossing at θi = ±45° assist in trapping the unpinned segments of flux lines, as shown in Figure 22a. The pinning of kinks of flux lines is effective for a further improvement of Jc [53,54]. Therefore, the combination of discontinuous CDs and continuous ones crossing at θi = ±45° provides the enhancement of Jc at B || c. Figure 21. Magnetic field angular dependence of Jc at a magnetic field of 4 T and temperatures of (a) 84 K, (b) 77.3 K, and (c) 70 K for GdBCO-coated conductors irradiated with 80 MeV and 270 MeV Xe ions, where the irradiation angles are θi = 0° for the parallel CD configurations and θi = 0°, ±45° for the trimodal angular configurations, respectively. The broken lines in (b) and (c) show the Jc properties of the unirradiated sample as reference data. Reprinted with permission from [50], copyright 2020 by the Japan Society of Applied Physics. There is a possibility that direction-dispersed CDs with "complete discontinuity" could provide an even higher and more isotropic Jc in high-Tc superconductors. In fact, BaHfO3
nanorods tend to grow discontinuously and to be widely dispersed in direction, causing a significant improvement of Jc over a wide magnetic field angular range in REBCO thin films [55,56]. Irradiation using lighter ions with lower energy, which provides a lower Se in high-Tc superconductors (e.g., Kr-ion irradiation at 80 MeV, where Se = 16.0 keV/nm), may produce discontinuous CDs even in directions tilted from the c-axis. However, there is a trade-off between the discontinuity of CDs and the thickness of CDs for CD formation by heavy-ion irradiation: discontinuous CDs tend to have thin diameters [18,25], and the elementary pinning force of a single segmented column with a thin diameter is weak. Thus, the discontinuity of CDs does not always provide a strong pinning landscape in the heavy-ion irradiation process. The introduction of direction-dispersed, discontinuous, and thick CDs by the ion irradiation process can be the key to making high Jc fairly isotropic in high-Tc superconductors.
Conclusions
We have systematically examined the modification of the anisotropy of Jc in REBCO thin films by using heavy-ion irradiation: the morphology and the configuration of the irradiation defects were controlled through the irradiation conditions, such as the irradiation energy and the incident direction. Direction-dispersed CDs were designed in REBCO thin films to push up the Jc in the magnetic field angular region from B || c to B || ab by controlling the irradiation directions. When the directions of CDs were extensively dispersed around the c-axis, the Jc was enhanced over a wider magnetic field angular region centered at B || c. The Jc at B || ab, on the other hand, was hardly affected even by CDs tilted toward the ab-plane, which is attributed to the strong line tension energy of flux lines around B || ab in anisotropic superconductors. We demonstrated the improvement of Jc at B || ab by the introduction of CDs whose angle relative to the ab-plane was controlled down to Θi = 5°. These results suggest that direction-dispersed CDs can provide an isotropic enhancement of Jc over the whole magnetic field angular region when the angles of the CDs are matched with the anisotropic line tension energy of the flux lines.
Another promising morphology of CDs, i.e., discontinuous CDs, which can be introduced by heavy-ion irradiation with relatively low energy, shows large potential for the enhancement of Jc over a wide magnetic field angular region. In particular, the combination of discontinuity and direction-dispersion leads to a further enhancement of Jc: the gaps in discontinuous CDs suppress the motion of flux lines, while the direction-dispersion of CDs produces strong flux pinning over a wide magnetic field angular region.
"Physics",
"Materials Science"
] |
A rich structure related to the construction of analytic matrix functions
We analyse two special cases of $\mu$-synthesis problems which can be reduced to interpolation problems in the set of analytic functions from the disc into the symmetrised bidisc and into the tetrablock. For these inhomogeneous domains we study the structure of interconnections between the set of analytic functions from the disc into the given domain, the matricial Schur class, the Schur class of the bidisc, and the set of pairs of positive kernels on the bidisc subject to a boundedness condition. We use the theories of Hilbert function spaces and of reproducing kernels to establish these connections. We give a solvability criterion for the interpolation problem that arises from the $\mu$-synthesis problem related to the tetrablock.
Introduction
Engineering provides some hard challenges for classical analysis. In signal processing and, in particular, control theory, one often needs to construct analytic matrix-valued functions on the unit disc D or the right half-plane subject to finitely many interpolation conditions and to some subtle boundedness requirements. The resulting problems are close in spirit to the classical Nevanlinna-Pick problem, but established operator- or function-theoretic methods, which succeed so elegantly for the classical problem, do not seem to help for even minor variants. For example, this is so for the spectral Nevanlinna-Pick problem [13,22], which is to construct an analytic square-matrix-valued function F in D that satisfies a finite collection of interpolation conditions and the boundedness condition r(F(λ)) ≤ 1 for all λ ∈ D, where r denotes the spectral radius.
This problem is a special case of the µ-synthesis problem of H ∞ control, which is recognised as a hard and important problem in the theory of robust control [16,17]. Even the special case of the spectral Nevanlinna-Pick problem for 2 × 2 matrices awaits a definitive analytic theory.
A major difficulty in µ-synthesis problems is to describe the analytic maps from D to a suitable domain X ⊂ C^n or its closure X̄. In the classical theory X is a matrix ball, and the realisation formula presents the general analytic map from D to X in terms of a contractive operator on Hilbert space; this formula provides a powerful approach to a variety of interpolation problems. In the µ variants X can be unbounded, nonconvex, inhomogeneous and non-smooth, properties which present difficulties both for an operator-theoretic approach and for standard methods in several complex variables.
In this paper we exhibit, for certain naturally arising domains X , a rich structure of interconnections between four naturally arising objects of analysis in the context of 2 × 2 analytic matrix functions on D. This rich structure combines with the classical realisation formula and Hilbert space models in the sense of Agler to give an effective method of constructing functions in the space Hol(D, X ) of analytic maps from D to X , and thereby of obtaining solvability criteria for two cases of the µ-synthesis problem.
The rich structure is summarised in the following diagram, which we call the rich saltire for the domain X.
[Diagram (1.1): the rich saltire for the domain X, with vertices S 2×2, S 2, Hol(D, X) and R, joined by the maps described below.]
The objects are defined as follows: S 2×2 is the 2 × 2 matricial Schur class of the disc, that is, the set of analytic 2 × 2 matrix functions F on D such that ‖F(λ)‖ ≤ 1 for all λ ∈ D; S 2 is the Schur class of the bidisc D², that is, Hol(D², D); Hol(D, X) is the space of analytic maps from D to X; and R is a set of pairs (N, M) of kernels on D² for which an associated hermitian symmetric function, defined for all z, λ, w, µ ∈ D, is positive semidefinite on D² and is of rank 1 (see Section 5). The arrows in diagram (1.1) denote mappings and correspondences that will be described in Sections 4 to 7.
In this paper we consider the rich saltire for two domains X : the symmetrised bidisc and the tetrablock, defined below. Whereas S 2×2 and S 2 are classical objects that have been much studied, Hol (D, X ) and R have been introduced and studied within the last two decades in connection with special cases of the robust stabilisation problem. The maps in the upper northeast triangle of the rich saltire for a domain X do not depend on X .
The closed symmetrised bidisc is defined to be the set Γ = {(z + w, zw) : z, w ∈ ∆}. The tetrablock is the domain E = {x ∈ C³ : 1 − x1z − x2w + x3zw ≠ 0 whenever |z| ≤ 1 and |w| ≤ 1}. The closure of E is denoted by Ē. The symmetrised bidisc arises naturally in the study of the spectral Nevanlinna-Pick problem for 2 × 2 matrix functions. In a similar way, the tetrablock arises from another special case of the µ-synthesis problem for 2 × 2 matrix functions [22]. Define Diag = {diag(z, w) : z, w ∈ C}, the set of 2 × 2 diagonal matrices, and, for a 2 × 2 matrix A, µ Diag(A) = (inf{‖X‖ : X ∈ Diag, 1 − AX is singular})⁻¹.
The µ Diag-synthesis problem: given points λ1, . . . , λn ∈ D and target matrices W1, . . . , Wn ∈ C 2×2, one seeks an analytic 2 × 2 matrix-valued function F such that F(λj) = Wj for j = 1, . . . , n, and µ Diag(F(λ)) < 1 for all λ ∈ D. This problem is equivalent to the interpolation problem for Hol(D, E) studied in this paper; see [1, Theorem 9.2]. Here Hol(D, E) is the space of analytic maps from the unit disc D to E.
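For readers who want to experiment with µ Diag numerically, the following brute-force sketch estimates µ Diag(A) for a 2 × 2 matrix A directly from the definition above; the grid resolution and the test matrix are arbitrary choices, and the computation uses only the elementary identity det(I − A diag(z, w)) = 1 − a11 z − a22 w + (det A) zw.

import numpy as np

def mu_diag(A, rmax=10.0, nr=400, nphase=48):
    # minimise max(|z|, |w|) over diagonal X = diag(z, w) with I - AX singular
    a11, a22, detA = A[0, 0], A[1, 1], np.linalg.det(A)
    best = np.inf
    phases = np.exp(2j * np.pi * np.arange(nphase) / nphase)
    for r in np.linspace(1e-3, rmax, nr):
        for z in r * phases:
            denom = detA * z - a22
            if abs(denom) > 1e-12:
                w = (a11 * z - 1) / denom   # makes det(I - A diag(z, w)) = 0
                best = min(best, max(abs(z), abs(w)))
    return 0.0 if np.isinf(best) else 1.0 / best

A = np.array([[0.5, 0.3], [0.2, 0.4]])
print(f"mu_Diag(A) ~ {mu_diag(A):.3f}")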
In the case of the symmetrised bidisc a number of components of the rich saltire for Γ were presented by Agler and two of the present authors in [3]. Aspects of the rich saltire for Γ were used in [3, Theorem 1.1] to prove a solvability criterion for the 2 × 2 spectral Nevanlinna-Pick interpolation problem. In this paper we give the final picture of the rich saltire for the symmetrised bidisc.
In the case of the tetrablock, with the aid of the rich saltire we obtain a solvability criterion for the µ Diag -synthesis problem. A strategy to obtain the solvability criterion is as follows. Reduce the problem to an interpolation problem in the set of analytic functions from the disc to the tetrablock, induce a duality between the set Hol(D, E) and S 2 , then use Hilbert space models for S 2 to obtain necessary and sufficient conditions for solvability.
The main result of this paper is the existence of the rich saltire, and the principal application thereof is the equivalence of (1) and (3) in the following assertion.
Theorem 1.1. Let λ1, . . . , λn be distinct points in D, let W1, . . . , Wn be 2 × 2 complex matrices such that (Wj)11(Wj)22 ≠ det Wj for each j, and let (x1j, x2j, x3j) = ((Wj)11, (Wj)22, det Wj) for each j. The following three conditions are equivalent.
(1) There exists an analytic 2 × 2 matrix function F in D such that F(λj) = Wj for j = 1, . . . , n, and µ Diag(F(λ)) < 1 for all λ ∈ D.
(2) There exists a rational function x : D → E such that x(λj) = (x1j, x2j, x3j) for j = 1, . . . , n.
(3) There exist positive semidefinite 3n-square matrices N and M, with N of rank 1, satisfying a certain linear matrix inequality in the data (stated in Theorem 8.1).
This result is a part of Theorem 8.1, which we establish in Section 8, and of [1, Theorem 9.2] (Theorem 3.1). The necessary and sufficient condition for the existence of a solution of the µ Diag-synthesis problem for 2 × 2 matrix functions with n > 2 interpolation points is given in terms of the existence of positive 3n-square matrices N, M satisfying a certain linear matrix inequality in the data, but with the constraint that N have rank 1. This kind of optimization problem can be addressed with the aid of numerical algorithms (for example, [14]), though we observe that, on account of the rank constraint, it is not a convex problem.
The paper is organized as follows. Sections 2 and 3 describe the basic properties of the symmetrized bidisc Γ and the tetrablock E respectively. They also present known results on the reduction of a 2 × 2 spectral Nevanlinna-Pick problem to an interpolation problem in the space Hol(D, Γ) of analytic functions from D to Γ, and on the reduction of a µ Diag-synthesis problem to an interpolation problem in the space Hol(D, E) of analytic functions from D to E. In Section 4 we construct maps between the sets S 2×2 and S 2 using the linear fractional transformation F_{F(λ)}(z), λ, z ∈ D, for F ∈ S 2×2. Relations between S 2×2 and the set of analytic kernels on D² are given in Section 5. Section 6 presents the rich saltire (6.1) for the symmetrised bidisc. The rich saltire (7.1) for the tetrablock is described in Section 7. Here we present a duality between the space Hol(D, E) and a subset of the Schur class S 2 of the bidisc. In Section 8 we use Hilbert space models for functions in S 2 to obtain necessary and sufficient conditions for the solvability of the interpolation problem in the space Hol(D, E).
The closed unit disc in C will be denoted by ∆ and the unit circle by T. The complex conjugate transpose of a matrix A will be written A * . The symbol I will denote an identity operator or an identity matrix, according to context. The C * -algebra of 2 × 2 complex matrices will be denoted by M 2 (C).
The symmetrized bidisc G
The open and closed symmetrized bidiscs are the subsets G = {(z + w, zw) : z, w ∈ D} and Γ = {(z + w, zw) : z, w ∈ ∆} of C². The sets G and Γ are relevant to the 2 × 2 spectral Nevanlinna-Pick problem because, for a 2 × 2 matrix A, if r(·) denotes the spectral radius of a matrix, r(A) ≤ 1 if and only if (tr A, det A) ∈ Γ. (2.3) Accordingly, if F is an analytic 2 × 2 matrix function on D satisfying r(F(λ)) ≤ 1 for all λ ∈ D, then the function (tr F, det F) belongs to the space Hol(D, Γ) of analytic functions from D to Γ. A converse statement also holds: every ϕ ∈ Hol(D, Γ) lifts to an analytic 2 × 2 matrix function F on D such that (tr F, det F) = ϕ and consequently r(F(λ)) ≤ 1 for all λ ∈ D [8, Theorem 1.1]. The 2 × 2 spectral Nevanlinna-Pick problem can therefore be reduced to an interpolation problem in Hol(D, Γ). There is a slight complication in the case that any of the target matrices are scalar multiples of the identity matrix; for simplicity we shall exclude this case in the present paper.
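The equivalence (2.3) is easy to test numerically: a point (s, p) lies in Γ exactly when both zeros of λ² − sλ + p lie in the closed unit disc, and for (s, p) = (tr A, det A) these zeros are the eigenvalues of A. A minimal sketch, with randomly generated test matrices, follows.

import numpy as np

def in_Gamma(s, p, tol=1e-9):
    # (s, p) in Gamma iff both roots of z^2 - s z + p lie in the closed disc
    return all(abs(r) <= 1 + tol for r in np.roots([1, -s, p]))

rng = np.random.default_rng(0)
for _ in range(1000):
    A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    r = max(abs(np.linalg.eigvals(A)))
    assert (r <= 1) == in_Gamma(np.trace(A), np.linalg.det(A))
print("spectral radius criterion agrees with Gamma membership")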
The relation (2.3) scales in an obvious way: for ρ > 0, r(A) ≤ ρ if and only if (tr A/ρ, det A/ρ²) ∈ Γ.
Theorem 2.1. Let λ1, . . . , λn be distinct points in D and let W1, . . . , Wn be 2 × 2 matrices, none of them a scalar multiple of the identity. The following two statements are equivalent.
(1) There exists an analytic 2 × 2 matrix function F on D such that F(λj) = Wj for j = 1, . . . , n, (2.5) and sup_{λ∈D} r(F(λ)) < 1.
(2) There exists h ∈ Hol(D, Γ) such that h(λj) = (tr Wj, det Wj) for j = 1, . . . , n, and h(D) is relatively compact in G.
Certain rational functions play a central role in the analysis of Γ.
In particular, the function Φ given by Φ(z, s, p) = (2zp − s)/(2 − zs) is defined and analytic on D × Γ (since |s| ≤ 2 when (s, p) ∈ Γ), and Φ extends analytically to (∆ × Γ) \ {(z, 2z̄, z̄²) : z ∈ T}. See [7] for an account of how Φ arises from operator-theoretic considerations. The 1-parameter family Φ(ω, ·), ω ∈ T, comprises the set of magic functions of the domain G. The notion of magic functions of a domain is explained in [10], but for this paper all we shall need is the fact that Φ(D × Γ) ⊂ ∆ and a converse statement: if w ∈ C² and |Φ(z, w)| ≤ 1 for all z ∈ D then w ∈ Γ; see for example [9, Theorem 2.1] (the result is also contained in [6, Theorem 2.2] in a different notation).
A Γ-inner function is the analogue for Hol(D, Γ) of an inner function in the Schur class: a function h ∈ Hol(D, Γ) is Γ-inner if the radial limit lim_{r→1−} h(rλ) (2.7) belongs to bΓ for almost all λ ∈ T, where bΓ denotes the distinguished boundary of Γ. By Fatou's Theorem, the radial limit (2.7) exists for almost all λ ∈ T with respect to Lebesgue measure. A good understanding of rational Γ-inner functions is likely to play a part in any future solution of the finite interpolation problem for Hol(D, Γ), since such a problem has a solution if and only if it has a rational Γ-inner solution (for example, [15, Theorem 4.2]). The distinguished boundary bΓ of G (or Γ) is the Šilov boundary of the algebra of continuous functions on Γ that are analytic in G. It is the symmetrisation of the 2-torus: bΓ = {(z + w, zw) : |z| = |w| = 1}. The royal variety R = {(2z, z²) : |z| < 1} plays an important role in the theory of Γ-inner functions.
The tetrablock E
The open and closed tetrablocks are the subsets E = {x ∈ C³ : 1 − x1z − x2w + x3zw ≠ 0 whenever |z| ≤ 1 and |w| ≤ 1} and Ē = {x ∈ C³ : 1 − x1z − x2w + x3zw ≠ 0 whenever |z| < 1 and |w| < 1} of C³. The tetrablock was introduced in [1] and is related to the µ Diag-synthesis problem. The following theorem, proved in [1, Theorem 9.2], states that the µ Diag-synthesis problem is equivalent to the corresponding interpolation problem for Hol(D, E) (Theorem 3.1). The following functions play a central role in the analysis of the tetrablock [1].
Definition 3.2. The functions Ψ, Υ : C⁴ → C are defined for (z, x1, x2, x3) ∈ C⁴ such that x2z ≠ 1 and x1z ≠ 1 respectively by Ψ(z, x) = (x3z − x1)/(x2z − 1) and Υ(z, x) = (x3z − x2)/(x1z − 1), where x = (x1, x2, x3). In particular, Ψ and Υ are defined and analytic everywhere except where x2z = 1 and x1z = 1 respectively. Note that, for x ∈ C³ such that x1x2 = x3, the functions Ψ(·, x) and Υ(·, x) are constant and equal to x1 and x2 respectively. In this paper we will use the function Ψ to define certain maps in the rich saltire of the tetrablock. By [1, Theorem 2.4], we have the following statement (Proposition 3.3).
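As an illustration of how Ψ is used, the sketch below tests membership of a point x in the closed tetrablock by sampling |Ψ(·, x)| on the unit circle. It assumes the characterisation from [1] that, in the nondegenerate case x1x2 ≠ x3, x lies in Ē exactly when |Ψ(z, x)| ≤ 1 for all z in the closed disc; by the maximum principle, sampling the circle suffices when Ψ(·, x) is analytic on the disc. The sample points are invented.

import numpy as np

def in_closed_tetrablock(x1, x2, x3, npts=2000, tol=1e-9):
    z = np.exp(2j * np.pi * np.arange(npts) / npts)   # sample the unit circle
    psi = (x3 * z - x1) / (x2 * z - 1)                # Psi(z, x)
    return np.max(np.abs(psi)) <= 1 + tol

print(in_closed_tetrablock(0.3, 0.4, 0.2))   # True: a point well inside the closed tetrablock
print(in_closed_tetrablock(1.2, 0.1, 0.3))   # False: |x1| > 1 is impossible there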
By [1, Theorem 2.9], E is polynomially convex, and so the distinguished boundary bE of E exists and is the Šilov boundary of the algebra A(E) of continuous functions on Ē that are analytic on E. We have the following alternative descriptions of bE [1, Theorem 7.1].
(i) x ∈ bE; (ii) x ∈ Ē and |x3| = 1; (iii) x1 = x̄2x3, |x3| = 1 and |x2| ≤ 1; (iv) either x1x2 ≠ x3 and Ψ(·, x) is an automorphism of D, or x1x2 = x3 and |x1| = |x2| = 1. By [1, Corollary 7.2], bE is homeomorphic to ∆ × T. By a peak point of Ē we mean a point p for which there is a function f ∈ A(E) such that f(p) = 1 and |f(x)| < 1 for all x ∈ Ē \ {p}. An E-inner function is an analytic function ϕ : D → Ē such that the radial limit lim_{r→1−} ϕ(rλ) (3.5) belongs to bE for almost all λ ∈ T.
By Fatou's Theorem, the radial limit (3.5) exists for almost all λ ∈ T with respect to Lebesgue measure. Note that, for an E-inner function ϕ = (ϕ1, ϕ2, ϕ3) : D → Ē, the function ϕ3 is an inner function on D in the classical sense.
A finite interpolation problem for Hol(D, E) has a solution if and only if it has a rational E-inner solution; see Theorem 8.1.
A realisation formula
In this section we construct maps between the sets S 2×2 and S 2. For Hilbert spaces H, G, U and V, an operator P = [P11 P12; P21 P22] from H ⊕ U to G ⊕ V, and an operator X : V → U for which I − P22X is invertible, we denote by F_P(X) the linear fractional transformation F_P(X) := P11 + P12X(I − P22X)⁻¹P21. Then F_P(X) is an operator from H to G.
The following standard identity is a matter of verification.
Proposition 4.1. Let P = [P11 P12; P21 P22] and Q = [Q11 Q12; Q21 Q22] be operators from H ⊕ U to G ⊕ V, and let X and Y be operators from V to U for which I − P22X and I − Q22Y are invertible. Then I − F_Q(Y)*F_P(X) admits a standard factorisation in terms of I − Y*X and I − Q*P.
Proposition 4.2. Let H, G, U and V be Hilbert spaces. Let P = [P11 P12; P21 P22] be an operator from H ⊕ U to G ⊕ V and let X : V → U be an operator for which I − P22X is invertible. Then if ‖X‖ ≤ 1 and ‖P‖ ≤ 1 we have ‖F_P(X)‖ ≤ 1.
Proof. By Proposition 4.1, applied with Q = P and Y = X, the operator I − F_P(X)*F_P(X) is a sum of a term with factor I − X*X and a term with factor I − P*P. By assumption, ‖X‖ ≤ 1 and ‖P‖ ≤ 1, and so I − X*X ≥ 0 and I − P*P ≥ 0. Hence I − F_P(X)*F_P(X) ≥ 0, that is, ‖F_P(X)‖ ≤ 1.
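Proposition 4.2 is easy to check numerically. The sketch below generates a random contractive block operator P and a contraction X of compatible block sizes (all dimensions are arbitrary choices for the example) and evaluates the norm of F_P(X).

import numpy as np

rng = np.random.default_rng(1)

def random_contraction(m, n):
    M = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))
    return M / (np.linalg.norm(M, 2) + 1e-6)   # scale to spectral norm < 1

h, g, u, v = 2, 2, 3, 3                    # dim H, G, U, V
P = random_contraction(g + v, h + u)       # P : H (+) U -> G (+) V
P11, P12 = P[:g, :h], P[:g, h:]
P21, P22 = P[g:, :h], P[g:, h:]
X = random_contraction(u, v)               # X : V -> U

F = P11 + P12 @ X @ np.linalg.inv(np.eye(v) - P22 @ X) @ P21
print(f"||F_P(X)|| = {np.linalg.norm(F, 2):.4f} (should be <= 1)")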
Thus, for F = [Fij], i, j = 1, 2, in S 2×2 and λ, z ∈ D, the linear fractional transformation F_{F(λ)}(z) is given by F_{F(λ)}(z) = F11(λ) + F12(λ)z(1 − F22(λ)z)⁻¹F21(λ).
Definition 4.5. The map SE : S 2×2 → S 2 is given by SE(F)(z, λ) = F_{F(λ)}(z) for F ∈ S 2×2 and z, λ ∈ D.
Proposition 4.6. The map SE is well defined.
Remark 4.7. In Definition 4.5, when either F21 = 0 or F12 = 0, the function F_{F(λ)}(z) is independent of z, and so in general the map SE can lose some information about F. However, in the case of the symmetrised bidisc, no information is lost; see Remark 6.15.
5. Relations between S 2×2 and the set of analytic kernels on D²
Basic notions and statements on analytic kernels can be found in the book [4] and in Aronszajn's paper [11].
Let N and M be analytic kernels on D², and let K_{N,M} be the associated hermitian symmetric function on D² × D² determined by N and M. We define the set R1 to be the set of pairs (N, M) of analytic kernels on D² for which K_{N,M} is an analytic kernel on D² of rank 1.
Proof. By definition, N_F and M_F are given by explicit formulas for z, λ, w, µ ∈ D, and both N_F and M_F are clearly analytic. To prove that (N_F, M_F) ∈ R1 one has to check that K_{N_F,M_F} is an analytic kernel on D² of rank 1. Clearly K_{N_F,M_F} is analytic, and, by Proposition 4.3, it is positive semidefinite and of rank 1 for all z, λ, w, µ ∈ D. Thus K_{N_F,M_F} is an analytic kernel on D² of rank 1, and therefore (N_F, M_F) ∈ R1.
Proof. For every F = [F11 F12; 0 F22] ∈ S 2×2, the functions γ and η are given by explicit formulas for all λ, z ∈ D. Thus N_F(z, λ, w, µ) = 0 for z, λ, w, µ ∈ D, and so N_F has rank 0. Furthermore, M_F(z, λ, w, µ) is independent of z and w, and hence M_F is a kernel on D². Clearly both N_F and M_F are analytic. It is easy to see that K_{N_F,M_F} is independent of z and w and is an analytic kernel on D² of rank 1. Therefore (N_F, M_F) ∈ R1.
By Propositions 5.1 and 5.2, the map Upper E is well defined.
5.2. Procedure UW and the set-valued map Upper W : R11 → S 2×2
Let F ∈ S 2×2 be such that F21 ≠ 0. Then the kernel N_F has rank 1. In this case Upper E maps into a subset R11 of R1 rather than onto all of R1. By the Moore-Aronszajn Theorem [4, Theorem 2.23], for each kernel k on a set X there exists a unique Hilbert function space H_k on X that has k as its kernel.
Let us describe the procedure for the construction of a function in S 2×2 from a pair of kernels in R 11 .
Given (N, M) ∈ R11, one seeks functions f, g and v_{z,λ}, for z, λ, w, µ ∈ D, and a function Ξ ∈ S 2×2 realising the kernels N and M. Since K_{N,M} has rank 1, the pair (N, M) ∈ R11 can be presented in a factored form, and the resulting relation (5.3) can be rewritten as the equality (5.4) of two expressions. The left-hand side of (5.4) can be written as the Gramian of one family of vectors, and the right-hand side of (5.4) as the Gramian of another family; thus the relation (5.3) can be expressed by the statement that the Gramians of the two families of vectors coincide. Hence there is an isometry L between the closed linear spans of the two families. Then, for z, λ ∈ D, we obtain a pair of equations. Since L is a contraction, ‖D‖ ≤ 1 and I_{H_M} − Dλ is invertible for all λ ∈ D, so the first equation can be solved. Recall that, for the operator L = [A B; C D], the linear fractional transformation F_L(λ) = A + Bλ(I_{H_M} − Dλ)⁻¹C is defined for λ ∈ D. Since L is a contraction, by Proposition 4.2 and Remark 4.4, ‖F_L(λ)‖ ≤ 1 and F_L is analytic on D. Since A and Bλ(I_{H_M} − Dλ)⁻¹C are operators from C² to C², F_L is in S 2×2. Then Ξ = F_L has the required properties.
The function Ξ constructed with Procedure UW is not necessarily unique, since the functions f, g and v_{z,λ} are not uniquely defined. The following proposition gives relations between different Ξ obtained using Procedure UW.
Let (N, M) ∈ R11 admit realisations as above for all z, λ, w, µ ∈ D, and let Ξ1 and Ξ2 be constructed from (N, M) using Procedure UW with the functions f1, g1, v1 and f2, g2, v2, respectively. Then Ξ1 and Ξ2 agree up to unimodular constant factors.
Proof. It is easy to see that f2 = ζ_f f1 and g2 = ζ_g g1 for some ζ_f, ζ_g ∈ T. By Theorem 5.5, Ξ1 and Ξ2 satisfy the corresponding realisation identities for all z, λ ∈ D. Since f1 is a nonzero analytic function of two variables, the set of zeros of f1 is nowhere dense in D², and the stated relation between Ξ1 and Ξ2 follows.
Proposition 5.6 leads us to the following result.
Remark 5.16. The pair of kernels (N, M) from Theorem 5.15 are known as Agler kernels for ϕ ∈ S 2. There are papers with constructive proofs of the existence of Agler kernels; see for example [12], [20] and [21]. One can see that the Agler kernels (N, M) for ϕ ∈ S 2 satisfy the corresponding decomposition identity.
6. Relations between Hol (D, Γ) and other objects in the rich saltire
The rich saltire for the symmetrized bidisc is the following.
[Diagram (6.1): the rich saltire for the symmetrised bidisc, with vertices S 2×2, S 2, Hol (D, Γ) and R11.]
We will define the maps of the rich saltire for G and describe connections between the different maps in the diagram (6.1).
The following is trivial.
Definition 6.6. The map Lower E G : Hol (D, Γ) → S 2 is given by Lower E G (h)(z, λ) = Φ(z, h(λ)) for h ∈ Hol (D, Γ) and z, λ ∈ D.
Proposition 6.7. The map Lower E G is well defined.
One can ask the question: which subset of S 2 corresponds to Hol (D, Γ)? (6.2) If h = (s, p) ∈ Hol (D, Γ) then, for any fixed λ ∈ D, the map z ↦ Φ(z, h(λ)) is a linear fractional self-map f(z) = (az + b)/(cz + d) of D with the property "b = c". To make the last phrase precise, say that a linear fractional map f of the complex plane has the property "b = c" if f(0) ≠ ∞ and either f is a constant map or, for some a, b and d in C, f(z) = (az + b)/(bz + d). We shall denote the class of such functions f in S 2 by S b=c 2.
The map Lower W G : S b=c 2 → Hol (D, Γ) is given by inverting the correspondence above: for ϕ ∈ S b=c 2, Lower W G (ϕ) is the function h ∈ Hol (D, Γ) such that ϕ(z, λ) = Φ(z, h(λ)) for all z, λ ∈ D. By Proposition 6.8, Lower W G is well defined.
It follows that Φ(z, Lower W G (ϕ)(λ)) = ϕ(z, λ) for all z, λ ∈ D. Thus Lower E G • Lower W G (ϕ) = ϕ for all ϕ ∈ S b=c 2. Therefore Lower W G is the inverse of Lower E G.
Let us consider how the defined maps interact with each other.
Proposition 6.11. The following holds: SE • Left N G = Lower E G.
Proof. Let h ∈ Hol (D, Γ). Then, by Proposition 6.1, for Left N G (h) = F ∈ S 2×2, a direct computation shows that SE (F) = Lower E G (h).
Proof. By Proposition 6.11, SE • Left N G = Lower E G and, by Proposition 6.10, Lower W G is the inverse of Lower E G. The results follow immediately.
For F = Left N G (h), a direct computation gives SE (F)(z, λ) for all z, λ ∈ D, and Left S G (F) = (tr F, det F) = (2F11, F11² − F21F12). Thus Lower E G • Left S G (F) = SE (F) for all F ∈ S 2×2 of this special form. However, for an arbitrary F ∈ S 2×2 we may have Lower E G • Left S G (F) ≠ SE (F), as an explicit example of an F ∈ S 2×2 shows.
Remark 6.15. In Definition 4.5, when either F21 = 0 or F12 = 0, the function F_{F(λ)}(z) is independent of z, and so in general the map SE can lose some information about F. However, in the case of the symmetrised bidisc, no information is lost: for h = (s, p) ∈ Hol (D, Γ) such that s² ≠ 4p, the function h can be recovered by Definition 6.6, and the remaining case is handled similarly.
6.4. The map SW G : R11 → Hol (D, Γ). By Proposition 5.7, for (N, M) ∈ R11, the set SW G (N, M) is defined in terms of a function Ξ ∈ S 2×2 constructed by Procedure UW for (N, M). The latter set is independent of the choice of Ξ.
Relations between SW G and other maps in the rich saltire are the following.
Proof. By Corollary 6.12, the claimed identity reduces to relations already established. It is obvious that Left N G • Lower W G (ϕ) ∈ S 2×2, and the result then follows by Proposition 5.13.
7. Relations between Hol (D, E) and other objects in the rich saltire
The rich saltire for the tetrablock is the following.
[Diagram (7.1): the rich saltire for the tetrablock, with vertices S 2×2, S 2, Hol (D, E) and R11.]
We will define the maps of the rich saltire which depend on E and describe connections between the different maps in diagram (7.1).
7.2. The map Left N E : Hol (D, E) → S 2×2
Proof. Consider first the case that x1x2 = x3. By Proposition 3.3, |x1(λ)|, |x2(λ)| ≤ 1 for all λ ∈ D. Then the function F defined below
is in S^{2×2} and has the required properties (7.2) and (7.3), and moreover it is the only function with these properties. In the case that x_1x_2 ≠ x_3, the H^∞ function x_1x_2 − x_3 is nonzero, and so it has a unique inner-outer factorisation, say ϕ e^C = x_1x_2 − x_3, where ϕ is inner, e^C is outer and e^C(0) ≥ 0. Let ... One can see that ... and |F_{12}| = e^{Re ½C} = |F_{21}| a.e. on T, F_{21} is outer, and F_{21}(0) ≥ 0. It follows that F is the only matrix satisfying the required properties (7.2) and (7.3).
Let us check that F ∈ S^{2×2}. Clearly F is holomorphic on D. We must show that ‖F(λ)‖ ≤ 1 for all λ ∈ D. Let us prove that I − F(λ)*F(λ) is positive semidefinite for all λ ∈ D. It is enough to show that, for all λ ∈ D, the diagonal entries of I − F(λ)*F(λ) are non-negative and det(I − F(λ)*F(λ)) ≥ 0. Since |F_{12}| = |F_{21}| a.e. on T and ... a.e. on T. At almost every λ ∈ T, ... Let D_{11} and D_{22} be the diagonal entries of I − F*F. Since x(λ) ∈ E for λ ∈ D, by Proposition 3.3, ... for all λ ∈ D. Thus, for almost every λ ∈ T, ... By Proposition 3.3, ... for all λ ∈ D. Hence, for almost every λ ∈ T, ... for almost every λ ∈ T. Thus ‖F(λ)‖ ≤ 1 for almost every λ ∈ T, and so, by the Maximum Modulus Principle, ‖F(λ)‖ ≤ 1 for all λ ∈ D.
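The argument above only needs two elementary facts, which the garbled displays presumably spell out; stated explicitly,
\[
\|F(\lambda)\| \le 1 \iff I - F(\lambda)^{*}F(\lambda) \succeq 0,
\]
and a 2×2 Hermitian matrix is positive semidefinite if and only if its diagonal entries and its determinant are all non-negative, which is why checking D_{11}, D_{22} and det(I − F*F) on T suffices before invoking the Maximum Modulus Principle.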
Definition 7.2. The map Left N_E : Hol(D, E) → S^{2×2} is given by ... (ii) Let us consider the following example: the function F on D which is defined by ... Clearly, F ∈ S^{2×2}. Then ... and, by Definition 7.2, ... 7.3. The maps Lower E_E : Hol(D, E) → S_2^{lf} and Lower W_E : S_2^{lf} → Hol(D, E). Lemma 7.5. Let ϕ ∈ S_2 be such that ϕ(·, λ) is a linear fractional map for all λ ∈ D. Then ϕ can be written as ϕ(z, λ) = (a(λ)z + b(λ))/(c(λ)z + 1), where a, b, c are functions from D to C, and b is analytic on D. Moreover, if c is analytic on D, then so is a.
Proof. Let ϕ ∈ S_2 be such that ϕ(·, λ) is a linear fractional map for all λ ∈ D. Then we can write ϕ(z, λ) = (a(λ)z + b(λ))/(c(λ)z + d(λ)), where a, b, c, d are functions from D to C. Since ϕ ∈ S_2, up to cancellation, ϕ(·, λ) does not have a pole at 0 for any λ ∈ D. Thus, without loss of generality, we may write ϕ(z, λ) = (a(λ)z + b(λ))/(c(λ)z + 1) for all z, λ ∈ D. Moreover, b(λ) = ϕ(0, λ) for all λ ∈ D, and so b is analytic on D.
Suppose c is analytic on D. Then for all z, λ ∈ D, and so a is analytic on D.
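The omitted display in this step can be reconstructed directly from the form of ϕ established above: clearing the denominator and solving for a gives, for any fixed z ≠ 0,
\[
a(\lambda) \;=\; \frac{\varphi(z,\lambda)\,\bigl(c(\lambda)\,z + 1\bigr) - b(\lambda)}{z},
\]
which expresses a pointwise as an algebraic combination of functions analytic in λ, and hence a is analytic on D.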
Definition 7.6. Let S_2^{lf} be the subset of S_2 which contains those ϕ for which ϕ(·, λ) is a linear fractional map of the form ϕ(z, λ) = (a(λ)z + b(λ))/(c(λ)z + 1) for all z, λ ∈ D, where c is analytic on D, and if a(λ) = b(λ)c(λ) for some λ ∈ D, then, in addition, |c(λ)| ≤ 1.
Proposition 7.7. Let ϕ be a function on D^2. Then ϕ ∈ S_2^{lf} if and only if there exists a function x ∈ Hol(D, E) such that ϕ(z, λ) = Ψ(z, x(λ)) for all z, λ ∈ D.
By Proposition 7.7, the map Lower E_E below is well defined.
One can use Proposition 7.7 to define the map Lower W_E below.
(ii) For ϕ ∈ S_2^{lf} such that a = bc, and so ϕ(z, λ) = b(λ) for z, λ ∈ D, Lower W_E is the set map ..., where d is analytic and |d| ≤ 1 on D}.
Proposition 7.11. The following relations hold.
and so
Lower W_E ∘ Lower E_E(x) = x.
Let us see how these maps interact with the other maps in the rich saltire (7.1).
Proposition 7.14. The equality Lower E_E ∘ Left S_E = SE holds.
for all z, λ ∈ D. It follows that Lower E_E ∘ Left S_E(F) = SE(F) for all F ∈ S^{2×2}, and so Lower E_E ∘ Left S_E = SE as required.
The idea for SW_E is that we want to follow Procedure UW and then apply the map Left S_E to the function produced. The following proposition will facilitate this. Proof. By Proposition 5.7, a function F = ... for j = 1, ..., n and k = 1, 2, 3. Write L as a block operator matrix ... where A, D act on C^2 and H, respectively. Then, for j = 1, ..., n and k = 1, 2, 3, we obtain the following equations ... From the second of these equations, ... for all j = 1, ..., n and k = 1, 2, 3. Let Θ(λ) = ... Since L is unitary and H is finite-dimensional, Θ is a rational 2 × 2 inner function. Hence the function x := (a, d, det Θ) is a rational E-inner function. We claim that x satisfies the interpolation conditions (8.2): x(λ_j) = (x_{1j}, x_{2j}, x_{3j}) for all j = 1, ..., n.
Theorem 8.1 gives us a criterion for the solvability of the interpolation problem: find x ∈ Hol(D, E) such that x(λ_j) = (x_{1j}, x_{2j}, x_{3j}) for j = 1, ..., n. The process can be summarized as follows.
Procedure SW. Let λ_j and (x_{1j}, x_{2j}, x_{3j}) be as in Theorem 8.1. Let z_1, z_2, z_3 be a triple of distinct points in D, and let N, M be positive 3n × 3n matrices such that rank(N) ≤ 1 and the inequality (8.4) holds.
The criterion for the µ_Diag-synthesis problem (Theorem 1.1) follows from Theorem 3.1 and Theorem 8.1. The tetrablock E is a bounded 3-dimensional domain, which is more amenable to study than the unbounded 4-dimensional domain Σ := {A ∈ C^{2×2} : µ_Diag(A) < 1}.
| 7,865.2 | 2016-08-07T00:00:00.000 | [ "Mathematics" ] |
TYPOLOGICAL SIMILARITIES OF PAREMIOLOGICAL UNITS ABOUT "LABOUR" (ON THE BASIS OF ENGLISH, RUSSIAN AND KAZAKH LANGUAGES)
Purpose of the study: The main task of the article is to present equivalents of English, Russian and Kazakh proverbs and sayings about labour. The semantic characteristics of the paremiological units of the thematic group "labour" in the English, Russian and Kazakh languages are examined comparatively for the first time. Methodology: The semantic components of paremiological units about labour were analyzed, considering their general and specific characteristics, by using the qualitative method. Until now, the paremiological units of the three genetically unrelated languages, English, Russian and Kazakh, had not been fully investigated in linguistics contrastively and comparatively. Component analysis, descriptive analysis, contextual analysis and comparative analysis were used as tools for the investigation in this work. Main findings: Paremiological units of three different peoples, belonging to different language groups, can have universal lexemes among their components. Similar paremiological units in several languages are considered a typological phenomenon, arising from similar stages of life and historical correlation. By studying paremiological units, we have the opportunity to become acquainted with a people's culture, range of interests, worldview, and psychology. Applications of this study: The results and conclusions of the work can be used in the preparation of language textbooks and seminars, teaching materials, and special courses in comparative typology and general linguistics in the future. This work can also serve as the starting point for compiling a trilingual English-Russian-Kazakh dictionary. The novelty of the study: Proverbs and sayings about "labour" in unrelated languages have so far been little studied comparatively and contrastively, especially in linguistics, and such a study of this field will reveal universal and nationally specific features of English, Russian and Kazakh proverbs, which will have practical and theoretical significance.
INTRODUCTION
Every nation's mode of life, mentality, folklore, psychology, history, traditions and culture, as well as the nature and phenomena of the native land, are expressed as words in the language, and these words form part of the vocabulary of that language (Kochemasova & Nazarova, 2016). The vocabulary demonstrates the development of the language. Beyond the vocabulary, the moral values, aesthetic education and worldview of a person, and the centuries-old historical and social experience of our forefathers, are most clearly evident in proverbs and sayings. That is why proverbs and sayings are both a primary expression of the people's mind and a treasury of folk wisdom (Mukhammadieva, 2019).
The Soviet folklorist G.L. Permyakov (1988) gives the following comment on proverbs: "First of all, proverbs and sayings are a language phenomenon, consisting of regular expressions similar to phrases. Secondly, they are logical units representing certain meanings. Thirdly, a proverb or saying is an artistic miniature, one of the most remarkable forms of collecting truths." If we look at the Kazakh explanatory dictionary, proverbs and sayings are defined as follows: "A proverb and saying is a common folk word which is used to preach to someone"; "a proverb is a short, imaginative, concise and rhymed utterance" (Daurenbekov, 2001).
Representatives of different nations perceive the surrounding world in different ways. Consequently, their vocabulary, proverbs, and sayings vary across a wide range. It is nevertheless possible to encounter similar proverbs and sayings, even though each has its own national specific features. A comparative study of different peoples' proverbs and sayings helps to identify their unique features and each language's own specific characteristics. Thus, proverbs and sayings of various countries, even those at different levels of development, may be synonymous with each other: because smaller communities branched off from common language families such as the Slavic, Baltic, Germanic, Celtic, Romance, Iranian, Indian and Turkic languages; because neighbouring states (in some cases neighbours for centuries) have maintained interconnected economic and political relations for thousands of years; and because representatives of certain nationalities have experienced similar historical events, the paremiological units of their languages may be similar in their semantic characteristics (Martinez, 2006).
Apart from the foregoing reasons, the paremiologist G.L. Permyakov proposed apparently contradictory observations about proverbs and sayings being similar. They are as follows: 1) paremiological similarity is present among peoples without common kinship; 2) paremiological similarity is present among peoples who have had no relations with each other; 3) proverbs and sayings of countries that are at different stages of development are sometimes similar (Permyakov, 1988).
Therefore, proverbs and sayings of nations that are not genetically related, proverbs and sayings of nations with no economic, political, or cultural relations, and even proverbs and sayings of countries at different levels of development may be identical (Martins, 2012).
The typological study of proverbs and sayings of various nations helps to identify their common features and differences, bringing different national cultures together and establishing mutual understanding among peoples (Gasanova, Magomedova & Gasanova, 2016). Understanding the direct and transferred meanings of proverbs and sayings and applying them in the best possible way is the basis of the process of intercultural communication.
The development of intercultural communication creates a framework for social cohesion and international cooperation, and this is one of the key factors of the globalization process and the formation of a single human society (Arsenteva, 2014). Therefore, research on paremiological units is essential not only for linguists but also for cultural experts.
METHODOLOGY
The analysis of the content is carried out through the qualitative method. Paremiological identification and definition analysis are carried out by the methods of component analysis and contextual analysis. The descriptive method is used to organize the collected material, to maintain regularity in describing it, and to obtain detailed information on the paremiological units. Comparative analysis is used for comparing the components of the English, Russian and Kazakh paremiological units across various historical stages of development.
RESULTS AND DISCUSSION
Basically, proverbs and sayings are differentiated across a wide range of topics, whether arranged alphabetically or thematically, for instance, about man, labour, science, education, friendship, laziness, wealth, life, and religion. Paremiological units are practically inexhaustible. One of the most frequently used themes is labour. V.I. Dal believes that "any culture that represents basic values has its own keywords" (Dal, 1989); the lexeme "labour" is one of the keywords in any culture, along with other common words.
These keywords form the specific mentality of cultural structures, i.e., the concepts of a culture that carry the values of perception used in human thinking (Berikhanova, 2014). In order for a word to acquire conceptual status and become a public, national word, it must be actively involved in phraseological units, proverbs and sayings; it should have nominative density (Vezhbitskaya, 2001).
Among such phraseological and paremiological units, an actively used word is "labour", and its antonym is "idleness". People's wisdom is the knowledge and experience of human beings. Proverbs and sayings are the peculiar embodiment of the people's knowledge (Ayupova et al., 2014). They transfer the peculiarities of human relationships to life and solve the topical problems of society in non-traditional ways (Zakharova, 1999). Labour is very important in human life. A workman needs to have skill and a desire to learn, and that is why all of these qualities were reflected in folk art.
The paremiological units about "Labour" in the Kazakh, Russian and English languages have the same character and logical meaning, reflecting the relationships between things in real life. For instance, the following paremiological units can be attributed to the English paremiological stock of a people who value accuracy and doing any work on time. If you sow, you will eat free).
Like the above-mentioned paremiological units, the Russian people are proud of their proverbs and sayings, in which they describe the mind, human morals, and mastery of speech. We can see from the above-mentioned paremiological units of the English, Kazakh and Russian languages that the proverbs and sayings of different nations are interrelated, and it is also possible to find paremiological units with similar meanings within one language. In general, a comparative study of the paremiological units of the English, Russian and Kazakh languages helps to identify the laws that operate within those proverbs and sayings (Meider, 1997).
Therefore, according to the research being done, we will try to look at the direct and transferred meanings of the paremiological units of the English language on the topic "Labour". For instance: He that would eat the fruit must climb the tree. The proverb first appeared in James Kelly's collection "Scottish Proverbs and Sayings" in 1721 (Manser, 2002). A similar proverb was used at the end of the XVI century in G. Grange's work "The Golden Aphrodite": «Who will the fruit that harvest yields, must take the pain» (The Oxford Dictionary of Phrase and Fable, 2006).
The transferred meaning of the English paremiological unit "He that would eat the fruit must climb the tree" is interpreted as "one needs to work in order to become the best master of any trade" or "one needs to work strenuously (climb the tree) in order to earn a better life (the fruit)". That is, in order to achieve a good result in sports, singing, drawing, authorship, and, in general, to meet targets in any trade, you need to work hard. The proverb He who would eat the nut must first crack the shell is similar to the above-mentioned English proverb, but the word "fruit" is replaced by the word "nut", and the word combination "to climb the tree" is replaced by the word combination "to crack the shell". The direct meaning of the proverb is that if you want to eat nuts, you need to crack the shell, whilst the transferred meaning is that you have to make every effort to achieve your goal.
We will now try to analyze the paremiological units of the Russian language which are semantically similar to the above-mentioned English proverbs. For instance: 1. Хочешь есть калачи, так не сиди на печи! (Word for word translation in English: If you would like to eat kalach (a small white loaf), then do not just sit on the stove). The ancient Slavs used "Russian stoves" for baking bread in the living room. The "Russian stoves" were also used to heat the house, to dry clothes and footwear, and even to dry small fish, mushrooms, and berries. Sometimes the farmers warmed their bodies over the stove's steam. Especially during the winter months, the elderly loved to sleep on the stove and to read fairy tales to their children there. Of course, in its transferred meaning the proverb does not literally mean that you should not sit on the stove if you want to eat bread. It means: if you want to achieve a certain result, create something, or conduct some activity, work with passion and never give up (Tukhvatullina & Kapustina, 2018).
Без труда не вынешь и рыбку из пруда (Word for word translation in English: You cannot take a fish out of the pond without work). The direct meaning of the proverb signifies that you cannot get a fish out of the pond without effort, while the transferred meaning signifies that any good result requires doing a job with strength and ambition. If you dream about something but do nothing to reach that object, showing laziness and lying in bed the whole day, it will not bring you any good results (Niezmeyova & Nurmatova, 2016). Therefore, the proverb repeats the semantics of the English and Russian paremiological units above: in order to achieve your dream, you need to work hard. For instance, if you want to learn English, you have to read and listen more in the target language, and always speak that language if there is an opportunity (Abdullaeva, 2018). In this case, the word "fish" means "English", and the words "reading", "listening" and "speaking" mean "labour". Usually, a person who complains about a heavy job uses this proverb in their speech.
There are also Kazakh proverbs which are similar in meaning to the above-mentioned English and Russian paremiological units. Two verse lines from the poem "Octava", written by the great Kazakh poet Abai Kunanbayev (1957), Еңбек етсең ерінбей, тояды қарның тілінбей (Word for word translation in English: If you work without idleness, your belly will be full), were remembered by the people without modification as an aphorism, and these days they are widely used as a proverb by the Kazakh people. Abai's "Octava" was written over a century ago, and later it was passed from generation to generation in written form. Thanks to the preservation of the written record, these verses can be regarded as a proverb that has still not been separated from the poem "Octava". Contemporary research gives a cognitive-semantic characteristic of the notion "labour": "Labour is explained as a mythological strength of a productive way of feeding" (Karymbaeva, 2010). The direct meaning of the paremiological unit is that if you work incessantly, you will be full, and if you do not work, you will starve, while the transferred meaning is different because of metaphor. Most paremiological units are metaphorical. The metaphor in paremiological units plays an important role as a stylistic phenomenon in developing a person's idea. In the proverb Еңбек етсең ерінбей, тояды қарның тілінбей (If you work without idleness, your belly will be full), the part of the paremiological unit тояды қарның тілінбей (your belly will be full) is correlated with labour. When the proverb is explained as "only the person who works is always full", the metaphor is based not only on the meaning of these words but also on an additional meaning that expresses other individual qualities and additional properties. In this case, the word combination тояды қарның тілінбей (your belly will be full) has another meaning apart from its first meaning. Thus, the use of metaphor in paremiological units gives the proverb a different meaning apart from its main one. A metaphor takes one word's meaning to replace a second word's meaning and contains hidden subtexts that are used to determine that second meaning (Arnold, 2002). Essentially, metaphor is used to give the paremiological units additional, figurative expression.
The proverbs Кім жұмыс етсе, сол тоқ (Word for word translation in English: Whoever works, that one is full. The meaning is: who works, eats); Еңбек етсең - емерсің (Word for word translation in English: If you work, you will suckle. The meaning is: if you work, you will eat); Істесең, тістерсін (Word for word translation in English: If you work, you will bite (eat)); and Қол ойнағанның аузы ойнар (Word for word translation in English: Whose hands play, his mouth plays. The meaning is: if you work, you will eat) are synonymous with the following Kazakh proverb: Еңбек етсең ерінбей, тояды қарның тілінбей (Word for word translation in English: If you work without idleness, your belly will be full).
Herein, the proverb does not always carry its direct meaning: if you work, you will be sated; if you work, you will bite. This is an imaginative utterance of the paremiological unit. The Kazakh people's worldview and the results of different objects, phenomena and situations are presented descriptively in such words and word combinations as емерсің (suckle, eat), тістерсің (bite), аузы ойнайды (the mouth plays), and тоқ жүру (to be full). The proverbs convey the basic idea through the above-mentioned words and word combinations, suggesting that it is possible to achieve certain results only through action. The use of such words and word combinations in the paremiological units makes the proverbs effective, impressive and clear. Thus, the paremiological units of the above-mentioned thematic group "Labour" in the English, Russian and Kazakh languages are not only close in semantics within one language, but may also be synonymous across languages. Although the nations are at different stages of development, and even though they do not belong to the same language group, it is possible to notice that the proverbs' meanings are close to each other. The proverbs may take various forms in different languages. The Kazakh people would say Кім жұмыс етсе, сол тоқ (Word for word translation in English: Whoever works, that one is full. The meaning is: who works, eats), the Russians would say Хочешь есть калачи, так не сиди на печи! (Word for word translation in English: If you would like to eat kalach (a small white loaf), then do not just sit on the stove!), while Englishmen would say: He that would eat the fruit must climb the tree (Shaimardanova et al., 2016).
Analysing the above-mentioned semantics of the synonymous paremiological units of the thematic group "Labour" in the English, Russian and Kazakh languages, the following map of paremiological units has been compiled (Wierzbiński, 2015). Certainly, paremiological units with similar semantics may be found in other proverbial dictionaries, but we use only the available materials and select proverbs with similar meanings in the three languages in the following table (Ivanov & Petrushevskaia, 2015).
| 3,983 | 2020-06-24T00:00:00.000 | [ "Linguistics" ] |
Uncovering the Nexus between Language Culture and Identity Difference in the Context of Ethnic Self-Experience
In this study of the language user, the connection between language culture and identity difference is examined in the ethnic context of language learning. This study aims to reveal the connection between those notions, which is essential for language learners and will contribute to the development of language education. The research method of this study is autoethnography. The author's critical reflection as a reflective practitioner was used to elicit data by remembering and writing down his personal experience. The elicited data was then analyzed using the theories of the representation system, language culture, and identity difference. The findings of this study show that language, culture, identity, and difference have a robust linkage. An individual's different culture and identity significantly affect the way he or she uses the language in certain contexts. Therefore, uncovering the interconnection of language culture and identity difference through critical self-reflection on ethnic-based experience may enrich a better understanding of the language learning process.
Introduction
Over the past 5 years, some of my perspectives and practices have changed. I now give greater attention to the global context and am acutely aware of how difficult it can be to gain a true understanding of another culture when working outside of it. Shared research agendas, collaboration across countries, joint publications, exchange programs, and on-going conversations may be strategies to help acquire insider knowledge. (Kosnik, 2005, p. 222) The quotation above is cited from Kosnik's work entitled 'Balancing My Integrity and not Being Left Behind'. He asserted that his self-experience is a form of representation of the cultural identity difference of a more internationally informed teacher educator and researcher. Based on that quotation, it can be seen that language use has a robust relation to culture, identity, and difference. According to Kramsch (1998), understanding the connection of language, culture, identity, and difference is essential for language education because of the inseparable relationship between language learning and cultural understanding. The notions of language, culture, identity and difference are linked to each other, forming the concepts of representation and meaning-making. Thus, before linking those notions, we should first discuss the integral concept of the system of representation.
The elaboration of the term "meaning-making" from a representation perspective will essentially reveal the interconnected concepts of language, culture, identity, and difference. Hall (1997b) affirms that representation plays an essential role in human communication since it brings common-sense meaning to any objects being exchanged through language in communication. Moreover, Hall (2013) emphasises that representation is established from two different systems, namely mental representation and language, that are bound together to produce meaning. The first system, mental representation, is the human set of concepts innately used to represent particular things, objects, images, people, and so forth. The second system, language, is used to internalize the meaning of specific concepts through a shared conceptual map and set of signs for meaning-making in communication. Thus, language can be described as a process of sharing meaning between human mental representation and surrounding objects, including sounds, images, objects, and words that can carry "meaning" (Hall, 2013).
For example, the conceptual map of "a cute mammal with smooth grey fur and a belly pouch, living in eucalyptus trees" corresponds to the combination of sounds and letters that make up the word 'Koala'. This meaning from mental representation can only be expressed and conveyed by language through a set of signs, namely sounds and words. This set of signs can take written, spoken, and visual forms. The representation of 'koala' in language communication is derived from the indexical signs of the written form 'K-O-A-L-A' and the spoken form '/koʊˈɑː.lə/'. The visual form of a koala, from its photograph or body shape, is what we call an iconic sign.
Besides, language is dynamic with regard to its three systems of representation, the concept, the object, and the sign, in creating meaning. It means that the object or concept as a representation does not merely carry the meaning itself; rather, it is the context in which the language is used as a medium of communication that provides meaning (Kramsch, 1998). This context specifically belongs to certain communities or social groups that have a shared conceptual map, sign system, and object representation that dovetail with their shared representation and language. As a result, meaning is constructed and formed within certain communities, even though it will never be static and fixed.
Based on the linguistic relativity of the Sapir-Whorf hypothesis, the language used for an object in one community will be perceived differently by other communities.
For example, colour representation in language denotes meaning within a common culture and is understood by language users who share a representation system in their culture. In contrast, it will probably have a different meaning for other cultures that do not share the same system of representation.
Hence, the author will elaborate the interconnected relation between language culture and identity difference that affects the representation system for language learners. Firstly, the discussion of language construction and meaning-making is presented to analyse how people produce a shared conceptual map, codes, and signs through language to communicate within their community culture. Secondly, the robust linkage between language and culture is discussed to provide an understanding of the complex relationship between them in social life, which creates the notion of identity and leads to the emergence of difference as a result of that relation. Lastly, the reflection on the author's personal experience vignettes is presented along with its analysis based on the reflective autoethnographic approach within the framework of language culture and identity difference.
Language and Culture Interconnections
The discussion of the concept of culture can be multi-directional because it requires an understanding of surrounding aspects when determining an individual's culture. Hall (2013) states that you belong to a certain culture if you share the same worldviews as other individuals in that culture. In the same vein, Kramsch (1998) accentuates that individuals belong to a particular social group, such as a family, ethnic group, race, tribe, or nation, when they share similar perceptions and views during their interpersonal communication and interaction within those groups. For example, individuals whose worldviews are shaped by the specific language used as their first language will possess the same conceptual map and representation system, which leads to a cognate interpretation of the language's meaning.
Culture is considered a result of interpersonal interaction among people within a social community. Human intervention, through the innate capability of producing meaning, has forged certain cultures naturally (Kramsch, 1998). For instance, a Keris (a traditional Javanese knife) naturally means a tool or weapon, a wavy double-bladed dagger. Within the framework of the culture concept, it possesses a further level of interpretation.
Besides the superficial meaning, it can be seen as a symbol of an individual's wealth and prosperity in Javanese culture. Having expensive and rare Keris items shows that the owner has a higher and special social status. In addition, the owner may also be thought to have supernatural and divine power by collecting sacred and historic Keris items, because the Keris was used for deadly short-distance combat in the past. Although the interpretation of the Keris has changed slightly in the modern era, it is still recognized as a mystical item by the Javanese. Therefore, culture has the two functions of constraining and liberating the meaning of the Keris.
The meaning of the Keris is constrained in the form of the basic definition of a traditional weapon, while it is liberated in the form of enhancing social status and containing magical power within Javanese social convention. Kramsch (2014) asserts that the cultural functions of constraining and liberating language have roles in social, historical, and imaginary aspects. Individuals that belong to a particular social group will have the same common beliefs, attitudes, and values, which are realized as common sense because they view the world similarly. As a result, they construct the same linguistic code, which creates the speech and discourse community.
Returning to the aforementioned example of social convention in the Javanese community, the differences in individuals' worldviews will distinguish their meaning-making system from other social groups' representation systems. Hall (2013) explains that a linguistic code is also a product of social convention, which is forged historically within the culture of a particular social group. Consequently, the concept of individual difference will be discussed to highlight the way a language learner understands the cultures and linguistic codes of a target language that are totally different. The concept will also be used to investigate how the language learner deals with cultural and language differences in the process of language learning.
Thus, other groups that do not possess the same shared linguistic code will probably not have the same interpretation of language meaning-making and representation. As an illustration, the concept of 'rice' has numerous interpretations of its types in Asian societies, as does 'snow' for the Inuit people, because those communities have their own shared linguistic codes and representation systems for the objects of rice and snow that other communities might not possess. Consequently, individuals will merely perceive the codes, concepts or signs representing objects on the basis of their social group's convention (Hall, 1997a). Every community thus possesses its own unique cultural difference, which creates a boundary with other social groups.
Identity and Difference Linkages
The notion of cultural difference that distinguishes certain social groups from others leads to the emergence of the concept of identity. Identity can be viewed from two perspectives, essentialist and non-essentialist (Woodward, 1997). The essentialist view sees identity as a fixed and unchanging material, "which do not alter across time". On the contrary, the non-essentialist view sees identity as a social construction that is unfixed and fluid, and which can be reconstructed across conditions and situations ad infinitum. For instance, the author, as a person born in Indonesia, will never shift his identity as an Indonesian-born person under the essentialist view.
However, the non-essentialist view holds that the author's identity may change over time if the author lives, works or gets married in countries other than Indonesia.
In addition, Woodward (1997) states that identity can also be identified through two systems: symbolic marking through representation and social marking through the inclusion or exclusion of certain social groups. The concept of these two systems was formerly put forward by Deleuze and Guattari (1987), as cited in Bright (2020). For instance, in symbolic marking, wearing black clothes and trousers with a Blangkon, a traditional Javanese headgear worn by men, symbolizes the identity of the Samin or Sedulur Sikep groups in North-Central Java, Indonesia. In social marking, on the other hand, the identity of the Samin groups can be maintained by excluding or including the Samin group in relation to other Javanese social groups based on that symbolic marking. For instance, the Samin groups are considered one of the Javanese communities in the Central-North area because they have historically settled in the area for a long time. However, the Samin groups are marked by their all-black dress with a Blangkon, which completely sets them apart from other common social groups in Central Java.
Culture shapes identity by giving meaning to the subjective form of the self and others. Subjectivity, consciously and unconsciously, embraces who we are in our personal emotions and thoughts, and the cultural contexts that make up our sense of belonging (Woodward, 1997). It builds the concept of self-identification, which allows us to recognize whether we are in the "existing" or "becoming" stage of the concept of identity (Bright, 2020).
Besides the social marking of inclusion and exclusion, the concept of self-identification has also become an essential part of determining identity difference. It serves to differentiate all characteristics of a specific identity or concept into at least two opposing groups, such as us/them, self/other, sameness/otherness. The concept of difference distinguishes identity by marking a boundary between the identity known as the 'insider' and another defined as the 'outsider', which creates distinctions from other social groups (Kramsch, 2014). In line with this, Hall (2011) argues that identity can be determined by political interests, known as identity politics. In this classification, political power designs a particular perspective to recognize the identity of others by labeling and stereotyping other groups or communities, for instance the labelling of 'black' people around the world, which leads to discrimination by race, such as the apartheid system in South Africa. This political power, which potentially splits identity difference into insider and outsider groups, becomes one of the classification systems operating through agency. Therefore, identity is defined by the process of identifying others who have either similarities or differences within the classification system through culture, subjectivity, and agency (Hall, 1997c). Difference is constructed by referring to those who are defined as the outsider, or in terms of the 'other' (Hall, 2013). It creates binary oppositions that can be viewed from a negative or positive side depending on the social group's perspective. Hall (1997a) maintains that there is always a power that operates between the two terms of a binary opposition, discriminating disproportionately against either the outsider or the insider. Hence, this study analyzes the concepts of language and culture, which are enormously connected with identity and difference, in understanding their roles in the language learning process.
Method
The research method of this study was autoethnography. It was selected for examining the critical reflection of the author's personal experience by becoming a reflective practitioner (Ellis, Adams, & Bochner, 2011). This method is a form of personal narrative exploring the author's life experiences (Liu, 2020). Autoethnographic research was also used to construct the author's reflective inquiry, respect personal experience, and emphasize the social construction of knowledge (Stanley, 2019). Thus, the author analyzed his personal experience as a reflective inquiry into the concepts of language culture and identity difference.
In this study, self-experience exploration combined with an explanation of the social context is presented to reveal understandings of how language and culture have formed the identity difference of language learners. The author first gleaned data from his personal writing, photographs, performance programs, certificates, and information collected by talking to colleagues in order to remember the experiences that pertain to his ethnic identity as a language user and teacher (Stanley, 2019). Secondly, the author discussed this personal experience with his critical friends and colleagues during study group sessions. Subsequently, the author constructed the interpretation and meaning-making of his critical experience by using the concept of the representation system from Hall (2013), language culture from Kramsch (1998), and identity difference from Woodward (1997). The findings of this study provide a self-reflection that reveals valuable insights about what shaped his language attitude and his teaching practice.
Findings
In analyzing the notions of language, culture, identity, and difference from the author's personal experience, the author first explains and considers his social and cultural background. Following this explanation, two vignettes from the author's life story are presented to depict the real-life experience. The author then analyzes the detailed personal experience using some of the underpinning concepts and theories proposed by experts in language culture and identity difference in order to find out the relationship of those concepts with the process of language learning.
The author was born and raised within Javanese culture in Central Java, Indonesia. In addition, the author is a member of the Islamic community, Indonesia's largest religious group. This story is based on the author's personal experience six years ago. At that time, the author had experience with several American and European volunteers, teaching English at an Islamic boarding school in Salatiga and teaching Indonesian. They were mostly teenagers who had finished secondary school and were preparing for or pursuing their undergraduate studies. Most of them joined the program as their very first experience of visiting an Asian country such as Indonesia. It could be a culture shock and a challenging time, since the weather, food, and people were surprisingly different from what they were used to encountering in their everyday life.
Two vignettes are used to gather data from the author's personal experience of interactions with those volunteers, who have a different culture and background. The first vignette describes the cultural differences that appeared during interaction with local Javanese people and that mark the boundary between the identities of Western and Eastern people. The second vignette explores the story of language used by Western people that might be considered inappropriate and impolite for the Javanese. The reason for choosing these vignettes relates to the aim of this study, in which the author wants to explore his personal experience that shapes the understanding of his identity and difference and reveals the nexus between language culture and identity difference.
Encountering Western Culture Experience
Following Javanese hospitality, the Western volunteers, as guests, were invited by the chief of the Islamic boarding school, who is called Kyai, into his home. The term 'Kyai' refers to a scholar or guru on the Malay Peninsula and in Brunei (Federspiel, n.d.). In this story, he holds the highest social position in the boarding school with regard to cultural stratification. At the beginning, all of the volunteers were welcomed at the entrance gate by the students and guided into Kyai's home. I was quite surprised when I saw them not taking off their shoes. I immediately told them to take their shoes off outside the house. One of them asked, "Why should I take off my shoes while entering the house? Is it illegal anyway?" I realized that they were not accustomed to the rule of taking off one's shoes when entering a house. As a Muslim, I will not wear my used shoes inside my house, since they can potentially bring in impurities; in Islam, praying is prohibited if the place or the things used for praying contain any impurities. In addition, the Javanese perceive that habit as an impolite action, since they do not allow dirty things into clean places if the house does not require slippers to enter.
Afterwards, Kyai approached us while we were in his living room.
I stood up first to shake his hand and kiss it as a form of honor and respect to Kyai. Then all the volunteers shook his hand as usual, without kissing it.
Someone asked, "Hi bro, why did you kiss his hand while you shook it? Is it an obligatory rule here?" "No, not really; for some Javanese Muslims, including me, you have to do that to show respect for him," I answered. Later, a female volunteer addressed Kyai by his name alone, without any of the titles that are used before surnames or full names as a sign of respect. In contrast, the girl supposed that she was saying the right thing, since she had not acquired any communicative competence in Javanese. In Javanese culture, we have several levels of language stratification that can be used depending on the person we are talking with, including ngoko, ngoko alus, krama madya, and krama alus.
Differences of Language Terms in Javanese Culture
When I was in the program with the foreign volunteers, my campus held a lunch in the campus hall, inviting the boarding school chief, the students, and all the volunteers.
One of the staff said, "monggo dipun dhahar Tumpeng meniko, isinya ada kering tempe, telur dadar, telur rebus, gudangan, ikan gereh dll" [let us eat the Tumpeng, which consists of stir-fried soybean cake, omelette, steamed egg, Javanese salad with grated coconut, fried salted fish, and others]. Tumpeng is a cone-shaped yellow rice dish from traditional Javanese cuisine, served with varied condiments, that appears in almost all Javanese traditions. At the lunch, the committee provided several different types of Tumpeng, which amazed the foreign volunteers, because every single Tumpeng served on the table has different condiments, purposes and philosophical values.
After getting her plate, a female volunteer asked, 'Hi, apakah kamu biasanya makan masakan ini ketika merayakan sesuatu?' [Hi, do you usually eat this cuisine when celebrating something?]. I said, "Yes, we do; we only make this cuisine when we have a special event, as a festive dish. This is so special when we talk about Javanese food." "What about that whole chicken? We used to grill a whole turkey for our Thanksgiving celebration back in my hometown," she said. She was actually pointing at the whole chicken that the Javanese call Ingkung (a whole rooster cooked in Javanese curry). Ingkung refers to any rooster cooked in one complete piece, including its feet and head. Both dishes are very important in Javanese culture, since they are only used at special events and are strongly related to traditional ceremonies in Java.
All the foreign volunteers were definitely unfamiliar with the food terms used in the Javanese language. The terms were perceived as uncommon by the foreigners because of their different prior knowledge and background. They had never experienced such traditional ceremonies as those in Javanese culture, although they also have festive dishes served at their own celebrations. Their identity as Westerners also puts them on the 'outsider' side when they engage with Javanese culture as the 'insider' culture. The difference of opposing cultures and identities produces different terms in language, since they produce different mental representations and symbols or signs to make meaning based on social convention.
Discussion
The interactions in the vignettes between the author and all the parties can be analysed within the notions of language, culture, identity, and difference. In the first vignette, I had to show respect and honor to Kyai by kissing his hand because of the social stratification level in Javanese social values. This attitude is socially considered polite behavior and is constructed as an identity of Javanese beliefs, which essentially marks the author as a Javanese Muslim within Eastern culture (Wilce, 2000). Furthermore, identity can be marked by two different systems, the symbolic and the social. The honoring behavior that I practised is recognized as a symbolic marking of the representation system of Javanese politeness culture. The symbolic mark is maintained through excluding the identity of the 'outsider' from the 'insider'. In this case, I understood why the foreigners perceived my polite behaviour of kissing Kyai's hand as an odd and illogical action. Their way of thinking completely depicts the perspective of the 'outsider', the social group excluded from the Javanese. Thus, they will not be able to accept the values that the Javanese hold in this behavior regarding politeness stratification.
Moreover, the Western volunteers' behaviors of still wearing their shoes while entering the house and of calling Kyai without any honorific salutation can be highlighted through the classificatory systems, or binary oppositions, used for distinguishing identity difference. This constructs two coupled terms in identity difference, the 'insider-outsider' and the 'inclusion-exclusion' perspectives. In my personal experience, difference was negatively interpreted, since I was stereotyping the Western people as very impolite due to behavior that contrasts with Javanese values. The story of a volunteer wearing his shoes inside the clean house and calling Kyai by name without a salutation was considered impolite behavior with regard to Javanese beliefs. As a Javanese, I consider the way I speak to other people in my community, since we have rules based on language level stratification; how I behave appropriately based on the Javanese code of etiquette; and how I respect people who hold higher positions than me (Wilce, 2000). Moreover, labeling people as Asians or Westerners is undoubtedly associated with a binary opposition of race, language, and color, which tends to be politicized in a discriminatory way in some cases.
Regarding the insider perception of Javanese culture through identity recognition, the Western volunteers, as the outsiders, also automatically showed their identity as learners of Indonesian who wanted to mingle with the local people. The use of the target language was considered the way they represented their identity status as outsiders. It showed that identity can be shaped by the sense of subjectivity. While teaching classes with the Western volunteers in Indonesia, I had a difficult time defining my self-identity from a subjective point of view. I interacted with and contacted these volunteers, but retained Javanese values, because I felt that they belonged to their own culture. As a result, these volunteers could classify me as an Asian who supports attitudes that seem odd from their perspective.
These classification systems are considered a marking system of identity through culture, subjectivity, and agency, which leads to the notion of difference.
The second vignette explains the interesting interactions between the author, campus staff, and volunteers. In that story, the campus staff offered Javanese traditional cuisine called Tumpeng and Ingkung to the volunteers.
They definitely did not understand the terms uttered by the staff, even though they might have known some parts or ingredients of the cuisine. Their basic understanding of several parts shows that the Western volunteers could have a shared conceptual map with the campus staff just by looking at the food condiments at first sight. They probably have commonsense knowledge of the food representation by referring to the iconic and indexical signs of soybean cake, omelette, salted fish, steamed egg, Javanese vegetarian salad, whole steamed chicken, and so forth. However, they did not possess the symbolic signs for the words kering tempe, telur dadar, telur rebus, gudangan, gereh, ingkung and the other condiment names, because they did not share the language code representing the cuisine and its condiments.
The case of not having a shared conceptual map and language code or signs can be interpreted through the following example based on the second vignette. In the second vignette, the volunteers did not possess a shared language with the staff, since they were very confused when the staff asked them to take the food. In contrast, I could easily understand what the staff's utterance meant, since we are in the same speech and discourse community, which has the shared linguistic code for those terms. It is not the representation concepts or elements that give meaning, but the context and the way the community expresses these things in its language code. These codes are used to regulate translations between concepts and languages (Kramsch, 1998). Here, the campus staff and I are Javanese, sharing the language codes and mental representations for referring to soybean cake, omelette, salted fish, steamed egg, Javanese vegetarian salad, whole steamed chicken, and other condiments collectively as Tumpeng, and for addressing the whole yellow-curry chicken as Ingkung. Thus, those two terms are recognized as Javanese signs agreed upon by the Javanese ethnic group as a result of social convention.
Concerning the terms Tumpeng and Ingkung, which embody a synchronous view of Javanese culture, Kramsch (1998) argues that culture is always created by human intervention in nature. In this story, Tumpeng is made from cone-shaped yellow rice and some other dishes circling the cone as condiments. The dish, of course, had no other meaning before the Javanese intervened in its meaning socially and historically. The natural concept of the dish has changed over time.
Tumpeng is always served as a festive meal in almost all Javanese ceremonies and celebrations, especially in the Javanese Keraton (kingdom) tradition. As a result, Java has numerous types of Tumpeng used in religious events and purpose-based social practices. This social practice has created a common tradition and story among the Javanese. Both the historical and social aspects of Tumpeng are therefore used, among other things, as an expression of a Javanese culture and identity that differs from others. Consequently, I had the feeling that I belonged to Javanese beliefs, values and attitudes through the shared linguistic code and cultural interpretation of Tumpeng. From my personal experience, I should be very respectful when kissing Kyai's hand, as he has a higher position in Javanese social values. This polite attitude is socially constructed as the identity of Javanese beliefs and essentially identified me as a Javanese living within Eastern culture. According to Woodward (1997), identity can be characterized by both symbolic and social systems. Consequently, kissing Kyai's hand is considered a symbolic sign of Javanese etiquette. This symbolic identity is maintained by the social system through the exclusion of other social groups. In this case, I understood why these Western volunteers felt that the habit of kissing other people's hands was so strange: they were identified as Javanese outsiders, or excluded groups. Thus, they will not be able to recognize the value of what the Javanese believe in terms of politeness stratification.
Implications and Limitations
The analysis in this study has implications for the study of language culture and identity difference in the context of Javanese values and culture. It shows that understanding certain cultures will lead to better implementation of language learning for people with different background identities (Campbell, 2015). As a result, language learners will significantly develop their language acquisition through the process of meaning-making and the representation of culture, identity, and difference. Therefore, this study suggests that the experience of living in the target-language country will help the language learner acquire a better understanding of the target language.
However, some limitations remain in this study. The analysis of the author's personal experience focuses merely on the Javanese context, in which the possibilities of the representation concept in the language learning process have not yet been fully revealed. Also, contexts other than Java might yield different possibilities for understanding language culture and identity difference, since the author refers only to his past personal experience of teaching in one Islamic boarding school in Central Java, Indonesia. The role of intercultural learning can be different not only in the outer circle of places, but also in other settings of the inner circle.
Therefore, further research on the impact of identity transformation and intercultural learning on language learning in this context or in different contexts can be done to explore and find other possibilities.
Conclusion
In summary, the reflection on and analysis of my personal experience reveal an understanding of the meaning-making and representation system in relation to the notions of language, culture, identity, and difference. The discussion of the vignettes portrays situations in which language users and language learners communicate across the boundaries of different languages, cultures, and identities. This interaction can be explored through several points, including the connection of language and culture that differentiates an individual's identity from other people, forming difference. Through the discussion in this study, I can comprehensively understand the definition of "who I am", situate myself based on "where I am", and grasp the reasons for "why I am". Therefore, understanding the essence of self-identification determines how we recognize ourselves and others in relation to the appropriateness of language use within varied cultural values and the classificatory system in the concept of difference.
| 7,106.4 | 2021-12-31T00:00:00.000 | [ "Linguistics" ] |
Model Predictive Control with Variational Autoencoders for Signal Temporal Logic Specifications
This paper presents a control strategy synthesis method for dynamical systems with differential constraints, emphasizing the prioritization of specific rules. Special attention is given to scenarios where not all rules can be simultaneously satisfied to complete a given task, necessitating decisions on the extent to which each rule is satisfied, including which rules must be upheld or disregarded. We propose a learning-based Model Predictive Control (MPC) method designed to address these challenges. Our approach integrates a learning method with a traditional control scheme, enabling the controller to emulate human expert behavior. Rules are represented as Signal Temporal Logic (STL) formulas. A robustness margin, quantifying the degree of rule satisfaction, is learned from expert demonstrations using a Conditional Variational Autoencoder (CVAE). This learned margin is then applied in the MPC process to guide the prioritization or exclusion of rules. In a track driving simulation, our method demonstrates the ability to generate behavior resembling that of human experts and effectively manage rule-based dilemmas.
Introduction
Robotics is increasingly permeating diverse sectors, spanning both civilian and industrial applications, and is becoming integral to everyday life. Service robots are now prevalent in public spaces, interacting with individuals and delivering services. Within the field of robotics, autonomous driving emerges as a particularly dynamic area, garnering extensive research attention.
In robotics, adherence to rules varies from basic collision avoidance in navigation scenarios to compliance with complex traffic regulations in autonomous driving. These rules, established primarily for safety, must generally be upheld by robots while executing their tasks. However, it is essential to acknowledge that not all rules carry equal importance. Depending on the context, some rules may need to be prioritized over others or even disregarded. For example, in autonomous driving, scenarios may necessitate breaching certain rules, such as lane changes in dense traffic, decisions at yellow traffic lights, or crossing double yellow lines to avoid obstacles. These situations compel robots to make intricate decisions regarding rule compliance, presenting significant challenges in determining appropriate control inputs.
Model Predictive Control (MPC) stands out as a robust approach for autonomous control, recognized for its capabilities in online trajectory optimization [1]. The core principle of MPC involves identifying optimal control inputs to minimize a predefined cost function, considering both inputs and anticipated future outputs. This method integrates an objective function characterizing the desired robot behavior and constraints mitigating undesirable actions. The efficacy of MPC is well-documented across diverse applications, such as the full-body control of humanoid robots [2][3][4].
Designing effective MPC controllers remains a significant challenge. Experienced operators can adeptly manage robots, yet encoding such expertise into MPC parameters is complex. For instance, expert drivers in autonomous driving must make continuous, complex decisions, such as whether to decelerate or change lanes in response to slow-moving vehicles. However, finding the appropriate MPC parameters to handle such varied scenarios is complex and computationally intensive.
Recently, imitation learning has emerged as a promising solution for robotic learning challenges [5,6]. This approach derives near-optimal control strategies directly from human expert demonstrations, eliminating the need for manual policy or cost function design. Imitation learning excels in capturing complex policy functions that balance multiple considerations [7], learning the importance of various factors from expert behaviors to enable robots to replicate human actions. However, despite its advantages, imitation learning does not inherently ensure performance reliability. In scenarios where safety rules like collision avoidance are crucial, imitation learning may not consistently yield control actions that comply with essential safety norms, underscoring the paramount importance of rule adherence for robot safety and human protection.
In this paper, we address a control synthesis problem within a framework of prioritized rules, building on our previous research [8]. We assumed inherent rule priorities and aimed to design a controller that accounts for these priorities to manage dilemmas effectively. Our methodology is grounded in the MPC framework, which, unlike purely deep learning-based approaches, integrates each rule as a constraint, thereby enhancing performance reliability.
We represent these rules using Signal Temporal Logic (STL) [9,10], a formalism allowing the precise specification of desired system behaviors, commonly applied in robotic task specifications [11][12][13][14][15]. STL is particularly suited for describing properties of real-valued signals in dense time scenarios, making it ideal for real-world robotic applications.
Instead of explicitly determining rule priorities, we adopted a learning approach to identify minimal acceptable levels of rule satisfaction, informed by expert demonstrations. This approach diverges from our earlier work [8] by employing a Conditional Variational Autoencoder (CVAE) [16]. This technique helps discern essential rules and decide on adherence levels, facilitating selective compliance rather than strict obedience to all rules. The use of a CVAE is justified by its efficiency in handling uncertainties within data, providing a more effective solution compared to the Gaussian process regression methods used in previous work [8].
Our hybrid approach combines deep learning with traditional MPC, guiding robots to emulate expert human behaviors in complex decision-making scenarios.
Related Work
Extensive research has explored trajectory optimization and Model Predictive Control (MPC) within the framework of temporal logic specifications, particularly Linear Temporal Logic (LTL). Mixed-integer linear programming (MILP) has been employed to generate trajectories for continuous systems subject to finite-horizon LTL specifications [17,18]. Wolff et al. [19] extended this approach by encoding general LTL formulas into MILP constraints, accommodating infinite runs with periodic structures. Additionally, Cho et al. [20] investigated optimal path planning under syntactically co-safe LTL specifications, utilizing a sampling tree and a two-layered structure.
Recent advancements have integrated Signal Temporal Logic (STL) within MPC frameworks. Raman et al. [21] structured MPC to facilitate control synthesis from STL specifications using MILP, allowing for the calculation of open-loop control signals that adhere to both finite and infinite horizon STL properties while maximizing robust satisfaction. Sadigh et al. [22] introduced a novel STL variant incorporating probabilistic predicates to address uncertainties in predictive models, thereby enhancing safety assessments under uncertainty. Mao et al. [23] proposed a solution to handle complex temporal requirements formalized in STL specifications within the Successive Convexification algorithmic framework. This approach retains the expressiveness of encoding mission requirements with STL semantics while avoiding combinatorial optimization techniques such as MILP.
The integration of MPC with machine learning techniques has been pursued to address system identification challenges within MPC contexts [24][25][26]. Lenz et al. [24] applied deep learning within MPC to derive task-specific controls for complex activities such as robotic food cutting. Carron et al. [25] presented a model-based control approach that utilizes data gathered during operation to improve the model of a robotic arm and thereby enhance tracking performance. Their scheme is based on inverse dynamics feedback linearization and a data-driven error model, integrated into an MPC formulation. Lin et al. [26] compared deep reinforcement learning (DRL) and MPC for Adaptive Cruise Control (ACC) design in car-following scenarios.
Efforts have also been made to address the types of dilemmas introduced in our work. Tumova et al. [27] and Castro et al. [28] examined scenarios where not all LTL rules can be satisfied in path planning, seeking paths that minimally violate these rules. However, their approaches require predetermined weights among rules, in contrast with our method, which learns directly from expert demonstrations. Urban driving dilemmas were specifically addressed by Lee et al. [29], who applied inverse reinforcement learning to capture expert driving strategies.
Imitation learning is emerging as a promising approach to robotic learning problems and has been widely applied to autonomous driving. Policies for autonomous vehicles have been learned from image or video datasets through Convolutional Neural Networks (CNNs) [30,31]. Schmerling et al. [32] utilized a Conditional Variational Autoencoder (CVAE) framework to reason about interactions between vehicles in traffic-weaving scenarios, producing multimodal outputs. Additionally, some studies have applied learning approaches to MPC, where certain parameters of the MPC are learned from data [8,33]. Reinforcement learning has also been considered for autonomous driving, using CNNs to encode visual information [34].
System Model
We consider a continuous-time dynamical system described by the following differential equation:
$$\dot{x}_t = f(x_t, u_t), \qquad (1)$$
where x_t ∈ X ⊂ R^{n_x} represents the state vector, u_t ∈ U ⊂ R^{n_u} denotes the control input, and f is a smooth (continuously differentiable) function with respect to its arguments. By employing a predefined time step dt, the continuous system in Equation (1) can be discretized as
$$x_{n+1} = x_n + f(x_n, u_n)\,dt, \qquad (2)$$
where n represents the discrete time step, defined as n = ⌊t/dt⌋, and x_0 denotes the initial state. For a fixed horizon H, let x(x_n, u_{H,n}) denote a trajectory generated from the state x_n with the control inputs u_{H,n} = {u_n, ..., u_{n+H−1}}.
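As a minimal illustration (not from the paper, and the exact discretization scheme is not shown in the text), the update in Equation (2) corresponds to a forward-Euler step; the function names below are hypothetical.

```python
import numpy as np

def euler_step(f, x, u, dt):
    """One forward-Euler step of Equation (2): x_{n+1} = x_n + f(x_n, u_n) * dt."""
    return x + f(x, u) * dt

def rollout(f, x0, controls, dt):
    """Trajectory x(x_0, u_{H,0}) generated by applying a control sequence of length H."""
    traj = [np.asarray(x0, dtype=float)]
    for u in controls:
        traj.append(euler_step(f, traj[-1], u, dt))
    return np.stack(traj)
```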
A signal is defined as a sequence of states and control inputs,
$$\xi = \{\xi_0, \xi_1, \ldots\}, \qquad \xi_n = (x_n, u_n). \qquad (3)$$
In addition to the definition provided in Equation (3), we use the notation ξ(n) to represent a signal starting from the discrete time step n, with a slight abuse of notation.
Signal Temporal Logic
Signal Temporal Logic (STL) is a formalism used to specify properties of real-valued, dense-time signals, and is extensively applied in the analysis of continuous and hybrid systems [9,10]. A predicate within an STL formula is defined as an inequality of the form μ(ξ(t)) > 0, where μ is a function of the signal ξ at time t. The truth value of the predicate μ is determined by the condition μ(ξ(t)) > 0.
An STL formula is composed of Boolean and temporal operations on these predicates. The syntax of STL formulae φ is defined recursively as follows:
$$\varphi ::= \mu \mid \neg\varphi \mid \varphi \wedge \psi \mid G_{[a,b]}\,\varphi \mid \varphi\, U_{[a,b]}\, \psi,$$
where φ and ψ are STL formulas, G denotes the globally operator, and U represents the until operator.
The validity of an STL formula φ with respect to a signal ξ at time t is defined inductively as follows:
$$\begin{aligned}
(\xi, t) \models \mu \;&\Leftrightarrow\; \mu(\xi(t)) > 0,\\
(\xi, t) \models \neg\varphi \;&\Leftrightarrow\; (\xi, t) \not\models \varphi,\\
(\xi, t) \models \varphi \wedge \psi \;&\Leftrightarrow\; (\xi, t) \models \varphi \text{ and } (\xi, t) \models \psi,\\
(\xi, t) \models G_{[a,b]}\,\varphi \;&\Leftrightarrow\; (\xi, t') \models \varphi \;\; \forall\, t' \in [t+a,\, t+b],\\
(\xi, t) \models \varphi\, U_{[a,b]}\, \psi \;&\Leftrightarrow\; \exists\, t' \in [t+a,\, t+b]: (\xi, t') \models \psi \text{ and } (\xi, t'') \models \varphi \;\; \forall\, t'' \in [t,\, t'].
\end{aligned}$$
The notation (ξ, t) ⊨ φ indicates that the signal ξ satisfies the STL formula φ at time t. For example, (ξ, t) ⊨ G_{[a,b]} φ implies that φ holds for the signal ξ throughout the interval from t + a to t + b. In discrete-time systems, STL formulas are evaluated over discrete time intervals.
One significant advantage of Signal Temporal Logic (STL) is its associated metric, known as the robustness degree, which quantifies how well a given signal ξ satisfies an STL formula φ. The robustness degree ρ^φ(ξ, t) is defined as a real-valued function of the signal ξ and time t, calculated recursively using the following quantitative semantics:
$$\begin{aligned}
\rho^{\mu}(\xi, t) &= \mu(\xi(t)),\\
\rho^{\neg\varphi}(\xi, t) &= -\rho^{\varphi}(\xi, t),\\
\rho^{\varphi \wedge \psi}(\xi, t) &= \min\!\left(\rho^{\varphi}(\xi, t),\, \rho^{\psi}(\xi, t)\right),\\
\rho^{G_{[a,b]}\varphi}(\xi, t) &= \min_{t' \in [t+a,\, t+b]} \rho^{\varphi}(\xi, t'),\\
\rho^{\varphi\, U_{[a,b]}\, \psi}(\xi, t) &= \max_{t' \in [t+a,\, t+b]} \min\!\left(\rho^{\psi}(\xi, t'),\, \min_{t'' \in [t,\, t']} \rho^{\varphi}(\xi, t'')\right).
\end{aligned}$$
Following our previous study [8], we introduce the notation (ξ, t) ⊨ (φ, r) to indicate that the signal ξ satisfies the STL formula φ at time t with a robustness slackness r, defined as
$$(\xi, t) \models (\varphi, r) \;\Leftrightarrow\; \rho^{\varphi}(\xi, t) \geq r. \qquad (18)$$
Equation (18) asserts that the signal ξ satisfies φ with at least the minimum robustness degree r. The robustness slackness r serves as a margin for the satisfaction of the STL formula φ. As r increases, the constraints on the signal ξ to satisfy φ at time t become more stringent, while smaller values of r imply more relaxed constraints. Notably, when r < 0, the violation of φ is allowed.
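The following Python sketch (not from the paper; the function names and the toy lane bounds are hypothetical) illustrates the quantitative semantics above for discrete-time signals, together with the slackness check of Equation (18).

```python
import numpy as np

# Each STL formula is represented as a function: robustness(signal, n) -> float,
# where `signal` is an array of samples and `n` is a discrete time index.

def predicate(mu):
    """Atomic predicate mu(xi(n)) > 0; its robustness is the value of mu itself."""
    return lambda xi, n: mu(xi[n])

def neg(phi):
    return lambda xi, n: -phi(xi, n)

def conj(phi, psi):
    return lambda xi, n: min(phi(xi, n), psi(xi, n))

def globally(a, b, phi):
    """G_[a,b] phi: minimum robustness of phi over the window [n+a, n+b]."""
    return lambda xi, n: min(phi(xi, k) for k in range(n + a, min(n + b, len(xi) - 1) + 1))

def until(a, b, phi, psi):
    """phi U_[a,b] psi."""
    def rho(xi, n):
        best = -np.inf
        for k in range(n + a, min(n + b, len(xi) - 1) + 1):
            best = max(best, min(psi(xi, k), min(phi(xi, m) for m in range(n, k + 1))))
        return best
    return rho

def satisfies_with_slackness(phi, xi, n, r):
    """Slackness check of Equation (18): (xi, n) |= (phi, r)  iff  rho >= r."""
    return phi(xi, n) >= r

# Toy example: lane keeping 0 <= y <= 1, evaluated on a short signal of lateral positions y.
y = np.array([0.2, 0.4, 0.9, 1.4, 1.1])
phi_lane = conj(predicate(lambda s: 1.0 - s), predicate(lambda s: s - 0.0))
print(globally(0, 3, phi_lane)(y, 0))                                # -0.4: the rule is violated
print(satisfies_with_slackness(globally(0, 3, phi_lane), y, 0, -0.5))  # True: allowed with r = -0.5
```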
Problem Formulation
This study aimed to solve a control synthesis problem using Signal Temporal Logic (STL) formulas [8]. Let φ = [φ_1, ..., φ_N] represent a set of STL formulas, with their conjunction denoted as φ = φ_1 ∧ ... ∧ φ_N. We define a cost function J over the state and control spaces, where J(x, u) measures the cost associated with a trajectory x and control sequence u. The control synthesis problem under STL for Model Predictive Control (MPC) is formulated as follows.
Problem 1. Given a system model as described in (2) and an initial state x_0, with a planning horizon of length H, determine the control input sequence u_{H,t} at each time step t that minimizes the cost function J(x(x_t, u_{H,t}), u_{H,t}) while ensuring that the conjunction of STL formulas φ is satisfied:
$$\underset{u_{H,t}}{\text{minimize}} \;\; J(\mathbf{x}(x_t, u_{H,t}), u_{H,t}) \quad \text{subject to} \quad (\xi(x_t, u_{H,t}), t) \models \varphi. \qquad (19)$$
While this strict formulation ensures compliance with the STL formulas, our primary objective is to develop a control sequence that incorporates flexibility in rule compliance. To this end, we introduce robustness slackness values, denoted by r = [r_1, ..., r_N], which quantify the degree to which each STL formula is satisfied. Incorporating these robustness values, the MPC problem can be reformulated as follows [8].
Problem 2. Given the system model specified in (2), an initial state x_0, and a horizon length H, compute the control input sequence u_{H,t} at each time step t by solving the following optimization problem:
$$\underset{u_{H,t}}{\text{minimize}} \;\; J(\mathbf{x}(x_t, u_{H,t}), u_{H,t}) \quad \text{subject to} \quad (\xi(x_t, u_{H,t}), t) \models (\varphi_j, r_j), \;\; j = 1, \ldots, N.$$
This enhanced formulation allows for a more flexible management of STL constraints, effectively addressing scenarios where it is not feasible to fully satisfy all STL formulas. The robustness slackness values are derived from expert demonstrations, based on the assumption that these experts have accurately assessed the priority and required compliance level of each rule. This learning is achieved through a deep learning approach.
Proposed Method
The proposed framework, illustrated in Figure 1, synergizes learning techniques with STL constraints to refine MPC, enabling it to more accurately mimic human expert behavior. By leveraging expert demonstrations, we learn robustness slackness values, which define the margins of rule compliance. A Conditional Variational Autoencoder (CVAE) [16] is utilized to estimate these robustness slackness values in novel scenarios.
Incorporating the robustness slackness values obtained through the learning process, the MPC method, designed under STL constraints, generates control sequences that respect the specified rules with a certain level of flexibility. To manage the nonlinear differential constraints characteristic of dynamical systems, we employ linearized models. Although this approach may introduce some approximation errors, it remains effective for practical applications.
Figure 1 presents an overview of the proposed learning-based MPC framework. Expert demonstrations are used to learn the lower bounds of robustness, referred to as robustness slackness, through a deep learning approach. These learned values inform the MPC method, which then calculates control sequences that take into account the STL rules.
Feature Description
We introduce a feature function, denoted as ϕ, which transforms a signal into a feature vector, mapping from the combined state and control spaces into the feature space: ϕ : R^{n_x + n_u} → R^{n_f}. As illustrated in Figure 2, the control of the ego vehicle, V_ego, is influenced by six nearby vehicles located in adjacent lanes. These vehicles are collectively referred to as nearby vehicles.
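As an illustration only (the paper does not specify the exact feature map), a feature function over the ego state and the six nearby vehicles might collect relative longitudinal/lateral positions and speeds; all names and the chosen features below are assumptions.

```python
import numpy as np

def feature_function(ego_state, nearby_vehicles, ego_control):
    """Hypothetical feature map phi: (state, control) -> R^{n_f}.

    ego_state:        [x, y, theta, v] of the ego vehicle.
    nearby_vehicles:  list of up to six [x, y, v] entries (front/rear slots in the
                      left, center, and right lanes); an absent neighbor is None.
    ego_control:      [w, a] angular velocity and acceleration.
    """
    x, y, theta, v = ego_state
    feats = [v, theta] + list(ego_control)
    far_away = 1e3  # placeholder distance for an empty slot
    for veh in nearby_vehicles:
        if veh is None:
            feats += [far_away, far_away, 0.0]
        else:
            vx, vy, vv = veh
            feats += [vx - x, vy - y, vv - v]   # relative position and relative speed
    return np.asarray(feats, dtype=float)
```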
Learning Robustness Slackness from Demonstration
We consider a set of M demonstrated signals, denoted by Ξ = {ξ^i}_{i=1}^{M}, where each signal element ξ^i_n = (x^i_n, u^i_n) comprises the state x^i_n and control input u^i_n at time step n. The robustness degree r_{i,j} is defined as the minimum value observed from the current time step n to the future time step n + H − 1 for the demonstration ξ^i:
$$r_{i,j} = \min_{k \in \{n, \ldots, n+H-1\}} \rho^{\varphi_j}(\xi^i, k),$$
where H denotes the control horizon length. The robustness degree r_{i,j} serves as the robustness slackness for the signal over the horizon length H, starting from ξ^i_n, indicating the minimum permissible lower bound of robustness within this timeframe. Our CVAE model comprises the following three parameterized functions:
• The recognition model q_ν(Z|ϕ) approximates the distribution of the latent variable Z based on the input features. This is modeled as a Gaussian distribution, N(μ_ν(ϕ), Σ_ν(ϕ)), where μ_ν and Σ_ν represent the mean and covariance determined by the network.
• The prior model p_θ(Z|ϕ) assumes a standard Gaussian distribution, N(0, I), simplifying the structure of the latent space.
• The generation model p_θ(r|Z, ϕ) calculates the likelihood of the robustness slackness based on the latent variable Z and the input feature ϕ.
Both the recognition model q_ν(Z|ϕ) and the generation model p_θ(r|Z, ϕ) are implemented as multi-layer perceptrons.
The training of our CVAE is guided by the Evidence Lower Bound (ELBO) loss function, initially formulated as
$$\mathcal{L}_{ELBO} = \mathbb{E}_{q_\nu(Z|\phi)}\!\left[\log p_\theta(r \mid Z, \phi)\right] - D_{KL}\!\left(q_\nu(Z \mid \phi)\,\|\,p_\theta(Z \mid \phi)\right).$$
To better accommodate the specific requirements of our application, we adapted the ELBO function and define the loss function as follows:
$$\mathcal{L}(\nu, \theta) = \sum_i \left(r_i - \hat{r}_i\right)^2 + \lambda\, D_{KL}\!\left(q_\nu(Z \mid \phi)\,\|\,p_\theta(Z \mid \phi)\right),$$
where r_i represents an element of the robustness slackness r, r̂_i is the corresponding output of the generation model, and λ is a scaling factor used to balance the terms. The Kullback-Leibler divergence (D_KL) measures the divergence between two probability distributions. We set λ = 1 and optimize the parameters ν and θ by minimizing this loss function.
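A minimal PyTorch sketch of such a CVAE is shown below. This is not the authors' code; the layer sizes, the squared-error reconstruction term, and the slackness-label helper are assumptions consistent with the description above.

```python
import torch
import torch.nn as nn

class CVAE(nn.Module):
    """Recognition q_nu(Z|phi), standard-normal prior p(Z), generation p_theta(r|Z, phi)."""
    def __init__(self, n_f, n_rules, n_latent=8, n_hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_f, n_hidden), nn.ReLU(),
                                     nn.Linear(n_hidden, 2 * n_latent))   # mean and log-variance
        self.decoder = nn.Sequential(nn.Linear(n_latent + n_f, n_hidden), nn.ReLU(),
                                     nn.Linear(n_hidden, n_rules))        # predicted slackness r_hat

    def forward(self, phi):
        mu, logvar = self.encoder(phi).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)           # reparameterization trick
        r_hat = self.decoder(torch.cat([z, phi], dim=-1))
        return r_hat, mu, logvar

def loss_fn(r_hat, r, mu, logvar, lam=1.0):
    """Reconstruction error plus lambda * KL(q_nu(Z|phi) || N(0, I)), cf. the adapted ELBO."""
    recon = ((r_hat - r) ** 2).sum(dim=-1).mean()
    kl = (-0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=-1)).mean()
    return recon + lam * kl

def slackness_labels(robustness, H):
    """Training labels r_{i,j}: sliding minimum over the horizon H of a (T, n_rules) robustness tensor."""
    T, _ = robustness.shape
    return torch.stack([robustness[n:n + H].min(dim=0).values for n in range(T - H + 1)])

# Training roughly follows the hyperparameters reported later in the paper
# (batch size 64, learning rate 0.001, 100 epochs, horizon H = 16).
```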
Model Predictive Control Synthesis
Previous work, such as that by Raman et al. [21], has shown that MPC optimization with STL constraints can be formulated as a mixed-integer linear program (MILP). This method introduces two encoding strategies: one that focuses on satisfying STL formulas and another, termed 'robustness-based encoding', that considers the robustness degree of the STL formulas. In our problem formulation, we manage each STL formula according to its defined robustness slackness using the robustness-based encoding method.
Let C_{φ_j, r_j} denote the encoded constraints for the STL formula φ_j with robustness slackness r_j. The combined encoded constraints involve Boolean variables z^φ, z^{φ_j} ∈ [0, 1], with z^φ representing the satisfaction of all STL constraints and z^{φ_j} representing the satisfaction of an individual STL formula φ_j. Note that z^{φ_j} = 1 only if ρ^{φ_j} − r_j > 0; otherwise, z^{φ_j} = 0. The proposed algorithm is outlined in Algorithm 1. We extended our previous work [8] by incorporating a deep learning network approach. Inputs to the algorithm include a set of STL formulas φ_1, ..., φ_N, the time of interest τ = [t_0, t_1], the discretization time step dt, a control horizon H, an initial signal state ξ_init, and demonstrated signals Ξ.
Initially, feature vectors and robustness slackness values (the lowest robustness degree over the horizon H) are pre-computed from the demonstrations (line 1). The closed-loop algorithm, which determines the optimal strategy at each time step, runs over the time interval τ = [t_0, t_1]. Nonlinear dynamics are linearized with respect to the current signal state (line 4). The robustness slackness of the STL formula φ_j for the input feature ϕ(ξ_cur) is predicted using the trained CVAE network (line 6). The predicted robustness slackness for each STL formula is denoted as r_j. Based on the updated robustness slackness r_j, each STL formula φ_j is converted into mixed-integer programming constraints C_{φ_j, r_j} using the robustness-based encoding method (line 7), where C_{φ_j, r_j} consists of binary variables and linear predicates. Considering all STL constraints, dynamic constraints, and past trajectories, the optimal control sequence is computed over the time horizon H using a user-defined cost function (line 11). This procedure is repeated for the entire time interval τ.
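The toy example below is illustrative only: it replaces the paper's Gurobi-based MILP with a brute-force search over a tiny discrete action set, simply to show how a predicted slackness r (here fixed by hand rather than produced by the CVAE) relaxes or tightens a single speed rule inside the receding-horizon loop.

```python
import itertools
import numpy as np

def rollout_speeds(v0, accels, dt=0.1):
    """Speeds along a horizon for a 1-D vehicle: v_{n+1} = v_n + a_n * dt."""
    v = [v0]
    for a in accels:
        v.append(v[-1] + a * dt)
    return np.array(v[1:])

def mpc_step(v0, v_ref, r, v_th=1.0, H=4, dt=0.1):
    """Pick the action sequence minimizing tracking cost, subject to the rule
    G (v <= v_th) holding over the horizon with robustness >= r."""
    best_cost, best_seq = np.inf, None
    for seq in itertools.product([-1.0, 0.0, 1.0], repeat=H):
        v = rollout_speeds(v0, seq, dt)
        rho = np.min(v_th - v)            # robustness of the speed rule over the horizon
        if rho < r:                       # slackness constraint (xi, t) |= (phi, r)
            continue
        cost = np.sum((v - v_ref) ** 2)
        if cost < best_cost:
            best_cost, best_seq = cost, seq
    return best_seq[0] if best_seq else -1.0  # brake if no feasible sequence exists

# A negative slackness (r = -0.1) lets the controller slightly exceed the speed
# limit while tracking an aggressive reference, mimicking a learned relaxation.
v, r = 0.8, -0.1
for _ in range(10):
    a = mpc_step(v, v_ref=1.05, r=r)
    v = v + a * 0.1
print(round(v, 3))
```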
Experimental Results
The proposed algorithm was implemented in a Python (version 3.10) environment, utilizing PyTorch (version 2.2.1) [35] for the deep learning components and Gurobi [36] as the optimization engine for the MPC. Simulation experiments were conducted on a system equipped with an AMD R7-7700 processor and an RTX 4080 Super GPU. The Gurobi tool enabled solving the proposed MPC problem in approximately 0.11 seconds.
We conducted realistic simulations using the Next Generation Simulation (NGSIM) dataset [37] and the highD dataset [38], assuming that the drivers in these datasets possessed a certain level of expertise, making them suitable as "expert driver" demonstrations for our proposed approach. In the proposed method, obstacles were set as nearby vehicles. For generating training data, we utilized a combination of 70% data from the highD dataset and 30% from the NGSIM dataset. Data points from the NGSIM dataset that involved vehicles deviating from the track or causing collisions were excluded or modified. Additionally, data with normal speeds but no lane changes were partially removed to ensure a diverse set of training scenarios.
The CVAE network was trained with the following hyperparameters: a batch size of 64, a learning rate of 0.001, and 100 epochs. The future time horizon H was set to 16.
System Description
We modeled the dynamics of the vehicles on the track using a unicycle model. The state of the system at time t is described by x_t = [x_t, y_t, θ_t, v_t]^T, where x_t and y_t represent the vehicle's position, θ_t denotes the heading angle, and v_t indicates the linear velocity. The control inputs are u_t = [w_t, a_t]^T, with w_t as the angular velocity and a_t as the acceleration. The vehicle dynamics are expressed as follows:
$$\dot{x}_t = v_t \cos\theta_t, \qquad \dot{y}_t = v_t \sin\theta_t, \qquad \dot{\theta}_t = \kappa_1 w_t, \qquad \dot{v}_t = \kappa_2 a_t,$$
where κ_1 and κ_2 are constants. To facilitate the optimization process, we linearize the dynamics around a reference point x̂ = [x̂, ŷ, θ̂, v̂]^T. The resulting linear system is derived as a first-order Taylor approximation of the nonlinear dynamics, x_{n+1} = A_n x_n + B_n u_n + C_n, where the matrices A_n, B_n, and C_n are obtained from the Jacobians of the dynamics evaluated at the reference point.
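A sketch of this linearization step is given below (assuming the unicycle form above and a forward-Euler discretization; the gain values κ_1 = κ_2 = 1 and the reference point are placeholders, not values from the paper).

```python
import numpy as np

def linearize_unicycle(x_ref, u_ref, dt, k1=1.0, k2=1.0):
    """First-order Taylor approximation of the unicycle dynamics around (x_ref, u_ref),
    discretized with forward Euler: x_{n+1} ~= A x_n + B u_n + C."""
    _, _, th, v = x_ref
    w, a = u_ref
    f = np.array([v * np.cos(th), v * np.sin(th), k1 * w, k2 * a])  # continuous dynamics at the reference
    Jx = np.array([[0.0, 0.0, -v * np.sin(th), np.cos(th)],
                   [0.0, 0.0,  v * np.cos(th), np.sin(th)],
                   [0.0, 0.0,  0.0,            0.0],
                   [0.0, 0.0,  0.0,            0.0]])
    Ju = np.array([[0.0, 0.0],
                   [0.0, 0.0],
                   [k1,  0.0],
                   [0.0, k2 ]])
    A = np.eye(4) + dt * Jx
    B = dt * Ju
    C = dt * (f - Jx @ np.asarray(x_ref) - Ju @ np.asarray(u_ref))
    return A, B, C

A, B, C = linearize_unicycle(x_ref=[0.0, 0.0, 0.1, 10.0], u_ref=[0.0, 0.0], dt=0.1)
```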
Two of the rules concern the preceding vehicle: collision avoidance with the front vehicle, and slowing down before the front vehicle. In these formulations, t_a and t_b are set to 6 and 12, respectively. Figure 5 illustrates the driving environment used to describe these STL rules. Note that in this figure, the ego vehicle is depicted in blue, the preceding vehicle in orange, and the other vehicles in gray. The positions x_t and y_t and the velocity v_t correspond to the ego vehicle. The boundaries of the preceding vehicle in the x-y coordinates are denoted by x_{c,min}, x_{c,max}, y_{c,min}, and y_{c,max}. Similarly, x_{o,min}, x_{o,max}, y_{o,min}, and y_{o,max} represent the boundaries of the other vehicles except the preceding one. The lane boundaries are denoted by y_{l,min} and y_{l,max}, while the track boundaries are represented by y_{t,min} and y_{t,max}. Here, v_th represents the speed limit threshold for rule φ_4. The final rule, φ_5, mandates that the ego vehicle decelerate when approaching a preceding vehicle in the same lane. Parameters v_u, t_a, and t_b are specific to rule φ_5.
Simulation Results
Figure 6 presents the predicted robustness slackness r generated by the proposed CVAE network, alongside the control sequence produced by the MPC based on these predicted values. In the left subfigures indicating robustness slackness, negative degrees of satisfaction are marked with a red box.
In Figure 6a, the predicted robustness slackness suggests that rules φ_2 and φ_5 may be violated. It can be observed that the control sequence generated by the MPC results in the vehicle moving to the left lane (violating φ_2) and accelerating in the presence of a preceding vehicle (violating φ_5). Figure 7 demonstrates the application of the proposed method in the NGSIM road environment. The figure illustrates four different scenes, showing the predicted robustness slackness and the corresponding vehicle movements in each situation. For the lane-keeping rules φ_1 and φ_2, if the robustness slackness value is less than or equal to a specified threshold (indicated by 'threshold' in the figure), the ego vehicle attempts to change lanes. Conversely, if the robustness slackness value for φ_1 and φ_2 is greater than the threshold value, the proposed method may not initiate a lane change, depending on the specific situation (as illustrated in scene 4). Overall, the proposed method demonstrates the ability to drive efficiently, allowing the violation of some rules in certain situations, while maintaining safety in complex traffic conditions.
Collision experiments using the proposed approach were conducted across five test scenarios: two from the NGSIM dataset and three from the highD dataset. We compared the proposed method against five methods: LBMPC_STL [8], LSTM, TFN [39], DQN [40], and DQN(1/2), a variant of DQN described below. The LSTM method employs a naive LSTM encoder-decoder framework for imitation learning, while TFN utilizes a Transformer network for imitation learning. In the DQN method, the Q-network is modeled as a four-layer multi-layer perceptron with 12 discrete actions and receives the input features. The DQN model was trained until convergence was achieved (1,000,000 episodes). Table 1 presents the number of successful trials for each method. The two methods with the highest number of successes in each scenario are highlighted in bold. The results clearly demonstrate that the proposed approach outperforms the other methods in most test scenarios. DQN(1/2) refers to cases where half of the episodes (500,000 episodes) are used in the reinforcement learning stage. The average time steps for successful cases are shown in parentheses.
In the results presented in Table 1, the reinforcement learning techniques (specifically DQN) exhibit a longer average time step compared to the other methods due to the emphasis on stability in the design of the reward function. Additionally, there was no significant difference in average time steps (for successful cases) between the MPC techniques, including the proposed method, and the supervised learning techniques. Notably, the proposed method demonstrated a slightly shorter average time step compared to the other MPC technique, LBMPC_STL.
While the "average time step" cannot be an absolute criterion for evaluating the superiority of an algorithm's performance, when combined with the "collision rate", it indicates that the proposed method enables more stable and efficient autonomous driving compared to other methods.
Two key observations can be made from these results:
• Model Predictive Control (MPC) demonstrates superior safety performance compared to the reinforcement learning (DQN) and imitation learning approaches (LSTM, TFN).
• The deep learning approach employed in the proposed method yields better performance than the Gaussian process regression approach used in LBMPC_STL.
Conclusions
In this paper, we present a Model Predictive Control (MPC) method designed to manage dynamic systems while adhering to a set of Signal Temporal Logic (STL) rules. Unlike traditional approaches that enforce strict compliance with all rules, our method efficiently balances rule adherence by selectively disobeying certain rules to resolve dilemma situations where not all rules can be simultaneously satisfied.
The proposed method introduces the concept of robustness slackness, which represents the lower bound of the robustness degree, learned from expert demonstrations or data. By employing a Conditional Variational Autoencoder (CVAE) network, the controller adapts its behavior to prioritize different rules based on the context, emulating the decision-making processes of human experts.
Our contribution lies in the innovative approach of learning the satisfaction measure of rules using a deep-learning network, enabling robots to internalize and replicate the value systems of humans. This approach allows for more flexible and context-aware control, which is crucial for operating in complex and dynamic environments.
Figure 1. Overview of the proposed learning-based MPC framework. Expert demonstrations are utilized to learn the lower bounds of robustness, referred to as robustness slackness, through a deep learning approach. The learned values inform the MPC method, which then computes control sequences considering the STL rules.
Figure 2. Description of the ego vehicle and nearby vehicles in a track driving scenario. The ego vehicle (V_ego) is shown in blue. The diagram includes up to six nearby vehicles positioned in front and behind, across the left, center, and right lanes relative to the ego vehicle.
Figure 3. A demonstrated trajectory in a track driving scenario, depicting both the robustness degree and its lower bound for the time horizon H. The rule considered involves maintaining the first (lowest) lane, defined by the STL formula φ_lane = (y ≤ y_upper) ∧ (y ≥ y_lower), where y represents the vertical position of the vehicle, and y_upper and y_lower are the upper and lower lane boundaries, respectively. An obstacle (or other vehicle), depicted as a striped black box, necessitates a lane change to proceed. The figure illustrates the variance between the robustness degree values and their corresponding lower bounds.
Figure 4. The CVAE network used to predict the robustness slackness.
Figure 5. Driving environment illustrating the defined STL rules φ.
(a) Predicted robustness slackness indicates a move to the left lane. (b) Predicted robustness slackness indicates a move to the right lane.
Figure 6. Snapshots of the proposed method applied to the NGSIM dataset.
Figure 7. Illustration of the proposed method's performance in NGSIM road environments. The figure depicts four different scenes, showing the predicted robustness slackness and the corresponding vehicle movements for each situation.
Table 1. Number of successful trials in collision experiments.
"Computer Science",
"Engineering"
] |
Diffusiophoresis of a Weakly Charged Liquid Metal Droplet
Diffusiophoresis of a weakly charged liquid metal droplet (LMD) is investigated theoretically, motivated by its potential application in drug delivery. A general analytical formula valid for weakly charged condition is adopted to explore the droplet phoretic behavior. We determined that a liquid metal droplet, which is a special category of the conducting droplet in general, always moves up along the chemical gradient in sole chemiphoresis, contrary to a dielectric droplet where the droplet tends to move down the chemical gradient most of the time. This suggests a therapeutic nanomedicine such as a gallium LMD is inherently superior to a corresponding dielectric liposome droplet in drug delivery in terms of self-guiding to its desired destination. The droplet moving direction can still be manipulated via the polarity dependence; however, there should be an induced diffusion potential present in the electrolyte solution under consideration, which spontaneously generates an extra electrophoresis component. Moreover, the smaller the conducting liquid metal droplet is, the faster it moves in general, which means a smaller LMD nanomedicine is preferred. These findings demonstrate the superior features of an LMD nanomedicine in drug delivery.
Introduction
Diffusiophoresis refers to the spontaneous motion of a colloidal particle in response to a concentration gradient in a solution, especially in an electrolyte solution, as the particle motion there is much faster than in a non-electrolyte solution in general [1][2][3]. Due to the lack of undesirable Joule heating effect, it has been extensively used in drug delivery in particular, as concentration/chemical gradients are abundant in the human body and thus can drive the drug-carrying nanomedicines [4,5]. Among various nanomedicines used in drug delivery, the liposome has been the most popular one, which is a dielectric droplet enclosed by a bi-layer lipid membrane with therapeutic medicals dissolved in the interior fluid. Additionally, in microfluidic and nanofluidic operations [6], diffusiophoresis has been proposed to serve as an alternative driving force replacing the conventional electrophoresis in some specific applications [7][8][9][10][11][12][13][14][15][16]. Furthermore, diffusiophoresis of dielectric droplets has also been used in enhanced oil recovery (EOR) in the petroleum industry to increase the crude oil extraction rate by 20% to 40% [17]. Indeed, diffusiophoresis, especially when it is coupled with liquid droplets, provides a very versatile platform in various practical applications in colloid technology.
In addition to the above-mentioned dielectric droplet [18][19][20][21][22], however, there is a very important separate category of fluid droplets that has received vast research interest recently: the conducting liquid metal droplets (LMDs). Liquid metals can be considered as the softest materials with very high electrical conductivities. Hence, they are often regarded as perfectly conducting [23]. Liquid metals assume a liquid state at or near room temperature. Among them, mercury has been the most well-known liquid metal in human history. However, due to its high volatility and toxicity, it is not suitable to be used in the human body [24]. Other liquid metals, especially gallium (Ga) and its alloys, have very low vapor pressure and toxicity. Gallium has a melting point of about 30 degrees Celsius. Gallium-based alloys such as EGaIn (75% Ga and 25% In) have a melting point of about 15 degrees Celsius. The alloy galinstan (68.5% gallium, 21.5% indium and 10% tin) has a melting point as low as −19 degrees Celsius. A low melting point coupled with low toxicity and low vapor pressure makes these gallium or gallium-based LMDs excellent candidates in drug delivery applications in the human body [25]. This is reinforced by the unique therapeutic effect of the Ga3+ ions in particular, which can be released by the LMDs containing gallium [26]. Efficient transportation of these LMDs in drug delivery is a challenge to be reckoned with in order to enhance the overall performance of these emerging promising nanomedicines.
A moving liquid metal droplet used to be treated as a rigid solid particle. However, many fundamental phenomena observed in practice are inconsistent with this framework [27]. Solid experimental evidence has been demonstrated for an LMD in various phoretic motions under the influence of an imposed electric field that moves with characteristic fluidic features [27]. As a result, an in-depth investigation of the diffusiophoresis of a conducting LMD is crucial in its various practical applications, especially in drug delivery, where diffusiophoresis has been determined to provide a self-guiding electrokinetic environment to convey the nanomedicines toward the intended locations spontaneously like a cruise missile. This is because specific chemicals are often released from the injured or infected areas needing therapy in the human body, such as the calcium ions in bone crack [4]. Moreover, the concentration of therapeutics in the immediate neighborhood of these areas can be significantly increased as the concentration gradient guiding the motion of nanomedicines is strongest there, which is highly desirable in order to maximize the in vivo efficacy of therapy [28].
Compared with the conventional dielectric droplets such as liposomes, the surface of a conducting LMD is equipotential due to its extremely high conductivity, which means it cannot experience an electric driving force in the tangent direction of the droplet surface [29]. This implies that there is no motion-deterring electric Maxwell traction surrounding the conducting droplet [23]. Moreover, there is no surface tension-induced Marangoni effect for a conducting droplet such as an LMD considered here either, as there is no field gradient tangent to the droplet surface of any kind to induce it [27,30]. This has been experimentally demonstrated as well in a recent research paper regarding the electrophoretic motion of an LMD, among other phoretic motions [27]. Hence, there is no need to consider the impact of interfacial tension in the analysis of LMD phoretic motions. The phoretic motions of a conducting droplet are driven purely by the viscous hydrodynamic forces upon the droplet surface from both the exterior and the interior fluid flows.
There has been a surge of research efforts in the literature dedicated to diffusiophoresis in the last decade. Among them, Shin and his coworkers reported many excellent experimental explorations with valuable outcomes [10-12,17,28,31]. Lee and his coworkers, on the other hand, launched a series of theoretical studies on diffusiophoretic motions of dielectric droplets as well as soft particles recently, motivated by the possible applications in drug delivery in particular [32][33][34][35][36]. Very recently, the diffusiophoretic motion of a highly charged conducting droplet was investigated by Lee and his coworkers as well, focusing on the chemiphoresis component in particular, where the droplet motion is induced solely by the solute concentration gradient [30]. Many interesting features were detected there. In particular, it was discovered that a conducting droplet is superior to a dielectric droplet as a nanomedicine in terms of self-guiding itself to the intended area via diffusiophoresis. Here, we conduct a theoretical study on the diffusiophoresis of a conducting liquid metal droplet based on a versatile general analytical formula obtained under the Debye-Hückel approximation by Lee and his coworkers recently [37]. We focus on the electrophoresis component in particular here in this study, which results from the internal electric field generated by the induced diffusion potential in the bulk electrolyte solution when the diffusivities of cations and anions are different [38,39]. This provides crucial information on the polarity dependence of the droplet motion, as the ion species are abundant in the human body and diffusion potential is known to be present in practice. The preferred polarity of the droplet surface as discussed above thus can be ensured in the fabrication stage, which can often be achieved via surface modifications. The charge condition of a conducting droplet surface is further assumed to be of constant surface charge density; in other words, it remains invariant with any varying electrokinetic environment, such as the electrolyte strength. This is also referred to as ideally polarizable, a condition generally adopted in the theoretical treatment of conducting droplets [23,40]. Note that this is not contradictory to the equipotential characteristic of a conducting droplet mentioned earlier, which simply means the electric potential is uniform on the surface of a conducting droplet at any instant. Certainly, it can be both equipotential and of constant surface charge density; they are two different but compatible concepts. As for the diffusiophoresis of a conducting droplet in general, Lee and his coworkers obtained an analytical formula under the Debye-Hückel approximation valid for a weakly charged dielectric droplet recently [41]. This formula is applicable to a conducting droplet situation as well, whose diffusiophoretic behavior in general has never been investigated before. Later on, a similar approximate analytical approach was reported by Ohshima as well, using a conducting mercury droplet as a specific demonstrating example [42,43]. The predictions there are essentially identical to those using the formula derived by Lee and his coworkers [41], as we shall present.
As mentioned above, we consider the weakly charged liquid metal droplet with constant surface charge density here to be consistent with the perfectly polarizable assumption normally adopted in theoretical treatments of highly conducting liquid droplet [23]. Electrostatically speaking, this means the ions in the suspending electrolyte solution are all "indifferent ions" which can suppress the electric double layer due to their presence in the outer diffuse layer, the so-called double layer suppression effect, but cannot penetrate through the inner compact Stern layer and reach the droplet surface and change the droplet surface charges accordingly [44]. Note that for a droplet with constant surface potential, which is rarely present in reality, the droplet surface charges keep on changing with varying κa values, which makes it difficult to analyze the electrokinetic response of the droplet to the varying electrostatic environment, indicated by the value of κa. This has been demonstrated and discussed in detail in corresponding analyses on the diffusiophoresis of a dielectric droplet, both with or without the presence of the induced diffusion potential recently by Lee and his coworkers [33,35].
The results provided here shed light on the fundamental electrokinetic behavior of a conducting LMD in drug delivery, especially its intrinsic difference from the conventional dielectric liposome droplets. This is crucial to the nanomedicine development in particular. Moreover, the portable nature of the analytical formula here provides highly desirable information on any specific systems of interest, which facilitates the possible need for researchers and practitioners in drug delivery involving nanomedicines of LMDs.
Theory
As shown in Figure 1, the diffusiophoretic motion of a weakly charged conducting liquid metal droplet (LMD) is considered in this study. For simplicity, symmetric binary electrolyte solution systems are further assumed. Extension to non-symmetric binary electrolyte solutions is straightforward. The droplet surface is mobile, ion-impenetrable, and remains spherical without deformation while in the diffusiophoretic motion, justified by the negligible hydrodynamic Weber number [33]. Moreover, as mentioned earlier, the droplet surface is assumed to remain equipotential as well, which is justified for liquid metal droplets due to their extremely high conductivities. All the charges would be uniformly distributed on the droplet surface as well, with no charges present in the interior, as elaborated in the Introduction section earlier.
A concentration gradient ∇C is applied along the Z-direction, which drives the droplet in motion along the same direction as a consequence. Spherical coordinates (r, θ, ϕ) are adopted with the origin located at the center of the moving droplet.
The governing equations consist of the Poisson equation governing the electric potential, Equation (1), the Nernst-Planck equation governing the ion migrations, Equation (2), the corresponding conservation equation of ion flux as a constraint on the possible mechanisms of ion migration, Equation (3), and the momentum equations governing the flow fields for the fluids both exterior and interior to the droplet, Equation (4a) and Equation (4b), respectively, which are subject to the incompressibility constraints, Equation (5a) and Equation (5b), respectively [44]:
$$\nabla^2 \varphi = -\frac{\rho}{\varepsilon_m}, \qquad (1)$$
$$\mathbf{f}_j = -D_j\left(\nabla n_j + \frac{z_j e\, n_j}{k_B T}\,\nabla \varphi\right) + n_j \mathbf{v}, \qquad (2)$$
$$\nabla \cdot \mathbf{f}_j = 0, \qquad (3)$$
$$\eta_m \nabla^2 \mathbf{v} - \nabla P - \rho \nabla \varphi = 0, \quad (4a) \qquad \eta_D \nabla^2 \mathbf{v}_I - \nabla P_I = 0, \quad (4b)$$
$$\nabla \cdot \mathbf{v} = 0, \quad (5a) \qquad \nabla \cdot \mathbf{v}_I = 0. \quad (5b)$$
Note that Equation (1) is the Gauss divergence theorem in electrostatics, where φ is the electric potential and ρ the space charge density, with ρ = Σ_{j=1}^{2} z_j e n_j for a binary electrolyte solution and n_j denoting the number density of ionic species j. Moreover, ε_m is the electric permittivity of the ambient electrolyte solution. The first term in Equation (2) stands for the diffusion mechanism of the ion migration, the second term is the electro-migration due to the electrostatic Coulomb force, and the third term is the convective migration, with f_j denoting the ion flux of ionic species j. Equations (4a,b) are the hydrodynamic Stokes equations for the fluid velocity of the exterior electrolyte solution, v, and the velocity of the interior liquid metal, v_I, valid for the creeping flow regime, with P as the hydrodynamic pressure. The suffix I indicates physical variables in the interior of the LMD in general. An extra electric body force term, −ρ∇φ, appears in Equation (4a) due to the presence of ions in the exterior electrolyte solution. Additionally, η_m and η_D are the viscosities of the ambient electrolyte solution and the interior liquid metal, respectively. The details of the definitions of the symbols can be found in the List of Symbols.
For a weakly charged droplet, Equation (1) can be replaced by a linear Helmholtz equation. Following the approach by O'Brien and White [45], a linear perturbation analysis is further adopted, assuming that the system is only slightly perturbed. The problem is then converted to a one-dimensional dimensionless form in terms of the dimensionless radial distance r*; the resultant governing equations, the associated differential operators, and the mathematical details can be found elsewhere [46]. Here, φ is the electric potential, a is the droplet radius, and κa is the dimensionless reciprocal of the Debye length indicating the thickness of the double layer: the larger the value of κa, the thinner the electric double layer surrounding the droplet. Moreover, ψ is the stream function representing the flow field of an axisymmetric system, whose precise mathematical definition can be found in the List of Symbols. The suffix E refers to the equilibrium state, and the superscript * refers to dimensionless quantities. δ refers to the perturbation amount of the variable after it. The definitions of the rest of the symbols are contained in the List of Symbols.
where the operators are defined as below: where φ is the electric potential, a is the droplet radius, κa is the dimensionless reciprocal of Debye length indicating the thickness of the double layer. The larger the value of κa, the thinner the electric double layer surrounding the droplet. Moreover, ψ is the stream function representing the flow field of an axisymmetric system whose precise mathematical definition can be found in the List of Symbols. Suffix E refers to the equilibrium state, and the superscript * refers to the dimensionless quantities. δ refers to the perturbation amount of the variables after it. The definitions of the rest of the symbols are contained in the List of Symbols. Following similar mathematical treatments adopted elsewhere [42], it can be shown that the dimensionless diffusiophoretic mobility for a conducting droplet in general is as follows: and β is a dimensionless index defined as D 1 −D 2 D 1 +D 2 for a symmetric electrolyte solution, where D 1 is the diffusivity of cations and D 2 anions, which indicates the magnitude of the diffusion potential in the bulk electrolyte solution. β = 0 indicates the absence of the diffusion potential, such that only the chemiphoresis component is present. The ratio of the droplet viscosity to the exterior electrolyte solution is denoted as η fr . The detailed definitions of the rest of the symbols can be found in the List of Symbols or elsewhere [32,41]. In short, suffix E stands for the contribution from the electrophoresis component by the induced diffusion potential. The suffix D, on the other hand, indicates the contribution from the chemiphoresis component instead. Additionally, suffix HS stands for the contribution of the particle under consideration as a "hard (rigid) sphere", whereas suffix DL indicates the extra contribution from a liquid droplet with a mobile surface.
Note that although the above analytical formula is of a closed form, it is written in terms of the surface/zeta potential, ζ*, instead. Based on the two-dimensional Gauss divergence theorem at the droplet surface, however, it can be shown that a simple analytical relationship, Equation (21), exists between the dimensionless surface charge density and the surface potential for the weakly charged conducting liquid metal droplet considered here.
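As a quick numerical illustration of the β index defined above (the diffusivity values below are standard infinite-dilution literature values at 25 °C, not taken from this paper):

```python
# Infinite-dilution ionic diffusivities at 25 C (CRC handbook values), in 1e-9 m^2/s.
D_Na, D_Cl = 1.33, 2.03   # Na+ and Cl-
D_K = 1.96                # K+

beta_NaCl = (D_Na - D_Cl) / (D_Na + D_Cl)
beta_KCl = (D_K - D_Cl) / (D_K + D_Cl)

print(f"beta(NaCl) = {beta_NaCl:.3f}")   # about -0.208: an induced diffusion potential is present
print(f"beta(KCl)  = {beta_KCl:.3f}")    # nearly 0: essentially chemiphoresis only
```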
Results and Discussion
We first verify the accuracy of the computation results obtained here based on the general analytical formula [41]. As the analytical formula adopted is derived under the Debye-Hückel approximation, which is valid for weakly charged droplets only, we compare its predictions with the numerical results in the literature obtained for an arbitrarily charged conducting droplet in chemiphoresis first to verify its accuracy [30]. As shown in Figure 2, the agreement is excellent for a conducting liquid metal droplet with the surface charge density set to 0.2, which corresponds to φ_r = 0.1 at κa = 1, in an electrolyte solution of KCl (β = 0) with no involvement of the diffusion potential. In other words, only the chemiphoresis component is present, where the motion of ions is driven solely by the concentration gradient. Additionally, we have also compared with the results reported very recently by Ohshima [42] for a weakly charged mercury drop in a KCl solution in particular. Excellent agreement is again observed, which is not shown here for brevity. Note that the droplet system considered there was assumed to be of constant surface potential instead; as a result, an analytical conversion between surface potential and surface charge density was conducted with Equation (21) in the Theory section. Further comparison with the classic results provided by Booth [47] was also made with excellent agreement, which is not shown here for brevity. We thus conclude that the computation results in this study are accurate and reliable, and go on to explore the diffusiophoretic behavior of a weakly charged conducting liquid metal droplet based on them.
In general, the unit surface potential, φ_r = 1, is regarded as a benchmark weakly charged condition of practical interest in the electrokinetics community. We thus convert the surface charge condition to a constant surface charge density via Equation (21) with κa = 1 and φ_r = 1, which yields 2.03 here. This serves as the electrostatic condition of the benchmark weakly charged conducting droplet under investigation here.
We explore the chemiphoresis component first, where the droplet motion is induced solely by the concentration gradient of ions in the absence of the diffusion potential. In other words, β = 0, such as in the KCl solution, where the diffusivities of the cations and anions are nearly identical and are often treated as approximately equal [45].
The electric driving force and the retarding hydrodynamic drag force are the two decisive factors that determine the ultimate droplet motion. The electric driving force is directly proportional to the local electric field surrounding the droplet surface and is closely related to the electrostatic environment there, characterized here by the electrolyte strength. The hydrodynamic drag force, on the other hand, is contingent upon the droplet viscosity following the Stokes law. As a result, it is natural to consider the droplet mobility as a function of the electrolyte strength in the ambient solution for droplets of various viscosities, as shown in Figure 3 for the benchmark weakly charged conducting liquid metal droplets, focusing on chemiphoresis (β = 0), where κ is the electrolyte strength and a is the droplet radius. Note that κa is a measure of the double layer thickness: the larger the value of κa, the thinner the double layer surrounding the droplet, as the strong electrolyte strength in the bulk solution suppresses the double layer, which is often referred to as the double layer suppression effect [44]. The bell shape of the mobility profiles in Figure 3 results from the fact that the very origin of chemiphoresis is double layer polarization, which manifests itself at κa around unity; local maxima are thus observed. At large κa, the surface charges are screened/sheltered by the large amount of counterions adjacent to the droplet surface, which leads to a significant reduction in the effective surface charges. Eventually, the droplet ceases to move, as a nearly chargeless condition is reached at very large κa.
Eventually, the droplet ceases to move, as nearly chargeless condition is reached at very large κa. It is interesting to note that a conducting liquid metal droplet always moves up the chemical/concentration gradient with positive mobility toward the region of higher concentration of ions, regardless of the droplet viscosity ratios and electrostatic environment, indicated by κa. This is contrary to the corresponding behavior of a dielectric droplet, where negative mobilities are observed most of the time, except at small values of κa, in other words, either weak electrolyte strength or small droplet radius. This implies that a dielectric droplet tends to move down the chemical/concentration gradient of ions to the region with fewer ions [32]. The underlying electrokinetic mechanism is apparently the presence or absence of the electric Maxwell stress, the electrokinetic distinction between a dielectric droplet and an LMD conducting droplet. This fundamental difference in droplet moving directions is critically important in some practical applications of droplets such as in drug delivery. For instance, a nanomedicine made of a conducting droplet (a liquid metal droplet) is superior to a dielectric droplet (a liposome) in terms of self-guiding itself to the desired destination of the injured or infected region in the human body, which often releases specific chemicals to form a strong chemical gradient in its immediate neighborhood [4]. The electrokinetic reason for this tendency of droplets moving up the chemical gradient is very simple: without the help of the Maxwell traction down the chemical/concentration gradient of ions provided by the non-vanishing Maxwell stress tensor upon the droplet surface, the hydrodynamic drag force alone by the downward exterior chemiosmosis flow is simply no match to the intrinsic upward droplet motion induced electrostatically by its surface charges [32]. Moreover, the less viscous a conducting liquid metal droplet is, the faster it moves according to Figure 3, which is contrary to the situation of a dielectric droplet as well, where the viscosity dependence can be reversed under certain electrostatic circumstances due to the dominant tangential electric force upon the droplet surface [32]. For the conducting liquid metal droplet considered here, the stresses exerted upon the droplet surface in the tangential direction are of purely viscous nature as elaborated in the Theory section. Hence, the viscosity dependence of the droplet can be deduced purely from the hydrodynamic consideration, which is the following: The less viscous the droplet is, the faster it moves. Moreover, the smaller a conducting liquid metal droplet is, the faster it moves in general, as shown in Figure 2. This is because the retarding hydrodynamic drag force increases with increasing droplet size in general, according to the famous Stokes law [48].
On the other hand, the droplet moving direction in chemiphoresis is independent of the sign of the charges carried by the droplet: a positively charged droplet moves in the same direction as a negatively charged one. In other words, the direction is an intrinsic property of the specific droplet under consideration, and there is no way to change it in chemiphoresis. Fortunately, in addition to the chemiphoresis component, there is another fundamental element in diffusiophoresis: the electrophoresis component. The electrophoresis component sets the droplet in motion via the electric field generated by the induced diffusion potential in the bulk solution. As mentioned in the Introduction, the diffusion potential is an electric potential generated spontaneously in an electrolyte solution in which the diffusivities of cations and anions differ [39,49]. This induced potential speeds up the slower ions and slows down the faster ions via the Coulomb electrostatic force, so that all the ions eventually migrate at the same speed, a condition that is established essentially instantaneously in practice [39]. In this way, the electro-neutrality of the system is maintained.
The effect of this induced diffusion potential is exactly like that of the externally applied electric field in conventional electrophoresis, except that it is simultaneously coupled with the chemiphoresis component in diffusiophoresis. Its greatest merit is that it depends on the sign of the charges carried by the droplet, i.e., its polarity. The impact of the electrophoresis component on a positively charged droplet is significantly different from that on a negatively charged droplet, and very often drives the two to move in opposite directions. This offers considerable potential for manipulating droplets by altering their surface charge conditions, both in the fabrication stage and in the operational stage.
For instance, the specific electrolyte environment has a profound impact on the surface charge condition of a liquid metal droplet of gallium (Ga), a prominent nanomedicine in the treatment of some serious diseases [26]. The polarity of the gallium LMD has been determined to be pH-dependent in HCl solution, ranging continuously from negatively charged to positively charged [50]. Moreover, it has also been reported as negatively charged in some NaOH solutions [51]. This provides a way to design the polarity of a gallium droplet in its fabrication stage, for instance by exposing it to an HCl or NaOH solution. Note that whether an electrophoresis component is present at all depends on the specific electrolyte solution under consideration: there must be an induced diffusion potential in the electrolyte solution to begin with, as in the NaCl solution, for the entire strategy of droplet manipulation to work. It is not something that can be added to the system at will.
To further investigate the effect of the electrophoresis component in droplet diffusiophoresis, one has to explore the droplet motion in an electrolyte solution where an induced diffusion potential is present, as shown in Figure 4, where the corresponding droplet mobility profiles in an NaCl solution are presented. The choice of NaCl is based on the fact that Na+ and Cl− are two major ions in the human body as well as in seawater; indeed, it has served as a benchmark system in the exploration of the electrophoresis component. The magnitude and direction of this diffusion potential in a symmetric electrolyte solution are indicated by a dimensionless index β = (D1 − D2)/(D1 + D2), where D1 is the diffusivity of the cations and D2 is that of the anions. For the NaCl solution, β is equal to −0.208, which indicates that an induced electric field in the same direction as the chemical gradient is generated to speed up the slower cations and slow down the faster anions simultaneously. As a result, this negative electric field drives a positively charged conducting droplet down the chemical gradient instead, contrary to what happens in chemiphoresis shown in Figure 2. The electrophoresis component turns out to be dominant in determining the ultimate droplet motion here, and it completely reverses the upward droplet motion of chemiphoresis. The larger the value of κa, the slower the droplet moves in general, due to the upward moving tendency contributed by the coupled chemiphoresis component, as shown in Figure 2. The overall negative mobilities translate into an undesirable droplet moving direction in drug delivery, as discussed earlier: the highly desirable self-guiding nature of diffusiophoresis would be completely lost. Hence, a positively charged conducting droplet should be avoided in practical applications such as an LMD in drug delivery.
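To make the sign convention concrete, a minimal Python sketch computing β for the three electrolytes discussed in this work, using textbook infinite-dilution ionic diffusivities (the numerical values are assumptions quoted only to reproduce the β indices in the text):

```python
# Illustrative infinite-dilution diffusivities of aqueous ions at 25 C,
# in units of 1e-9 m^2/s (textbook values, assumed for this sketch).
D = {"K+": 1.96, "Cl-": 2.03, "Na+": 1.33, "H+": 9.31, "HCO3-": 1.18}

def beta(cation, anion):
    """beta = (D_cation - D_anion) / (D_cation + D_anion)."""
    return (D[cation] - D[anion]) / (D[cation] + D[anion])

print(f"KCl   : beta = {beta('K+', 'Cl-'):+.3f}")    # ~ -0.02 (nearly zero)
print(f"NaCl  : beta = {beta('Na+', 'Cl-'):+.3f}")   # ~ -0.208
print(f"H2CO3 : beta = {beta('H+', 'HCO3-'):+.3f}")  # ~ +0.77
```

The near-zero value for KCl, the negative value for NaCl, and the large positive value for H2CO3 match the three regimes analyzed in Figures 3 through 7.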
As mentioned above, the polarity dependence of the droplet mobility is present in an NaCl solution. Here, we proceed to explore the motion of a negatively charged droplet instead. The interaction between the electrophoresis and chemiphoresis components is complicated, which leads to a correspondingly complicated polarity dependence of the droplet motion. As shown in Figure 5, where the corresponding mobility profiles for a negatively charged conducting LMD are depicted, the desired positive mobility up the chemical gradient is observed throughout the κa range investigated. The scenario remains the same regardless of the viscosity ratio. In other words, a negatively charged conducting LMD should always be chosen in drug delivery if the self-guiding nature of diffusiophoresis is of major concern, for instance the conducting LMD of gallium in the form of Ga(OH)4− [27]. As both the electrophoresis and the chemiphoresis components tend to drive the droplet up the chemical gradient, the resulting droplet mobility is much larger than in the corresponding cases in Figure 3, where only chemiphoresis is present. The dominance of the electrophoresis component is very clear. This indicates that the involvement of the electrophoresis component is highly desirable in terms of enhancing the migration speed of the LMD nanomedicine, and hence its overall therapeutic performance. Moreover, the corresponding droplet mobility profiles in an H2CO3 electrolyte solution are also examined, as the H2CO3 solution tends to yield a very large positive diffusion potential (β = 0.774) and has hence been proposed as the driving force in the purification of water [10,12].
The positive β index implies that an electric field directed up the chemical gradient will be induced to slow down the faster cations and speed up the slower anions in their migration down the chemical gradient. As shown in Figure 6, positive mobilities are observed across the entire range of κa examined for a positively charged conducting LMD, as expected, since both components yield positive mobilities in this case. For completeness, the corresponding mobility profiles for a negatively charged droplet in an H2CO3 electrolyte solution are shown in Figure 7 as well. The situation is somewhat more complicated here due to the competition between the downward moving tendency imposed by the electrophoresis component and the upward moving tendency of the chemiphoresis component. Overall, however, the droplet still moves downward with negative mobility, as the electrophoresis component induced by the large diffusion potential easily dominates in this case.
Conclusions
Diffusiophoresis of a weakly charged conducting liquid metal droplet (LMD) is investigated theoretically based on an analytical formula derived under the Debye-Hückel approximation. The findings are summarized as follows.
(1) Similar to a highly charged conducting droplet in general, a weakly charged conducting LMD always moves up the chemical gradient to the region of higher solute concentration in chemiphoresis, contrary to a dielectric droplet, which tends to move down the chemical gradient most of the time. The presence or absence of the motion-deterring electric Maxwell traction down the chemical gradient is responsible for this fundamental difference in droplet moving direction. In particular, this means a conducting LMD is inherently superior to a dielectric droplet in drug delivery if self-guiding of the droplet toward injured or infected locations in the human body by diffusiophoresis is of major concern, as these locations often release specific chemicals into their neighborhood. Moreover, the smaller an LMD is, the faster it moves, so a smaller LMD is preferable in drug delivery, as its migration speed is enhanced this way.
(2) The sign of the charges carried by a conducting LMD is crucial in determining its ultimate moving direction in the presence of the diffusion potential, via the electrophoresis component. A positively charged LMD tends to move in the opposite direction to a negatively charged one. This provides the maneuverability needed for the highly desirable self-guiding merit of diffusiophoresis, made possible by an appropriate choice of the electrolyte solution and of the polarity of the droplet surface, if such a choice is allowed or possible.
The findings presented here provide crucial information on the diffusiophoretic behavior of a weakly charged conducting LMD, especially the electrophoresis component, which may be applicable to a highly charged LMD as well in terms of polarity dependence; further investigation is necessary, though. The results provide guidelines for the fabrication stage of a conducting LMD nanomedicine, in particular in terms of optimizing its overall therapeutic performance.
Nomenclature
Pe_j  Peclet number of ionic species j (Pe_j = U_0 a/D_j), representing the reciprocal of its diffusivity in dimensionless form
p  pressure (N/m^2)
r  r-coordinate in spherical coordinates (r, θ, ϕ)
r*  dimensionless r-coordinate, defined as r/a
U  diffusiophoretic velocity of the droplet under consideration
κ  reciprocal double layer thickness, with κ^2 = Σ_j n_j0 (e z_j)^2/(ε k_B T)
µ  diffusiophoretic mobility of the particle, defined as µ = U/∇C
µ*  dimensionless diffusiophoretic mobility of the particle
ρ  space charge density (C/m^3)
ρ_fix  uniform charge density in the outer porous layer of the soft particle (C/m^3)
σ  surface charge density (C/m^2)
σ*  dimensionless surface charge density, defined as σa/(ε_m φ_0)
Φ  one-dimensional version of the electric potential distribution (V)
ϕ  ϕ-coordinate in spherical coordinates (r, θ, ϕ)
Ψ*  one-dimensional version of the stream function
ψ  stream function
φ  electric potential (V)
φ_0  thermal potential in a binary electrolyte solution (φ_0 = kT/z_1 e)
ζ*  dimensionless surface potential of the particle
Operators: A_n(x) and C are auxiliary integral operators built from the exponential integral functions E_n(κa); E^2 is the Stokes stream function operator, E^2 = ∂^2/∂r^2 + (sin θ/r^2) ∂/∂θ [(1/sin θ) ∂/∂θ].
| 9,107 | 2023-05-01T00:00:00.000 | [ "Physics" ] |
Decomposability and Convex Structure of Thermal Processes
We present an example of a Thermal Process for a system of $d$ energy levels, which cannot be performed without an instant access to the whole energy space. This Thermal Process is uniquely connected with a transition between some states of the system that cannot be performed without access to the whole energy space, even when approximate transitions are allowed. Pursuing the question about the decomposability of Thermal Processes into convex combinations of compositions of processes acting non-trivially on smaller subspaces, we investigate transitions within the subspace of states diagonal in the energy basis. For three level systems, we determine the set of extremal points of these operations, as well as the minimal set of operations needed to perform an arbitrary Thermal Process, and connect the set of Thermal Processes with the thermomajorization criterion. We show that the structure of the set depends on temperature, which is associated with the fact that Thermal Processes cannot increase deterministically extractable work from a state -- a conclusion that holds for an arbitrary $d$ level system. We also connect the decomposability problem with the detailed balance symmetry of extremal Thermal Processes.
I. INTRODUCTION
One of the aims of quantum thermodynamics is to provide a description of quantum systems interacting with an environment that enables an assessment of their usefulness for tasks such as work extraction. Therefore, a question about possible transitions between quantum states, and their energy cost, lies at the center of interest. This question can be posed at a general, model-independent level, when we neglect the precise structure of the system-environment interactions in favor of more general assumptions imposed on them (e.g., energy conservation), and aim at obtaining bounds imposed by quantum mechanics on the performance of quantum systems under these restrictions.
One of these generalized approaches can be expressed in the language of the resource theory of Thermal Operations [1] (see also [2]), where, apart from the assumption about the commutation of system-environment interactions with local Hamiltonians, we allow for the free addition and erasure of environment states in equilibrium. When restricted to transitions between states diagonal in the basis of a local Hamiltonian, the allowed transformations are described by Thermal Processes: left stochastic matrices that preserve a Gibbs state. They act on vectors storing the eigenvalues of the states. The thermomajorization criterion [1,3,4] answers the question of which states can be achieved from a given initial state under these assumptions and with a defined amount of work.
The thermodynamical description of quantum diagonal states within the resource theory of Thermal Processes has appealing simplicity. However, a priori, the implementation of Thermal Processes requires access to an entire environment. Therefore, apart from unitarity and energy conservation, the only thermodynamically motivated restriction is that the state of the environment is a Gibbs one. Such an approach is clearly suitable for deriving ultimate bounds; however, it is questionable whether it can be called thermodynamics, since the latter not only poses limitations on the efficiencies of heat engines, but also allows these limits to be achieved (at least in theory) with coarse-grained operations that refer only to several relevant macroscopic parameters, such as temperature or pressure.
Nevertheless, quite recently it was shown that the resource theory of Thermal Processes is indeed thermodynamics in the latter sense. Namely, in [5] it was proved that, for diagonal states, all transitions allowed by Thermal Processes can be obtained with microscopic access to only a single qubit of the heat bath. The rest of the bath serves only for simple partial thermalization processes, which require just weak coupling between the bath and the system [6]. This is combined with changes of the Hamiltonian of the system. Thus, while the system (and the single additional qubit of the bath) has to be manipulated microscopically, the heat bath is treated just as in traditional thermodynamics. The proposed class of operations (called "coarse operations" in [5]), while fundamentally simple, may still not be optimal in practice. In particular, some processes on a single-qubit system require quite a nontrivial sequence of manipulations on two qubits.
In contrast, in [7] M. Lostaglio proposes a straightforward implementation of qubit Thermal Processes by considering the coupling of a system to a bath via a Jaynes-Cummings interaction, and poses the question to what extent qubit Thermal Processes can be universal, i.e., whether a Thermal Process (TP) on a higher-dimensional system can be decomposed into convex combinations of sequences of TPs, where each TP acts non-trivially only on a selected pair of energy levels of the system. This leads to a fundamental problem of specifying some basic TPs such that: (i) they can be easily implemented physically, and (ii) all transitions allowed by the resource theory of TPs can be obtained from these basic bricks. However, the considerations in [7] turn out to be based on the assumption of the reversibility of the so-called embedding map [8]. This assumption does not hold in general, unless the domain of the map is restricted. Therefore, the question about the decomposability of TPs remained open. This assumption was dropped in the recent version of the paper [9], published in parallel with this manuscript, where the decomposability of TPs into two-level TPs was characterized with the use of different methods.
In this paper we consider two ways of obtaining all transitions from the basic ones: through compositions of TPs and through convex mixing (possibly interlaced). Our main result is that there is no upper bound on the dimension of the basic bricks, i.e., for a system with d energy levels, there must be a basic operation that involves all d levels. This holds even for approximate transformations, when we allow the output state to differ from the goal state by up to some small value in statistical distance. Note that this result is not in contradiction with [5], where thermalizations involve only two levels at a time, because there (unlike here) one is also allowed to change the Hamiltonian of the system. The no-go example for composing TPs out of sequences of TPs acting actively on lower-dimensional subspaces leads to a question about the transitions allowed under operations restricted in this way: What states can be achieved from a given state diagonal in the basis of the Hamiltonian of a d-level system, if the allowed operations can be composed as mixtures of products of Thermal Operations, each acting actively on fewer than d levels of the system? The second part of this paper is a step toward answering this problem by exploring the structure of the set of TPs through calculating and describing properties of the extremal points of TPs of 3-level systems. It enables us to identify all the basic TPs that allow one to obtain an arbitrary TP by compositions and mixtures for a three-level system.
When it comes to answering the above general question, the structure of d = 3 TPs suggests properties that, if proved general, may be crucial for determining the geometry of the set of TPs for arbitrary d, and for identifying the transitions allowed under the above-mentioned restrictions. Namely, for three-level systems, one can determine all extremal TPs using a simple geometrical construction. Furthermore, the geometry of the set of TPs for three-level systems changes at a single threshold temperature, where some of the extremal TPs cease to exist. We prove that this property is closely related to the prohibition of increasing the deterministically extractable work from the system under TPs, and we provide formulas determining the values of threshold temperatures for arbitrary d-level systems. Finally, we show that the structure of the set of extremal points of TPs might be highly simplified by the symmetry associated with the detailed-balance condition. Namely, we conjecture that every TP that is not self-dual with respect to this symmetry and that is not representable as a simple sum of TPs from subspaces of lower dimensions cannot be expressed as a mixture of compositions of TPs from these subspaces.
II. PRELIMINARIES
We start with a characterization of the processes that describe transitions between states of a system S with fixed Hamiltonian H_S, resulting from its interaction with a bath B. Later, we will be interested in restrictions on the allowed transitions between states of the system which arise due to limitations imposed on the number of levels of the system that these processes can act actively on.
The interaction with the environment is modeled by Thermal Operations. We consider a system and a bath with respective Hamiltonians H_S and H_B. We denote the Gibbs states of the heat bath and the system by ρ^B_β and ρ^S_β, respectively, where ρ_β = e^{−βH}/Tr[e^{−βH}] and β = 1/kT, with k the Boltzmann constant and T the temperature. We now consider the following operations: we can apply to the initial state of the system ρ_S and the Gibbs state of the heat bath ρ^B_β an arbitrary unitary U which conserves the total energy, [U, H_S + H_B] = 0, and then trace out the bath. We obtain a trace-preserving, completely positive map on the system, E(ρ) = Tr_B[U(ρ ⊗ ρ_β)U†], where Tr_B denotes the partial trace over the environment.
It is visible that the map preserves the Gibbs state ρ^S_β. From the assumption of energy conservation it follows that the components ρ^(ω) of the matrix ρ = Σ_ω ρ^(ω), with ρ^(ω) = Σ_{n,m: E_n−E_m=ω} ρ_{n,m} |E_n⟩⟨E_m|, are transformed independently: E(ρ^(ω)) = E(ρ)^(ω) [10] (see also [11]). In particular, it shows that if one starts with ρ such that [ρ, H_S] = 0 (no coherences in the eigenbasis of the system Hamiltonian), one cannot obtain coherences through Thermal Operations. Therefore, we define the basic object of interest of the paper: Definition 1. Take states ρ and σ such that [ρ, H_S] = [σ, H_S] = 0, and let the eigenvalues of these states in the eigenbasis of H_S be represented by vectors p and r, respectively. A Thermal Process is a stochastic map T with Tp = r that corresponds to a Thermal Operation E(ρ) = σ.
From the above it is visible that every TP can be represented as a left stochastic (i.e., with elements summing to 1 within each column), Gibbs preserving matrix T: Tg = g, where g_i = q_{i,0}/Σ_j q_{j,0}. Without loss of generality, here and in the whole paper we assume that the ground state energy of the system is zero: E_0 = 0. We index rows and columns of matrices from 0 to d − 1. We will also use the shorthand notation q_{m,n} = e^{−β(E_m−E_n)}. Conversely, every left stochastic, Gibbs preserving matrix leads to a Thermal Operation on a diagonal state [4]. Therefore, the set of TPs and the set of left stochastic, Gibbs preserving matrices are equal, and we focus on the latter.
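As a minimal numerical sketch (Python; the three-level spectrum and β below are illustrative assumptions), the two defining conditions can be checked directly. Full thermalization, which maps every state to the Gibbs state, is used as the example map.

```python
import numpy as np

def gibbs_vector(energies, beta):
    """Gibbs distribution g_i = exp(-beta*E_i)/Z for the given spectrum."""
    w = np.exp(-beta * np.asarray(energies, dtype=float))
    return w / w.sum()

def is_thermal_process(T, energies, beta, tol=1e-12):
    """A (diagonal-state) Thermal Process is a left-stochastic,
    Gibbs-preserving matrix: non-negative, columns sum to 1, and T g = g."""
    T = np.asarray(T, dtype=float)
    g = gibbs_vector(energies, beta)
    left_stochastic = np.all(T >= -tol) and np.allclose(T.sum(axis=0), 1.0, atol=1e-9)
    gibbs_preserving = np.allclose(T @ g, g, atol=1e-9)
    return left_stochastic and gibbs_preserving

# Example: full thermalization maps every state to the Gibbs state.
E, beta = [0.0, 1.0, 2.0], 1.0
g = gibbs_vector(E, beta)
T_therm = np.tile(g[:, None], (1, 3))     # every column equals g
print(is_thermal_process(T_therm, E, beta))   # True
```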
III. A NON-DECOMPOSABLE THERMAL PROCESS IN AN ARBITRARY DIMENSION
Below we show that for a d level system, one can always find a pair of states p and r such that they are connected by a TP P (p) = r, and such that there is no other process connecting the states, and P cannot be decomposed into a convex combination of compositions of Thermal Processes, each acting on at most d − 1 dimensional subspaces.
In Section V we show that the state r cannot be achieved from p by 2-level TPs even approximately: there exists ε > 0 such that all states r′ achievable from p by 2-level TPs satisfy ||r − r′|| ≥ ε.
We take p = (1, 0, . . . , 0) and r = (1 − Σ_{i=1}^{d−1} q_{i,0}, q_{1,0}, . . . , q_{d−1,0}) (1). Note that, in order to assure that r represents a state, we have to assume Σ_{i=1}^{d−1} q_{i,0} ≤ 1. One can always find a temperature low enough such that the above is satisfied. In the following section we will provide examples of non-decomposable transitions for higher temperatures. Note also that r does not represent the Gibbs state g with g_i ∝ q_{i,0} (see (2)). The Gibbs preserving condition applied to the zeroth row (i = 0) implies then that the other elements of this row are equal to 1 (i.e., ∀_{j>0} P_{0,j} = 1). In turn, the stochasticity condition (∀_j Σ_{i=0}^{d−1} P_{i,j} = 1) applied to columns j > 0 then implies ∀_{i>0,j>0} P_{i,j} = 0. Then, the Gibbs preserving condition applied to rows i > 0 uniquely determines P_{i,0} = q_{i,0}, and every TP transforming p into r has to take the form of the matrix P with P_{0,0} = 1 − Σ_{i=1}^{d−1} q_{i,0}, P_{0,j} = 1 for j > 0, P_{i,0} = q_{i,0} for i > 0, and P_{i,j} = 0 otherwise. Now we show that P cannot be decomposed as a composition of TPs, each acting on an at most (d−1)-dimensional subspace. Every such decomposition would take the form P = AB, where both A and B are TPs. We will show below that if A and B are left stochastic and Gibbs preserving, then one of the matrices has to be equal to P. Therefore, it is impossible to decompose P into two TPs that act non-trivially on at most (d−1)-dimensional subspaces. It follows that the above conclusion holds for a decomposition constructed as a product of an arbitrary natural number of TPs: if it were possible to decompose P into k TPs, each acting on a (d−1)-dimensional subspace, then one could compress k − 1 of them into a matrix that is itself a TP, but from the above we see that it has to be equal either to P or to an operation that acts trivially on the 0-th level. If it is an operation that acts trivially on the 0-th level, it can be decomposed only into such operations. If it is equal to P, we proceed in decomposing it into TPs, at every step dividing the decreasing number of processes into two groups: one composed of a single TP, and the other composed of the remaining ones. In this way we see that, for arbitrary natural k, every decomposition of P into k TPs has to be of the form P = X_1 · · · X_m P Y_1 · · · Y_n, where the TPs X_i, Y_j, for i = 1, . . . , m, j = 1, . . . , n and n + m = k − 1, act trivially on the 0-th level.
We begin by showing that a decomposition P = AB leads to one matrix that acts trivially on the 0-th level and one that is equal to P. Let us notice that the condition ∀_{i>0,j>0} P_{i,j} = 0 implies that the product of the i-th row (i > 0) of A and the j-th column (j > 0) of B has to be zero. As these matrices can contain only non-negative entries, this implies in particular that ∀_{i>0,j>0} A_{i,0} B_{0,j} = 0. Assume now that there is some k > 0 such that B_{0,k} ≠ 0. This implies ∀_{i>0} A_{i,0} = 0, so that ∀_{i>0} A_{i,0} B_{0,k} = 0 can be fulfilled. But then, from the stochasticity condition applied to the 0-th column of A, we have A_{0,0} = 1, and, from the Gibbs preserving condition applied to the 0-th row of A, ∀_{j>0} A_{0,j} = 0; hence A acts trivially on the 0-th level, and the zeroth row of P has to be reproduced by B. As we already saw before, this implies B_{0,j} = 1 for j > 0 and B_{i,0} = e^{−β∆_{i0}} = q_{i,0} for i > 0, which enforces B = P, and leads to the thesis. On the other hand, if there is no k > 0 such that B_{0,k} ≠ 0, then, from the Gibbs preserving condition applied to the 0-th row of B, we have B_{0,0} = 1, which implies ∀_{i>0} B_{i,0} = 0 from the stochasticity condition applied to the 0-th column of B; hence B acts trivially on the 0-th level. In order to have P = AB, we must then have A = P, by the same argument as before. Finally, we show that P is an extreme point of the set of TPs, and therefore cannot be formed as a convex combination of other TPs. The set of TPs is convex, which follows from its equivalence to the set of left stochastic, Gibbs-preserving matrices: both the stochasticity and the Gibbs-preserving properties are linear. One can easily show that, in the case of a d × d TP, there are always 2d − 1 linearly independent restrictions on this process (arising from d stochasticity conditions applied to the columns and d Gibbs-preserving conditions applied to the rows). As every linearly independent restriction applied to the set of matrices can only increase by 1 the number of non-zero elements in an extremal point of the set of such matrices, every TP with fewer than d² − (2d − 1) = (d − 1)² zero elements is not an extreme point of the set [12]. One can therefore construct the set of all extremal points of TPs by fixing (d − 1)² elements to be zero, and continuing to fix more elements to zero until the remaining ones are determined by the stochasticity and Gibbs-preserving conditions, a sign that the corresponding processes cannot be decomposed into a sum of other processes with at least (d − 1)² zero elements. As fixing P_{i,j} = 0 ∀_{i>0,j>0} determines the values of the zeroth row and the zeroth column of P, this shows that P is an extremal point of the set of TPs for a d-level system.
Again, let us stress that the condition for all the elements of the matrix P to be non-negative implies that the temperature has to be low enough to ensure Σ_{i=1}^{d−1} e^{−βE_i} ≤ 1.
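A short numerical sketch (Python; the spectrum and β are illustrative assumptions) that builds the matrix P described above for arbitrary d from the entries derived in this section and confirms that it is left stochastic and Gibbs preserving whenever Σ_{i>0} e^{−βE_i} ≤ 1:

```python
import numpy as np

def nondecomposable_P(energies, beta):
    """The matrix P of Sec. III for a d-level system (E_0 = 0):
    P_{0,j} = 1 for j > 0, P_{i,j} = 0 for i, j > 0, P_{i,0} = exp(-beta*E_i),
    and P_{0,0} fixed by stochasticity of the zeroth column."""
    E = np.asarray(energies, dtype=float)
    d = len(E)
    q = np.exp(-beta * E)                 # q_{i,0} = exp(-beta*(E_i - E_0))
    assert q[1:].sum() <= 1.0, "temperature too high for this construction"
    P = np.zeros((d, d))
    P[0, 1:] = 1.0
    P[1:, 0] = q[1:]
    P[0, 0] = 1.0 - q[1:].sum()
    return P

E, beta = [0.0, 1.0, 2.0, 3.0], 1.5
P = nondecomposable_P(E, beta)
g = np.exp(-beta * np.array(E)); g /= g.sum()
print(np.allclose(P.sum(axis=0), 1.0), np.allclose(P @ g, g))   # True True
```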
IV. STRUCTURE OF THE SET OF EXTREMAL THERMAL PROCESSES
By following the procedure outlined above, one can, in principle, find all extremal points of TPs for arbitrary dimension d. These extremal points will be further denoted as EPTP(d). Alternatively, one can apply a procedure of generating the whole set of extremal points from a trivial extremal point (identity), presented in [13]. In any case, obtaining this set explicitly is demanding for increasing local dimension d. At the end of this section, we point out a property of extremal points of TPs for three level systems that, if it holds for arbitrary d, would provide an intuitive, graphical way of obtaining extremal points of d level TPs.
For a 2-level system, the structure of the set is straightforward, with only two extremal points: the identity Id(2) and the non-trivial process of eq. (3), which transfers the excited level entirely to the ground level, with the reverse transition probability e^{−βE_1} fixed by Gibbs preservation; here by Id(d) we denote the identity d × d matrix.
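For concreteness, a minimal sketch (Python; E_1 and β are illustrative) of the non-trivial two-level extremal point in the explicit form just described, verifying that it is left stochastic and Gibbs preserving:

```python
import numpy as np

def two_level_extremal(E1, beta):
    """Non-trivial extremal Thermal Process for a qubit with energies (0, E1):
    full population transfer 1 -> 0, with the 0 -> 1 rate fixed by
    Gibbs preservation (often called a beta-swap in the literature)."""
    q = np.exp(-beta * E1)
    return np.array([[1.0 - q, 1.0],
                     [q,       0.0]])

E1, beta = 1.0, 1.0
G = two_level_extremal(E1, beta)
g = np.array([1.0, np.exp(-beta * E1)]); g /= g.sum()
print(np.allclose(G.sum(axis=0), 1.0), np.allclose(G @ g, g))   # True True
```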
A. Structure of the set for d = 3 level systems.
For a d = 3 level system, the geometry of the set becomes temperature dependent (see Fig. 1). Below a threshold temperature T_0 = 1/kβ_0, defined by the relation e^{−β_0 E_1} + e^{−β_0 E_2} = 1 (4), the set can be expressed as EPTP(3) = EPTP(3)_univ ∪ {A_9}, whereas in the remaining regime EPTP(3) = EPTP(3)_univ ∪ {A_10, A_11, A_12, A_13}. The set EPTP(3)_univ of extremal points that are present for the whole spectrum of temperatures contains the identity matrix A_0 = Id(3), the two-level Thermal Processes A_1, A_2 and A_3, and the extremal processes A_4, A_5, A_6 and A_7, which can be expressed as products of two-level processes. The last member of EPTP(3)_univ, A_8, cannot be expressed in such a way. The remaining extremal points are present only at temperatures higher or lower than the threshold temperature T_0 of (4), which is associated with the requirement, coming from the stochasticity of the matrices, that all of their elements take values in the range [0, 1]. For temperatures above T_0 we have the four extremal points A_10, A_11, A_12, A_13.
FIG. 1: Extremal points of TPs for three-level systems. Some extremal points exist only in a selected temperature range. At zero and infinite temperature, some processes coincide, which is indicated by connecting gray horizontal arrows and braces. At infinite temperature (β = 0), the extremal points are permutation matrices, in accordance with the Birkhoff theorem [14]. All extremal points decomposable into a product of extremal points acting non-trivially on at most 2 levels are represented by green arrows. Red arrows are associated with processes valid below the threshold temperature, and a blue arrow corresponds to a process above this temperature. Brown color distinguishes a non-decomposable process present in the whole temperature range.
Below the threshold temperature, all of the above four points disappear, and instead a single extremal point emerges, which is the map P from the previous section: A_9 = P, with (A_9)_{0,0} = 1 − q_{1,0} − q_{2,0}, (A_9)_{0,1} = (A_9)_{0,2} = 1, (A_9)_{1,0} = q_{1,0}, (A_9)_{2,0} = q_{2,0}, and all other elements equal to zero. We have already shown that A_9 cannot be decomposed into a product of two-level TPs, and that there exist states p and r such that A_9 p = r. The same remains true for the maps A_10, A_11, A_12 and A_13: if, for β ≤ β_0 and an arbitrary 0 ≤ a ≤ 1, one takes a suitable pair of states p and r (parametrized by a), it is clear that the only TP R satisfying R(p) = r has to have R_{0,0} = 0, and therefore is a convex combination of A_10 − A_13. As no such process can be constructed as a product of the two-level TPs (A_1 − A_3), these TPs also provide an example of operations allowed by the resource theory of Thermal Operations that cannot be performed as a convex combination of processes acting non-trivially only on pairs of energy levels.
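The threshold β_0 of relation (4) can be obtained numerically; a minimal sketch (Python, with illustrative E_1 and E_2) solving e^{−βE_1} + e^{−βE_2} = 1 by bisection:

```python
import numpy as np

def threshold_beta(E1, E2, tol=1e-12):
    """Inverse threshold temperature beta_0 for a three-level system with
    E_0 = 0: the unique root of f(b) = exp(-b*E1) + exp(-b*E2) - 1.
    f is strictly decreasing with f(0) = 1 > 0, so bisection applies."""
    f = lambda b: np.exp(-b * E1) + np.exp(-b * E2) - 1.0
    lo, hi = 0.0, 1.0
    while f(hi) > 0:                 # expand the bracket until f changes sign
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

E1, E2 = 1.0, 2.0
b0 = threshold_beta(E1, E2)
print(b0, np.exp(-b0 * E1) + np.exp(-b0 * E2))   # beta_0 ~ 0.481, sum ~ 1.0
```

For β above this value A_9 is a valid extremal point, while for β below it the four points A_10 through A_13 take over, consistent with the discussion above.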
Therefore, we arrive at Proposition 1. For a 3-level diagonal system, the set of operations that, by mixtures and compositions, enables one to perform an arbitrary transformation allowed by Thermal Operations is {Id(3), A_1, A_2, A_3, A_8, A_9} for temperatures that satisfy e^{−βE_1} + e^{−βE_2} ≤ 1, and the corresponding set with A_9 replaced by A_10, A_11, A_12, A_13 in the remaining temperature regime.
B. Detailed balance symmetry.
Below we point out a symmetry of extremal points of TPs that can be associated with the detailed-balance condition. For a system with a Hamiltonian H, let us define a scalar product ⟨X|Y⟩_β between two observables X and Y by ⟨X|Y⟩_β = Tr[XY†ρ_β], with the Gibbs state ρ_β = e^{−βH}/Tr[e^{−βH}]. One defines the conjugate à of an operator A with respect to this scalar product as à = M_{ρ_β} A^T M_{ρ_β}^{−1}, where M_{ρ_β} is a matrix storing on its diagonal values proportional to the occupations in the Gibbs state. Self-duality with respect to such a scalar product (à = A) served as a definition of detailed balance for the generator of a dynamical semigroup [6,15,16]. The conjugation is linear, maps left stochastic and Gibbs preserving maps into themselves, and conserves the number of non-zero elements in their matrix representation. Furthermore, as the conjugation is its own inverse, the orbits of maps associated with the conjugation are composed of only 1 or 2 elements. Therefore, all extremal points of TPs are mapped to extremal points of TPs. If this were not true, then we could write à = λÃ_1 + (1 − λ)Ã_2 for Ã_1 ≠ Ã_2, 0 < λ < 1 and some extremal A, from which we would have A = λA_1 + (1 − λ)A_2, which contradicts the assumption that A is extremal (as Ã_1 ≠ Ã_2 implies A_1 ≠ A_2).
Below we describe the duality properties of TPs for the case d = 3. We see that the extremal points of TPs from the set {A_1, A_2, A_3, A_8, A_9, A_10, A_13} are self-dual with respect to this conjugation, while (A_4, A_6), (A_5, A_7), (A_11, A_12) form pairs of extremal points of which one element is the conjugate of the other. From the physical point of view, the conjugation of a TP reverses the direction of every transformation between levels of the system that the TP is defining. This is shown in Fig. 2, where self-dual and non-self-dual extremal TPs for three-level systems are grouped according to whether they can be composed from two-level TPs. Note that, among the elements that act non-trivially on all levels, there are no extremal TPs that are both self-dual and decomposable into a sequence of extremal TPs from a lower-dimensional space. Therefore we propose the following conjecture: Conjecture 1. If an extremal TP C for a d-dimensional space is decomposable into a sequence of extremal TPs A, B, each acting non-trivially on an at most (d − 1)-dimensional space, C = AB, and C is not a direct sum of extremal TPs from lower-dimensional subspaces, then C is not self-dual with respect to the conjugation associated with the operator scalar product.
Above, by demanding that C be not a direct sum of TPs from lower dimensional spaces, we account for cases of self-dual A and B acting on disjoint subspaces, trivially leading to a self-dual C. The main concern in describing the set of TPs for arbitrary d is the construction and characterization of structures that emerge with the increasing space dimension. Proving the above conjecture might be helpful in shedding more light onto this problem.
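As a small numerical illustration of the conjugation (Python; the energies and β are illustrative, and the explicit form of A_9 is the one constructed in Sec. III), one can verify self-duality directly:

```python
import numpy as np

def conjugate(A, g):
    """Detailed-balance conjugate of a Thermal Process A with respect to the
    Gibbs state g:  (A~)_{ij} = g_i * A_{ji} / g_j, i.e. M A^T M^{-1}."""
    g = np.asarray(g, dtype=float)
    return (g[:, None] * A.T) / g[None, :]

# Check that A_9 (the map P of Sec. III for d = 3) is self-dual.
E, beta = np.array([0.0, 1.0, 2.0]), 1.5
q = np.exp(-beta * E)
g = q / q.sum()
A9 = np.array([[1 - q[1] - q[2], 1.0, 1.0],
               [q[1],            0.0, 0.0],
               [q[2],            0.0, 0.0]])
print(np.allclose(conjugate(A9, g), A9))   # True: A_9 is self-dual
```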
In the next section we present another useful property of extremal TPs for three-level systems: their connection to a certain type of transformation of curves on the so-called thermomajorization diagrams.
C. Connection to thermomajorization diagrams.
The continuity of the transition between A 9 and A 10 − A 13 extremal points at β = β 0 is even more visible when one takes into account properties of states that are transformed by these extremal processes. In order to examine this, we invoke the notion of thermal order, associated with thermomajorization criterion.
Definition 2 (Thermo-majorization curve). Define a vector s = (q_{00}, q_{10}, q_{20}, . . . , q_{d−1,0}). For every state ρ commuting with H_S, let a vector p represent the occupations p_i of the energy levels E_i, i = 0, 1, . . . , d − 1. Choose a permutation π acting on p and s such that it leads to a non-increasing order of the elements of the vector d with d_i = p_{π(i)}/s_{π(i)}. The set of points {(Σ_{i=0}^{k} s_{π(i)}, Σ_{i=0}^{k} p_{π(i)})}_{k=0}^{d−1} ∪ {(0, 0)}, connected by straight lines, defines a curve associated with the state ρ. We denote it by β(p) and call it the thermomajorization curve of the state ρ represented by p.
The points {(Σ_{i=0}^{k} s_{π(i)}, Σ_{i=0}^{k} p_{π(i)})}_{k=0}^{d−1} will be called the elbows of the curve β(p). The curve is convex due to the non-increasing order of the elements in d. Let us note that there might be more than one permutation leading to the creation of a convex curve β(p). The vector π(1, . . . , d)^T will be called a β-order of p. It records the modification of the order of the segments that had to be made in order to assure the convexity of β(p).
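A minimal sketch (Python, with an illustrative three-level state) that constructs the β-order and the elbows of a thermomajorization curve as defined above:

```python
import numpy as np

def thermo_curve(p, energies, beta):
    """Thermomajorization curve of a diagonal state p: sort levels by
    decreasing slope p_i / q_{i,0} (the beta-order), then return the
    cumulative elbow points (sum of q_{i,0}, sum of p_i), starting at (0, 0)."""
    p = np.asarray(p, dtype=float)
    q = np.exp(-beta * np.asarray(energies, dtype=float))   # q_{i,0}, with E_0 = 0
    order = np.argsort(-p / q)                               # beta-order
    xs = np.concatenate(([0.0], np.cumsum(q[order])))
    ys = np.concatenate(([0.0], np.cumsum(p[order])))
    return order, np.column_stack([xs, ys])

p = np.array([0.7, 0.2, 0.1])
order, elbows = thermo_curve(p, [0.0, 1.0, 2.0], beta=1.0)
print(order, elbows[-1])   # beta-order and final point (Z, 1)
```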
All transitions between diagonal states under TPs are described by the following criterion: Proposition 2. A state p can be transformed into a state r by a Thermal Process if and only if the thermomajorization curve β(r) lies nowhere above β(p). For the extremal points we additionally have: Proposition 3. For each extremal TP A_i of a three-level system there exists a β-order such that, for every state p with this β-order, r = A_i p has a β-order depending only on A_i and on the β-order of p, and all elbows of β(r) lie on β(p). The proof of the latter property for every extremal TP A can be expressed with the help of a matrix, denoted A_s, that describes the transformation performed by the process A on the slopes of the thermomajorization curve of an initial state. Below, we show the exact calculation for the case A = A_8.
FIG. 2: Extremal points of TPs for three-level systems. Arrows indicate a transformation between selected levels occurring with non-zero probability. A conjugation A_i → Ã_i is equivalent to redirecting the arrows. Pairs of extremal points connected via the conjugation are contained within red frames. Note that none of the non-trivial self-dual three-level extremal processes is decomposable into a sequence of two-level processes.
For a vector p, define an associated vector ∂p with ∂p_i = p_i q_{0,i}. It represents the slopes of the segments of β(p); ∂p_i is the slope of the segment associated with the level i, with population p_i. It can easily be shown that the map A_s, associated with a map Ap = r and such that A_s ∂p = ∂r, takes the form A_s = M_{ρ_β}^{−1} A M_{ρ_β}. It satisfies A_s = Ã^T and is a counterpart of A, in the sense that it satisfies the stochasticity condition for every row, Σ_j (A_s)_{ij} = Σ_j g_i^{−1} A_{ij} g_j = g_i^{−1} (Ag)_i = 1 (21), and the Gibbs-preserving condition for every column, Σ_i (A_s)_{ij} g_i = Σ_i A_{ij} g_j = g_j Σ_i A_{ij} = g_j (22), where the third equalities in (21) and (22) come from the (row) Gibbs-preserving and (column) stochasticity conditions on A, respectively. Therefore, the thermal process A_8 is associated with a corresponding transformation A_s of the slopes of the segments of β(p).
TABLE I: Extremal points A_i that map a state p with a given β-order to a state r with a fixed β-order, and such that all elbows of β(r) lie on β(p). Transitions performed by A_9 at low temperatures can be achieved by {A_10, A_11, A_12, A_13} at high temperatures. For the transitions performed by A_9, the slopes of the last two segments of r are equal, hence the degeneration of the β-order. Information about all possible transformations stems from the above table and the observation that reversing the β-order of p is reflected in the reversed β-order of r: e.g., for p with β-order (213) we obtain, through A_1, a state r with β-order (123).
By proceeding in the same way with all extremal points of TPs, we can verify that for each extremal TP A_i there exists a β-order such that for every p with this β-order, r = A_i p has a β-order dependent only on A_i and the β-order of p, and all elbows of β(r) lie on β(p) (see Table I). Some curves formed by the action of chosen extremal TPs on a state of β-order (2,1,3) are shown in Fig. 3.
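The relation between a Thermal Process and its slope map can also be checked numerically. The following sketch (Python; spectrum, β and the mixing weight are illustrative) uses a partial thermalization, a convex mixture of the identity and full thermalization, as the example TP and verifies that A_s = M^{-1} A M is right stochastic and satisfies the column-wise Gibbs condition (22).

```python
import numpy as np

def slope_map(A, g):
    """A_s = M^{-1} A M with M = diag(g): the map acting on the vector of
    thermomajorization slopes associated with a Thermal Process A."""
    g = np.asarray(g, dtype=float)
    return (A * g[None, :]) / g[:, None]

E, beta, lam = np.array([0.0, 1.0, 2.0]), 1.0, 0.3
q = np.exp(-beta * E); g = q / q.sum()
A = (1 - lam) * np.eye(3) + lam * np.tile(g[:, None], (1, 3))   # partial thermalization
As = slope_map(A, g)
print(np.allclose(As.sum(axis=1), 1.0))   # rows of A_s sum to 1 (eq. (21))
print(np.allclose(As.T @ g, g))           # columns preserve the Gibbs weights (eq. (22))
```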
The connection between A_9 and {A_10, A_11, A_12, A_13} is underlined by the fact that they transform states with the same order into each other (see Fig. 5). The difference is that, at lower temperatures, the condition q_{1,0} + q_{2,0} < 1 implies that the two last segments of the state formed by the process maximizing the slope of the first segment will be the same. This degeneration is reflected by the collapse of the four extremal points A_10, A_11, A_12, A_13 into the single one, A_9. It remains an interesting question whether a generalization of Prop. 3 holds, i.e., whether for arbitrary d, every extremal TP A can be matched to an initial state p such that all elbows of β(Ap) lie on β(p). If this were true, then it would be possible to calculate all extremal points of TPs for d-dimensional systems directly from thermomajorization diagrams. Namely, for a selected temperature β it would be enough to investigate all thermomajorization curves with distinct and non-degenerate β-order, for each curve determining the transformation that maps it to a curve with a different β-order whose elbows all lie on the initial curve. Every such construction would be valid for a selected temperature range, therefore knowledge about the values of the threshold temperatures would be of crucial importance. In the next section, we provide a construction determining the value of threshold temperatures for a given system Hamiltonian H.
D. Deterministic work extraction
Here we would like to point out a connection between the temperature dependence of the structure of the set of TPs and the deterministically extractable work. Threshold temperatures that indicate a change in the convex structure are clearly associated with relations between sums over components of a partition function: Σ_{i∈A} q_{i,0} ≥ Σ_{j∈B} q_{j,0} (26), where A and B are disjoint sets of indices, A, B ⊂ {0, . . . , d−1}. Now, an incomplete sum of components of the partition function is strictly related to the min-free energy introduced in [4] to describe the deterministically extractable work from a given state. The latter is given by F_min(p) = −kT ln(Σ_{i: p_i>0} e^{−βE_i}/Z), where Z is the partition function. Therefore, the order asserted by (26) has an operational consequence, as it determines the order among some states in terms of the work that can be extracted from them. Namely, (26) is equivalent to W(p_A) ≤ W(p_B) (27), where W(·) denotes the deterministically extractable work and the states p_A and p_B are arbitrary states which occupy solely levels belonging to A and B, respectively. For example, the range of temperatures above the temperature T_0 of Eq. (4) is thus determined by the condition that the extractable work from the ground state is greater than the extractable work from a state occupying the second and third levels.
FIG. 4: Mapping between states of a given β-order provided by the non-decomposable extremal points of TPs, for a) high and b) low temperatures. A_9 is the low-temperature counterpart of A_10, A_11, A_12, A_13; the slopes of the last two segments of r = A_9 p are the same (for p of the β-order presented in the picture). Braces mark the resulting degeneration of the β-order of r. Connections between states provided by decomposable maps are not marked; they remain in agreement with Table I.
TPs cannot lead to a transition which increases the deterministically extractable work, as such a transition would violate the thermomajorization condition (Prop. 2). Therefore, if there is an extremal TP that transforms states with occupations on a set A of levels into states with occupations on a set B of levels (with A and B being non-empty disjoint subsets of {0, . . . , d − 1}), then we know that this TP cannot exist in the temperature regime in which Σ_{a∈A} q_{a,0} > Σ_{b∈B} q_{b,0}. On the other hand, for every pair of such disjoint sets A and B that admit Σ_{a∈A} q_{a,0} ≤ Σ_{b∈B} q_{b,0} in some temperature range, one can always construct an extremal TP that transforms states occupying levels from the set A into states occupying levels from the set B (we give the exact construction below). Therefore, if the sign of Σ_{a∈A} q_{a,0} − Σ_{b∈B} q_{b,0} for a given Hamiltonian depends on the temperature, then a system with this Hamiltonian admits the extremal TP only in the temperature range in which it would not violate the principle of non-increase of the deterministically extractable work. Hence we arrive at the conclusion that there is at least one extremal TP valid for β ≥ β_0 and invalid for β < β_0, and at least one extremal TP valid for β ≤ β_0 and invalid for β > β_0.
Construction of an extremal Thermal Process associated with a given threshold temperature.
Let us start with a term of the form Σ_{a∈A} q_{a,0} = Σ_{b∈B} q_{b,0} from the Proposition above. Let us divide the sets A = {n} ∪ I, B = {m} ∪ J into subsets such that n and m are the smallest numbers from the sets A and B, respectively, I = {i_1, . . . , i_|I|} and J = {j_1, . . . , j_|J|}, with i_k < i_l if k < l, and the same for the set J. Then, as long as q_{n,0} + Σ_{i∈I} q_{i,0} ≥ q_{m,0} + Σ_{j∈J} q_{j,0}, it is always possible to construct such an extremal TP. This is because the Gibbs preserving condition applied to the n-th row demands (1 − Σ_{i∈I} q_{i,m}) q_{m,0} + Σ_{j∈J} q_{j,0} + y = q_{n,0}, and, as long as y = q_{n,0} − q_{m,0} + Σ_{i∈I} q_{i,0} − Σ_{j∈J} q_{j,0} ≥ 0, one can always set the values of the remaining (not shown) elements of the matrix such that the matrix is left stochastic and Gibbs preserving. This stems from the fact that every left stochastic and Gibbs preserving matrix, multiplied by the diagonal matrix M_{ρ_β}, can be turned into a transportation polytope [17]: a matrix of non-negative elements with the property that the elements of the k-th column and the l-th row sum to prescribed numbers, c_k and r_l, respectively. In our case, r_k = c_k = q_{k,0}. The set of transportation polytopes satisfying the given summation criteria is always non-empty as long as Σ_k c_k = Σ_k r_k. This is visible from the fact that, if Σ_k r_k = 0, the conditions are satisfied by a matrix with all elements equal to 0. Otherwise, a matrix A with elements A_{i,j} = r_i c_j / Σ_k r_k satisfies them. The existence of the respective transportation polytopes is guaranteed also for the set of conditions that arises from fixing the values of some elements of the original matrix, as long as one fixes to 0 all other elements of the row (column) that the element was in, and subtracts the value of the fixed element from c_k (r_k). This is exactly the process that describes the fixing of the shown matrix elements in the TP above. As there is a solution of the respective transportation polytope problem, there is one as well for the above left stochastic and Gibbs-preserving matrix.
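The existence argument can be illustrated directly; a minimal sketch (Python) checking that the matrix with elements A_{i,j} = r_i c_j / Σ_k r_k reproduces the prescribed row and column sums, here with r_k = c_k = q_{k,0} for an illustrative spectrum:

```python
import numpy as np

def independence_matrix(r, c):
    """A non-negative matrix with prescribed row sums r and column sums c
    (requires sum(r) == sum(c)): A_{ij} = r_i * c_j / sum(r)."""
    r = np.asarray(r, dtype=float); c = np.asarray(c, dtype=float)
    return np.outer(r, c) / r.sum()

r = c = np.exp(-1.0 * np.array([0.0, 1.0, 2.0]))   # r_k = c_k = q_{k,0}
A = independence_matrix(r, c)
print(np.allclose(A.sum(axis=1), r), np.allclose(A.sum(axis=0), c))   # True True
```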
Note that the above TP maps all states with occupations on the levels {m, i_1, . . . , i_|I|} into states with occupations on the levels {n, j_1, . . . , j_|J|}, a property that does not depend on the temperature. However, from (27) we see that every process with such a property could lead to an increase of the deterministically extractable work from a state whenever q_{n,0} + Σ_{i∈I} q_{i,0} < q_{m,0} + Σ_{j∈J} q_{j,0}. Therefore, all processes with such a property, including the above process, have to cease at the temperature for which q_{n,0} + Σ_{i∈I} q_{i,0} = q_{m,0} + Σ_{j∈J} q_{j,0}.
It is instructive to see that the above construction generates the appropriate extremal TPs for three-level systems. There, we can have A = {0} and B = {1, 2} under the assumption q_{0,0} ≥ q_{1,0} + q_{2,0}. This leads to n = 0, m = 1 and j_1 = 2, and generates the extremal TP A_9. On the other hand, if one takes A = {1, 2} and B = {0} under the assumption q_{0,0} ≤ q_{1,0} + q_{2,0}, one gets n = 1, i_1 = 2 and m = 0, which leads to a TP described by a convex combination of the extremal TPs A_11 and A_13. The number of threshold temperatures depends on the Hamiltonian of the system. If we assume no degeneracies, then for d-level systems it is equal to the number of possible allocations of elements from the set {a_1, a_2, . . . , a_d}, with known order a_1 > a_2 > · · · > a_d, into two disjoint non-empty sets, such that the above order does not determine which of the two sets has the larger (or equal) sum of elements. The total number of possible allocations is given by (1/2) Σ_{k_1 ≥ 1} Σ_{k_2 ≥ 1} (d choose k_1)((d − k_1) choose k_2), with the term under the sums being the number of different allocations of k_1 elements into the first set and k_2 elements into the second set, and the factor 1/2 accounting for the indistinguishability of the first and the second sets. Direct calculation of the number of allocations satisfying the above criterion yields that the number of threshold temperatures for d = 3, 4, 5, 6 levels is equal to 1, 6, 26, 106, respectively.
V. APPROXIMATE TRANSFORMATIONS
In Sec. III we gave an example of a transition p → r that cannot be performed exactly by TPs acting on 2 levels of the system. A question arises about how the set of allowed transitions changes when we accept some error in the output state. Namely, we ask whether for arbitrary ε > 0 there exists a state r′ with ||r − r′|| ≤ ε that can be reached from p by TPs acting on 2 levels. Below we show that for p and r taken from Sec. III such a state does not exist, i.e., there is some finite neighborhood of the state r that TPs acting on 2 levels cannot reach, and therefore they cannot be used to approximate r from p to arbitrary precision.
We will first sketch the idea of the proof for three-level systems (d = 3). An arbitrary 2-level TP can be represented as a convex combination of sequences of extremal 2-level TPs. Let us start by investigating such sequences separately, and later generalize the result to the case of an arbitrary TP acting on two levels of the system. Since for two levels there is just one extremal point (apart from the identity), see eq. (3), and there are three different pairs of levels, each map in the sequence is one of three maps. One finds that, for the chosen state, the map acting on the two highest levels does not change the state. Hence, it is enough to consider sequences which start with one of the maps acting on levels 0 and 1 or on levels 0 and 2 (denote them by Λ_{0,1} and Λ_{0,2}).
Consider one of these maps, e.g., Λ_{0,1} (for the other, the argument is the same). We shall now analyze the thermomajorization curve of the state r′ resulting from an arbitrary sequence starting with this map. Our aim will be to show that such a curve is bounded away from the curve of the target state r. This is enough because, if the curve of the state r′ cannot lie arbitrarily close to the curve of the target state, then the state r′ itself cannot lie arbitrarily close to the target state in statistical distance. Now, let us argue that the curve of r′ must indeed be bounded away from that of r. Let us focus on the point Q (see Fig. 5) on the curve of r. After applying Λ_{0,1} to p, it can be seen that the curve of the emerging state is bounded away from the curve of the target state r, as the separation D between the point Q and the curve of Λ_{0,1} p is always positive: D > 0. Moreover, we see that the subsequent application of another TP, call it Λ_rest, cannot lead to a curve of r′ = Λ_rest Λ_{0,1} p which converges to the curve of r: e.g., the point Q on the curve of r remains unattainable, and will always be separated from the curve of Λ_rest Λ_{0,1} p by at least the distance D > 0 set by the curve of Λ_{0,1} p. This stems from the fact that every curve of Λ_rest Λ_{0,1} p lies no higher than the curve of Λ_{0,1} p due to the thermomajorization condition (see Prop. 2). Now, as the thermomajorization curves of all states formed from p by a sequence of 2-level TPs lie below the curve of the target state, we see that a convex combination of such sequences cannot make the thermomajorization curve of the corresponding state approach the target curve. Therefore, the transition cannot be performed to arbitrary precision by TPs acting on two levels.
Below we present a calculation of the lower bound on this minimal separation for arbitrary dimension d. We choose the metric ||p − r|| = Σ_i |p_i − r_i|. The proof is based on the transformations of the vectors which describe the slopes of the segments of given states on thermomajorization diagrams, as defined in Sec. IV C. The relation ∂x_i = x_i q_{0,i} for a given vector x and its associated 'slope' vector ∂x, when applied to the initial state p and the final state r, gives the corresponding slope vectors ∂p and ∂r. As explained in Sec. IV C, every Thermal Process A such that Ap = r is associated with a map A_s ∂p = ∂r such that A_s is a right stochastic matrix. In particular, every non-trivial extremal TP on 2 different levels k and m (see eq. (3)), which we will denote E(k, m), has an associated map E_s(k, m) that acts on the pair of slopes (∂x_k, ∂x_m) as (∂x_k, ∂x_m) → ((1 − q_{m,k})∂x_k + q_{m,k}∂x_m, ∂x_k), with the identity Id acting on the subspace of the remaining levels. It implies that the slope of the higher level after the transformation is equal to the slope of the lower level before the transformation, and the slope of the lower level is averaged.
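This stated action on slopes can be verified numerically. The sketch below (Python; three-level spectrum and slope vector are illustrative) builds E(0, 1) in the explicit two-level extremal form used above, forms its slope map, and checks both properties.

```python
import numpy as np

E_levels, beta = np.array([0.0, 1.0, 2.0]), 1.0
q = np.exp(-beta * E_levels); g = q / q.sum()

E01 = np.eye(3)
E01[:2, :2] = [[1 - q[1], 1.0],
               [q[1],     0.0]]            # extremal TP E(0, 1) on levels 0 and 1

Es = (E01 * g[None, :]) / g[:, None]        # associated slope map E_s(0, 1)
dp = np.array([0.9, 0.3, 0.5])              # an illustrative slope vector
dr = Es @ dp
print(np.isclose(dr[1], dp[0]))                               # higher slope <- lower slope
print(np.isclose(dr[0], (1 - q[1]) * dp[0] + q[1] * dp[1]))   # lower slope is averaged
```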
From the right-stochasticity of the maps transforming slope vectors we see that, by performing a sequence of TPs, one cannot create a slope vector with an increased maximal value. If we aim at obtaining a state r′ close to r, we have to apply some TPs connecting level 0 with other levels, as this is the only way to obtain non-zero values of ∂r′_j, j = 1, . . . , d − 1; otherwise, ||r − r′|| = 2 Σ_{i≠0} q_{i,0}. Therefore, we investigate the possible impact that 2-level TPs applied to this state have on the distance. We concentrate on investigating sequences of extremal TPs, and show at the end that allowing for mixed TPs cannot improve the distance. For the extremal case, based on the structure of E_s(k, m), we conclude that the distance cannot be reduced to zero.
We have to start with some transformation E(0, i), where i = 1, . . . , d − 1. We will describe cases i = 1 and i > 1 separately.
Case i > 1. The following transformation of the initial slope vector takes place: (1, 0, . . . , 0) → (1 − q_{i,0}, 0, . . . , 1, . . . , 0), where the 1 in the output vector is at position i. We see that further transformations are required, as at the moment we would have ||r − r′|| ≥ |r_{i−1} − r′_{i−1}| = q_{i−1,0} > 0. Furthermore, we cannot leave the level i untouched, as it would limit the achievable values of the remaining components of r′. But performing a 2-level extremal TP on the level i diminishes the maximal value present in the slope vector, with the minimal reduction, to the value (1 − q_{i,i−1})(1 − q_{i,0}) + q_{i,i−1} = 1 − q_{i,0} + q_{i,0} q_{i,i−1}, occurring for the transformation between the levels i − 1 and i that follows after filling the level i − 1 with the highest value possible. Therefore, we see that by starting with E(0, i) for i > 1, we cannot approach the state r arbitrarily closely.
VI. DISCUSSION AND CONCLUSIONS
We have presented a construction of a Thermal Operation for an arbitrary d-level system that cannot be performed without executing a joint operation on all energy levels. The extremal Thermal Process that performs the transformation exists for all temperatures low enough for Σ_{i=1}^{d−1} e^{−βE_i} ≤ 1 to be satisfied. For three-level systems, we have also identified counterpart processes for the remaining temperature range, showing their non-decomposability into a convex combination of compositions of Thermal Processes acting non-trivially on 2 energy levels. We speculate that these processes can be generalized to an arbitrary dimension by exploiting the bipartite-graph structure associated with these matrices [17]. We also point out that some extremal points satisfy the quantum detailed balance condition, whereas others form pairs with respect to the conjugation defined by an associated scalar product. The conjectured non-decomposability of self-dual extremal points of Thermal Processes may be a helpful property in the analysis of the geometry of the set of d-level Thermal Processes.
One can try to solve the general decomposability problem of Thermal Processes by analyzing the convex structure of the set, which probably would require determination of its extremal points. While the method of computing them that relies on fixing all matrix elements by some minimal number of zeros can be infeasible for larger d, exploitation of the observed symmetries associated with the quantum detailed balance condition and/or gradual generation of the extremal points of the set may lead to a precise description of the geometry of the set of Thermal Processes that takes into account its decomposability into convex combinations of products of more 'local' processes. In this context, establishing a connection between the set of Thermal Processes and the set of all states obtainable through Thermal Operations from a given initial state may be important. One should note, e.g., that all states r such that β(r) has all elbows on β(p) and is thermo-majorized by it constitute the extremal points of this set [18]. Due to the inability to increase the deterministically extractable work under Thermal Processes, in order to determine the full set of extremal points for systems with a non-degenerate Hamiltonian it should be possible to focus on just two temperatures: one satisfying 1 ≥ Σ_{i=1}^{d−1} e^{−βE_i}, and the other 1 ≤ e^{−βE_{d−2}} + e^{−βE_{d−1}}. | 11,527.6 | 2017-07-21T00:00:00.000 | [
"Physics"
] |
SGLT1 Knockdown Attenuates Cardiac Fibroblast Activation in Diabetic Cardiac Fibrosis
Background: Cardiac fibroblast (CF) activation is a hallmark feature of cardiac fibrosis in diabetic cardiomyopathy (DCM). Inhibition of the sodium-dependent glucose transporter 1 (SGLT1) attenuates cardiomyocyte apoptosis and delays the development of DCM. However, the role of SGLT1 in CF activation remains unclear. Methods: A rat model of DCM was established and treated with si-SGLT1 to examine cardiac fibrosis. In addition, in vitro experiments were conducted to verify the regulatory role of SGLT1 in proliferation and collagen secretion in high-glucose (HG)-treated CFs. Results: SGLT1 was found to be upregulated in diabetic cardiac tissues and HG-induced CFs. HG stimulation resulted in increased proliferation and migration, increased expression of transforming growth factor-β1, collagen I, and collagen III, and increased phosphorylation of p38 mitogen-activated protein kinase and extracellular signal-regulated kinase (ERK) 1/2. These trends in HG-treated CFs were significantly reversed by si-SGLT1. Moreover, the overexpression of SGLT1 promoted CF proliferation and collagen synthesis and increased the phosphorylation of p38 mitogen-activated protein kinase and ERK1/2. SGLT1 silencing significantly alleviated cardiac fibrosis but had no effect on cardiac hypertrophy in diabetic hearts. Conclusion: These findings provide new information on the role of SGLT1 in CF activation, suggesting a novel therapeutic strategy for the treatment of DCM fibrosis.
INTRODUCTION
Diabetic cardiomyopathy (DCM) is a myocardial disease that is specific to patients with diabetes and is independent of other types of heart disease, including hypertensive, coronary, and valvular disease (Bugger and Abel, 2014; Seferovic and Paulus, 2015). Cardiac fibrosis caused by abnormal glucose metabolism and microangiopathy is a main pathological feature of DCM, leading to impairment of cardiac function and eventual progression to heart failure (Wang et al., 2021). The activation of cardiac fibroblasts (CFs) and the degeneration of cardiomyocytes provide the biological basis for cardiac remodeling and the pathophysiological basis of DCM formation (Zhang et al., 2020). CFs switch from a resting to an activated phenotype, increasing their proliferation and migration capacity and secreting large amounts of extracellular matrix, which causes fibrosis of the heart (Frangogiannis, 2021). Developing fibrosis-targeting therapies for patients with DCM will require a deeper understanding of the functional pluralism of CFs and dissection of the molecular basis of fibrotic remodeling.
Sodium-glucose cotransporters (SGLTs) belong to the solute carrier 5 gene family; they transport glucose against a concentration gradient in an energy-consuming manner and play an important role in the active transport of glucose (Wood and Trayhurn, 2003; Sano et al., 2020). Sodium-glucose cotransporter 1 (SGLT1) is expressed in various human tissues and organs, including the intestine, lung, heart, skeletal muscle, and kidney (Gyimesi et al., 2020). SGLT1 is essential for the rapid absorption of glucose and galactose in the intestine, and increases in SGLT1 protein expression cause interstitial fibrosis and cardiac remodeling in mice (Ramratnam et al., 2014). SGLT1 expression is also elevated in hypertrophic cardiomyopathy, ischemic cardiomyopathy, and DCM in humans (Song et al., 2016). Selective inhibition of SGLT1 expression has a protective effect against myocardial infarction-induced ischemic cardiomyopathy (Sawa et al., 2020). In addition, Hirose et al. (2018) demonstrated that SGLT1 knockout effectively alleviated pressure-overload-induced cardiomyopathy, suggesting that SGLT1 inhibitors have a beneficial effect on hypertrophic cardiomyopathy. More importantly, our previous study found that SGLT1 inhibition could attenuate apoptosis and relieve myocardial fibrosis, thus suppressing DCM development by regulating the JNK/p38 signaling pathway (Lin et al., 2021). However, in the abovementioned study, we only investigated the role of SGLT1 in cardiomyocytes and rat H9C2 cells; it is therefore also necessary to study the role of SGLT1 in the activation of CFs during the development of DCM.
Our previous study found that high-glucose (HG) levels promote SGLT1 and matrix metalloproteinase 2 expression in CFs (Meng et al., 2018), but whether HG levels promote cardiac fibrosis by inducing CF activation and whether SGLT1 is involved in HG-induced CF activation have not been reported. Thus, a series of experiments was performed in this study to determine the role of SGLT1 in CF activation during DCM. Moreover, we sought to characterize the role of the p38 and ERK1/2 signaling pathways in the mechanism by which SGLT1 regulates CF activation.
Ethics Statement
All animal procedures were conducted in accordance with the National Institutes of Health Guide for the Care and Use of Laboratory Animals and were approved by the Medicine Animal Welfare Committee of Shaoxing People's Hospital.
Culture of Rat Primary CFs
Primary rat CFs were isolated from the ventricles of neonatal male Sprague Dawley rats (2-3 days old) using enzyme digestion. The tissue was cut into 1-mm cubes and digested with trypsin/EDTA (Gibco, NY, United States) and collagenase II (Sigma, United States) at 37°C. The mixture (collagenase to trypsin, 100:1) was placed in a shaker at 37°C for 20 min, the supernatant was collected and combined with DMEM containing 10% FBS, and this process was repeated until the tissue was fully digested. Cardiomyocytes were separated from CFs using low-speed centrifugation (300 × g), and the supernatant containing CFs was collected. The isolated CFs were cultured in DMEM supplemented with 10% FBS and 1% penicillin-streptomycin at 37°C in a humidified incubator with 5% CO2. Cells were divided into four groups: 1) a control group, in which CFs were incubated with DMEM containing 5.5 mmol/L normal glucose for 48 h; 2) an HG group, in which CFs were incubated with DMEM containing 33 mmol/L glucose (HG) for 48 h; 3) an HG + si-NC group, in which CFs were transfected with si-NC and cultured under HG conditions for 48 h; and 4) an HG + si-SGLT1 group, in which CFs were transfected with si-SGLT1 and cultured under HG conditions for 48 h.
Small Interference RNA Transfection
To knock down the expression of SGLT1 in CFs, small interfering RNAs against the SGLT1 gene (si-SGLT1) and a negative control siRNA (si-NC) were synthesized by Guangzhou RiboBio Co., Ltd. Briefly, CFs grown to 70-80% confluence were incubated with Lipofectamine 3000 transfection reagent (Invitrogen, Waltham, MA, United States) loaded with the siRNAs for 48 h. Transfection efficiency was evaluated using RT-qPCR analysis.
Cell Counting Kit-8 Assay
CFs were cultured in 96-well plates at a density of 1 × 10⁴ cells/well. A 10 µl aliquot of the Cell Counting Kit-8 (MCE, Shanghai, China) solution was added to each well, and the plates were incubated at 37°C for 1 h. Absorbance was measured using a microplate reader (Molecular Devices, CA) at 450 nm.
Wound Scratch Assay
Cells were grown to 90% confluence in 6-well plates in DMEM supplemented with 10% FBS, and the medium was replaced with serum-free DMEM to starve the cells for 24 h. Wounds were made by drawing a straight line through the plated cells with a sterile 200 μl pipette tip. CFs were then transfected with si-NC or si-SGLT1 and exposed to HG conditions for 24 h. Images were acquired using a Leica microscope (DM 2000, Leica, Wetzlar, Germany).
Western Blotting
Protein samples from CFs and cardiac tissues were extracted using RIPA buffer, and equal amounts of protein from each group were separated by 12% sodium dodecyl sulfate-polyacrylamide gel electrophoresis. The separated proteins were then transferred to polyvinylidene fluoride membranes and blocked with 5% nonfat milk for 1 h at room temperature. Subsequently, the membranes were incubated with primary antibodies at 4°C overnight, followed by incubation with a horseradish peroxidase (HRP)-conjugated goat anti-rabbit secondary antibody (1:5,000, Santa Cruz) for 1 h at room temperature. The following primary antibodies were purchased from Abcam and used at a 1:1,000 dilution: β-actin (ab5694), SGLT1 (ab14686), collagen I (ab34710), and collagen III (ab6310). The following antibodies were purchased from Cell Signaling Technology and used at a 1:1,000 dilution: ERK1/2 (cat. 4695), phospho-ERK1/2 (cat. 4376), p38 (cat. 8690), and phospho-p38 (cat. 4511).
Animal Experiments
Sprague Dawley (SD) rats were obtained from the Nanjing Biomedical Research Institute of Nanjing University (China). A total of 24 six-week-old male SD rats were assigned to four groups (control, STZ, si-NC, and si-SGLT1 groups) using the random number method, with six rats in each group. A 12-h light-dark cycle was used, and the rats were provided ad libitum access to food and water. After acclimation for 1 week, rats in the diabetes groups were fed a high-fat diet (fat provided 60% of total calories, Research Diet D12492) for 4 weeks and then intraperitoneally injected with 60 mg/kg STZ (Sigma) dissolved in a citrate buffer (pH 4.5), whereas the control group received normal chow. We performed intraperitoneal glucose tolerance tests (IPGTTs) to identify the insulin-resistant rats, and fasting blood glucose (FBG) levels were measured seven days after injection. Body weights were recorded. Successful induction of diabetes was defined by an FBG value higher than 16.7 mmol/L (Feng et al., 2019). After successful establishment of the rat model of DCM, the rats in the si-NC and si-SGLT1 groups were injected with 5 μl of si-NC or si-SGLT1 (200 nmol/500 g) in PBS once a week. All rats were sacrificed after 16 weeks of feeding. The left ventricular tissues were removed and cut into pieces for histomorphological analysis.
ELISA
After the rats were fasted overnight, blood samples were obtained from the postcaval vein and processed for plasma extraction within 1 h (centrifuged at 3,000 × g for 10 min at 4°C), and the plasma was stored at -80°C in polypropylene tubes for further analysis. The levels of collagen I, collagen III, and transforming growth factor-β1 (TGF-β1) in CFs and rat serum were detected using the rat collagen type I ELISA kit (abx052369, Abbexa, United Kingdom), rat collagen type III ELISA kit (abx573727, Abbexa), and TGF-β1 ELISA kit (PT878, Beyotime, Jiangsu, China), respectively, following the manufacturers' instructions.
Histology
Tissues from rats were fixed using 10% buffered formalin, dehydrated, embedded in paraffin, and sectioned into 5 μm-thick sections. Hematoxylin and eosin (HE) staining was used to assess cardiac injury, whereas Masson's trichrome staining was used to detect collagen fibers, and the slides were observed under an optical microscope. For immunohistochemistry, sections were stained with a primary antibody against SGLT1 (1:200, Abcam) and then stained with a secondary antibody. After washing with PBS, the slides were incubated with 3,3′-diaminobenzidine. The detailed procedure has been described previously (Lin et al., 2019). Semiquantitative analysis was performed using image analysis software (Image-Pro Plus, Media Cybernetics).
Wheat Germ Agglutinin Staining
Slides were stained with Alexa Fluor 488-conjugated wheat germ agglutinin WGA (Sigma). In brief, slides were dewaxed, rehydrated, and blocked with 3% BSA for 20 min. The slides were then incubated in WGA solubilized in PBS for 30 min at room temperature in the dark. After washing with PBS, the sections were stained with DAPI (Invitrogen) for 5 min and images were acquired using a Nikon Eclipse Ti-U fluorescence microscope (Minato-ku, Tokyo, Japan). The cardiomyocyte size was determined by dividing the total area by the number of cardiomyocytes using Image J software (NIH, Bethesda, MD, Unites States).
Statistical Analysis
The experimental data are expressed as the mean ± standard deviation. All statistical analyses were performed using GraphPad Prism 8.0 software (GraphPad Software, San Diego, CA, United States). The t-test was used for comparisons between two groups, and one-way analysis of variance was used to compare multiple groups. Statistical significance was set at p < 0.05.
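For reference, the two reported tests can be reproduced with standard statistical software; the sketch below uses SciPy with placeholder group values rather than the study data.

```python
import numpy as np
from scipy import stats

# Placeholder measurements for three groups (n = 6 each), not study data.
control = np.array([1.00, 0.95, 1.05, 0.98, 1.02, 1.01])
hg      = np.array([1.60, 1.55, 1.70, 1.65, 1.58, 1.62])
hg_si   = np.array([1.20, 1.15, 1.25, 1.18, 1.22, 1.19])

t, p_ttest = stats.ttest_ind(control, hg)           # comparison of two groups
f, p_anova = stats.f_oneway(control, hg, hg_si)     # comparison of multiple groups
print(f"t-test p = {p_ttest:.3g}, one-way ANOVA p = {p_anova:.3g}")
# As in the paper, differences are considered significant when p < 0.05.
```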
SGLT1 Is Upregulated in Diabetic Cardiac Tissues and High-Glucose-Induced Cardiac Fibroblasts
The results of the IPGTT revealed that blood glucose levels peaked half an hour after intraperitoneal injection, slowly decreased thereafter, and remained at 6-8 mmol/L in the control group throughout the entire process. In contrast, the blood glucose levels of diabetic rats demonstrated evident hyperglycemia throughout the entire process (Figure 1A). Figure 1B shows that rats in the control group had a relatively stable FBG level, whereas the FBG level of the DCM group increased significantly after injection with STZ in the fourth week (p < 0.05). The body weight of the DCM group was higher than that of the control group (Figure 1C). HE and Masson staining were performed to examine the changes in cardiac pathology (Figures 1D,E). Increased interstitial fibrotic areas were observed in the DCM group compared to the control group (Figure 1F).
We then validated that SGLT1 mRNA and protein levels were both upregulated in the DCM group compared to those in the control group (Figures 2A,B). Moreover, using immunohistochemistry, we found that CFs in diabetic rats expressed higher SGLT1 levels than those in normal rats (Figures 2C,D). In contrast to the control group, diabetic rats showed a significant increase in fibrosis-related proteins, including collagen I and collagen III expression (Figures 2C,E). In vitro, CFs changed from a long and thin shape to fusiform under HG conditions (Figure 2F), and the results of immunofluorescence revealed that the fluorescence intensity of SGLT1 was markedly increased in the HG medium (Figure 2G). Accordingly, western blotting analysis revealed that compared to the control group, the levels of SGLT1, collagen I, and collagen III proteins were significantly upregulated in the HG group (Figure 2H).
FIGURE 1 | Changes of blood glucose level and histopathology in DCM rats. SD rats were fed a high-fat diet (HFD) for 4 weeks before streptozotocin (STZ) was injected intraperitoneally at a dose of 60 mg/kg, and rats were fed for another 12 weeks. (A) Intraperitoneal glucose tolerance tests (IPGTTs) were used to identify insulin resistance in the two groups.
Knockdown of SGLT1 Inhibits High-Glucose-Induced Cardiac Fibroblast Activation
To investigate the role of SGLT1 in CF activation, we knocked down SGLT1 by transfecting CFs with specific siRNAs against SGLT1. The results of RT-qPCR (Figure 3A) and western blotting (Figure 3B) analysis validated that SGLT1 siRNA2 exerted the highest knockdown efficiency and was, therefore, chosen for the subsequent assays. Subsequent characterization of the CFs showed that HG stimulation significantly increased cell viability (Figure 3C) and migration (Figures 3D,E). We found that CFs with SGLT1 inhibition had markedly reduced cell viability and migration compared with those in the HG + si-NC group (Figures 3C-E). Furthermore, ELISA revealed that HG stimulation caused high expression of TGF-β1, collagen I, and collagen III, indicating the synthesis of collagen in CFs, whereas the inhibition of SGLT1 effectively reversed this increase (Figures 3F-H). We further analyzed the potential involvement of the p38 mitogen-activated protein kinase (MAPK) and ERK1/2 signaling pathways in the regulatory role of SGLT1 in HG-mediated CF activation. As expected, HG significantly activated the phosphorylation of p38 MAPK and ERK1/2, whereas SGLT1 silencing reduced the effects of HG (Figure 3I). Therefore, these data indicate that SGLT1 regulates the function of CFs.
Sodium-Glucose Cotransporter 1 Regulates p38 MAPK and ERK1/2 Signaling and Collagen Synthesis in Cardiac Fibroblasts
We further investigated the potential mechanism underlying the involvement of SGLT1 in the activation of CFs. SGLT1 overexpression was achieved by transfecting CFs with a plasmid containing the SGLT1 gene, and RT-qPCR and western blotting analysis revealed significantly higher SGLT1 levels in the SGLT1-transfected group when compared with the control group (Figures 4A,B). Interestingly, we found that the overexpression of SGLT1 significantly increased the proliferation of CFs under both normal and HG conditions (Figure 4C). In addition, the overexpression of SGLT1 effectively promoted the secretion of TGF-β1, collagen I, and collagen III in the cell supernatant under both normal and HG conditions (Figures 4D-F). Furthermore, overexpression of SGLT1 significantly increased p-p38 and p-ERK1/2 levels under both normal and HG conditions (Figure 4G). These data suggest that SGLT1 might activate CFs by activating the p38 and ERK1/2 pathways.
FIGURE 3 | Knockdown of SGLT1 inhibited high-glucose-induced CF activation. CFs were transfected with SGLT1 siRNAs, and SGLT1 mRNA and protein levels were detected using RT-qPCR (A) and western blotting (B), respectively. (C) CCK-8 assay was used to detect the proliferation of CFs under high-glucose conditions with or without SGLT1 inhibition. (D, E) Representative images of the wound-healing assay were obtained at 0 and 24 h after knockdown of SGLT1, and the migratory ability of CFs was compared. (F-H) ELISA was used to detect the levels of collagen-synthesis-related markers, including TGF-β1, collagen I, and collagen III, in the cell supernatant (n = 6). (I) Western blotting analysis was performed to investigate the phosphorylation levels of p38 mitogen-activated protein kinase (MAPK) and extracellular signal-regulated kinase (ERK) 1/2 in CFs under high-glucose conditions with or without SGLT1 inhibition.
Sodium-Glucose Cotransporter 1 Inhibition Alleviates Fibrosis in the Diabetic Heart
Since cardiac fibrosis and hypertrophy are important pathological structural features of DCM, we investigated whether SGLT1 regulates fibrosis and hypertrophy in diabetic hearts. We knocked down SGLT1 in rats with DCM by continuously administering si-SGLT1 via intravenous injection in the tail vein. As shown in Figures 5A,B, the expression of SGLT1 mRNA and protein in the heart was reduced in the DCM + si-SGLT1 group when compared with that in the DCM + si-NC group. Accordingly, the immunohistochemistry results indicated that CFs expressed low SGLT1 levels after SGLT1 inhibition (Figure 5C). Interestingly, compared with the DCM + si-NC group, a significant reduction in interstitial fibrosis was observed in the DCM + si-SGLT1 group (Figure 5D). The protein levels of the fibrosis hallmarks collagen I and collagen III were also reduced by SGLT1 knockdown (Figures 5E,F). However, SGLT1 knockdown had no significant effect on cardiac hypertrophy, as examined by WGA staining (Figure 5G). Collectively, our results indicated that the knockdown of SGLT1 reduced cardiac fibrosis but had no effect on cardiac hypertrophy in DCM.
DISCUSSION
Increasing attention has been placed on the utility of SGLT2 inhibitors because, in addition to controlling blood glucose levels, they have been shown to provide significant cardiovascular benefits in T2DM patients (Packer et al., 2021). Beyond SGLT2, recent findings have also emphasized the potential role of SGLT1 in the development of cardiovascular diseases. The myocardial expression of SGLT1 in humans is altered in various cardiovascular disease states. Compared with controls, left ventricular SGLT1 mRNA and protein expression was significantly upregulated in heart failure patients with DCM (Sayour et al., 2020). Individuals carrying loss-of-function mutations in the SGLT1 gene are estimated to have a lower risk of developing heart failure, driven by mitigation of postprandial hyperglycemic episodes (Seidelmann et al., 2018). In endothelial cells, angiotensin II upregulates SGLT1 expression to promote sustained oxidative stress, and inhibition of SGLT1 appears to be an attractive strategy to enhance protective endothelial function (Park et al., 2021). Apart from cardiomyocytes and endothelial cells in the heart, we first identified that SGLT1 is expressed in human CFs (Meng et al., 2018), and in the present study, the significant finding was that an increase in SGLT1 expression in rat hearts triggered the development of cardiac fibrosis through activation of CFs by upregulating the p38 MAPK and ERK1/2 signaling pathways.
FIGURE 4 | SGLT1 overexpression promotes collagen release via the p38 and ERK1/2 signaling pathways in CFs. CFs were transfected with an SGLT1 plasmid or vector, and SGLT1 expression levels were detected using RT-qPCR (A) or western blotting analysis (B). (C) After overexpression of SGLT1 in CFs, CCK-8 assay was used to detect the proliferation of CFs under normal or high-glucose conditions (n = 6). (D-F) After overexpressing SGLT1 in CFs, the collagen-synthesis-related markers, including TGF-β1, collagen I, and collagen III, in the cell supernatant were measured using ELISA (n = 6). (G) After overexpressing SGLT1 in CFs, the protein levels of phosphorylated p38 and p-ERK1/2 were analyzed using western blotting under normal or high-glucose conditions (n = 3).
Targeting SGLT1 has also been found to have cardioprotective effects in DCM. RNA-mediated inhibition of the SGLT1 gene reduced glycemic variability and cardiac damage in type 2 diabetes mellitus mice in vivo. In cultured cardiomyocytes, SGLT1 knockdown restored cell proliferation and suppressed reactive oxygen species production and cytotoxicity. These data supported the notion that SGLT1 might serve as a target for myocardial injury in the diabetic heart. It is well known that CF activation plays an essential role during the development of cardiac fibrosis. However, the role of SGLT1 in CF activation remains unclear.
Several lines of experimental evidence suggest that SGLT1 silencing may attenuate cardiac fibrosis. Ramratnam et al. (2014) demonstrated that cardiac overexpression of SGLT1 increases collagen I gene expression and interstitial fibrosis in mouse hearts. Another study found that SGLT1 knockout downregulated CTGF and collagen I gene expression and interstitial fibrosis in pressure-overloaded mouse hearts. Similar to these studies, we also found that knockdown of SGLT1 in diabetic hearts suppressed the synthesis of TGF-β1, collagen I, and collagen III. Furthermore, in cultured CFs, we found that SGLT1 regulates cell proliferation and collagen synthesis, suggesting a role of SGLT1 in regulating CF activation. To the best of our knowledge, this study is the first to demonstrate that SGLT1 regulates the activation of CFs in DCM.
Our in vitro experiments showed that HG upregulated SGLT1 expression in CFs, which was accompanied by an increase in the abundance of p-p38 and p-ERK1/2. SGLT1 overexpression significantly induced the abundance of these proteins in CFs under both normal and HG conditions. TGF-β1 stimulation in CFs resulted in increased proliferation, increased collagen I and collagen III expression, and increased p38 and ERK1/2 phosphorylation (Xu et al., 2017), whereas inhibition of the activation of p38 kinase and ERK1/2 could effectively attenuate cardiac fibrosis (Tao et al., 2016). Activation of MAPKs participates in the upregulation of cerebral SGLT-1 expression (Yamazaki et al., 2018). Moreover, the relationship between the SGLT1 and MAPK signaling pathways in the heart has also been reported in our previous study (Lin et al., 2021). Based on the abovementioned results, we deduced that the increase in SGLT1 expression in the diabetic heart is involved in triggering CF proliferation and subsequent cardiac fibrosis.
Furthermore, we noticed that the study performed by Matsushita et al. suggested that SGLT1 knockout could prevent chronic pressure-overload-induced hypertrophic cardiomyopathy. However, in our study, we found that knockdown of SGLT1 had no effect on hyperglycemia-related hypertrophy in diabetic hearts. This discrepancy may be because of the differences in experimental animal models. We used SD rats to establish a DCM model, whereas Matsushita et al. (2018) used mice that underwent transverse aortic constriction surgery. The other significant difference is that we only knocked down SGLT1 in rats using specific siRNA, rather than using gene knockout technology. A previous study suggested that SGLT1-deficient mice need to consume a glucose-galactose-free diet because they show symptoms of glucose-galactose malabsorption syndrome (Gorboulev et al., 2012). Therefore, SGLT1 knockout may not be appropriate for modeling DCM. More studies are needed to investigate the exact role of SGLT1 in cardiac hypertrophy.
In summary, our study evaluated the changes in the expression of SGLT1 in the progression of diabetic cardiac fibrosis and identified a significant increase in SGLT1 expression in the diabetic heart. SGLT1 is involved in cardiac fibrosis via the p38 and ERK1/2 signaling pathways. Our findings suggest that SGLT1 is a potential therapeutic target for the prevention of diabetic cardiac fibrosis.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
The animal study was reviewed and approved by the Medicine Animal Welfare Committee of Shaoxing People's Hospital.
AUTHOR CONTRIBUTIONS
LM and HU designed the project; HL and LG performed animal experiments and analyzed the data; LM and HL performed in vitro experiments and wrote the draft manuscript; HG supervised and funded the project; and HU and HG made the modification. | 5,549.6 | 2021-06-24T00:00:00.000 | [
"Medicine",
"Biology"
] |
Influence of pozzolanic addition on strength and microstructure of metakaolin-based concrete
The intent of this study is to explore the physical properties and long-term performance of concrete made with metakaolin (MK) as a binder, using microsilica (MS) and nanosilica (NS) as substitutes for a portion of the ordinary Portland cement (OPC) content. The dosage of MS was varied from 5% to 15% for OPC-MK-MS blends, and the dosage of NS was varied from 0.5% to 1.5% for OPC-MK-NS blends. Incorporation of these pozzolans accelerated the hardening process and reduced the flowability, consistency, and setting time of the cement paste. In addition, it produced a denser matrix, improving the strength of the concrete matrix, as confirmed by scanning electron microscopy and X-ray diffraction analysis. The use of MS enhanced the strength by 10.37%, and the utilization of NS increased the strength by 11.48% at 28 days. It also reduced the penetrability of the matrix with a maximum reduction in the water absorption (35.82%) and improved the resistance to the sulfate attack for specimens containing 1% NS in the presence of 10% MK. Based on these results, NS in the presence of MK can be used to obtain cementitious structures with the enhanced strength and durability.
Introduction
Cement is an essential construction material, and concrete has the highest demand worldwide because of its cost-effectiveness and availability [1][2][3][4][5].The absence of a viable substitute material in the foreseeable future further emphasizes the importance of studying the behavior of concrete in both its fresh and hardened states.Having a comprehensive understanding of concrete properties is also crucial for developing effective strategies to enhance its performance and lifespan in various applications [6][7][8][9][10].The use of concrete as a construction material has been increasing year by year.Therefore, different approaches have been utilized to improve the quality of concrete [11][12][13][14][15]. Furthermore, new methods have been employed to enhance the sustainability of concrete [16][17][18][19][20].One of these methods is the application of waste materials as a replacement for aggregate or cement [21][22][23][24][25][26][27][28][29][30].The use of waste materials as a replacement for aggregate significantly reduces the utilization of raw materials [31][32][33][34][35][36].On the other hand, the use of waste materials as replacement for cement reduces the use of raw materials and CO 2 emissions [8,37,38].The integration of supplementary cementitious materials (SCMs) into the concrete mix has become increasingly important [39].In general, cement is partially substituted by the calculated amount of SCMs in terms of percentage by weight of cement [40].SCMs not only enhance the durability of concrete but also provide a sustainable solution to reduce CO 2 emissions during concrete production [41].In addition to its environmental benefits, the use of SCMs offers economic advantages by reducing the overall cost of concrete production [42].
The chemical composition of SCMs determines whether they are self-cementing, pozzolanic, or both [43,44].These materials can be acquired from industrial wastes, including fly ash (FA) [45], silica fume (SF) or microsilica (MS) [46], metakaolin (MK) [47], slag [48], nanosilica (NS), and even an agricultural waste, such as sugarcane bagasse ash [49].Researchers have utilized various types of SCMs to improve the characteristics of cement-based structures [50].Pozzolans can greatly improve the performance of cementitious materials in terms of their resistance to the chemical attack, durability, and strength [51].In addition, these additives have shown to enhance the microstructure of the interface region between aggregates and cement paste, leading to improved mechanical properties of cement mortar and concrete [52].Golewski [43] explored the pozzolanic process in cement composites by incorporating FA, which transformed disordered phases into homogeneous and compact forms, thereby filling porous spaces with pozzolanic reaction products.Likewise, Nandhini and Ponmalar [53] reported a dense matrix in M40 grade self-compacting concrete with enhanced development of calcium silicate gel, resulting in the improved tensile strength and reduced permeability, particularly with the addition of 2% NS.
Pozzolanic materials are used to develop the strength, durability, and other properties of concrete, and their effects can be additive or synergistic when utilized together [54].Rajamony Laila et al. [44] reported enhanced compressive and flexural strengths by replacing cement with granite culver (GP) and incorporating super absorbent polymer (SAP) on self-compacting concrete (GP-SSC) at an optimal replacement of up to 15% GP, along with 0.3% SAP.Tawfik et al. [47] indicated an overall improvement in the strength and sulfate resistance of modified lightweight concrete by adding MS (5-20%) and MK (10-35%) with SF, which demonstrated superior results compared with MK.Ilić et al. [55] examined the impact of thermally activated kaolin (AK) and mechanically activated kaolin (MK) on the compressive strength and microstructure of mortar.Substituting ordinary Portland cement (OPC) with MK increased the compressive strength because of the higher reactive silica content, enhancing the pozzolanic reaction and refining the pore structure.However, AK substitution led to lower strengths in comparison with MK.Thus, it is highly critical to choose the right kind and quantity of pozzolans, depending on the individual application and desired attributes [56].
The incorporation of micro-and nano-sized pozzolans can have beneficiary effects as well as obstacles that are linked with their usage, such as an extended setting time and the possibility of an alkali-silica reaction [40,57].However, the specific effects of their combination would need to be studied in a specific concrete mix to determine the optimal ratio and combination of additives [58].MK, with a smaller particle size than cement particles, has been extensively used for the strength enhancement.The effectiveness of utilizing nanosized NS and microsized MS as SCMs in improving the compactness and strength of composites in the presence of MK remains a subject of debate [59].Additionally, there are conflicting results from studies evaluating the optimal proportions of these additives that yield the most desirable physico-mechanical properties in cement-based composites [60].Thus, the application of these pozzolans in cementitious materials requires further investigation in this sector based on these findings.The current research work aims to determine the optimum proportions of MS and NS suitable for partially substituting cement in MK-based cement composites and explore their effects on the physical and microstructural characteristics of concrete.The beneficiary impact on the penetrability of the matrix in an aggressive environment has also been evaluated.
Materials
JK Super Portland cement (43-grade classification, fineness 311 m²/kg, and specific gravity 3100), fine aggregates such as standard Ennore sand (bulk density 1670 kg/m³ and specific gravity 2.58), and coarse aggregates (bulk density 1493 kg/m³ and specific gravity 2.71) were purchased from a local vendor in Bathinda. Following sieve examination, the sand was found to comply with zone II. MK (mean particle size 135 nm) was obtained from Madhavram, Chennai. NS (mean particle size 10 nm, specific surface area 2.5 × 10⁵ m²/kg, and apparent density 200 kg/m³) was purchased from Bee Chemicals, Kanpur, and MS (mean particle size 0.25 μm) was purchased from the FOSROC office, Chandigarh, complying with IS 9103-1999 [44], with the chemical compositions listed in Table 1.
Preparation of concrete specimens
The experimental program for the specimen preparation and analysis is displayed in Fig 1.
Table 2 provides the varying percentages of all ternary binders containing MS (OPC-MK-MS) and NS (OPC-MK-NS), sand, and water. To enhance the workability of the concrete, a polycarboxylate-based superplasticizer, Fosroc Auramix 400 (QCDA 1551; 8 liters per m³), and MK (10%) were used. The binders were mixed mechanically for 2 min before the addition of fine and coarse aggregates. The mixture was again stirred mechanically for 10 min before adding water to achieve homogeneity. The water-to-binder ratio was maintained at 0.5 throughout the experiment. The mixture was then poured into designated molds with thorough compaction, followed by smoothing the surface and covering the specimens with plastic film. One day after casting, the specimens were demolded and cured for 28 days at room temperature in potable water.
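For readers who wish to reproduce the batching arithmetic, a minimal sketch is given below; the total binder content per cubic metre is an assumed example value, since the paper fixes only the replacement percentages and the water-to-binder ratio of 0.5.

```python
def mix_proportions(total_binder_kg, mk_pct=10.0, ms_pct=0.0, ns_pct=0.0, w_b=0.5):
    """Split an assumed total binder mass into OPC, MK, MS, and NS, plus water."""
    mk = total_binder_kg * mk_pct / 100.0
    ms = total_binder_kg * ms_pct / 100.0
    ns = total_binder_kg * ns_pct / 100.0
    opc = total_binder_kg - mk - ms - ns      # OPC makes up the remainder
    water = w_b * total_binder_kg             # constant water-to-binder ratio
    return {"OPC": opc, "MK": mk, "MS": ms, "NS": ns, "water": water}

# Example: an MB-9-type blend (10% MK + 1% NS) for an assumed 400 kg/m3 of binder.
print(mix_proportions(400.0, mk_pct=10.0, ns_pct=1.0))
# {'OPC': 356.0, 'MK': 40.0, 'MS': 0.0, 'NS': 4.0, 'water': 200.0}
```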
Analysis of concrete specimens
The consistency (IS 4031-2019 part 4), initial setting time (IST) and final setting time (FST) (IS 4031-2019 part 5), and flow of the cement paste (IS 5512-1983) were analyzed [61]. The compressive strength (IS 10080-1982), splitting tensile strength, and flexural strength of the specimens (IS 5816-1999) were determined at 28 days of curing. Following this step, the specimens were cured separately in two tanks: a water tank and a tank containing 5% MgSO4 solution.
The compressive strength after exposure to the sulfate solution (IS 4031-1988 part 6) and the compressive, flexural, and splitting tensile strengths of the water-cured specimens were determined after 56, 90, and 180 days. The level of degradation was quantified on the basis of the loss in compressive strength. The reduction in water absorption was measured to determine the impact on the penetrability of the matrix according to IS 1124-1974 [62]. FESEM-EDX (field emission scanning electron microscopy-energy dispersive X-ray analysis) and XRD (X-ray diffraction analysis) were employed to characterize the microstructure of the specimen matrix after 28 days of curing.
Fresh properties
The effects of different percentage levels of MS and NS (substituting cement) on the consistency of ternary binders (OPC-MK-MS and OPC-MK-NS) containing a constant dosage of MK (10%) were analyzed. Fig 2A illustrates the results of the standard consistency tests for the specimens containing MS at different percentage levels. It was observed that the water requirement increased with increasing percentage levels of MS, which is consistent with the literature [49]. This increase in the water demand was recorded for all OPC-MK-MS ternary binders. The percentage increase in the consistency for each specimen compared with the control mix (MB-1) was obtained as follows: MB-2 (3.45%), MB-3 (10.34%), MB-4 (13.79%), MB-5 (20.69%), and MB-6 (27.59%). This phenomenon can be ascribed to the comparatively higher fineness of MS particles relative to MK particles [49]. Interestingly, it was found that when MS was added as a replacement for cement to make the ternary binder along with MK, the water demand increased. The combination of MS and MK has a positive impact on the strength and durability of concrete, and the findings revealed that it can lead to an increase in the water demand [45]. Moreover, the presence of NS in the ternary binder consisting of OPC-MK-NS increased the consistency (Fig 2A). This ternary binder also required more water with an increasing percentage level of NS at a constant dosage of MK. However, the consistency of the ternary binder with NS was slightly higher than that without NS. The percentage increase in the consistency for each specimen compared with MB-1 was obtained as MB-7 (6.90%), MB-8 (10.34%), MB-9 (17.24%), MB-10 (24.14%), and MB-11 (31.03%). This increase in the water demand can be attributed to the high surface area of NS, which demands more water [63]. This observation aligns with the results of prior research, which documented a rise in the water consumption when cement is substituted with NS [64,65].
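The percentage increases quoted above are relative changes with respect to the control mix MB-1; a minimal sketch of that calculation, using assumed consistency values, is shown below.

```python
def pct_increase(value, control):
    """Percentage change of a measured value relative to the control mix."""
    return 100.0 * (value - control) / control

control_consistency = 29.0   # assumed water demand (%) for MB-1
mb2_consistency = 30.0       # assumed water demand (%) for MB-2
print(f"{pct_increase(mb2_consistency, control_consistency):.2f}%")   # -> 3.45%
```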
The flow values obtained for MB-1 and the cement mortars including MS (OPC-MK-MS) and NS (OPC-MK-NS) are depicted in Fig 2B. According to our findings, the flow of MB-1 was greater than that of the cement mortars containing MS (OPC-MK-MS). Furthermore, the flow of the mortar decreased as the dosage of MS increased. Compared with MB-1, the flow of the cement pastes containing MS dropped by 7.30% (MB-2), 11.70% (MB-3), 13.20% (MB-4), 15.60% (MB-5), and 18.55% (MB-6). The flow of the cement mortars containing NS was found to be smaller than that of MB-1 and of the cement mortars containing MS. Because of the filler effect, the finer NS particles improve packing and lower the flow [30]. We found that the flow of the mortar was reduced as the NS content increased in the case of OPC-MK-NS. When compared with MB-1, the flow of the cement pastes containing NS dropped by 6.79% (MB-7), 3.16% (MB-8), 3.56% (MB-9), 4.21% (MB-10), and 5.01% (MB-11). These results imply that the addition of micro- and nano-substituents may negatively impact the flow characteristics of cement mortars and pastes by increasing the viscosity of the matrix [53].
Strength analysis
The compressive strength of the specimens was determined after 28, 56, 90, and 180 days of curing. The effect of the increased fineness on the compressive strength is most often seen at an early age [67]. At 28 days of hydration, the compressive strengths of the MS-containing concrete specimens (OPC-MK-MS) were much higher than those of MB-1. The observed phenomenon can be primarily attributed to the combined influence of MS microparticles and MK fine particles as pozzolanic activators within the cementitious matrix [51]. MS serves as a synergistic filler material, filling the interstitial gaps and pores within the matrix of the cured cement paste and enhancing its density and strength [68]. This observation suggests that the compressive strength of OPC-MK-MS is remarkably influenced by the presence of amorphous silica. It is worth noting that the silica (SiO2) content of the supplementary material MS exceeded 90%, whereas that of MK was 55%. In addition, the high contents of SiO2 and CaO in MS further enhance the formation of calcium silicate hydrate (CSH) gel, which is responsible for the strength and durability of the cementitious materials [69].
The rate of strength growth in the concrete specimens containing MS was likewise displayed to be greater at other curing days than in MB-1; however, the percentage increase was greater at 28 days of curing.Early compressive strength increases can be due to the hydration acceleration.Microparticles hydrate quickly, resulting in a rapid increase in the initial strength [54].The maximum enhancement in the compressive strength was noticed up to substitution by 10% MS in the ternary OPC-MK-MS binders, and then a slight decline was witnessed (Fig 3C).The percentage increase in the compressive strength for the MB-4 specimen compared with MB-1 was 10.37% (28 days), 10.04% (56 days), 9.89% (90 days), and 9.86% (180 days), whereas that for the MB-5 specimen was 10.03% (28 days), 9.87% (56 days), 9.74% (90 days), and 9.75% (180 days).Thus, 10% was considered the optimal dosage of MS.This decline can be owing to friction among amorphous silica particles at higher concentrations [70].
The inclusion of NS further increased the strength of the OPC-MK-NS concrete specimens.Pozzolanic reactions, in essence, bring about alterations in the microstructure of OPC-MK-NS and induce changes in the chemical composition of the hydration products attributed to the consumption of calcium hydroxide (CH) produced during the hydration of Portland cement [71].The experimental results demonstrate that the average compressive strengths of the specimens belonging to OPC-MK-NS, which contain the supplementary nanomaterial NS, consistently exhibited higher values than those of the MB-1 and OPC-MK-MS specimens.The best results were obtained for inclusion of 1% NS in the presence of 10% MK, which was considered as the optimal dosage, while a previous study reported 2% NS as the optimal dosage [53].NS is a comparatively costly material compared with MK, and its lower dosage would provide costeffectiveness.The pozzolanic reaction with CH is related to the surface area accessible for interaction with SiO 2 particles [69].The finer particle size of NS reacts faster and allows for better packing and filling of voids in the presence of MK within the cement matrix, resulting in a denser and more homogeneous structure.This phenomenon played a crucial role in strengthening the interparticle bonding within the cement matrix, consequently leading to an improvement in the compressive strength and overall structural integrity of the concrete material [72].
The increase in the strength was better at the early ages, owing to the better packing, rapid hydration, and pozzolanic impact of the fine nanoparticles. At later curing ages, the increased strength was attributable to a decrease in the CH concentration with concurrent secondary CSH formation, pore size refinement, and matrix densification [52]. A minor decrease in the relative increase in the compressive strength was observed for the MB-10 and MB-11 specimens. This decline may be because of the agglomeration propensity of the NS particles at increasing dosages [73]. For instance, compared with MB-1, the percentage increase in the compressive strength for the MB-9 specimen was 11.48% (28 days), 11.21% (56 days), 10.46% (90 days), and 10.38% (180 days), whereas that for the MB-10 specimen was 11.31% (28 days), 11.18% (56 days), 10.31% (90 days), and 9.5% (180 days). Fig 4 depicts the variation in the splitting tensile strength of the concrete specimens, while Fig 5 illustrates the flexural strength variation of the specimens at curing ages of 28, 56, 90, and 180 days. When compared with MB-1, the splitting tensile strength and flexural strength increased for all the mixtures on all days, consistent with the compressive strength analysis. The results also demonstrated that very high percentages of MS and NS did not appreciably boost the splitting tensile strength, and a drop in the splitting tensile strength was found beyond 10% MS (in the case of the OPC-MK-MS specimens) and 1% NS (in the case of the OPC-MK-NS specimens). These results may be due to a decrease in the homogeneity of the cement matrix at higher additive dosages [74]. Thus, the incorporation of the two pozzolans at optimized contents significantly enhances the strength of the cement matrix. Sharma et al. [75] also reported that the addition of a higher amount of pozzolans hinders the uniform dispersion of the constituent particles in concrete specimens, decreasing the mechanical strength.
The analysis involved a comparative assessment at three different curing ages as 56, 90, and 180 days.The results are indicated in Fig 6 .The specimens did not exhibit any notable alterations in mass upon exposure to a magnesium sulfate solution.As a result, the data from this observation have not been included in the article.The data collected from the specimens consistently demonstrated a direct relationship between the duration of the curing process and compressive strength.The experimental results displayed in Fig 6A illustrate that the OPC-MK-MS specimens still provided greater compressive strength than the MB-1 specimens.However, the strength in the presence of the sulfate attack exhibited a decrease compared with its strength in water.The observed phenomenon may be ascribed to the gradual deterioration of the CSH gel and subsequent gypsum formation [67].The specimens, which were composed of a mixture containing 10% MS, indicated an observable improvement in their ability to withstand the harmful effects of the sulfate attack (Fig 6C ).The observed increase in the resistance was accompanied by a relatively minor decrease in the compressive strength.This phenomenon can be because of the pore filling mechanism, in which the silica particles consume CH, leading to the inhibition of the gypsum formation, as outlined in reaction 2 [34].
The empirical findings suggest a direct relationship between the duration of curing and the compressive strength of the OPC-MK-NS specimens.As depicted in Fig 6B, the experimental results clearly show that the specimens incorporating partial replacement of cement with NS exhibited the enhanced compressive strength compared with the MB-1 and OPC-MK-MS specimens.By the partial replacement of cement with NS, in conjunction with the inclusion of 10% MK, it was seen that all the specimens still demonstrated an increase in the compressive strength compared with the MB-1 reference specimen.This enhancement can be due to the collaborative effect of MK and NS, which acts in tandem to augment the pore structure of the matrix [71].These results suggest that the inclusion of 1% NS in the mixtures may result in a comparatively smaller decrease in the compressive strength when exposed to sulfate solutions, regardless of the length of the curing period (Fig 6D).This points out that the addition of NS and an optimized dosage of MK yields a more favorable outcome in terms of the performance.The OPC-1%NS-10%MK formulation is a subject of interest in the field of research.The results of this study reveal that the ternary blends displayed a notable improvement in their ability to withstand the sulfate attack [42].
Water penetrability analysis.
This study also involved an examination and a comparative analysis of the penetrability of all the concrete specimens at the curing age of 28 days, and the results are represented in Fig 7 .It was found that there was a reduction in the water absorption percentage of all the specimens, both those with the partial substitution of cement by MS and NS, as compared with MB-1.The experimental results indicate that the specimens gave a lesser degree of the penetrability due to the filler and pozzolanic effects of the pozzolanic substituents [76].These findings showed that the impact of NS on the specimens was relatively higher than that of MS [70].The analysis of Fig 7A demonstrates that the OPC-MK-MS specimens, which involve the partial replacement of cement with MS, exhibited a lesser reduction in the water absorption compared with that of the OPC-MK-NS specimens, which involve the partial replacement of cement with NS owing to the nanoscaled particles of NS.In addition, the performance of the MB-4 specimens containing 10% MS provided superior characteristics with 28.99% reduction in the water absorption compared with MB-1 (Fig 7C).This is evident from the observation that these specimens displayed the highest compressive strength, indicating the enhanced durability with lesser penetrability [77].This reduced penetrability further confirms the active participation of MS in the pozzolanic reaction [72].Better reduction in the water absorption of the OPC-MK-NS specimens (35.82%) compared with both the MB-1 and OPC-MK-MS specimens revealed superior performance (Fig 7B).The results further support the earlier observation that concrete containing NS, which possesses superior pozzolanic activity compared with MS, exhibits enhanced durability [78].Further, the performance of the MB-9 specimens containing 1% NS showed consistency with previous research findings, as depicted in Fig 7D .The results indicate that the use of NS and MK in combination demonstrates a synergistic pozzolanic impact, leading to the refinement of the matrix structure and enhanced resistance to the penetration [79].
Microstructural analysis
3.4.1. SEM-EDX analysis. Various researchers have used microstructural analysis to determine the correlation with the strength of the cement matrix [44]. The microstructure analysis by Garg et al. [80] pointed out the improved performance of cement composites owing to the denser and more uniform microstructure resulting from the addition of MS and NS. Various authors have suggested that the improved impermeability of the cement matrix is due to the denser microstructure resulting from the addition of pozzolans, which reduce the pore size and increase the connectivity [81]. Furthermore, the stoichiometric Ca/Si ratio serves as a quantitative indicator of the crystal composition within different regions of the specimens. It is derived by evaluating the ratio of the atomic percentages of calcium (Ca) and silicon (Si) obtained through the EDX analysis. A decline in this ratio signifies the progression of the CSH phase as the CH content diminishes. Conversely, an increase in the ratio implies an excess of CH, accompanied by a reduction in the pozzolanic process [82].
Fig 8 shows the surface microstructure in the SEM-EDX images of the MB-1 concrete specimen, along with the specimens having the highest compressive strength among the OPC-MK-MS specimens (MB-4) and the OPC-MK-NS specimens (MB-9), after curing for 28 days. The microstructure of the MB-1 specimen (Fig 8A) predominantly comprises continuously evolving honeycomb-like phases of CSH and hexagonal plates of CH. Large crystals and voids can be witnessed in the absence of MS and NS in MB-1 (10% MK), resulting in a porous microstructure [83]. Golewski [78] also reported that the inclusion of 20% FA was not sufficient to noticeably enhance the structure of concrete after the 28-day curing period; the concrete demonstrated clear signs of porosity and contained loose clusters of the CSH phase, which affected its overall quality, with the presence of a few unreacted FA grains.
The presence of MS and NS has a profound effect on the microstructure of concrete. Concrete specimens containing either MS or NS nevertheless retain some massive crystals; however, the crystal size and the number of vacancies differ under these two conditions. The microstructure of the MB-4 specimen (Fig 8B) revealed fewer holes in a denser and more compact morphology [71]. Owing to its larger surface area, NS has a greater impact than the higher MS dosages in the mixture. As a result, specimens containing 1% NS were more modified than those containing 10% MS. This results in a dramatic decrease in the number of large crystals generated in the MB-9 specimen (Fig 8C), leading to the production of a dense, compact structure [49]. These microstructure investigations indicate that finer silica nanoparticles in hardened concrete specimens provide a dense microstructure and more filled holes, along with a larger volume of the CSH gel formed through the consumption of CH [60]. Thus, the increased strength of the MB-4 and MB-9 specimens can be well correlated with the microstructural enhancements [79].
The Ca/Si ratio for CSH generation varies between 0.67 and 2.0 [84]. This ratio is significant in the context of strength enhancement in cementitious materials. The Ca/Si ratio values obtained for the MB-1, MB-4, and MB-9 specimens were 1.77, 1.26, and 0.72, respectively. The inclusion of pozzolanic materials in the mixture can deplete a noticeable portion of CH, resulting in a reduced calcium-to-silicon (Ca/Si) ratio within CSH [84]. Thus, the lowest Ca/Si ratio, obtained for the MB-9 specimen, indicates better CH consumption, leading to an enhanced matrix with better pore refinement and better resistance to the sulfate attack and water penetration [58].
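A minimal sketch of how the Ca/Si ratio is obtained from EDX atomic percentages is given below; the atomic-percentage inputs are placeholders chosen only to illustrate the calculation, not the measured spectra.

```python
def ca_si_ratio(at_pct_ca, at_pct_si):
    """Ca/Si ratio from the atomic percentages of calcium and silicon (EDX)."""
    return at_pct_ca / at_pct_si

# Placeholder atomic percentages for the three specimens.
for name, ca, si in [("MB-1", 12.4, 7.0), ("MB-4", 10.1, 8.0), ("MB-9", 7.2, 10.0)]:
    print(f"{name}: Ca/Si = {ca_si_ratio(ca, si):.2f}")
# A lower Ca/Si ratio indicates greater CH consumption and more CSH formation.
```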
3.4.2. XRD analysis.
Fig 9 illustrates the periodic change in the interaction between CH and NS or MS at the interface, as determined by the XRD pattern analysis. In addition to expediting the cement hydration process, the pozzolanic material also undergoes a reaction with CH [85]. The consumption of CH within a matrix comprising NS or MS can be effectively demonstrated through the analysis of intensity fluctuations observed in the primary diffraction peaks of the crystals at specific 2θ values [86]. The products were identified and classified as quartz (Q), CH, anhydrous grains of dicalcium/tricalcium silicate (CS), and various CSH phases at different 2θ values. The characteristic peaks of Q were seen around 26°, whereas the characteristic peaks of CH were observed around 18°, 21°, and 50°. The analysis focused on the characteristic peaks of CS and CSH, which were witnessed in the regions of 30-45° and 55-80°, respectively [87]. At the 28th day, it was evident that the diffraction peak intensities of the crystal faces of CH at the interface of the MB-1 specimen exhibited lower values than those of the MB-4 specimen, whereas that of Q was the highest. Similarly, the crystal face intensity of CH in the MB-9 specimen gave the lowest value. The findings of this study reveal that NS has a greater capacity to consume the CH crystals at the interface than MS [51]. In addition, NS showed a more effective ability to improve the overall structure of the interface compared with MS [87]. In contrast, the intensities of the CSH peaks were highest for the MB-9 specimen, followed by the MB-4 and MB-1 specimens.
The reduction of the particle size from micro to nano, exemplified by the transition from a larger MS particle size to a finer NS particle size, results in a notable augmentation of the specific surface area and the number of atoms present on the surface [88].Because of their nanoscaled particles, NS shows a significant increase in the surface energy [86].Consequently, the atoms on the surface of these particles display heightened reactivity, which facilitates their interaction with surrounding atoms.These findings reconfirm that the pozzolanic activity of NS is greater than that of MS in the initial phases [49].According to these findings, NS exhibits a considerably greater number of nucleation sites for hydration products than MS during the initial stages [85].Hence, the inclusion of NS in the matrix has been observed to enhance the mechanical strength, particularly during the early stages of development, leading to better resistance to deteriorating environments, as studied in the sulfate attack and water penetration analyses.Moreover, the incorporation of NS improves the interface structure more efficiently than the inclusion of MS.The use of a limited quantity of NS positively affects both the longevity and mechanical characteristics of cementitious materials [48].
Conclusions
Our present study provides valuable insights into the effects of MS and NS on the fresh and strength properties of ternary binders based on MK, which can be summarized as follows: • The combination of MS (5-10%) and NS (0.5-1.5%) with MK can increase the consistency by 3.45-27.59% and 6.90-31.03%, respectively. Thus, the inclusion of MS and NS may have an adverse effect on fluidity.
• The addition of these fine pozzolanic particles can significantly decrease the initial setting time (IST) and final setting time (FST) of cement paste while considerably increasing the strength at the optimized MS (10%) and NS (1%) contents in the presence of MK (10%).
• Specifically, the spectroscopic results revealed the development of a more compact matrix with the addition of NS, resulting in more efficient and sustainable cement mixes with improved setting time properties.
• There was a significant reduction in water absorption (35.82%) and increased resistance to sulfate attack for the specimens containing the optimal dosage of NS in the presence of MK.
Thus, the incorporation of these pozzolans can provide an enhanced matrix with reduced penetrability and improved resistance to sulfate attack, thereby improving the durability characteristics of concrete blends. Furthermore, a comparative decrease in cement dosage would also reduce global carbon dioxide emissions. However, resistance to other deteriorating environments should be studied to establish suitability for sustainable construction.
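To give a rough sense of the emissions argument, the sketch below estimates the CO2 avoided when part of the ordinary Portland cement is replaced by pozzolans. Both the emission factor and the mix quantities are illustrative assumptions, not values from this study.

```python
# Approximate CO2 avoided by replacing part of the OPC with pozzolans.
# Emission factor and binder content are assumed, illustrative values.
EMISSION_FACTOR = 0.9   # kg CO2 per kg OPC (approximate literature value)
BINDER_CONTENT = 400.0  # kg binder per m^3 of concrete (assumed)

def co2_saved_per_m3(replacement_fraction: float) -> float:
    """kg CO2 avoided per m^3 of concrete for a given OPC replacement fraction."""
    return BINDER_CONTENT * replacement_fraction * EMISSION_FACTOR

# e.g. 10% MK + 10% MS + 1% NS is roughly a 21% replacement of OPC.
print(f"~{co2_saved_per_m3(0.21):.0f} kg CO2 avoided per m^3 of concrete")
```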
Fig 4
Fig 4 depicts the variation in the splitting tensile strength of the concrete specimens, while Fig 5 illustrates the flexural strength variation of the specimens at curing ages of 28, 56, 90, and 180 days. When compared with MB-1, the splitting tensile strength and flexural strength increased for all the mixtures at all ages, consistent with the compressive strength analysis. The results also demonstrated that very high percentages of MS and NS did not appreciably boost the splitting tensile strength, and a drop in the splitting tensile strength was found beyond 10% MS (for the OPC-MK-MS specimens) and 1% NS (for the OPC-MK-NS specimens). These results may be attributed to reduced homogeneity of the cement matrix at higher additive dosages [74]. Thus, the incorporation of the two pozzolans at optimized contents significantly enhances the strength of the cement matrix. Sharma et al. [75] also reported that the addition of a higher amount of pozzolans hinders the uniform dispersion of the constituent particles in concrete specimens, decreasing the mechanical strength.
"Materials Science",
"Engineering"
] |
NanoFIRE: A NanoLuciferase and Fluorescent Integrated Reporter Element for Robust and Sensitive Investigation of HIF and Other Signalling Pathways
The Hypoxia Inducible Factor (HIF) transcription factors are imperative for cell adaption to low oxygen conditions and development; however, they also contribute to ischaemic disease and cancer. To identify novel genetic regulators which target the HIF pathway or small molecules for therapeutic use, cell-based reporter systems are commonly used. Here, we present a new, highly sensitive and versatile reporter system, NanoFIRE: a NanoLuciferase and Fluorescent Integrated Reporter Element. Under the control of a Hypoxic Response Element (HRE-NanoFIRE), this system is a robust sensor of HIF activity within cells and potently responds to both hypoxia and chemical inducers of the HIF pathway in a highly reproducible and sensitive manner, consistently achieving 20 to 150-fold induction across different cell types and a Z′ score > 0.5. We demonstrate that the NanoFIRE system is adaptable via substitution of the response element controlling NanoLuciferase and show that it can report on the activity of the transcriptional regulator Factor Inhibiting HIF, and an unrelated transcription factor, the Progesterone Receptor. Furthermore, the lentivirus-mediated stable integration of NanoFIRE highlights the versatility of this system across a wide range of cell types, including primary cells. Together, these findings demonstrate that NanoFIRE is a robust reporter system for the investigation of HIF and other transcription factor-mediated signalling pathways in cells, with applications in high throughput screening for the identification of novel small molecule and genetic regulators.
Introduction
The Hypoxia Inducible Factors 1 and 2 (HIF-1/2) are key transcription factors induced under low oxygen conditions. In normal physiology they are essential for processes including vascular development, erythropoiesis, and lung function [1][2][3]. However, the HIFs are also implicated in the pathophysiology of numerous diseases. In ischaemic diseases such as stroke and wound healing, the HIFs are advantageous and promote blood vessel growth for oxygen delivery, while in diseases such as chronic kidney disease, elevated HIF activity increases erythropoietin production to overcome associated anaemia [4]. However, the HIFs are also commonly pro-tumorigenic, driving angiogenesis and metabolic transformation, and are therefore attractive therapeutic targets for inhibition [5,6]. Thus, the HIFs are promising targets for both activation and inhibition, dependent on the disease context.
HIF regulation is predominately controlled post-translationally in an oxygen-dependent manner. HIF-1 and HIF-2 each consist of an oxygen-regulated HIF-α subunit (HIF-1α or HIF-2α, respectively), which under normoxic conditions is hydroxylated by Prolyl Hydroxylase Domain enzymes (PHDs) in an oxygen-dependent manner, initiating ubiquitination by the E3 ligase Von Hippel-Lindau (VHL) and resulting in proteasome-mediated degradation [7]. Factor inhibiting HIF (FIH) under normoxic conditions also hydroxylates the HIF-α subunits in an oxygen-dependent manner, which inhibits association of the histone acetyltransferases p300 and CREB-binding protein (CBP), downregulating HIF transcriptional activity [8]. When oxygen levels decrease, the oxygen-dependent PHDs and FIH are inhibited, such that the HIF-α proteins are stabilised and can bind the coactivators p300 and CBP. HIF-α dimerises with its constitutively expressed partner protein ARNT (also known as HIF-β), forming the active HIF transcription factor, and binds to Hypoxic Response Elements (HREs) to drive gene transcription.
HIF-driven cell-based reporter systems have been used extensively to investigate the mechanisms of HIF regulation and to identify small molecules which target the pathway, with the goal of using this knowledge for therapeutic targeting of HIF. Transiently transfected firefly luciferase reporters are most commonly used [9][10][11]. However, despite displaying high sensitivity, their transient nature results in labour-intensive setup for high throughput screens, they are restricted to cells that are efficiently transfected, and plasmid-based reporters lack the chromatin context of endogenous HIF target genes [12]. More recently, fluorescence-based HIF reporter systems which can be stably integrated into cells have been developed, acting as a more suitable system for high throughput screening (HTS) purposes [13][14][15][16]. However, these systems commonly lack the sensitivity of luminescence-based reporters, and most fluorophores require oxygen for optimal fluorescence, limiting their applications in the investigation of signalling under hypoxia [17].
To overcome these limitations, and thus develop a system which is stably integrated, is sensitive, and can be used under hypoxia, we developed NanoFIRE, a NanoLuciferase and Fluorescent Integrated Reporter Element, to act as a sensitive, stable reporter system to investigate HIF and other signalling pathways. By placing NanoFIRE under the control of a Hypoxic Response Element (HRE-NanoFIRE), we formed a robust sensor of HIF activity within cells. We demonstrate that HRE-NanoFIRE can respond to hypoxia and chemical inducers of the HIF pathway, dimethyloxalylglycine (DMOG) and FG-4592, within multiple cell lines and primary cells. NanoFIRE is also highly versatile, with adaption to a synthetic transcription factor system allowing investigation of transcriptional regulation by the HIF regulator FIH, and substitution of the upstream response element facilitating investigation of progesterone receptor signalling. We therefore present NanoFIRE as a sensitive and versatile reporter system which can be used to investigate transcription factor and transcriptional regulation activity across cell lines and primary cells.
Animals
21-day-old CBA × C57BL/6 F1 (CBAF1) mice were obtained from the University of Adelaide Laboratory Animal Services. Mice were given water and chow ad libitum and maintained in 12 h light/12 h dark conditions. All experiments were approved by The University of Adelaide Animal Ethics Committee and were conducted in accordance with the Australian Code of Practice for the Care and Use of Animals for Scientific Purposes; ethics number M-2021-058.
Lentivirus Production and Generation of Stable Cell Lines
First, 80% confluent HEK-293T cells were transfected with 8.2 µg psPAX (Addgene #12260), 3.75 µg pMD2G (Addgene #12259) and 12.5 µg of the required lentiviral construct using polyethyleneimine (PEI) at a 3 µg:1 µg ratio with DNA in serum-free DMEM or Optimem. Then, 16 h later, the media was changed, and virus was harvested 1-2 days later and filtered (0.45 µm filter, Sartorius, Göttingen, Germany). Target cells were transduced at an MOI < 1 and incubated with virus for 48 h. Media was then changed to fresh culture media containing the required antibiotics for selection; hygromycin at 140 µg/mL (Thermo Fisher) or puromycin at 1 µg/mL (Sigma-Aldrich, Macquarie Park, NSW, Australia), and cells were maintained under antibiotic selection for approximately 2 weeks. Lentivirus for the transduction of primary mouse granulosa cells was generated and concentrated by The University of Adelaide Gene Silencing & Expression facility (GSEx). Briefly, a similar protocol to the above was completed for lentivirus production but using HEK-293T/17 cells and Lipofectamine 3000 with Optimem as per the manufacturer's protocol (Thermo Fisher). Virus was collected 24 and 48 h post media change and concentrated by ultracentrifugation. Virus was titred using HEK-293T/17 cells infected with serial dilutions of virus and assessed for EGFP expression using flow cytometry.
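As a rough illustration of how a transduction volume for a target MOI can be chosen from a flow-cytometry titre, the sketch below converts titre (infectious units/mL) and cell number into a virus volume. The numbers and the function name are hypothetical, not values from this protocol.

```python
def virus_volume_ul(target_moi: float, n_cells: int, titre_iu_per_ml: float) -> float:
    """Volume of viral supernatant (µL) needed to reach a target MOI.

    MOI = infectious units added / number of cells, so the required
    infectious units are target_moi * n_cells.
    """
    iu_needed = target_moi * n_cells
    volume_ml = iu_needed / titre_iu_per_ml
    return volume_ml * 1000.0  # mL -> µL

# Example (illustrative numbers): MOI 0.5 on 1e5 cells with a 1e7 IU/mL titre.
print(f"{virus_volume_ul(0.5, 100_000, 1e7):.1f} µL of virus per well")
```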
NanoLuciferase Reporter Assays-Stable Cell Lines
U2OS and HEK-293T cells were seeded at 0.5 × 10⁴ cells/well and KGN and Huh7 cells at 1 × 10⁴ cells/well in white-walled, clear-bottom 96 well plates, with all HEK-293T cells spun down and resuspended in fresh media prior to plating. The following day, cells were treated with 1 mM DMOG (Cayman Chemical, Ann Arbor, MI, USA) or 0.1% DMSO, 1 µg/mL doxycycline (Sigma-Aldrich, Australia) or 0.1% H2O, 50 µM FG-4592 (Cayman Chemical, USA) or 0.1% DMSO, and 100 nM R5020 (PerkinElmer, North Ryde, NSW, Australia) or 0.1% ethanol. Huh7 cells were treated with 0.1 mM DMOG because 1 mM DMOG showed significant toxicity. Then, 16 h post treatment, reporter output was determined using the Nano-Glo Luciferase Assay System as per the manufacturer's instructions with minor modifications (Promega, Alexandria, NSW, Australia). Briefly, cells were removed from the incubator for 10 min to allow room temperature equilibration, media was aspirated, then 25 µL of assay buffer mixed 50:1 with Nano-Glo substrate was injected into each well. After a 3 min incubation, luminescence was recorded using the GloMax Discovery Microplate Reader (Promega). Each experiment was completed three times independently, each in duplicate or triplicate. Where stated, NanoLuciferase units (NLU) were normalised to DMOG-treated cells under normoxia and stated as normalised NLU (NNLU). Hypoxic cell incubations were completed in the Oxford Optronix HypoxyLab at the stated oxygen concentrations under humidified conditions at 37 °C and 5% CO2.
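A minimal sketch of the normalisation described above: raw NanoLuciferase units are divided by the mean of the DMOG-treated normoxic wells to give NNLU, and fold induction is taken over the vehicle mean. The replicate readings and helper names are invented for illustration and are not data from this study.

```python
from statistics import mean

# Hypothetical raw NanoLuciferase units (NLU) from replicate wells.
vehicle_nlu = [1_200, 1_050, 1_310]              # 0.1% DMSO, normoxia
dmog_normoxia_nlu = [145_000, 152_000, 138_000]  # 1 mM DMOG, normoxia
hypoxia_nlu = [98_000, 104_000, 91_000]          # 1% O2

norm_factor = mean(dmog_normoxia_nlu)

def nnlu(values):
    """Normalise raw NLU to the DMOG-treated normoxic mean (NNLU)."""
    return [v / norm_factor for v in values]

fold_induction = mean(hypoxia_nlu) / mean(vehicle_nlu)
print("NNLU (hypoxia):", [round(v, 3) for v in nnlu(hypoxia_nlu)])
print(f"Fold induction over vehicle: {fold_induction:.1f}")
```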
Primary Granulosa Cell Culture and NanoLuciferase Reporter Assays
Primary mouse granulosa cells were isolated and cultured as described in Dinh et al. [19], but cultured in a 1:1 mix of F12:DMEM (no glucose) with 35 nM testosterone (Sigma-Aldrich), 1 µM retinoic acid (Sigma-Aldrich), and 50 ng/µL recombinant mouse follicle stimulating hormone (R&D Systems, Minneapolis, MN, USA), and seeded onto fibronectin-coated, white-walled, clear-bottom 96 well plates at 70,000 cells/well. Cells were transduced with approximately 1.8 × 10⁶ infectious units of HRE-NanoFIRE or NoRE-NanoFIRE lentivirus in a total volume of 1 µL/well, incubated for 24 h, then treated with 1 mM DMOG (Cayman Chemical, USA) or 0.1% DMSO. Finally, 24 h post treatment, the NanoLuciferase reporter assay was completed as per above, but with 50 µL of NanoLuciferase reagent added per well.
High Content Imaging of Fluorescent Reporter Cells
Stable HEK-293T HIF dual fluorescent monoclonal reporter cells [13] were seeded in black, clear-bottom, 96 well plates at 1 × 10⁴ cells/well in cell culture media (DMEM with 10% FBS, 1× GlutaMAX and 1% penicillin/streptomycin). Then, 24 h later, cells were treated with 1 mM DMOG, 50 µM FG-4592 or vehicle (0.1% DMSO). Next, 16 h post treatment, cell populations were imaged in media at 10× magnification using the Thermo Fisher ArrayScan™ XTI High Content Reader. Tomato mean fluorescent intensity (MFI) and EGFP MFI were imaged with excitation sources of 560/25 nm and 485/20 nm, respectively. Individual cells were defined by nuclear EGFP expression, determined by isodata thresholding to filter out background objects, and abnormal nuclei were also excluded. MFI is the average from 2000 individual nuclei per well. EGFP MFI was used to confirm no change between treatment groups and control. Image quantification was completed using Thermo Fisher HCS Studio™ 3.0 Cell Analysis Software.
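As a rough sketch of this kind of per-nucleus quantification (isodata thresholding of the nuclear EGFP channel, then averaging the tomato signal over the segmented nuclei), the snippet below uses scikit-image; the thresholding and size filter are simplifications and the actual HCS Studio pipeline will differ.

```python
import numpy as np
from skimage.filters import threshold_isodata
from skimage.measure import label, regionprops

def nuclear_mfi(egfp: np.ndarray, tomato: np.ndarray, min_area: int = 50) -> float:
    """Mean tomato intensity over nuclei segmented from the EGFP channel.

    Nuclei are found by isodata thresholding of the EGFP image; very small
    objects are discarded as background or debris (assumed min_area cutoff).
    """
    mask = egfp > threshold_isodata(egfp)
    nuclei = [r for r in regionprops(label(mask), intensity_image=tomato)
              if r.area >= min_area]
    return float(np.mean([r.mean_intensity for r in nuclei]))
```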
Statistical Analysis and Figures
All data are expressed as mean ± standard deviation and p values were calculated by a one-way ANOVA with Tukey's multiple comparisons or, for pairwise analysis, an unpaired t-test, using GraphPad Prism v9. Schematic figures were made using biorender.com (accessed on 27 September 2023). Z′ calculations were made using the formula described in Zhang et al. [20].
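For reference, the screening-window coefficient from Zhang et al. is Z′ = 1 − 3(σ_p + σ_n)/|µ_p − µ_n|, computed from the means and standard deviations of the positive and negative controls. The sketch below is a minimal implementation; the replicate readings are invented for illustration.

```python
from statistics import mean, stdev

def z_prime(positive: list[float], negative: list[float]) -> float:
    """Z'-factor (Zhang et al., 1999): 1 - 3*(sd_pos + sd_neg)/|mean_pos - mean_neg|.

    Values above 0.5 are conventionally taken to indicate an excellent assay window.
    """
    return 1.0 - 3.0 * (stdev(positive) + stdev(negative)) / abs(mean(positive) - mean(negative))

# Illustrative replicate NLU readings for DMOG-treated (positive) and vehicle (negative) wells.
dmog = [150_000, 142_000, 155_000, 147_000]
vehicle = [1_100, 1_300, 1_050, 1_200]
print(f"Z' = {z_prime(dmog, vehicle):.2f}")
```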
Design and Characterisation of HRE-NanoFIRE to Investigate HIF Signalling
We aimed to develop a sensitive, versatile HIF reporter system which would respond to hypoxia and allow stable integration into the genome of any cultured cells of interest. To achieve this, we modified our previously established lentiviral dual fluorescent reporter construct [13], such that the oxygen-sensitive tomato fluorescent reporter gene was replaced with the bright and oxygen-insensitive bioluminescent NanoLuciferase [21], forming the NanoFIRE reporter system. Within NanoFIRE, NanoLuciferase is expressed in a signal-dependent manner while a downstream, independent constitutive promoter controls expression of a hygromycin resistance gene and enhanced green fluorescent protein (EGFP), acting as a dual selectable marker (Figure 1A,B). PEST-tagged NanoLuciferase was chosen for a better temporal response to changes in transcriptional activity [22], while lentiviral delivery enabled stable integration into the genome to eliminate the need for transient transfection and to facilitate adaptation to multiple cell types.
To investigate endogenous HIF signalling, we inserted a HRE concatemer from Razorenova et al. [18] upstream of the NanoLuciferase reporter gene, forming HRE-NanoFIRE (Figure 1A,B). For initial characterisation of HRE-NanoFIRE, we selected the osteosarcoma-derived U2OS cell line, as U2OS cells display strong induction of HIF-1α protein in response to hypoxia [23,24] and robustly activate HRE-controlled reporter systems [25].
HRE-NanoFIRE Displays Robust Reporter Response to Hypoxia and Hypoxia Mimetics
To initially characterise HRE-NanoFIRE, we assessed the reporter response in cells treated with chemical inducers of the HIF pathway, testing both the pan-2-oxoglutarate-dependent dioxygenase inhibitor DMOG and the PHD-specific inhibitor FG-4592. In stably integrated polyclonal U2OS HRE-NanoFIRE cells, robust and consistent reporter activation by DMOG (159-fold, Z′ = 0.90 ± 0.06) and FG-4592 (118-fold, Z′ = 0.83 ± 0.09) was achieved (Figure 1C), thus demonstrating high system inducibility, with the response to FG-4592 confirming PHD-specific reporter control [26]. The high Z′ scores for both DMOG and FG-4592 confirm high system robustness and suitability for HTS [20]. DMOG and FG-4592 were unable to induce reporter activity in a NanoFIRE reporter construct which lacked a response element (NoRE-NanoFIRE), confirming HIF-specific activation of the HRE-NanoFIRE system (Figure 1C). Reporter induction was time-dependent, with peak reporter activity observed within 12 h of DMOG treatment, which slowly declined over 48 h (Figure 1D). This decline reflects the short half-life of PEST-tagged NanoLuciferase and highlights the ability to detect changes in transcriptional activity within a short period of time, in contrast with fluorescent proteins, which typically require longer periods of expression for sufficient signal accumulation [21,27].
Importantly, hypoxia robustly induced a U2OS HRE-NanoFIRE response, increasing with the severity of hypoxia; 1% O2 induced a 58-fold increase (Z′ = 0.77 ± 0.11) and 0.1% O2 induced a 132-fold increase (Z′ = 0.81 ± 0.16) relative to vehicle-treated cells at normoxia (Figure 1E,F). This was achieved without a specific reoxygenation step (an approximately 10-min normoxia exposure to allow for temperature equilibration only), in contrast to the typical ≥4 h required for recovery in fluorescence-based reporter systems [28]. The robust response of HRE-NanoFIRE to chemical inducers of HIF-1α and to hypoxia highlights its suitability for HTS applications to identify HIF modulators and for investigating HIF signalling under moderate and severe hypoxia.
Next, we assessed HRE-NanoFIRE in the ovarian tumour cell line KGN and the hepatoma cell line Huh7 to investigate system adaptability in other cell contexts. Both KGN HRE-NanoFIRE and Huh7 HRE-NanoFIRE cells exhibited robust responses to DMOG, FG-4592, and 1% O2, with consistent Z′ values greater than 0.5, demonstrating assay robustness and suitability for HTS (Figure 2D-G). It is worth noting that while all cells induced HRE-NanoFIRE activity with DMOG, FG-4592 and 1% O2, the relative level of induction over background varied between cells and treatments. For example, in comparison to all other cell lines tested, KGN HRE-NanoFIRE cells displayed lower fold induction in response to the PHD inhibitor FG-4592 and 1% O2 compared to DMOG. This is likely reflective of differences between cell types in the levels of HIF-1α, HIF-2α, PHD1-3, and FIH, and is consistent with other studies showing that the HIFs are regulated in a cell-type-dependent manner [9,29,30].
Together with demonstrating versatility across cell lines, it was important to test HRE-NanoFIRE activity within primary cells, given that there are limited systems available which allow investigation of HIF transcriptional activity in a primary cell context, and these cells are typically difficult to transfect. Hence, HRE-NanoFIRE was tested in primary mouse granulosa cells of the ovary. Granulosa cells were collected from pregnant mare serum gonadotropin (PMSG)-stimulated mice and cultured in vitro (Figure 3A). When these primary granulosa cells were transduced with HRE-NanoFIRE virus and the next day treated with DMOG for 24 h, 41-fold reporter induction (Z′ = 0.62 ± 0.08) was achieved compared to vehicle-treated cells (Figure 3B). Mouse granulosa cells transduced with the NoRE-NanoFIRE control displayed no reporter induction in response to DMOG, confirming HIF-specific activation of HRE-NanoFIRE.
HRE-NanoFIRE Is More Sensitive Than Equivalent Fluorescent Reporter Systems
Fluorescent-based reporter systems are a common alternative for investigating HIF transcriptional activity [14][15][16]. We therefore compared HEK-293T HRE-NanoFIRE cells to our similar HIF dual fluorescence reporter system (comparing Figure 2A to Figure 2C) [13]. Our HEK-293T HIF dual fluorescent cells express a stable reporter construct similar to HRE-NanoFIRE but with nuclear tomato in place of the NanoLuciferase reporter gene [13]. While both systems efficiently report on HIF transcriptional activity when incubated with hypoxia mimetics for 16 h, the NanoLuciferase-expressing HEK-293T HRE-NanoFIRE cells induced much higher reporter output than the dual fluorescent HEK-293T cells (37-fold compared to 4-fold with DMOG and 27-fold compared to 2-fold with FG-4592, respectively) (Figure 2A,C). This confirmed the enhanced sensitivity of HRE-NanoFIRE as a HIF reporter system over equivalent fluorescent-based systems.
NanoFIRE Can Be Adapted to Investigate Transcriptional Regulators and Synthetic Transcription Factors
The HRE-NanoFIRE system provides a read out of total HIF transcriptional activity, with PHD-specific inhibitor FG-4592 treatment allowing the contribution of PHD-dependent regulation to be determined. There is a lack of cell-based systems available, however, which allow for the specific determination of the FIH contribution to HIF transcriptional control, particularly in a high-throughput setting. We therefore aimed to develop a synthetic transcription factor-controlled NanoFIRE system, based on previously established transient FIH reporter systems [8], to provide a read out of FIH-dependent regulation in a stable cell-based setting.
Analysis of HEK-293T FIH-NanoFIRE cells demonstrated minimal reporter activity with either DMOG or dox alone, whereas treatment with both dox and DMOG induced a 28-fold increase in FIH-NanoFIRE reporter activity above dox alone, demonstrating robust reporter activation upon FIH inhibition (Figure 4B). This produced a Z′ score of 0.92 ± 0.03, confirming high signal-to-background separation, assay consistency, and suitability for HTS applications. The GalRE-NanoFIRE reporter did not respond in cells which lacked the gal4DBD-HIFCAD expression construct, confirming specific reporter control by FIH (Figure 4B).
HEK-293T FIH-NanoFIRE cells also induced robust reporter activity in response to hypoxia, with dox-treated cells incubated at 1% O2 inducing a 13-fold increase (Z′ = 0.76 ± 0.13) in reporter output relative to dox-treated cells at normoxia, and 0.1% O2 inducing a 12-fold increase (Z′ = 0.58 ± 0.44) (Figure 4C,D). Similar to the U2OS HRE-NanoFIRE line, HEK-293T FIH-NanoFIRE reporter induction in response to DMOG was time-dependent, with maximum reporter signal obtained between 12 and 24 h of treatment (Figure 4E). When treated with the PHD-specific inhibitor FG-4592, no significant increase in reporter activity was observed, confirming that FIH-NanoFIRE is specifically controlled by FIH and not influenced by the PHDs (Figure 4F). Treatment with the FIH-specific inhibitor dimethyl N-oxalyl-D-phenylalanine (DM-NOFD) [32] at concentrations up to 1 mM produced no significant response (Figure 4F). This is consistent with our use of DM-NOFD in other cell-based assays, where we have observed ineffective inhibition of FIH at concentrations up to 1 mM.
NanoFIRE Can Be Adapted to Investigate Other Transcription Factors
Finally, we aimed to investigate whether the NanoFIRE system is more widely adaptable and could be used to investigate other transcription factors, specifically the Progesterone Receptor (PR). PR is a nuclear receptor consisting of the isoforms PR(A) and PR(B). Upon cytoplasmic binding to its cognate ligand progesterone or the synthetic agonist R5020, PR undergoes nuclear translocation, homodimerisation, and DNA binding to cognate response elements to drive the expression of target genes [33]. PR is essential to female ovulatory control [34,35]; however, it is expressed in a limited number of cell lines, making it difficult to investigate in this context [36,37]. We therefore paired expression of NanoFIRE under the control of a Progesterone Response Element (PRE-NanoFIRE) with stable expression of dox-inducible PR(A) or PR(B) in the ovarian granulosa cell cancer line KGN (Figure 5A), forming the PR(A)-NanoFIRE and PR(B)-NanoFIRE cell lines. This provided a system to investigate PR signalling within an ovarian cell context. Treatment of both PR(A)-NanoFIRE and PR(B)-NanoFIRE lines with dox and R5020 resulted in strong reporter induction relative to dox-only treated cells, at 9-fold and 39-fold, respectively (Figure 5B). Each provided a Z′ score greater than 0.7, confirming assay robustness and high signal separation upon PR activation with R5020. This demonstrated that aside from sensing HIF activity, NanoFIRE can be adapted to investigate the activity of tissue-specific transcription factors in both endogenous and synthetic systems in a sensitive and signal-dependent manner.
Discussion
Here we describe a novel NanoLuciferase reporter system, NanoFIRE, which provides a robust and highly sensitive read out of HIF transcriptional activity in response to multiple stimuli and can be adapted to investigate other transcription factors and transcriptional regulatory pathways. We show that NanoFIRE is a robust and sensitive reporter system within a polyclonal setting across diverse cell lines and within primary cells, as demonstrated with granulosa cells of the ovary (Figures 2 and 3). Across all cell lines and reporter constructs tested, NanoFIRE could be used to generate systems suitable for HTS. This was evident with all HIF and FIH systems providing a Z′ value greater than 0.5 in response to hypoxia mimetic treatment, demonstrating they are within the range of an excellent screen [20]. High Z′ values in response to hypoxia were also obtained for most cell lines tested, supporting the suitability of NanoFIRE for HTS also under hypoxia.
Previous NanoLuciferase HIF reporter systems have either measured HIF-1 heterodimerisation using HIF-1α and ARNT split NanoLuciferase fusions or expressed HIF-1α fused to full-length NanoLuciferase [38,39]. In these systems, only modulators of either HIF-1 dimerisation or HIF-1α protein levels (i.e., PHD regulation), respectively, can be reported on. With HRE-NanoFIRE acting as a readout of HIF transcriptional activity, modulators of all aspects of control over the HIF pathway can be reported on, and the system can be adapted, through changing the controlling response element, to investigate other pathways, providing versatility which HIF fusion reporters lack. Additionally, the lentiviral nature of NanoFIRE provides versatility, particularly in difficult-to-transfect cells.
Compared to screening systems of the HIF pathway which transiently overexpress HIF fusion proteins in cells [38], use purified protein in vitro [32,40], or use molecular modelling based on known crystal structures [41], NanoFIRE has the advantage of being cell-based and also a direct readout of changes in endogenous HIF activity within a cellular context. This is particularly important given that there are multiple avenues of cell signalling, such as reactive oxygen species [42,43], lipopolysaccharide (LPS) [44], and numerous metabolites [45], that can influence HIF activity, all of which are commonly missed in overexpression or cell-independent screening systems.
Given the high sensitivity and Z′ scores of the NanoFIRE system obtained across cell lines and signalling pathways, we envisage NanoFIRE as either a primary or secondary screening system to complement already-established fluorescent and transient firefly reporter systems in either small molecule screening or arrayed genetic screens for novel regulators of the HIF pathway. There remains a need for the discovery of novel HIF inhibitors, in particular for the treatment of cancer, where the HIFs are commonly pro-tumorigenic by promoting angiogenesis and glycolysis to aid tumour growth and survival [5]. Unlike the discovery of HIF-2 inhibitors, which has been notably successful with the identification of PT-2385 for the treatment of renal cancers [41], there is a lack of direct HIF-1 inhibitors, with two of the better inhibitors, acriflavine and PX-478, showing non-specificity and indirect mechanisms of action [46][47][48]. In a small molecule screening setting, NanoFIRE complements established fluorescence-based HTS systems by providing a system not influenced by autofluorescent compounds and one that overcomes the time delay required for fluorescent protein accumulation and maturation (Figure 2A,C) [49]. Furthermore, compared to firefly luciferase, NanoLuciferase has structural and substrate differences, thus allowing NanoFIRE to complement firefly-based systems for the screening and identification of common nuisance compounds, which can cause false positive and negative firefly readouts [50,51]. The HEK-293T HRE-NanoFIRE line represents a sensitive reporter system specifically for HIF-1, given that HEK-293T cells do not express HIF-2α [9] (Figure 2A,B), and would thus be ideal for small molecule drug screening to identify novel compounds which specifically modulate HIF-1 activity, in particular those that directly inhibit HIF-1, which would have broad therapeutic potential in cancer.
Given the hypoxic insensitivity of the HRE-NanoFIRE system, it also has future applications in arrayed CRISPR genetic screening for identifying novel modulators of the HIF pathway under the most physiologically relevant conditions of hypoxia. This is advantageous over fluorescence-based systems, which are typically used for genetic screening and either display reduced sensitivity or require an extensive reoxygenation period after hypoxic incubation [16,28]. Importantly, NanoFIRE may also have in vivo applications, as NanoLuciferase is amenable to in vivo imaging [52][53][54], with recent research showing the development of alternative and more bioavailable substrates for sensitive NanoLuciferase imaging within deep tissues [53].
Adaptation of NanoFIRE to investigate FIH and PR signalling (Figures 4 and 5) demonstrated its use as a reporter system for various classes of transcriptional regulators, including synthetic transcription factors, and therefore as a tool for probing a wide range of signalling pathways. FIH-NanoFIRE provides a cell-based screening system for FIH regulators, as demonstrated by this line achieving a Z′ score greater than 0.9 in response to dox and DMOG. The poor response of FIH-NanoFIRE to the best available FIH-specific inhibitor, DM-NOFD (Figure 4E), confirmed that alternative FIH inhibitors with better cell-based efficacy are required as research tools; such inhibitors may also have therapeutic potential, for example in the treatment of metabolic diseases [55,56] and of certain cancers [57,58]. The KGN PR(A)-NanoFIRE and PR(B)-NanoFIRE lines demonstrated that NanoFIRE can be used to dissect the transcriptional activity of exogenously expressed transcription factors (Figure 5) and is also sensitive enough to detect changes in activity of endogenously expressed PR. This highlights the broad versatility of the NanoFIRE reporter system, with further substitutions of the response element cassette offering a plethora of transcriptional pathways which could be investigated. Finally, this system is highly adaptable beyond the substitution of response elements driving expression of NanoLuciferase. While the system contains constitutively expressed EGFP, which could be used for normalisation, as we have successfully done in our analogous dual fluorescent reporter system [13], this could also be interchanged for firefly or click beetle luciferase to form a dual luminescent reporter [21]. Additionally, the PEST-NanoLuciferase could be exchanged for secreted NanoLuciferase to allow assessment of temporal reporter expression in real time without cell lysis. Stable cell lines could also be used for the generation of monoclonal NanoFIRE lines, which we envisage would provide an even more consistent NanoFIRE reporter system with enhanced signal-to-background sensitivity.
In summary, NanoFIRE expands the available repertoire of reporter systems and acts as a sensitive, highly versatile, stably integrating reporter for the analysis of HIF-dependent hypoxic signalling. NanoFIRE has future applications in the discovery of HIF modulators with therapeutic potential in human disease, and broad adaptability to investigate other signalling pathways.
Figure 3. HRE-NanoFIRE can be used as a HIF reporter system in primary cells. (A) Workflow for mouse granulosa cell isolation, in vitro culture, and treatment prior to the NanoLuciferase reporter assay. (B) NanoLuciferase Units (NLU) of primary mouse granulosa cells transduced with NoRE-NanoFIRE or HRE-NanoFIRE reporter virus and treated with vehicle (0.1% DMSO) or 1 mM DMOG for 24 h. n = 3 biologically independent experiments, each consisting of cells pooled from at least 3 mice and performed in duplicate or triplicate, presented as mean ± standard deviation. *** p < 0.0005, ns = not significant, t test assuming equal standard deviation. Fold change and statistics relative to vehicle.
Figure 5. NanoFIRE can be adapted to investigate Progesterone Receptor (PR) transcriptional activity. (A) Schematic of the PRE-NanoFIRE reporter construct and reporter activity under conditions of R5020 (R) and doxycycline treatment for PR expression. (B) Normalised NanoLuciferase Units (NNLU) of KGN PR(A)-NanoFIRE and KGN PR(B)-NanoFIRE cells treated for 16 h with 100 nM R5020 or vehicle (0.1% ethanol) and 1 µg/mL dox. Values normalised to R5020-treated cells for each line. n = 3 biologically independent experiments, each performed in duplicate, mean ± standard deviation, * p < 0.05, unpaired t test for each individual cell line. Fold change relative to −R5020 (dox only) treated cells stated.
Greener Chelators for Recovery of Metals and Other Applications
Metals are extensively used by industries in various applications such as electronics, materials, catalysts, chemicals, modern low-carbon energy technologies [1] (nuclear, solar, wind, bioenergy, carbon capture and storage (CCS)) and electricity grids [1,2]. Greater pressure has been placed on metal utilisation because of population growth coupled with a higher standard of living. Furthermore, industrialisation has led to increasing demand for critical metals, as many of these are required in modern technologies. This is causing concern over the supply of critical metals for future generations. Therefore, according to Hunt et al. [3], the sustainable use of metals is vital so that both current and future generations have unhindered access to them. Industries and nations classify metals as critical depending on the purpose and need of the assessment [4]. Some metals have been identified as critical because of their significance [3]. However, elements with significant supply restriction issues (geopolitical issues, conflicts, international monopolies and mining as a by-product of other elements), and those whose limitation would have a dramatic impact on business or the economy, are considered critical [5]. The top 14 metals (tellurium, indium, tin, hafnium, silver, dysprosium, gallium, neodymium, cadmium, nickel, molybdenum, vanadium, niobium and selenium) are critical and commonly needed in these emergent low-carbon energy technologies [1,6].
Aminopolycarboxylate (APC) chelators (such as EDTA and NTA) and phosphonates have strong chelation effects for metals [20,61]. Unfortunately, most of these compounds are not readily biodegradable [20,59,62]. The infiltration of these chelants into the environment could cause dissolution of heavy metals from sediments and soils, thereby mobilizing them [24,40,49,63] and leading to increased levels of metals [22]; phosphonates are an exception in that they do not mobilise toxic metals [40,59]. These strong chelants persist in the environment because of their high solubility in water and low biodegradability (except NTA) [22]. EDTA concentrations of 800 µg/L have been reported in some U.S. industrial and municipal wastewater treatment plants, and up to 12 mg/L in European bodies of water [20]. EDTA is now on the EU priority list of substances for risk assessment [16]. According to Sillanpaa [64], ethylenediaminetetraacetic acid (EDTA) contains 10% nitrogen, which could harm aquatic organisms. Furthermore, the majority of the traditional chelating agents (APCs and phosphonates) are petroleum derived [65,66]. Therefore, the consumption of traditional APC chelators is declining (-6% annually) because of persisting concerns over their toxicity and negative environmental impact [58]. Another concern is that most of these common chelants are produced from toxic substances like cyanide [20,67].
In addition, the EU is regulating the use of phosphates in consumer laundry detergents and consumer dishwasher detergents in order to reduce the eutrophication risks and the costs of phosphate removal by wastewater treatment plants [68][69][70][71]. Their persistence in the environment is due to their low biodegradability and high water solubility [67,72]. In addition, studies have shown that there is a decline in the high-quality phosphorus rock reserves used to produce phosphate chelants, which could lead to higher costs for obtaining phosphate and phosphonate products. Continuous dependence on phosphates and phosphonate chelators will further accelerate the decline of the finite high-quality phosphate rocks [73]. Furthermore, phosphates are essential components of fertilisers (used for food production), and therefore the utilisation of phosphates as chelators is in direct competition with the food industry. It is therefore essential to look for Greener alternative chelating agents in order to reduce the reliance on these traditional chelants. Hence, this paper surveys Greener alternative chelators and their applications, especially in metals recovery.
Some Greener Alternative Chelators
Aminopolycarboxylic acid chelators are the most widely consumed chelating agents; however, the percentage of Greener alternative chelators in this category continues to grow [24]. In 2013, these Greener alternative chelants represented approximately 15% of the total aminopolycarboxylic acid demand. This is expected to rise to around 21% by 2018, replacing in particular the EDTA (ethylenediaminetetraacetic acid), NTA (nitrilotriacetic acid) and aminophosphonic acids used in cleaning applications [20,24,58]. This is because of issues such as non-biodegradability, toxicity, and the mobilization of toxic metals by these traditional chelants [24], as mentioned earlier. In addition, more than 90% of organic chemicals are derived from fossil fuel refineries [74,75], which is not sustainable. The continuous depletion of petroleum resources, coupled with a shift to Greener products by consumers, means that it is vital to look for alternative Greener chelating agents. Therefore, in order to replace traditional chelants, the alternative chelating agents must have a strong ability to form complexes [16,76], as well as possess low nitrogen content so as to reduce the loading of nitrogen [16]. In addition, they should be readily, or at least inherently, biodegradable [16,76]. These alternative chelants are well favored by environmental protection policies [62,77]. Examples of some Greener alternative chelating agents include ethylenediamine disuccinic acid ([S,S]-EDDS), polyaspartic acid (PASA), methylglycinediacetic acid (MGDA) [24,25], glutamic diacetic acid (L-GLDA), citrate, gluconic acid, amino acids, plant extracts, etc. Asemave [78] and Asemave et al. [79] reported the use of a lipophilic β-diketone, 14,16-hentriacontanedione, as a Greener alternative chelator for metals recovery. These have been proposed to replace the classical EDTA and diethylenetriaminepentaacetic acid (DTPA) chelators in various applications [16,20,80,81]. According to Hyvönen [16], alternative chelants have a lower chelating ability when compared to the traditional chelators; notwithstanding, this makes them less toxic.
Glutamic Acid Diacetic Acid (L-GLDA)
L-glutamic acid diacetic acid is 86% bioderived from a food-approved natural amino acid salt (monosodium L-glutamate or MSG) [66,82]. This is in turn obtained by fermenting sugar, molasses, corn or rice (renewable feedstocks) [66], and L-GLDA is marketed as Dissolvine GL-38 [24]. According to Dixon [24], L-GLDA is produced by a waste-free process and from renewable feedstock, which is in accordance with the 4th principle of green chemistry [24]. Ammonia is generated as a by-product, which is collected and re-used in industry. It is a strong chelating agent that is safe and readily biodegradable [61,83], and it is considered an adequate alternative to phosphates, NTA and EDTA, especially in cleaning applications [20,61,84]. It is readily soluble in water at different pH values, which increases its performance rate [20]. L-GLDA is stable over a wider range of temperatures than other APCs. L-GLDA, citrate and carbonate are incorporated in detergent formulations [85]. Aqueous solutions containing L-GLDA can be used as oil field chemicals to dissolve calcium carbonate scale and other subterranean carbonate formations to increase permeability and enhance the withdrawal of oil or gas [86].
Polyaspartic Acid
There are different ways to obtain PASA [87], but the typical method is to heat aspartic acid to 453 K, resulting in poly(succinimide) with elimination of water. Sodium hydroxide in the system then reacts with the polymer to partially cleave the amide bonds, in which the (α and β) bonds are hydrolyzed, resulting in a sodium poly(aspartate) copolymer with 30% α-linkages and 70% β-linkages (see Equation 1) [87]. Polyaspartic acid (PASA) production is cost-effective; hence it is available on a large scale. L-aspartic acid derived from plant sugars [88] could be used for the sustainable production of PASA. Polyaspartate is used as a biodegradable anti-scaling agent, corrosion inhibitor and metal chelator [87]. Lingua et al. [89] described PASA as a green chelant used in agriculture to supply minerals to crops so as to improve crop yield.
Ethylenediamine Disuccinic Acid ([S, S]-EDDS)
Ethylenediamine disuccinic acid, [S,S]-EDDS, is a naturally occurring compound and was first isolated from a culture filtrate of the actinomycete Amycolatopsis orientalis. The biosynthesis of [S,S]-EDDS proceeds from L-aspartate and serine [83,90] or from oxaloacetate and 2,3-diaminopropionic acid [35]. [S,S]-EDDS is also synthesized by the nucleophilic addition of ethylenediamine to sodium maleate, affording stereoisomers of ethylenediamine-N,N'-disuccinic acid [90,91]. Alternatively, EDDS is produced from the reaction of maleic anhydride and ethylenediamine to yield a mixture of the three isomers of EDDS. The reaction of aspartic acid with 1,2-dibromoethane results in the formation of one of two isomers ([R,R]-EDDS or [S,S]-EDDS), depending on the isomer of aspartic acid used. Since aspartic acid can be derived from plant sugars, this could also enhance the sustainable production of [S,S]-EDDS. It is also produced by fermentation of A. orientalis [35]. [S,S]-EDDS is a structural isomer of EDTA; however, it is more readily biodegradable than EDTA [61]. Equation 2 describes the synthesis of this compound.
Equation 2: Synthesis of [S, S]-EDDS
According to Dixon [24], [S,S]-EDDS production is in conformity with the 3rd principle of green chemistry, i.e., designing less hazardous chemical syntheses. It is one of the most
promising biodegradable chelating agents [39,49] and has a low nitrogen content [16], making it less toxic [92]. Furthermore, [S,S]-EDDS contains no NTA, formaldehyde or cyanide (toxic chemicals), unlike common traditional APC chelants [83]. [S,S]-EDDS is effective in chelating several metals from soil [93][94][95]. Furthermore, it is capable of binding transition metal ions in place of Mg(II) and Ca(II) [24,83]. According to work by Yang et al. [96], [S,S]-EDDS at pH 5.5 is more suitable for Cu(II), Zn(II) and Pb(II) extraction. Ullmann et al. [97] modified [S,S]-EDDS by attaching a lipophilic hydrocarbon chain to its nitrogen atoms in order to make a hydrophobic chelating agent. Such lipophilic chelants are especially good as metal extractants.
In addition to the chelating agents above, another Greener alternative chelator, sodium iminodisuccinate, was introduced in 1998 [20]. Its production is based on the reaction of maleic anhydride with ammonia and sodium hydroxide [20,98] (see Equation 3). It is a readily biodegradable [43,99] and environmentally benign chelator that is effective in chelating Ca(II), Fe(III) and Cu(II), and it is used in cleaning, water softening, photography and agriculture, thus eliminating the problem of environmental persistence common to conventional chelating agents [20]. Another biodegradable chelating agent, tetrasodium 3-hydroxy-2,2'-iminodisuccinate (HIDS), has also been reported to have high chelating capability [101]; it is effective in removing heavy metal ions such as Fe(III), Cu(II), Ca(II) and Mg(II) over a wide range of pH. It is thermally stable, soluble in concentrated alkaline solutions, and is an environmentally harmonious chelating agent [101]. HIDS finds application in cleaning processes, textile processing, bleach stabilization, photography, paper and pulp processing, scale removal and prevention, metal treatment and working, water treatment, and agriculture [101,102].
Citrates
These are salts of citric acid (2-hydroxy-1,2,3-propanetricarboxylic acid). Citric acid is produced by fermentation (using fungi and yeasts) [103], by synthesis, and by extraction from citrus fruits [103,104]. Vegetable wastes such as potato, brinjal and cabbage wastes have also been found to be potential sources of citric acid [103]. Citrates are used in the treatment of renal calculi [105]. Citric acid is an excellent chelating agent which is used to remove lime scale from boilers and evaporators [87]. Citrates are used in some cases in place of classical chelating agents. For instance, a 24 h washing of contaminated soil with 0.5 M citric acid reduced the levels of Cd(II), Cu(II), Zn(II) and Pb(II) from 0.01, 0.04, 0.42 and 41.52 mg g⁻¹ to 0, 0.02, 0.18 and 5.21 mg g⁻¹, respectively [106]. In another study, the ability of citric acid as a chelating agent to remove lead from contaminated soil was examined in soil washing [107]. In the soil washing, the removal efficiency of lead with citric acid was lower than that of [S,S]-EDDS in the pH range from 7-10 [107]. Although citrate is less efficient at coordinating metal ions compared with some conventional chelants, its use for removal of Pb(II) in acid soil is preferable because of its low cost and lower harm to crops [42]. It is also used for removing Ca(II) ions [87]. Citric acid is a green chelator for the removal of heavy metals from contaminated sludge, with higher extraction efficiency at a mildly acidic pH of about 2.30 [107][108][109]. Citric acid was found to be highly efficient for the recovery of Cr(III), Zn(II) and Mn(II) from printed circuit boards (PCBs) [110]. Mobilization of Pb(II), Zn(II) and Cu(II) from harbor sediments using citric acid as a chelating agent has also been reported previously [95]. The extraction efficiencies of citric acid for Cr(III), Cu(II), Ni(II), Pb(II) and Zn(II) are sufficient to lower the heavy metal content in sludge below the legal standards [111].
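As a small worked illustration of how such soil-washing results translate into percentage removal, the sketch below computes removal efficiency from the before/after concentrations quoted above; the pairing of values with metals follows the order given in the text.

```python
# Removal efficiency = (initial - final) / initial * 100, using the
# concentrations (mg/g) reported for 24 h washing with 0.5 M citric acid.
before = {"Cd(II)": 0.01, "Cu(II)": 0.04, "Zn(II)": 0.42, "Pb(II)": 41.52}
after = {"Cd(II)": 0.0, "Cu(II)": 0.02, "Zn(II)": 0.18, "Pb(II)": 5.21}

def removal_efficiency(initial: float, final: float) -> float:
    """Percentage of a metal removed from the soil by washing."""
    return (initial - final) / initial * 100.0

for metal in before:
    print(f"{metal}: {removal_efficiency(before[metal], after[metal]):.0f}% removed")
```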
Gluconates
Gluconic acid (C6H12O7) is found naturally in fruit, honey, kombucha tea, and wine [87]. Gluconic acid is a weak organic acid obtained from glucose by a simple oxidation reaction. The oxidation is carried out by the enzyme glucose oxidase (fungi) or glucose dehydrogenase (bacteria such as Gluconobacter) [112]. Microbial production of gluconic acid is the preferred method, and the most studied and widely applied fermentation process involves the fungus Aspergillus niger [112]. Gluconic acid has two bonding sites: the ionic acid oxygen (-COO-) and the oxygen of the hydroxyl group (-OH), which can bond with the metal ion [113]. Gluconic acid and its derivatives (such as the sodium gluconates) have wide applications in the food and pharmaceutical industries because of their chelating ability [112,113]. Aqueous solutions of the natural chelating agents D-gluconic acid and D-glucaric acid (D[+]-saccharic acid) were used to remove heavy metal ions (Cd(II), Cr(III), Cu(II), Ni(II), Pb(II), Zn(II)) from a soil polluted by long-term application of sewage sludge [114]. It was found that, between pH 12.0 and 13.0, Pb(II) and Cu(II) were selectively extracted [114].
Gallic Acid
Bioconversion studies with Aspergillus niger and Rhizopus oryzae showed that raw substrates like myrobalan fruits can be used as potential substrates instead of extracted tannins for gallic acid production [115]. It was found that Aspergillus
niger is the better gallic acid-producing strain [115]. Gallic and citric acids were reported to induce removal of Cd(II), Zn(II), Cu(II) and Ni(II) from soil without increasing the leaching risk [63]. Net removal of these metals by these acids can be as high as with other classical chelators. A major reason for this is the lower phytotoxicity of gallic and citric acids [63]. Other bioderived molecules such as cyclodextrins (CDs) have also been identified as molecular chelating agents [116]. Cyclodextrins possess a cage-like supramolecular structure, like cryptands, calixarenes, cyclophanes, spherands and crown ethers [116]. Cyclodextrins occur in alpha, beta, or gamma forms [80]. CD complexes are therefore widely used in many industrial products, technologies and analytical methods [116]. Other applications include drug carriers, food and flavors, cosmetics, packing, textiles, separation processes, environmental protection, fermentation and catalysis, because of the negligible cytotoxic effects of CDs [116]. Also, phytochelatins are oligomers of glutathione, produced by the enzyme phytochelatin synthase. They are found in plants, fungi, nematodes and all groups of algae including cyanobacteria [87]. Phytochelatins are used for heavy metal detoxification [87]. Another natural chelator, phytic acid, is an organic acid found in rice bran [117]. It is used as an acidulant for pH adjustment. Phytic acid binds to metals strongly because of its strong chelating effect [117]. Moreover, phytic acid shows antioxidant action and prevents color degradation [117]. The most outstanding feature of phytic acid is its strong metal-chelating function, allowing metal ions such as iron (Fe), which often adversely affect the production or storage of food in various forms, to be removed or deactivated [117]. Moreover, pectin (found in foods such as apples, bananas, grapes, okra, beets, carrots and all citrus fruits) is useful in removing heavy metals from the body [118].
Chitosan is a useful polymeric material produced from the shells of crustaceans [119]; it is a partially deacetylated polymer of acetylglucosamine [119] and a common biodegradable chelating compound [50]. In most cases, the use of chitosan and its derivatives is based on their ability to strongly chelate heavy and toxic metal ions [120]. Chelation of copper and nickel by the addition of the biodegradable chelating agent chitosan, EDTA and citrate was investigated [50]. The experiments showed that the extraction ability for copper and nickel from contaminated soil decreased in the order chitosan > EDTA > citrate at pH 3.00-3.50. Pimenta et al. [121] also found that 0.2% chitosan, 15% EDTA and 10% citric acid gave comparable effects in decreasing dentin microhardness. Amino acids and their derivatives have also found use as chelating agents. Unlike synthetic chelates, amino acid chelants are used to deliver minor elements to plants [122]. In addition, amino acid complexes of some metals are useful as anti-inflammatory agents, antibacterial agents (e.g., against Escherichia coli and Streptococcus pyogenes) and anti-tumor agents (against melanoma) [123]. Furthermore, Fischer [124] investigated the ability of the β-thiol-containing amino acids L-cysteine and L-penicillamine to remove heavy metals (Cd(II), Cr(III), Cu(II), Hg(II), Ni(II), Pb(II), Zn(II)) from some soil components (peat, bentonite, illite) at neutral pH. The extractability of these metals from peat in the presence of L-penicillamine was slightly higher than with L-cysteine, and the recovery of metals from bentonite was generally higher [124]. Riri et al. [125] investigated the use of simple organic acids (oxalic, glycolic and malic acid) to chelate gadolinium(III). Lignosulfonates, proteins, humic or fulvic acids and polyflavonoids are bioderived chemicals that can be used for complexing metals, with subsequent application in agricultural foliar sprays [126][127][128].
Additionally, some plant extracts can be used as chelators [129,130]. The chelating efficiency of methanolic extracts of Triticum aestivum (wheatgrass) towards iron was investigated to determine the iron-chelating activity in iron dextran-induced acute iron overload in animals. The chelating efficacy of the extract was found to be 34.5% of that of desferrioxamine (a commercial chelant) [131]. Ebrahimzadeh et al. [132] found that the phenolic and flavonoid extracts of Mellilotus arvensis have the ability to chelate Fe(II). The chelating ability of an aqueous extract of Tetracarpidium conophorum was tested in vitro [133]; the most effective dose showed 97.38% chelating ability. The aqueous extract of Tetracarpidium conophorum could therefore be used in the treatment of iron-overload disorders owing to its high in vitro chelating ability at low doses [133]. Tannin fractions isolated from hazelnuts, walnuts and almonds were characterized for chelation of Zn(II), Fe(II) and Cu(II) [134]. Copper ions were the most strongly chelated by the tannin fractions of hazelnuts, walnuts and almonds, while the Fe(II) complexation ability of the walnut and hazelnut tannin fractions was lower than that of the almond tannin fraction [134]. The capacity to chelate Zn(II) varied considerably among the different nut tannins. The in vitro iron-chelating properties of 60% ethanolic extracts of some plant parts (Terminalia chebula, Caesalpinia crista, Cajanus cajan, Terminalia belerica, Emblica officinalis, and Tinospora cordifolia) were also investigated. The reported order of iron-chelating ability of the plant extracts was T. chebula > T. belerica > E. officinalis > C. cajan > T. cordifolia > C. crista [135].
Likewise, soya bean extracts were found to act as chelants towards Cu(II) [136]. The binding properties of Pb(II), Cu(II), Ni(II), Cd(II), Zn(II), Cr(III) and Cr(VI) in native and NaOH-modified biomass of Solanum elaeagnifolium were investigated [137]. At pH 5.0, 20.6 mg Pb(II)/g, 13.1 mg Cu(II)/g, 6.5 mg Ni(II)/g, 18.9 mg Cd(II)/g, 7.0 mg Zn(II)/g, 2.8 mg Cr(III)/g and 2.2 mg Cr(VI)/g were removed, respectively; moreover, the NaOH-modified material gave higher binding capacity in each case [137]. Tsujimoto et al. [138] observed that anacardic acids from cashew nut can chelate Fe(III). Plant extracts have thus been used for the removal of heavy metals, especially Fe [129,131,132,135,136]; they can be used in the treatment of iron overload [133] and in the recovery of other heavy metals from the environment [134,139]. Column experiments of 14 d and 7 d with partially hydrolyzed wool as the chelating agent on a silty-loamy sand agricultural soil were also reported. The 14 d wool hydrolysate mobilized 68% of the Cu in the soil, whereas for Cd it mobilized 5.5%. The plant (Nicotiana tabacum) uptake of Cd(II) and Cu(II), assisted by the application of 6.6 g kg⁻¹ wool hydrolysate, was increased by 30% in comparison to the control plants. This assisted phytoextraction showed great potential, with no leaching detected, unlike the use of conventional chelating agents [140].
Recovery of Metals with Greener Chelators
An aqueous solution of the chelant may be used directly to leach metals from spent solid waste into the aqueous state. The subsequent recovery of metals from the aqueous system mostly involves liquid-liquid extraction of the metal ions with chelating agents; solvent extraction of metals with chelating agents has been considered an effective method for purifying metals [141]. The resulting complex (chelate) is then stripped with strong acid (HCl or HNO3), releasing the captured metal into another aqueous phase, which is concentrated to obtain the metal in a pure state. The literature shows that chelating agents (such as APCs), alone or supported on other solids, have been used for the recovery of metals [142][143][144][145]. Figure 2 is a flow sheet showing the major stages in the recovery of metals using chelating agents. The ability of a chelant to bind a metal ion is determined by its stability constant [24]; Wuana et al. [146] likewise reported that the extraction of metals with chelating agents depends on the stability constants [146]. The larger the stability constant, the stronger the chelation effect and the lower the concentration of free metal ion left in solution [147]. Hence, the commonly consumed chelants (APCs) usually have high stability constants with different metal ions [24]. Table 1 is informative in this respect: L-GLDA and [S,S]-EDDS have relatively higher stability constants for most metal ions than most other Greener chelating agents [147,148], and they have in fact been considered as replacements for EDTA and NTA in some applications [25]. Other factors, such as temperature, pH and the presence of other ions, can also affect the ability of chelants to remove metals [24]. Table 2 gives some of these Greener chelants, their sources and their metal-chelating functions [149][150][151][152][153][154], and Table 3 contains some plant extracts that have been used for the removal of heavy metals, especially Fe. A worked illustration of how the stability constant controls the free metal ion concentration is sketched below.
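Since the stability constant governs how much free metal ion remains in solution, a minimal numerical sketch may help make this concrete. It assumes a simple 1:1 complex M + L ⇌ ML with stability constant K = [ML]/([M][L]) and purely illustrative total concentrations; the function name and the numbers are not taken from any cited study.

```python
import math

def free_metal_fraction(K, M_total, L_total):
    """Free metal fraction for a 1:1 complex M + L <=> ML with
    stability constant K = [ML] / ([M][L]) (concentrations in mol/L).
    Mass balance gives a quadratic in the free ligand concentration:
    K*[L]^2 + (K*(M_total - L_total) + 1)*[L] - L_total = 0."""
    a = K
    b = K * (M_total - L_total) + 1.0
    c = -L_total
    L_free = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)  # positive root
    M_free = M_total / (1.0 + K * L_free)                   # [M] = M_t/(1+K[L])
    return M_free / M_total

# The larger the stability constant, the less free metal remains.
for logK in (6, 10, 16):  # e.g., a weak Greener chelant vs. an EDTA-like chelant
    frac = free_metal_fraction(10.0 ** logK, 1e-4, 1e-3)
    print(f"log K = {logK}: free metal fraction = {frac:.2e}")
```

Raising log K from 6 to 16 in this toy calculation lowers the free metal fraction by roughly ten orders of magnitude, which is why chelants with high stability constants dominate industrial use.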
Conclusion
Classical chelating agents (especially aminopolycarboxylates, APCs, and phosphonates) are to date the most commonly used in industrial and household processes. This is due to their ability to bind metals strongly, and perhaps to their long availability on the market. Abundant evidence has shown, however, that they are not environmentally benign. This has spurred industry and academia, prompted by environmental policies, to seek chelating agents with low toxicological profiles that are environmentally friendly. The result has been an annual rise in formulations and proposals of Greener alternative chelators, such as glutamic acid diacetic acid (L-GLDA), ethylenediamine disuccinic acid ([S,S]-EDDS), polyaspartic acid, citrate, gluconic acid, amino acids, the lipophilic β-diketone (14,16)-hentriacontanedione, plant extracts, etc., to be used in place of the classical chelants. For reasons of environmental compatibility, low toxicity, biodegradability and sustainability, these Greener chelators are better suited to industrial and household applications. Importantly, they can be applied to recovering metals from wastes, ensuring the sustainability of metals and their uses. | 5,338.2 | 2018-05-15T00:00:00.000 | [
"Chemistry"
] |
New 1-octanoyl-3-aryl thiourea derivatives: Solvent-free synthesis, characterization and multi-target biological activities
An efficient solvent-free synthesis of a 10-member library of octanoyl-linked substituted aryl thioureas was accomplished. The octanoyl isothiocyanate was freshly prepared in excellent yield and purity by the reaction of potassium thiocyanate with octanoyl chloride, followed by removal of potassium chloride by filtration. Its reaction with a series of ten different substituted anilines, by stirring at 60-65°C, led to the formation of the title compounds. The in vitro antifungal activity of the newly synthesized compounds was evaluated against Aspergillus niger, A. flavus and Fusarium solani strains of pathogenic fungi. An antibacterial assay was carried out against Gram-positive (Staphylococcus aureus, Micrococcus luteus) and Gram-negative bacterial strains (Escherichia coli, Enterobacter aerogens). Furthermore, antioxidant potential and enzyme inhibition studies against α-amylase and butyryl cholinesterase were performed. The results indicated moderate to excellent activities for most of the compounds, while some derivatives showed potency higher than the standard used. Article Info Received: 2 August 2016 Accepted: 11 October 2016 Available Online: 14 November 2016 DOI: 10.3329/bjp.v11i4.29059 Cite this article: Saeed A, Larik FA, Channar PA, Ismail H, Dilshad E, and Mirza B. New 1-octanoyl-3-aryl thiourea derivatives: solvent-free synthesis, characterization and multi-target biological activities. Bangladesh J Pharmacol. 2016; 11: 894-902. Fayaz Ali Larik, Aamer Saeed, Pervaiz Ali Channar, Hammad Ismail, Erum Dilshad and Bushra Mirza; Department of Chemistry, Quaid-i-Azam University, Islamabad 45320, Pakistan; Department of Biochemistry and Molecular Biology, University of Gujrat, Gujrat 50700, Pakistan; Department of Biochemistry, Quaid-i-Azam University, Islamabad 45320, Pakistan. This work is licensed under a Creative Commons Attribution 3.0 License. You are free to copy, distribute and perform the work. You must attribute the work in the manner specified by the author or licensor.
Introduction
Thioureas find extensive utility in synthetic, biological and commercial fields. They possess a broad spectrum of biological activities, including antibacterial, antifungal, anti-oxidant and enzyme inhibition activities, and are key precursors for the synthesis of a wide variety of heterocycles. In recent decades, several new methods have been reported for the preparation of substituted thioureas, including a mild and efficient microwave-assisted synthesis of various di- and tri-substituted thioureas. Our research group has been extensively involved in the synthesis and biological assay of thioureas and of heterocycles derived from them (Saeed et al., 2015; Saeed et al., 2014; Saeed et al., 2016), and their biological activities and theoretical studies have been comprehensively published (Saeed et al., 2015; Saeed et al., 2014; Zaib et al., 2014; Saeed et al., 2013; Saeed et al., 2010; Saeed et al., 2016; Saeed et al., 2009; Saeed et al., 2011). Thus, the application of thioureas as excellent targets in medicinal chemistry is well established.
Herein, we report a clean, efficient and solvent-free synthesis of a small library of 1-octanoyl-3-aryl thioureas and their evaluation as antibacterial, antifungal, antioxidant and enzyme inhibition agents.
Experimental
The mild solvent-free synthesis of a small library of ten candidates (3a-3j) was achieved by stirring freshly prepared octanoyl isothiocyanate with suitably substituted anilines. The complete description of the substituents of the derivatives is given in Scheme 1. This method provides facile access to thiourea derivatives under solvent-free conditions in a very short time. Moreover, the yields obtained by this methodology are excellent.
The melting points were determined on a Bio Cote SMP10-UK apparatus and are uncorrected. Chemicals such as octanoic acid, ammonium thiocyanate, thionyl chloride and the aromatic amines were purchased from Sigma-Aldrich and used as received. The experiments were carried out in standard Pyrex glassware (20 mL). NMR spectra were recorded on a Bruker ARX 300 MHz spectrometer, 1H NMR (300.13 MHz) and 13C NMR (75.47 MHz), in CDCl3 solutions (7.28 ppm from TMS as internal standard). The splitting of proton resonances in the reported 1H NMR spectra is defined as s (singlet), d (doublet), t (triplet), q (quartet) and m (complex pattern); coupling constants are reported in Hz. FT-IR spectra were recorded as KBr pellets on a Bio-Rad Excalibur FT-IR model FTS 3000 MX (400-4000 cm⁻¹), and the elemental analyses were performed using a LECO-932 CHNS analyzer.
Antibacterial assays
The antibacterial activity of the compounds was evaluated by the disc diffusion assay as reported previously (Ellman et al., 1964). In the experiment, two Gram-positive [Staphylococcus aureus (ATCC 6538) and Micrococcus luteus (ATCC 10240)] and two Gram-negative [Escherichia coli (ATCC 15224) and Enterobacter aerogens (ATCC 13048)] strains were cultured in nutrient broth for 24 hours at 37°C. These cultured strains were used as inoculums (1%) to run the assay. Each bacterial strain was added to the nutrient agar medium at 45°C, poured into sterile petri plates and allowed to solidify. 5 µL of the test compound, at a final concentration of 200 µg/mL, was poured onto sterile filter paper discs (4 mm) that were placed on the nutrient agar plates. Kanamycin and DMSO were used as the positive and negative controls, respectively, on each plate. The assay was performed in triplicate, and the plates were incubated at 37°C for 24-48 hours. The antibacterial activity of the compounds was determined by measuring the diameter of the zones showing complete inhibition (mm) with the help of a Vernier caliper.
Antifungal assay
The antifungal activity of the synthesized compounds was measured by the previously reported disc diffusion method (Mohammed et al., 2011) against Mucor species (FCBP 0300), Aspergillus niger (FCBP 0198), Aspergillus flavus (FCBP 0064) and Fusarium solani (FCBP 0291). All fungal strains were cultured on Sabouraud dextrose agar (SDA) at 28°C for 5-7 days. Actively growing fungal spores of each strain were spread with the help of autoclaved cotton swabs on solidified SDA petri plates under sterile conditions. 5 µL of each test compound, at a final concentration of 200 µg/mL, was poured onto sterile filter paper discs (4 mm) that were placed on the SDA plates. Terbinafine and DMSO served as the positive and negative controls, respectively, on each plate. All plates were incubated at 28°C for 5-7 days, and fungal growth was determined by measuring the growth diameter (mm) with the help of a Vernier caliper. (Scheme 1: Synthesis of substituted aryl thioureas.)
Anti-oxidant activity
The radical scavenging activity of the test compounds against the stable free radical 2,2-diphenyl-1-picrylhydrazyl (DPPH) was determined spectrophotometrically. Each test compound (5 µL), at final concentrations of 200, 100 and 50 µg/mL, was mixed with 100 µM DPPH (95 µL) in 96-well microtiter plates. Ascorbic acid and DMSO were used as the positive and negative controls, respectively. The experiment was performed in triplicate, and the reaction mixtures were incubated in the dark for 30 min at 37°C. After incubation, the absorbance was measured at 515 nm using a microplate reader (BioTek, Elx 800). IC50 values were calculated with GraphPad Prism 5; a minimal sketch of such a calculation is given below.
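For readers reproducing the IC50 values, the calculation can be sketched as follows. Percent scavenging is computed from the 515 nm absorbances as 100 × (A_control − A_sample)/A_control, and the IC50 follows from a dose-response fit. This is a minimal illustration assuming a two-parameter logistic model with purely placeholder data; the actual fits in this work were performed in GraphPad Prism 5.

```python
import numpy as np
from scipy.optimize import curve_fit

def percent_scavenging(a_control, a_sample):
    """DPPH radical scavenging (%) from absorbance readings at 515 nm."""
    return 100.0 * (a_control - a_sample) / a_control

def logistic(conc, ic50, hill):
    """Two-parameter dose-response curve (0-100% inhibition assumed)."""
    return 100.0 / (1.0 + (ic50 / conc) ** hill)

# Illustrative data only: concentrations (ug/mL) and measured inhibition (%).
conc = np.array([50.0, 100.0, 200.0])
inhibition = np.array([22.0, 41.0, 68.0])

(ic50, hill), _ = curve_fit(logistic, conc, inhibition, p0=[120.0, 1.0])
print(f"Estimated IC50 = {ic50:.1f} ug/mL (Hill slope = {hill:.2f})")
```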
α-Amylase assay
The compounds were tested for their enzyme inhibition activity against α-amylase by the previously reported method (Gorja et al., 2013). For the assay, 5 µL of each test compound, at final concentrations of 200, 100 and 50 µg/mL, was mixed with 40 µL of starch (0.05%) and 30 µL of potassium phosphate buffer (pH 6.8) in 96-well microtiter plates, followed by the addition of 10 µL of α-amylase enzyme (0.2 U/well). Acarbose and DMSO were used as the positive and negative controls, respectively. The plates were incubated for 30 min at 50°C, and 20 µL of HCl (1 M) was added as the stopping reagent. Then 100 µL of iodine reagent (5 mM KI and 5 mM I2) was added to check for the presence or absence of starch, and the absorbance was measured at 540 nm with a microplate reader (BioTek, Elx800). The experiments were performed in triplicate, and IC50 values were calculated with GraphPad Prism 5.
General procedure
Freshly prepared octanoyl isothiocyanate was treated with the respective aryl amines in a 1:1 molar ratio under dry conditions. The reaction mixture was stirred at 60-65°C for about 10-15 min. On cooling, the reaction mixtures were slowly poured into acidified (pH 4-5) chilled water (50 mL) and stirred well. The solid products obtained were separated by filtration and dried at room temperature.
Antibacterial assay
The synthesized compounds were evaluated for their antibacterial potential using four different bacterial strains (Table I). The results showed that compounds 3a and 3c were active against all the tested bacterial strains, indicating that these compounds are equally active against Gram-positive and Gram-negative bacteria. Compound 3i exhibited good antibacterial activity against three bacterial strains, the exception being Enterobacter aerogenes. Compounds 3g and 3h showed antibacterial activity only against Staphylococcus aureus. Overall, a range of antibacterial activities was exhibited by the tested compounds.
Antifungal assay
All the tested compounds (3a-3j) showed significant antifungal activity against all tested fungal strains, indicating that these compounds are broad-spectrum antifungal candidates (Table II). The highest activity was measured against Aspergillus flavus and Aspergillus niger. It is a very interesting finding that some compounds, e.g., 3a and 3c, possess both antibacterial and significant antifungal activity against all tested strains.
Anti-oxidant assay
The compounds were screened for their radical scavenging activity through the DPPH assay (Table III). The results showed that 3a, 3b, 3c and 3d had anti-oxidant activity, while the rest of the compounds did not show any anti-oxidant activity.
α-Amylase assay
The compounds were evaluated for their enzyme inhibition potential against the α-amylase enzyme, and the results, in the form of IC50 values, are given in Table III. The experiment was performed in triplicate, and acarbose (IC50 17.1 µg/mL) was used as the positive control. The results showed that compounds 3f, 3g and 3h exhibited enzyme inhibition activity, with IC50 values of 282.1, 294.2 and 285.1 µg/mL, respectively.
Butyryl cholinesterase assay
The synthesized compounds were screened for their inhibitory ability against the butyryl cholinesterase enzyme.
The experiment was performed in triplicate, and galantamine hydrobromide was used as the positive control (IC50 4.6 µg/mL). The results show that three compounds, 3b (IC50 185.8 µg/mL), 3f (IC50 238.8 µg/mL) and 3i (IC50 271.4 µg/mL), were potential inhibitors of the butyryl cholinesterase enzyme, while the rest of the compounds were not active against the enzyme (Table III).
Table III shows the results of the anti-oxidant activity and enzyme inhibition assays. The compounds were subjected to inhibition assays against two different enzymes.
Discussion
Previously reported syntheses of thioureas by conventional methods did not deliver excellent yields, involved tedious reaction work-ups, and required about 5 hours for completion.
It is well recognized that thioureas possess a wide spectrum of biological activities, and our research group and others have previously highlighted the biological applications of different acyl and aryl thioureas. To the best of our knowledge, however, no paper has been published on octanoyl thioureas. Correa et al. (2015) reported BSA- and DNA-binding studies of thiourea complexes against lung and prostate tumor cells. Antimicrobial activities of thioureas have been reported, in which a few derivatives were found to be good against E. coli (Zhong et al., 2008). Madabhushi et al. (2014) reported benzimidazole-linked chiral thioureas as antibacterial and anti-cancer agents. The biological applications of acyl/aryl thioureas have been comprehensively discussed (Saeed et al., 2014), as have the cytotoxic activities of polyamide thioureas (Stringer et al., 2013). Saeed et al. reported coumarin-linked thioureas as cholinesterase inhibitors (Saeed et al., 2015) and aminobenzenesulfonamide thiourea conjugates as carbonic anhydrase inhibitors (Saeed et al., 2014). Herein, by contrast, we report the multi-target activities of the synthesized thioureas.
While all of these reports demonstrated various individual biological activities of thioureas, here we report for the first time the multi-target potential of octanoyl thioureas.
The nature of the substrate H2NR strongly influences the yield of the final products (Scheme 1). Substituents such as F, Br, Cl, methyl and OMe attached at the ortho, meta and para positions of the aromatic ring in the H2NR moiety show a strong mesomeric effect, releasing electrons through delocalization of lone pairs in spite of the inductive effect (-I), which results in an increase in the nucleophilic character of the amino group. It is well established that the resonance effect is stronger than the inductive effect, and the net result is electron release to the rest of the molecule. The reactivity of di- and higher-substituted substrates is enhanced owing to their increased electron-releasing capability, giving better yields, as the data suggest. This is reflected in the increase of product yield in the following order: Br > methyl > OMe > F. The nitro-containing substrates showed the lowest yields owing to the electron-withdrawing nature of the nitro group, which renders the ring electron deficient and in turn diminishes the nucleophilic character of the amine.
The significant absorptions observed in the FT-IR spectra of all the synthesized substituted thioureas are listed in the spectroscopic data for the respective compounds in the experimental section. The tentative assignments of the functionalities were made according to the literature (Saeed et al., 2014). The absence of the ν(S-H) vibration in the range of 2529-2588 cm⁻¹ confirmed the conversion of the isothiocyanate moiety into the thiourea (-NHCSNH-) functionality. The substituted thioureas behave both as monodentate and bidentate ligands, depending upon the reaction conditions. The characteristic IR bands of substituted thioureas are found around 3120-3402 (NH), 2960-3090 Ph(CH), 1660-1720 (C=O), 1540-1620 (CN), 1243-1277 (C=S) and 1130-1185 (C-S) cm⁻¹ (Saeed et al., 2013). The addition of the amine took place by attack at the carbon atom of the isothiocyanate group, resulting in the formation of the desired compounds. After completion of the reaction, the strong band at 2000 cm⁻¹ (N=C=S) of the isothiocyanate disappeared (Saeed et al., 2016). Instead of the expected normal carbonyl (C=O) absorption around 1710 cm⁻¹, a medium-strong band at 1665-1678 cm⁻¹ suggested possible hydrogen bond formation between the H atom of the NH group and the O atom of the carbonyl group. The effect and type of hydrogen bonding in thioureas have been discussed in detail in our earlier paper (Saeed et al., 2011). The 1H NMR data for the synthesized disubstituted thioureas showed that the NH hydrogens resonate considerably downfield from the other resonances in the spectra. The proton chemical shifts were found around 11-12 ppm for the free and hydrogen-bonded NH, respectively, and the aromatic protons appeared in their usual regions between 7.23 and 8.50 ppm. It was also observed in our previous work that coordinating or highly polar solvents like DMSO-d6 have a profound effect on the chemical shift of the free NH protons, which appear more downfield than in non-coordinating solvents like CDCl3, C6D6 and CD2Cl2. This shift could be attributed to possible hydrogen bonding between the NH and the sulfoxide (S=O) moiety (Saeed et al., 2014). The 13C NMR data explicitly show all the signals due to the distinct carbons present in the compounds (3a-3j). The aromatic carbon resonances of the thiourea ligands were assigned on the basis of signal intensities and compared with the reported values (Saeed et al., 2012). The carbons of the CONH and CSNH moieties of the substituted thiourea ligands resonate around 166-168 and 177-180 ppm, respectively.
A structure-activity relationship was developed to rationalize the results of the biological activities. The extensive applications of thioureas in the biological field have prompted researchers to explore structure-activity relationships. Thioureas are capable of inter- and intramolecular hydrogen bonding, which helps them act as receptors, as shown in Figure 1. A series of ten compounds was designed and synthesized to develop a structure-activity relationship across five different bioassays (antibacterial, antifungal, anti-oxidant, and enzyme inhibition against α-amylase and butyryl cholinesterase). Compounds 3a-d showed better results in the anti-oxidant assay, probably due to substitution at the para position of the benzene ring. Compounds 3a-c were found to be potent inhibitors of the α-amylase enzyme, while the series as a whole was less potent in butyryl cholinesterase inhibition. In the antibacterial and antifungal assays, derivatives 3a-e showed excellent results, and derivative 3c showed higher activity than the standard drug. Compounds 3a-e, with para substitution, showed better activity than the meta- and ortho-substituted derivatives.
Conclusion
A series of new 1-octanoyl-3-aryl thioureas was designed and synthesized. The biological assay results indicated that most of the compounds possessed in vitro antifungal activity against the tested fungal strains, and the compounds (3a-3j) showed adequate activity overall. | 4,177.6 | 2016-11-14T00:00:00.000 | [
"Chemistry"
] |
Semantic and Geometric-Aware Day-to-Night Image Translation Network
Autonomous driving systems heavily depend on perception tasks for optimal performance. However, the prevailing datasets are primarily focused on scenarios with clear visibility (i.e., sunny and daytime). This concentration poses challenges in training deep-learning-based perception models for environments with adverse conditions (e.g., rainy and nighttime). In this paper, we propose an unsupervised network designed for the translation of images from day to night, to solve the ill-posed problem of learning the mapping between domains with unpaired data. The proposed method involves extracting both semantic and geometric information from input images in the form of attention maps. We assume that a multi-task network can extract semantic and geometric information during the estimation of semantic segmentation and depth maps, respectively. The image-to-image translation network integrates the two distinct types of extracted information, employing them as spatial attention maps. We compare our method with related works both qualitatively and quantitatively. The proposed method shows both quantitative and qualitative improvements over related work.
Introduction
Autonomous driving systems require effective and secure operation under various visibility conditions. The functionality of these systems is heavily dependent on their perception tasks, which have seen significant improvements in accuracy through advances in deep learning in recent years. Despite these advances, challenges persist in addressing perception tasks under poor visibility conditions (e.g., nighttime, rain, and fog). The primary obstacle stems from an imbalance in the amount of available data for each scenario. Deep-learning-based models, reliant on substantial datasets and annotations for training, often encounter difficulties due to the scarcity of relevant data for adverse visibility situations. Although numerous datasets have been created, most are concentrated on clear daytime conditions, making it impractical to collect and annotate data for every conceivable traffic scene and visibility scenario.
To address this challenge, researchers [1][2][3][4][5][6][7] have increasingly utilized synthetic data (e.g., computer graphics images from sources such as video games and simulators) to diversify the datasets. Despite the advantage of easy dataset creation for various scenarios, there remains a disparity between synthetic and real-world data. Consequently, deep-learning-based models (e.g., depth estimation, semantic segmentation, and camera pose estimation) trained with synthetic data may exhibit decreased performance in real-world applications. Efforts to enhance photorealism [8][9][10][11] have been made, but the usability of models trained on synthetic data for autonomous driving systems remains a challenge.
In contrast, day-to-night image translation offers a solution by creating realistic nighttime data while preserving the objects, structure, and perspective.This process involves translating annotated daytime images into nighttime images, enabling the utilization of daytime image labels for the translated nighttime images.This facilitates the creation of nighttime datasets.
Numerous contemporary image translation methods leverage generative adversarial networks (GANs) [12], a robust framework for the training of generative models. However, it is challenging to obtain paired data for model training in traffic scenes (i.e., daytime and nighttime image pairs where every corresponding point is the same, except for the time of day). Consequently, this paper adopts unsupervised image-to-image translation methods to address the lack of paired data.
In this paper, we introduce an unsupervised day-to-night image translation model based on GANs [12] as a data augmentation technique. The translation of daytime images to nighttime ones poses a formidable challenge: it requires not only accurate color adjustment but also the consideration of semantic and geometric information at the pixel level. The goal is to achieve a consistent transformation of semantic and geometric information while allowing for diverse style conversion. As shown in Figure 1, our model rests on a hypothesis: we assume that semantic and geometric information can be extracted from semantic segmentation and depth estimation. We first train the multi-task network, which estimates semantic segmentation and depth (Figure 2a). The trained parameters of the multi-task network are utilized for the encoder and decoders of the image translation networks (Figure 2b). The attention module infers an attention map using the feature map extracted from the decoder as input. Leveraging the capacity of the semantic segmentation and depth multi-task estimation network to extract vital semantic and geometric information, the attention modules are able to infer semantic and geometric attention maps along the spatial dimension. Our contributions can be summarized as follows.
•
We propose a semantic-and geometric-aware image-to-image translation network that adopts semantic segmentation and depth estimation guided attention modules.To the best of our knowledge, this is the first work that utilizes both semantic segmentation and depth information in image-to-image translation.
•
We introduce the semantic segmentation and depth estimation guided attention modules and adopt them for image-to-image translation. Our method does not require annotations for the target domain.
•
The proposed method generates better results both quantitatively and qualitatively in our experiments; it outperforms the related work in two distinct evaluation metrics.
Our method is trained with two different authentic datasets (i.e., Berkeley Deep Drive [13], Cityscapes [14]) at the same time.
Generative Adversarial Networks (GANs)
Generative models within the realm of deep learning, particularly those based on the framework of generative adversarial networks (GANs) [12], have received significant attention. The fundamental structure of GANs [12] involves two adversarial networks (i.e., a generator and a discriminator) engaged in a competitive training process. The generator aims to produce data that the discriminator perceives as real, leading to a continuous interplay between the two networks. Subsequent to the introduction of GANs [12], various enhancements and alternative versions have been proposed to further improve their capabilities. cGAN [15] is one such advancement, introducing a conditional approach by incorporating additional input layers to condition the data. This allows the explicit generation of outputs based on specified conditions. Combining GANs [12] with auto-encoders [16], VAE/GAN [17] and VEEGAN [18] represent innovative approaches. These models leverage the strengths of both GANs and auto-encoders, with the aim of enhancing the overall generative process. In pursuit of better training objectives, alternative loss functions have been explored. LSGAN [19] addresses the vanishing gradients problem by utilizing the least-squares loss function for the discriminator. This adjustment helps to stabilize the training process and improve the overall performance of the GAN [12] model. WGAN [20] introduces a different training objective by adopting the Wasserstein distance between the distributions of generated and real data. This alternative approach aims to overcome the limitations associated with traditional GANs' training objectives.
Image-to-Image Translation Network
Pix2Pix [21] made significant strides as the initial unified framework for paired image-to-image translation, using cGAN [15]. In more recent developments, there has been a shift towards unsupervised image-to-image translation methods, which operate without a reliance on paired data. To tackle the inherent challenges of this ill-posed problem, various approaches have adopted the cycle consistency constraint. This constraint ensures that the translated data can be accurately reconstructed back to the source domain [22][23][24][25]. Some methods assume a shared latent space among images in different domains. CoGAN [26] features two generators with shared weights, producing images from different domains using the same random noise. UNIT [27], building upon CoGAN and incorporating VAE/GAN [17], maps each domain into a common latent space. Additionally, the exploration of multimodal image-to-image translation methods has gained traction [28][29][30][31][32][33]. However, these methods tend to exhibit suboptimal results when confronted with images from domains with substantial differences, such as daytime and nighttime, as they often lose instance-level information.
Day-to-Night Image Translation Network
Focusing on domain adaptation between daytime and nighttime images is crucial in enhancing the performance of various perception tasks, such as object detection [34], semantic segmentation [35,36], and localization [37]. Some works have attempted to boost deep network training for specific [38] or multiple [39] tasks. Some methods utilize semantic segmentation for additional information. SG-GAN [40] adopts semantic-aware discriminators, using semantic information to distinguish generated images from real ones. SemGAN [41] and Ramirez et al. [42] take a distinctive approach by inferring semantic segmentation from the translated images, thereby enforcing semantic consistency during the translation process. This emphasis on semantic information contributes to the overall perceptual coherence of the generated images. AugGAN [43,44] is a multi-task network designed for both day-to-night image translation and semantic segmentation estimation. This integrated approach reflects a comprehensive strategy in which the generator simultaneously learns image translation between day and night and semantic segmentation.
Attention Mechanism
Attention mechanisms weight the parameters of deep learning models based on features extracted from input images. RAM [45] was initially proposed in the field of computer vision, introducing a method to recurrently estimate spatial attention and update the network. SENet [46] and ECA-Net [47] introduced channel attention networks. Subsequent research has demonstrated advancements in spatial [48,49] or channel [50,51] attention. Some studies have proposed inferring attention maps along both spatial and channel dimensions [52][53][54][55][56]. More recently, self-attention [57][58][59] and Transformers [60][61][62] have been introduced into computer vision, rapidly advancing the field.
Proposed Method
In this section, we propose a semantic- and geometric-aware day-to-night image translation method based on the CycleGAN framework [22]. When translating daytime images to nighttime ones, both the depiction of light sources with semantic properties and the expression of darkness according to geometric distance are required. As shown in Figure 2, the proposed method aims to extract semantic and geometric information from input images and apply it to the image-to-image translation network through the attention mechanism.
We assume that semantic and geometric information can be acquired from the semantic segmentation and depth estimation processes, respectively. We first train the semantic segmentation and depth multi-task network. Afterwards, this trained multi-task network is utilized in the image translation phase to extract semantic and geometric information as feature maps. Subsequently, semantic and geometric attention maps are generated from the feature maps of the decoders. Finally, the attention maps are applied to the image-to-image translation networks.
Here, we denote the domains of the daytime and nighttime RGB images by X and Y, respectively.
Semantic Segmentation and Depth Estimation
Figure 2a provides an overview of the multi-task network for the estimation of semantic segmentation and depth. The multi-task network first encodes authentic daytime images x ∈ X into latent representations via the encoder E_Mul; then, the decoders D_Seg and D_Dep estimate semantic segmentation maps s_X and depth maps d_X, respectively, from these latent representations. The trained parameters of the multi-task network are utilized in the image-to-image translation network.
Image Translation
The overall framework of our proposed method is depicted in Figure 2b. The framework consists of two opposite cycles (i.e., a day-to-night cycle and a night-to-day cycle), each of which is a coupled image-to-image translation network. For each cycle, we refer to the translation from real images as translation and the translation from translated images as reconstruction.
Image-to-Image Translation Network
The day-to-night image translation network consists of one encoder E_X, one generator G_X, two decoders for semantic segmentation D_X^Seg and depth D_X^Dep, and four CBAM [55] attention modules {A_X^k}, k ∈ {Seg_1, Dep_1, Seg_2, Dep_2} (written A_T^k in general, where T indicates the time domain, T ∈ {X, Y}). Each attention module infers a spatial attention map from the corresponding feature map. Here, two different sizes of feature maps are utilized from each decoder (i.e., the semantic segmentation decoder and the depth decoder). The encoder E_X and the decoders D_X^Seg and D_X^Dep utilize the parameters of the encoder E_Mul and the decoders D_Seg and D_Dep of Section 3.1. As shown in Figure 3, the decoders D_X^Seg and D_X^Dep are utilized for the extraction of feature maps {f_X^k ∈ R^{H×W×C}}, k ∈ {Seg_1, Dep_1, Seg_2, Dep_2}. The attention maps are then generated from these feature maps, {A_X^k(f_X^k) =: m_X^k ∈ R^{H×W}} =: M_X, and mapped to their correspondingly sized feature maps within the generator G_X by pixel-wise multiplication. Additionally, channel attention maps are inferred and applied within the generator G_X. A minimal sketch of this spatial attention mechanism is given below.
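To make the attention mechanism concrete, here is a minimal PyTorch sketch of a CBAM-style [55] spatial attention module and its pixel-wise application to a generator feature map. The tensor shapes and variable names (f_k, g) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """CBAM-style spatial attention: channel-wise average and max pooling
    followed by a 7x7 convolution, producing an HxW map in [0, 1]."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, feat):                      # feat: (B, C, H, W)
        avg = feat.mean(dim=1, keepdim=True)      # (B, 1, H, W)
        mx, _ = feat.max(dim=1, keepdim=True)     # (B, 1, H, W)
        return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))

# Hypothetical usage: a decoder feature map f_k yields an attention map m_k,
# which is applied to a generator feature map g by broadcasting.
attn = SpatialAttention()
f_k = torch.randn(1, 64, 64, 64)   # decoder feature map (assumed size)
g   = torch.randn(1, 256, 64, 64)  # generator feature map at matching scale
m_k = attn(f_k)                    # (1, 1, 64, 64) spatial attention map
g_attended = g * m_k               # pixel-wise multiplication
```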
In each cycle, authentic images {x, y} are translated into {ȳ, x̄} by the image-to-image translation network with the semantic and geometric attention maps {M_X, M_Y}. The night-to-day image translation network is structured in the same way. Here, the transferred parameters of the day-to-night image translation networks E_X, D_X^Seg, and D_X^Dep are fixed, and only those of the night-to-day image translation networks E_Y, D_Y^Seg, and D_Y^Dep are retrained during the training of the image translation network. Moreover, discriminators {Disc_X, Disc_Y} are defined for each domain to determine whether a daytime or nighttime image is real or fake.
Sharing of Semantic and Geometric Feature Maps
Sharing semantic and geometric information during the image translation cycle is considered plausible, given the consistency observed in most scene elements before and after the translation. However, challenges arise because the multi-task networks E_Mul, D_Seg, and D_Dep are not trained on nighttime images, which can cause poor accuracy in semantic segmentation and depth estimation from nighttime images. Therefore, during the reconstruction phase, from translated images {ȳ, x̄} to reconstructed images {x̂, ŷ}, the feature maps of the semantic segmentation and depth decoders are shared only within the day-to-night cycle. In contrast, these feature maps are estimated separately in the night-to-day cycle, as shown in Figure 2b, where f̄_X^k is the feature map extracted from the translated images x̄.
Training Networks
The proposed method follows a two-step training process. In the initial step, the multi-task networks responsible for estimating semantic segmentation and depth maps (i.e., {E_Mul, D_Seg, D_Dep}) are trained exclusively on daytime data. In the subsequent step, their parameters are transferred to the image translation networks {E_X, E_Y}, {D_X^Seg, D_Y^Seg}, and {D_X^Dep, D_Y^Dep}. Following this transfer, the day-to-night and night-to-day image translation networks {T_{X→Y} := G_X(E_X(·)), T_{Y→X} := G_Y(E_Y(·))} and the discriminators {Disc_X, Disc_Y} are trained using the CycleGAN framework [22]. During the training of the image translation networks, the transferred parameters of the daytime domain networks {E_X, D_X^Seg, D_X^Dep} remain fixed, while the networks associated with the nighttime domain, {E_Y, D_Y^Seg, D_Y^Dep}, undergo retraining. This selective retraining allows the model to adapt and fine-tune its parameters for the unique characteristics and challenges posed by nighttime data.
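A minimal PyTorch sketch of this selective freezing is given below. The module definitions are stand-ins for the actual networks, and the optimizer settings beyond the stated learning rate of 0.0002 (e.g., the Adam betas) are assumptions.

```python
import torch
import torch.nn as nn

def freeze(module: nn.Module) -> None:
    """Fix a network's transferred parameters so the optimizer skips them."""
    for p in module.parameters():
        p.requires_grad = False

# Stand-ins for the transferred networks (illustrative shapes only).
E_X, E_Y = nn.Conv2d(3, 64, 3, padding=1), nn.Conv2d(3, 64, 3, padding=1)
D_X_seg, D_Y_seg = nn.Conv2d(64, 19, 1), nn.Conv2d(64, 19, 1)
D_X_dep, D_Y_dep = nn.Conv2d(64, 1, 1), nn.Conv2d(64, 1, 1)
G_X, G_Y = nn.Conv2d(64, 3, 3, padding=1), nn.Conv2d(64, 3, 3, padding=1)

# Daytime-domain networks stay fixed; nighttime-domain networks retrain.
for m in (E_X, D_X_seg, D_X_dep):
    freeze(m)

trainable = [p for m in (E_Y, D_Y_seg, D_Y_dep, G_X, G_Y)
             for p in m.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=2e-4, betas=(0.5, 0.999))
```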
Semantic Segmentation and Depth Estimation Network
For the first step, we train the multi-task networks {E_Mul, D_Seg, D_Dep} on the daytime images x. In this step, we employ the multi-class cross-entropy loss l_mce for the training of semantic segmentation and an L2 loss for the training of depth estimation. Let S and D be the domains of the semantic segmentation and depth labels, respectively; each task loss compares the network output for x against the corresponding label, and the overall loss is the (weighted) sum of the two task losses.
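A plausible reconstruction of these objectives in the notation above, assuming the per-task losses named in the text and a weighted sum (the weight \lambda_{Dep} is an assumption, as the original equations are not reproduced here):

\mathcal{L}_{Seg} = \mathbb{E}_{(x,s) \sim (X,S)} \left[\, l_{mce}\!\left(D_{Seg}(E_{Mul}(x)),\; s\right) \right]
\mathcal{L}_{Dep} = \mathbb{E}_{(x,d) \sim (X,D)} \left[\, \left\| D_{Dep}(E_{Mul}(x)) - d \right\|_2^2 \right]
\mathcal{L}_{Mul} = \mathcal{L}_{Seg} + \lambda_{Dep}\, \mathcal{L}_{Dep}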
Image-to-Image Translation Network
As the second step, we train the image-to-image translation networks. The image translation generators adopt the attention maps inferred from the feature maps of the semantic segmentation and depth estimation networks. The attention modules (CBAM [55]) are jointly trained with the encoders and generators. Following the CycleGAN framework [22], we adopt the loss functions below to train the image translation network with unpaired data.
Adversarial Loss
Adversarial losses are implemented for both the day-to-night and night-to-day image translation networks, seeking to minimize the distributional gap between translated images and targets. The objectives couple the image translation networks {T_{X→Y}, T_{Y→X}} with their corresponding discriminators {Disc_Y, Disc_X}.
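Assuming the standard adversarial objective of CycleGAN [22] (a reconstruction under that assumption; the exact equation is not reproduced in this text), the day-to-night direction reads as follows, with the symmetric expression for the night-to-day direction:

\mathcal{L}_{adv}(T_{X \to Y}, Disc_Y) = \mathbb{E}_{y \sim Y}\!\left[\log Disc_Y(y)\right] + \mathbb{E}_{x \sim X}\!\left[\log\!\left(1 - Disc_Y(T_{X \to Y}(x))\right)\right]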
Cycle Consistency Loss
The cycle consistency loss is introduced to prevent the image translation network from generating arbitrary images in the target domain, regardless of the input images. The primary goal is to alter only the time of day while preserving all other elements of the scene. To accomplish this, we apply constraints to guarantee the alignment between the input image and the reconstructed image.
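Assuming the standard L1 form used by CycleGAN [22], this constraint can be sketched as:

\mathcal{L}_{cyc} = \mathbb{E}_{x \sim X}\!\left[\left\| T_{Y \to X}(T_{X \to Y}(x)) - x \right\|_1\right] + \mathbb{E}_{y \sim Y}\!\left[\left\| T_{X \to Y}(T_{Y \to X}(y)) - y \right\|_1\right]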
Identity Loss
The identity loss requires that the image translation networks translate only the source domain images and leave images from the target domain unchanged. In the objective, {M'_X, M'_Y} represent the attention maps inferred from the real images {x, y} using the networks designed for the opposite domains.
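A sketch of this objective, assuming the usual L1 identity term of CycleGAN [22] with the attention maps {M'_X, M'_Y} supplied to the translators (how the maps enter the translators is an assumption):

\mathcal{L}_{id} = \mathbb{E}_{y \sim Y}\!\left[\left\| T_{X \to Y}(y;\, M'_Y) - y \right\|_1\right] + \mathbb{E}_{x \sim X}\!\left[\left\| T_{Y \to X}(x;\, M'_X) - x \right\|_1\right]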
Total Loss
The total loss combines the adversarial, cycle consistency, and identity terms, where λ_cyc and λ_id are hyperparameters that control the influence of each loss.
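Under the same assumptions as above, the combined objective takes the standard weighted-sum form:

\mathcal{L}_{total} = \mathcal{L}_{adv}(T_{X \to Y}, Disc_Y) + \mathcal{L}_{adv}(T_{Y \to X}, Disc_X) + \lambda_{cyc}\, \mathcal{L}_{cyc} + \lambda_{id}\, \mathcal{L}_{id}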
Experiments
In this section, we first compare our method with related methods for day-to-night image translation. We then present experiments investigating the validity of the architecture of the proposed method.
Experimental Environments
Datasets
The proposed method requires daytime and nighttime images, along with the corresponding labels for semantic segmentation and depth, during training. However, there is currently no available dataset that encompasses all of these requirements. Consequently, we conducted training using data from two distinct datasets: the Berkeley Deep Drive (BDD) dataset [13] and the Cityscapes dataset [14].
The Berkeley Deep Drive dataset comprises 10,000 RGB images capturing diverse driving scenarios (e.g., highway, urban area, bridge, and tunnel).The dataset includes variations in weather conditions and the time of day.Semantic segmentation labels are provided for these images.The image resolutions are 1280 × 720 pixels.
The Cityscapes dataset offers 5000 RGB daytime images showcasing various driving environments, accompanied by multiple annotations.In particular, the dataset includes disparity information that can be converted to depth.The image resolutions are 2048 × 1024 pixels.
During the training of the network outlined in Section 3.1, we utilized semantic segmentation labels from the BDD dataset and depth labels from the Cityscapes dataset.The image translation network described in Section 3.2 was trained using both daytime and nighttime images from the BDD dataset.
• Learning rate: fixed at lr = 0.0002 for the initial 100 epochs and then linearly decayed to lr = 0 over the next 100 epochs.
• Batch size: set to 4.
During training, we randomly sampled 1000 daytime images with semantic segmentation labels and 1000 nighttime images from the BDD dataset. In addition, 1000 daytime images with depth labels were randomly selected from the Cityscapes dataset. The images from the BDD dataset and the Cityscapes dataset were randomly cropped to 512 × 512 pixels and 824 × 824 pixels, respectively, and then resized to 256 × 256 pixels after cropping. For testing, 1000 daytime images were randomly sampled from the BDD dataset; these images were center-cropped to 512 × 512 pixels before being resized to 256 × 256 pixels.
Comparison
We compare the proposed method with the following models.
• SemGAN [41] adopted a semantic consistency loss to maintain semantic information during image-to-image translation.
• AugGAN [43,44] learned image translation and semantic segmentation simultaneously.
• Lee et al. [64] transfer-learned the weights of semantic segmentation networks to the day-to-night image translation networks.
• UNIT [27] achieves unsupervised image-to-image translation through VAEs [16] from different domains that share a latent space.
• MUNIT [30] is a multimodal unsupervised image-to-image translation method that extends UNIT [27].
Evaluation Metrics
We evaluate the proposed and compared methods using the following metrics for quantitative comparisons.
•
The Fréchet Inception Distance (FID) [65] measures the Fréchet distance between the distributions of features extracted from the Inception-V3 network [66] for real and generated images.
•
The Kernel Inception Distance (KID) [67] calculates the squared Maximum Mean Discrepancy (MMD) between the Inception-V3 [66] features of the real and translated samples. This comparison is conducted through the application of a polynomial kernel; a minimal sketch of the estimator is given after this list.
•
The Learned Perceptual Image Patch Similarity (LPIPS) metric [68], utilized to assess the diversity of an image set, computes the average feature distances between all pairs of images. Specifically, it gauges the translation diversity by evaluating the similarity between distinct deep features extracted from the pre-trained AlexNet [69].
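For concreteness, here is a minimal sketch of the KID computation (assuming the commonly used unbiased squared-MMD estimator with a degree-3 polynomial kernel; the feature matrices stand in for Inception-V3 features, and the data below are random placeholders):

```python
import numpy as np

def polynomial_kernel(X, Y, degree=3, gamma=None, coef0=1.0):
    """k(x, y) = (gamma * <x, y> + coef0) ** degree, gamma defaults to 1/d."""
    if gamma is None:
        gamma = 1.0 / X.shape[1]
    return (gamma * X @ Y.T + coef0) ** degree

def kid(real_feats, fake_feats):
    """Unbiased squared-MMD estimate between two feature sets
    (rows would be Inception-V3 features of real and translated images)."""
    m, n = len(real_feats), len(fake_feats)
    k_rr = polynomial_kernel(real_feats, real_feats)
    k_ff = polynomial_kernel(fake_feats, fake_feats)
    k_rf = polynomial_kernel(real_feats, fake_feats)
    # Exclude diagonal terms for the unbiased estimator.
    term_rr = (k_rr.sum() - np.trace(k_rr)) / (m * (m - 1))
    term_ff = (k_ff.sum() - np.trace(k_ff)) / (n * (n - 1))
    term_rf = 2.0 * k_rf.mean()
    return term_rr + term_ff - term_rf

rng = np.random.default_rng(0)
real = rng.normal(size=(100, 2048))           # placeholder features
fake = rng.normal(loc=0.1, size=(100, 2048))  # placeholder features
print(f"KID estimate: {kid(real, fake):.4f}")
```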
Comparison to the Related Work
We conducted experiments to compare our proposed method with the related works as mentioned in Section 4.1.3.
Figure 4 shows the day-to-night image translation results of each method. CycleGAN [22], Lee et al.'s method [64], UNIT [27], and the proposed method generate depictions of streetlights, while the other methods do not. Furthermore, our method generates visually more detailed depictions (i.e., color, shape, and position) of the streetlights than CycleGAN [22], Lee et al.'s method [64] and UNIT [27]. Translation failures in the sky area, sandwiched between buildings, are observed in the results of CycleGAN [22], AugGAN [43,44], and Lee et al.'s method [64]. UNIT [27] and MUNIT [30] appear to fail in their translations, as they darken the entire scene, giving the impression of inverting the colors in the images. In contrast, the proposed model's results demonstrate the preservation of the color information of each object in the input image. The results of our method show the car and road in front of the ego vehicle being illuminated by headlights.
Table 1 shows the quantitative evaluation with FID [65], KID [67], and Diversity (LPIPS [68]). Our method outperforms the other methods in both the FID [65] and KID [67] metrics, which evaluate the realism of the results. AugGAN [43,44] obtained the highest score in Diversity (LPIPS [68]). However, it is crucial to note that achieving a high score in Diversity (LPIPS [68]) does not necessarily indicate a suitable result, as this metric does not consider realism. This point is underscored by the presence of images in the bottom row of AugGAN's outputs in Figure 4. Here, the translation between daytime and nighttime failed, and the increased Diversity (LPIPS [68]) can be interpreted as the result of images closer to daytime (which lack realism). Therefore, the Diversity (LPIPS [68]) metric should be evaluated comprehensively alongside realism assessments.
In this context, our proposed model received the highest evaluations in both the quantitative and qualitative assessments of realism. Simultaneously, it recorded values close to the Diversity (LPIPS [68]) of real night images. Hence, our proposed model can be deemed to have performed the best overall.
Table 1. Quantitative evaluation with FID [65], KID [67], and Diversity (LPIPS [68]). The up and down arrows next to the metrics indicate that a larger or smaller numerical value corresponds to a better outcome, respectively. The bold numbers highlight the best results.
Network Settings
We examined different configurations of the proposed method to optimize the architectural composition. Initially, we analyzed the impact of combinations of attention maps. Subsequently, we demonstrated the effectiveness of sharing feature maps within the decoder during the day-to-night cycle, and alternatively calculating them separately within the night-to-day cycle. (Figure 4 caption fragment: results of CycleGAN [22], SemGAN [41], AugGAN [43,44], Lee et al. [64], UNIT [27], MUNIT [30], and our method, respectively.)
Pipelines of the Semantic and Geometric Feature Maps
The proposed method, as shown in Figure 2b, incorporates the sharing of feature maps from the semantic segmentation and depth estimation decoders during the day-to-night cycle. While a similar process could be applied to the night-to-day cycle, estimating semantic segmentation and depth maps from nighttime images poses a challenge. There is a concern that sharing low-accuracy estimates may mislead the reconstruction process. To address this, we conducted experiments with three types of feature map pipelines throughout the night-to-day cycle. Figure 5 illustrates the visual results of these three pipelines.
In the night-to-day cycle, the expressions of the sky and streetlights vary based on the chosen pipeline. Sharing the feature maps, or not adopting attention maps, within the night-to-day cycle can lead to expression failures in the sky. On the other hand, when we separately extract the feature maps for translation and reconstruction during the night-to-day cycle, more detailed expressions of the streetlights are observed. Table 2 indicates that the translated images obtained by separately extracting the feature maps during the night-to-day cycle are more realistic in all metrics.
We adopt the pipeline that shares the feature maps on the day-to-night cycle and extracts them separately on the night-to-day cycle for our proposed method.
Attention Modules
The proposed method aims to enhance the semantic and geometric information available to the day-to-night image translation network. As a means of doing so, we introduced attention maps derived from the relevant information in the input images, which were then utilized in the image translation network.
We assumed that the semantic and geometric information could be extracted as feature maps by the decoders trained to infer semantic segmentation and depth. Based on this assumption, we generated attention maps from the feature maps calculated by each decoder. In our method, one image translation network adopts two different sizes of attention maps for semantic and geometric information, respectively: {m_T^k}, k ∈ {Seg_1, Dep_1, Seg_2, Dep_2}. We conducted an experiment to verify the effects of combinations of these attention maps. Several combinations of different types and sizes of attention maps were applied to the image-to-image translation networks. Figure 6 and Table 3 present visual and quantitative comparisons of the results for each network combination. (Table 3 notes: 2 the network adopts only small-sized attention maps; 3 only large-sized attention maps; 4 only semantic attention maps; 5 only geometric attention maps. The up and down arrows next to the metrics indicate that a larger or smaller numerical value corresponds to a better outcome, respectively. The bold numbers highlight the best results.)
In the visual comparison, translation failures in the sky area, sandwiched between the buildings, are observed in all combinations other than the proposed method. Additionally, the expressed size of the streetlights depends on the combination of attention maps, and the streetlights tended to appear larger when the image translation network adopted combinations including particular attention maps {m_T^k}.
The quantitative results indicate that the image translation network with all types and sizes of attention maps achieved the best results in both metrics.
Based on both the visual and quantitative results, we utilize all attention maps in our proposed method.
Discussion
Based on our assumption that the semantic segmentation and depth decoders can extract the semantic and geometric information from the input image, our method infers the semantic and geometric attention maps from the related feature maps extracted by the decoders. The attention maps were created from two different sizes of feature maps calculated from each decoder and applied to the image translation network. To investigate the necessity of these diverse types and sizes of attention maps for image translation, we conducted an experiment. The experimental results show that employing all types of attention maps yielded the best results in both the qualitative and quantitative evaluations.
Additionally, an experiment was performed to identify the optimal conditions for the pipeline of decoder feature maps in the image translation and reconstruction networks during the night-to-day cycle. The comprehensive results validate the effectiveness of our approach, demonstrating that introducing attention maps inferred from two different-sized feature maps from each decoder and extracting feature maps separately for the image translation and reconstruction networks during the night-to-day cycle yield the best performance.
Looking ahead, future efforts should include applying the proposed model to adverse weather conditions, such as rain or fog. In addition, further research is needed on leveraging the translated images to enhance or evaluate model training.
Figure 1 .
Figure 1. The concept of our proposed method. Semantic and geometric information of input images is extracted as feature maps by a pre-trained semantic segmentation and depth network. Utilizing attention modules, semantic and geometric spatial attention maps are deduced from these feature maps. Subsequently, both semantic and geometric attention maps are integrated into the image-to-image translation network.
Figure 2 .
Figure 2. The overview of the proposed method. The training process consists of two distinctive steps, (a) the semantic segmentation and depth multi-task network and (b) the image-to-image translation network. During the second step, encoders and decoders utilize the pre-trained parameters obtained in the first step, and attention modules infer spatial attention maps from the feature maps of the decoders. Throughout only the day-to-night cycle, the feature maps of the decoders are shared between the image translation and reconstruction processes.
… the semantic segmentation decoder D Seg X, the depth decoder D Dep X, and four CBAM [55] modules {A k X } k∈{Seg 1 ,Dep 1 ,Seg 2 ,Dep 2 }. The encoder E X and the decoders D Seg X and D Dep X utilize the parameters of the encoder E Mul and the decoders D Seg and D Dep of Section 3.1. As shown in Figure 3, the decoders D Seg X and D Dep X are utilized for the extraction of feature maps { f k X ∈ R H×W×C } k∈{Seg 1 ,Dep 1 ,Seg 2 ,Dep 2 }. Then, the attention maps are generated from these feature maps: {A k X ( f k X ) =: m k X ∈ R H×W } k∈{Seg 1 ,Dep 1 ,Seg 2 ,Dep 2 } =: M X. The inferred attention maps M X are applied to the generator G X by pixel-wise multiplication. Additionally, channel attention maps are inferred and implemented within the generator G X.
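As an illustration of this step, the following minimal PyTorch-style sketch (not the authors' implementation; module names and tensor shapes are assumed for the example) shows how a CBAM-like spatial attention map of size H×W can be computed from a decoder feature map and applied to a generator feature map by pixel-wise multiplication.

import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    # CBAM-style spatial attention: pool over channels, then conv + sigmoid.
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        avg_pool = feat.mean(dim=1, keepdim=True)      # (B, 1, H, W)
        max_pool, _ = feat.max(dim=1, keepdim=True)    # (B, 1, H, W)
        return torch.sigmoid(self.conv(torch.cat([avg_pool, max_pool], dim=1)))

# Illustrative shapes only: a decoder feature map and a generator feature map.
attention = SpatialAttention()
decoder_feat = torch.randn(1, 64, 128, 256)
generator_feat = torch.randn(1, 64, 128, 256)
m = attention(decoder_feat)                 # spatial attention map, shape (1, 1, 128, 256)
generator_feat = generator_feat * m         # pixel-wise multiplication, broadcast over channels

In the paper, four such spatial attention modules are used per domain (two sizes for each of the segmentation and depth decoders), and channel attention maps are additionally applied inside the generator.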
Figure 3 .
Figure 3. Structure of the semantic and geometric attention module. The proposed method adopts four spatial attention modules (i.e., A Seg 1 T, A Dep 1 T, A Seg 2 T, and A Dep 2 T). In the first step, the multi-task networks (i.e., {E Mul, D Seg, D Dep}) are trained exclusively on daytime data. In the subsequent step, the parameters of the initially trained networks {E Mul, D Seg, D Dep} are transferred to the image translation networks {E X, E Y}, {D Seg X, D Seg Y}, and {D Dep X, D Dep Y}. Following this transfer, the day-to-night and night-to-day image translation networks {T X→Y := G X (E X (•)), T Y→X := G Y (E Y (•))} and the discriminators {Disc X, Disc Y} are trained using the CycleGAN framework [22]. During the training of the image translation networks, the transferred parameters for the daytime domain networks, specifically {E X, D Seg X, D Dep X}, remain fixed, and only those of the night-to-day image translation networks E Y, D Seg Y, and D Dep Y are retrained. This selective retraining allows the model to adapt and fine-tune its parameters for the unique characteristics and challenges posed by nighttime data.
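The selective retraining described above can be sketched as follows; this is a schematic PyTorch-style example (function and network names are placeholders, not the authors' code) of freezing the daytime-domain networks while keeping the nighttime-domain networks and the generators trainable.

import itertools
import torch

def build_translation_optimizer(day_nets, night_nets, generators, lr=2e-4):
    # Freeze the transferred daytime-domain networks (e.g. E_X, D_Seg_X, D_Dep_X).
    for net in day_nets:
        net.requires_grad_(False)
    # Nighttime-domain networks (e.g. E_Y, D_Seg_Y, D_Dep_Y) and the generators stay trainable.
    trainable = list(night_nets) + list(generators)
    params = itertools.chain(*(n.parameters() for n in trainable))
    return torch.optim.Adam(params, lr=lr, betas=(0.5, 0.999))

The learning rate and Adam betas shown here are common CycleGAN defaults rather than values reported in this paper.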
Figure 5 .
Figure 5. Comparison of pipelines for feature maps from the semantic segmentation and depth generators. We present (a) input images from the source domain, followed by the outputs from networks employing three distinct pipelines with attention modules. The pipeline in (b) shares feature maps during both day-to-night and night-to-day cycles, while that in (c) shares feature maps exclusively during the day-to-night cycle, without adopting attention maps during the night-to-day cycle. Additionally, the pipeline in (d) shares feature maps only during the day-to-night cycle and separately extracts them during the night-to-day cycle. The red-colored boxes denote the sky area between buildings or trees, while the yellow-colored boxes highlight areas containing street lights.
Figure 6 .
Figure 6. Comparison of combinations of attention maps. Each row shows, from top to bottom, (a) input images from the source domain and subsequently the outputs from networks employing four distinct combinations of attention maps. The combinations are (b) small-sized attention maps {m Seg 1 T, m Dep 1 T}, (c) large-sized attention maps {m Seg 2 T, m Dep 2 T}, (d) attention maps generated from feature maps of the depth decoder {m Dep 1 T, m Dep 2 T}, (e) attention maps generated from feature maps of the semantic segmentation decoder {m Seg 1 T, m Seg 2 T}, and (f) all attention maps {m Seg 1 T, m Dep 1 T, m Seg 2 T, m Dep 2 T}. The red-colored boxes denote the sky area between buildings or trees, while the yellow-colored boxes highlight areas containing streetlights.
{m k T } k∈{Seg 1 ,Dep 1 ,Seg 2 ,Dep 2 }, where T ∈ {X, Y} indicates the time domain. Each attention map is inferred from a different size or type of feature map. {m Seg 1 T, m Seg 2 T} and {m Dep 1 T, m Dep 2 T} are generated from the feature maps of the semantic segmentation and depth decoders, respectively. Moreover, {m Seg 1 T, m Dep 1 T} are generated from relatively small-sized feature maps, whereas {m Seg 2 T, m Dep 2 T} are generated from relatively large-sized feature maps.
Table 2 .
Quantitative evaluation of pipelines of the feature maps for attention maps. The up and down arrows next to the metrics indicate that a larger and smaller numerical value corresponds to a better outcome, respectively. The bold numbers highlight the best results.
Table 3 .
Quantitative evaluation of attention map combinations.
Table 3 column headers: m Seg 1 T, m Dep 1 T, m Seg 2 T, m Dep 2 T. 1 T indicates the time domain: T ∈ {X, Y}.
| 7,231 | 2024-02-01T00:00:00.000 | ["Computer Science", "Engineering"] |
Composition of carbopol 940 and HPMC affects antibacterial activity of beluntas (Pluchea indica (L.)) leaves extract gel
Indonesia is a country known for its biological wealth, one source of which is beluntas leaves. Beluntas leaves have antibacterial potential, so they are suitable for formulation into medicinal preparations, especially gels. This study aims to determine the influence of variations in the carbopol 940 and hydroxypropyl methylcellulose (HPMC) gel base on the physical properties of beluntas (Pluchea indica (L.)) leaf extract gel preparations, and to determine the influence of the gel on antibacterial activity. The extract was obtained by maceration using 96% ethanol as the solvent. Each formula uses 15% beluntas leaf extract. Gels were made with four gel base variations, namely F0 (0.5% carbopol, 1% HPMC), FI (1% carbopol, 1.5% HPMC), FII (1.5% carbopol, 2.5% HPMC), and FIII (2% carbopol, 3% HPMC). The gels were evaluated for their physical properties, including organoleptic properties, viscosity, pH, homogeneity, spreadability, adhesion, and freeze-thaw cycling. The gels were then tested for antibacterial activity against Staphylococcus aureus and Pseudomonas aeruginosa by the cup-plate diffusion method. The data obtained were analyzed with one-way ANOVA and LSD at a 95% confidence level. The results showed that the beluntas leaf extract gel meets the requirements for organoleptic properties, homogeneity, gel adhesion (> 4 s), gel viscosity (2000-50,000 cps), and gel pH (4.5-6.5). However, the gel does not meet the requirements for spreadability (5-7 cm) and freeze-thaw cycling. Based on these test results, the best composition of carbopol 940 and HPMC in the beluntas leaf extract gel with antibacterial activity against Staphylococcus aureus and Pseudomonas aeruginosa is 1% carbopol and 1.5% HPMC. The antibacterial activity of this formula is categorized as strong.
INTRODUCTION
time. If the moisture content is high, it can promote mould growth. The dried leaves were then powdered and sifted. The purpose of powdering was to maximize recovery of the active substances by enlarging the contact area with the solvent. The powder was stored at room temperature in a tightly wrapped glass container, protected from sunlight, and ready for extraction (Mamonto, 2014).
Extraction
A total of 500 g of beluntas leaf simplicia powder was weighed and macerated with 2 L of 96% ethanol for 3 x 24 hours at room temperature with occasional stirring, then filtered. The residue was remacerated with the same method and solvent until the solvent was almost transparent. The filtrates obtained were then combined and evaporated with a rotary evaporator at 40-50 °C. Evaporation of the extract was continued on a water bath until a thick extract was obtained (Bahar et al., 2015).
Topical gel formulation of beluntas leaf extract: Formula
The formula in this research is a modification that refers to the research of Saraung et al. (2018). The beluntas leaf extract gel is prepared with variations of carbopol 940 at 0.5% (F0), 1% (FI), 1.5% (FII), and 2% (FIII), and HPMC at 1% (F0), 1.5% (FI), 2.5% (FII), and 3% (FIII), as presented in Table 1. Topical beluntas leaf extract gel making procedure: the gel is formulated with a combined base of HPMC and carbopol 940. The HPMC is dispersed in hot water for fifteen minutes. In a separate mortar, the carbopol is dispersed with hot water until homogeneous, then TEA is added until the dispersion is clear. The dispersed HPMC is transferred into the mortar containing the carbopol and stirred until homogeneous. Methyl paraben is dissolved in propylene glycol, mixed into the base, and stirred until homogeneous. Aquadest is added little by little while stirring until homogeneous. The extract is added last to the gel, which is then stirred until homogeneous (Al-Suwayeh et al., 2014; Lane, 2013).
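For orientation, the percentage compositions above can be converted into weighed amounts for a given batch; the Python sketch below assumes a 100 g batch purely for illustration, and the fixed extract level follows the 15% stated for the test formulas.

# Gelling-agent compositions (% w/w) of the four bases; batch size is an assumption.
formulas = {
    "F0":   {"carbopol_940": 0.5, "hpmc": 1.0},
    "FI":   {"carbopol_940": 1.0, "hpmc": 1.5},
    "FII":  {"carbopol_940": 1.5, "hpmc": 2.5},
    "FIII": {"carbopol_940": 2.0, "hpmc": 3.0},
}
EXTRACT_PCT = 15.0   # beluntas leaf extract (F0 is used without extract as the negative control)
BATCH_G = 100.0      # assumed batch size in grams

for name, composition in formulas.items():
    grams = {ingredient: pct * BATCH_G / 100.0 for ingredient, pct in composition.items()}
    if name != "F0":
        grams["extract"] = EXTRACT_PCT * BATCH_G / 100.0
    print(name, grams)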
Evaluation of gel preparations: Organoleptic test
An organoleptic test is conducted visually by observing the shape, color, and smell of the beluntas leaf extract gel preparations (Widia, 2012).
Viscosity test
The viscosity of the gel was determined with a Brookfield viscometer using spindle no. 4 attached to the device, regardless of the flow type. The preparation is placed in a glass container, and the mounted spindle is lowered until the spindle mark is immersed in the preparation. The instrument speed is set at 3 rpm and the scale is read when the red needle is in a stable position. The viscosity requirement based on the SNI is 2000-50,000 cps (SNI, 1992; Sinko, 2011).
The pH test
The pH measurement is done using a pH meter. The instrument is first calibrated using a neutral buffer solution (pH 7.01) and an acidic buffer solution (pH 4.01) until it displays the correct pH values. The electrodes are then rinsed with distilled water and dried with a tissue. The sample is prepared at a concentration of 1% by weighing 1 gram of the preparation and dissolving it in distilled water up to 100 mL, then stirring until homogeneous. The electrodes are dipped in the solution and the result is recorded. The pH test is important to assess the acidity of the gel preparation so that it does not irritate the skin. The pH requirement according to SNI (1992) is a value between 4.5 and 6.5 (Naibaho et al., 2013).
Homogeneity test
A 0.5 g amount of gel is placed on a transparent glass slide, covered with another transparent glass, and observed for coarse grains. A good gel has no coarse grains (Widia, 2012).
Dispersion test
A total of 0.5 g of gel is placed on a round glass plate, another glass plate is placed on top and left for one minute, and the diameter of the gel spread is measured. A 150 gram load is then added and left to stand for 1 minute, and the diameter of the spread is observed again. The spreadability of the preparation must refer to the applicable SNI standard (1992) of 5-7 cm (SNI, 1992; Naibaho et al., 2013).
Adhesion test
For the adhesion test, 1 g of gel is placed between two glass objects. The gel sandwiched between the two glass objects is pressed with a load of 1 kg for 5 minutes on the test apparatus. After 5 minutes, the load is removed and the time at which the two glass objects detach is recorded (Naibaho et al., 2013). The accepted adhesion requirement for topical preparations is more than 4 seconds (Mukhlishah et al., 2016).
Freeze-thaw cycling test
Gel preparations are put in tightly closed glass pots, then frozen at a temperature of -18ºC for 24 hours, after which they are thawed at 45ºC for 24 hours (1 cycle). Then place the gel preparation at room temperature for 24 hours. The treatment is repeated in three cycles. Observe the physical changes at the end of the cycle, i.e., pH test, scatter power, adhesion, and viscosity (Wang and Xie, 2013).
Sterilization tools
The tools used in this study were cleaned first, then wrapped in opaque paper, then inserted into the autoclave at 121°C for 15 minutes (Bahar et al., 2015).
Media preparation
The media Brain Heart Infusion Broth (BHI-B) is made by weighing 37 grams of media powder and then dissolved in 1 liter of aqua dest while heated and stirred until homogeneous. Media is sterilized in autoclaves at 121°C for 15 minutes (Bell et al., 2016;CLSI, 2012).
Mueller-Hinton Agar (MHA) media is made by weighing 38 grams of MHA media powder dissolved into 1 liter of an aquadest while heated and stirred until homogeneous. The media is sterilized in the autoclave at 121°C for 15 minutes. Then the MHA media is put in a petri dish as much as 20 mL and left to harden (Bell et al., 2016;CLSI, 2012).
Making the standard turbidity solution (0.5 McFarland solution)
The 0.5 McFarland turbidity standard is made by mixing 9.95 mL of 1% H2SO4 with 0.05 mL of 1.175% BaCl2 solution, then shaking until a turbid suspension is formed. This turbidity is used as the standard for the bacterial suspension turbidity test and is equivalent to a bacterial density of 10^8 CFU/mL (Paputungan et al., 2019).
Bacterial suspension making
A total of 100 μL of bacterial suspension from stock is inserted into 1 mL of BHI medium and incubated at 37 °C for 4-6 hours. Next, 100 μL is taken and diluted with 0.9% sterile NaCl until its turbidity matches the 0.5 McFarland (10^8 CFU/mL) solution (Bell et al., 2016; CLSI, 2012).
Antibacterial activity testing
The antibacterial activity of the beluntas leaf extract gel is tested by the cup-plate diffusion method. Mueller-Hinton Agar (MHA) medium is prepared by pouring 25 mL of MHA into each of six Petri dishes while still warm and leaving it to solidify. Wells are made in the MHA medium using sterile pipettes, with equal spacing between wells to form well-shaped holes. The bacterial suspension is streaked evenly over the entire surface of the MHA using a sterile cotton swab. Each well is then filled using a sterile micropipette. The wells are labeled F0 (negative control) without extract (0.5% carbopol, 1% HPMC), F1 with 15% extract (1% carbopol, 1.5% HPMC), F2 with 15% extract (1.5% carbopol, 2.5% HPMC), and F3 with 15% extract (2% carbopol, 3% HPMC). The positive control used was Medi-Klin® clindamycin phosphate gel 1%. The plates are then incubated at 37 °C for 24 hours. Antibacterial activity is determined by measuring the diameter of the inhibition (clear) zone with a vernier caliper (Bahar et al., 2015).
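The zone measurement described above reduces to simple arithmetic; the following short Python sketch (values are illustrative, not measured data) averages the two caliper readings and subtracts the 5 mm well diameter.

WELL_DIAMETER_MM = 5.0

def inhibition_zone(horizontal_mm, vertical_mm):
    # Average the horizontal and vertical readings, then remove the well itself.
    return (horizontal_mm + vertical_mm) / 2.0 - WELL_DIAMETER_MM

print(inhibition_zone(18.4, 17.6))   # 13.0 mm -> "strong" (10-20 mm) on the Davis & Stout scale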
Data Analysis
The data from the beluntas leaf extract gel activity test on the growth of Staphylococcus aureus and Pseudomonas aeruginosa were analyzed statistically using one-way ANOVA and LSD at a 95% confidence level.
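A minimal Python sketch of this analysis is given below; it uses scipy's one-way ANOVA followed by pairwise t-tests as an LSD-style comparison at the 95% confidence level (a strict Fisher LSD would use the pooled ANOVA error term), and the inhibition-zone values are placeholders rather than the study's data.

import numpy as np
from scipy import stats

# Placeholder inhibition-zone diameters (mm), three replicates per formula.
groups = {
    "FI":   np.array([12.1, 11.8, 12.4]),
    "FII":  np.array([10.3, 10.0, 10.6]),
    "FIII": np.array([8.9, 9.2, 8.7]),
}

f_stat, p_value = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

names = list(groups)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        t, p = stats.ttest_ind(groups[names[i]], groups[names[j]])
        verdict = "significant" if p < 0.05 else "not significant"
        print(f"{names[i]} vs {names[j]}: p = {p:.4f} ({verdict})")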
RESULT AND DISCUSSION
The results showed that the different compositions of carbopol 940 and HPMC in the beluntas leaf extract gel affected all of the measured parameters. Formula I, which contains 1% carbopol and 1.5% HPMC, is the best formula based on the organoleptic, viscosity, pH, homogeneity, dispersibility, adhesion, freeze-thaw cycling, and antibacterial activity tests.
Evaluation of beluntas leaf gel: Organoleptic test
Organoleptic testing is carried out to observe the shape, color, and smell of the gels. The organoleptic test results can be seen in Table 2. Figure 1 and Table 2 show the organoleptic observations of the gel preparations during three weeks of storage at room temperature. In formulations F0, FI, and FII, there was no significant change in smell, dosage form, or color from week 0 to week 3. In formulation FIII at week 0, the preparation had the typical smell of the extract, a thick texture, and a greenish-brown color; however, there was a significant change from week 1 to week 3, in which the texture of the preparation became thick and somewhat rigid. The color and texture differences between formulations are influenced by the different concentrations of the HPMC and carbopol gel base. The smaller the concentration of the HPMC and carbopol gel base, the more intense the resulting color, given the same extract concentration in each formula. In addition, the resulting texture or shape also differs: the higher the concentration of the HPMC and carbopol gel base used, the thicker the texture becomes (Rajalakshmi et al., 2010). Stable gel properties can be obtained by using a combination of HPMC and carbopol gel bases. HPMC forms a gel base by absorbing the solvent so that the liquid is retained, and it increases fluid containment by forming a compact fluid mass. Meanwhile, carbopol disperses easily in water and, even at a small concentration, can serve as a gel base with sufficient viscosity (Migliozzi et al., 2019). Regarding the stability of the physical properties of the HPMC-carbopol combination, research conducted by Hasyim et al. (2011) showed that the most optimal physical stability of the gel was obtained with the HPMC gel base. The variation in the composition of the gelling agents carbopol 940 and HPMC affected the gel's physical properties and antibacterial activity. A good antibacterial gel is one that does not change in texture, smell, or color and has antibacterial activity.
Viscosity testing
Viscosity testing aims to determine the viscosity of the gel preparations, expressed in centipoises (cps), which relates to the ease of application on the skin. The viscosity was determined using a Brookfield viscometer. The results of the viscosity testing can be seen in Table 3. Based on Table 3, the higher the viscosity of a gel, the more stable the gel, because particle movement becomes more difficult in a more viscous gel. The combination of carbopol and HPMC gel bases showed increased viscosity: the higher the concentration of the carbopol and HPMC gel base, the higher the viscosity of the gel. Formula F0 (negative control), the gel without added extract, has a lower viscosity than the other formulas containing extract. Nevertheless, all formulas meet the requirement for a good gel preparation viscosity of 2000-50,000 cps (Sinko, 2011).
pH test
The pH of the preparations was tested using a pH meter. The pH of gel preparations should correspond to the pH of the skin, which is 4.5-6.5. The pH test results can be found in Table 3. Statistical testing showed no significant difference in pH values between the formulas (p > 0.05). The results showed that carbopol has the largest coefficient and a positive influence compared with HPMC: the higher the amount of carbopol, the more acidic the pH. The higher the concentration of carbopol and the lower the concentration of HPMC in a formula, the more acidic the pH of the preparation. This is because carbopol is acidic, so as its amount increases, the pH of the preparation also becomes more acidic (Dantas et al., 2016; Kaur, 2013). When carbopol is added to water, it disperses and partially decomposes, forming hydrogen bonds with water; carbopol becomes acidic due to ionization of the carboxyl group (Migliozzi et al., 2019). This indicates that the beluntas leaf gel preparations meet the pH requirement because the pH remains within the range of the skin (4.5-6.5), so the gel is safe to use and does not irritate the skin (Naibaho et al., 2013).
Homogeneity test
Homogeneity tests are carried out to assess the homogeneity of the gel. Testing is carried out by applying the gel to a glass object; the gel is said to be homogeneous in the absence of coarse particles. The observations of the gel homogeneity test from week 0 to week 3 show no coarse particles on the glass object, so the four gel formulations are declared homogeneous. The gel preparations of each formula show an even color, so it can be concluded that the four formulas have good homogeneity. The homogeneity test results indicate that the variations in HPMC and carbopol concentration did not affect gel homogeneity. Prior research testing beluntas leaf extract cream at concentrations of 5%, 10%, and 15% showed that the homogeneity of the cream preparations before and after storage was maintained, with no coarse grains in the cream (Suru et al., 2019).
Dispersion test
Gels with good dispersion power provide a good spread of the medicinal ingredients, so the treatment will be more effective. The greater the spreadability, the easier the gel preparation is to apply on the skin surface; this relates to the distribution of the active substance in the preparation (Naibaho et al., 2013). HPMC has a positive coefficient, meaning it increases the gel spreading power, but the higher the amount of HPMC, the more the spreadability decreases because the preparation becomes thicker. Meanwhile, carbopol has a negative coefficient, meaning it lowers the spreading power.
The spreadability test results showed that the gel formulation with the best spreading power is formulation FI, with an average of 4.71 cm. The spreadability requirement for topical preparations is about 5-7 cm (SNI, 1992; Naibaho et al., 2013); however, in this study the spreadability falls below the specified requirement. This condition is affected by the consistency of the gel mass, resulting in a less than maximal spread. Differences in spreadability greatly influence the speed of diffusion of the active substance across the membrane: the wider the area over which the preparation spreads, the greater the diffusion coefficient, so drug diffusion increases, and the greater the dispersion power, the better the preparation (Kermany, 2010).
Adhesion test
Based on Table 3, the adhesion test results for the beluntas leaf extract gel showed that the shortest adhesion time is for the gel with 0.5% carbopol (formula F0), with an average of 6.40 seconds. This occurs because the gel with 0.5% carbopol contains more water. The longest adhesion is for the gel with 3% HPMC (formula FIII), with an average of 12.70 seconds; with higher HPMC levels, more colloid is formed, which increases adhesiveness. The accepted stickiness requirement for topical preparations is more than four seconds (Mukhlishah et al., 2016). Carbopol can form colloids with the addition of water because carbopol thickens water, becoming viscous and sticky. This indicates that the beluntas leaf gels with variations of HPMC and carbopol meet the adhesion requirement, since the adhesion produced is more than 1 second. The adhesion test results showed that the higher the HPMC concentration used in each formula, the longer the gel remains attached, because HPMC can form colloids with the addition of hot water. Colloids are formed because the dispersed substance absorbs the dispersing medium so that it becomes viscous and sticky; therefore, the higher the HPMC level, the more colloid is formed and the greater the adhesion. HPMC has a positive coefficient, meaning it increases adhesion, while carbopol has a negative coefficient, meaning it decreases adhesion. This indicates that adhesiveness increases in line with the increase in HPMC concentration because HPMC has a positive effect on adhesiveness (Migliozzi et al., 2019).
Freeze-thaw cycling testing
Freeze-thaw cycling testing is one way to accelerate the evaluation of the physical stability of gel preparations, carried out for three cycles. In each cycle, the preparation is stored in a refrigerator at -18 °C for 24 hours and then transferred to a climatic chamber at 45 °C for 24 hours, after which it is placed at room temperature for 24 hours. After each cycle, the physical tests of the gel are repeated, including pH, spreadability, adhesion, and viscosity.
The results of freeze-thaw cycling for three cycles showed a significant change in the pH of the preparations. This is due to the different concentrations of the HPMC and carbopol base used: the higher the carbopol concentration and the lower the HPMC concentration, the more acidic the pH value. It is also affected by the extreme temperature difference during storage, from -18 °C to 45 °C, which causes an increasingly acidic pH (Dantas et al., 2016; Kaur, 2013). The evaluation of spreadability after the freeze-thaw cycling test showed significant changes in all formulas. The obtained spreadability values are low and do not meet the specification for good gel spreadability, owing to changes in the consistency of the gel preparations (Rajalakshmi et al., 2010). The adhesion after freeze-thaw cycling showed a significant decrease compared with before storage, but the obtained adhesion values still met the requirement of more than 1 second. The decreased adhesion is due to the decreasing viscosity value (Amalia, 2012). The viscosity results after the freeze-thaw cycling test showed significant changes in formulas F0 and FI. This is due to the small carbopol concentration and the larger HPMC concentration used. The use of a small carbopol concentration reduces the resistance of the gel preparations because the three-dimensional colloidal network is weaker, making it difficult to retain water for a long time during low-temperature storage. The smaller the carbopol concentration used in the preparation, the more unstable the preparation is during storage, whereas preparations with a high carbopol concentration tend to be more stable in storage (Kermany, 2010).
Antibacterial activity test results
The antibacterial activity of the beluntas leaf extract gel was tested by the cup-plate diffusion method, in which wells are made and filled with the gel preparations to be tested, followed by incubation at 37 °C for 24 hours. The clear (inhibition) zone around the well, indicating the absence of bacterial growth, is then observed. The diameter of the inhibition zone is measured with a vernier caliper, horizontally and vertically, and the result is reduced by the well diameter of 5 mm (Hanum and Mimiek, 2015). According to Davis and Stout, there are several categories of bacterial inhibitory strength: an inhibition diameter of ≥ 20 mm is categorized as very strong, 10-20 mm as strong, 5-10 mm as medium, and ≤ 5 mm as weak (Rita, 2010). The antibacterial activity results against Staphylococcus aureus and Pseudomonas aeruginosa can be seen in Table 4. Among the gel preparations made, F0 as the negative control used a formulation containing only the gel base without the active substance of beluntas leaf extract. The gel base serves as a correction factor because it contains the preservative methyl paraben, which may have antibacterial activity. However, the results showed that the gel base had no antibacterial activity, as no inhibition zone was formed, whereas the other gel formulas formed inhibition zones. The positive control, clindamycin phosphate gel 1%, has an inhibitory power in the very strong category. The HPMC and carbopol gel base at a small concentration with the addition of beluntas leaf extract has a greater inhibitory power than the HPMC and carbopol base at a large concentration mixed with beluntas leaf extract, which resulted in a smaller inhibitory power. This is because the gel base is difficult to diffuse, so the active substance of the beluntas leaf extract cannot be properly released from the gel base, and the inhibition of bacteria becomes smaller. The reduction in inhibition can be correlated with the gel viscosity, due to the influence of the increasing carbopol and HPMC levels that vary by formula. The greater the carbopol and HPMC levels, the higher the viscosity of the preparation, and the greater the viscosity, the greater the resistance; this prevents the release of the active substance and results in decreased inhibitory activity of the gel formulations against Staphylococcus aureus and Pseudomonas aeruginosa (Sinko, 2011).
Several previous studies have examined beluntas leaf extract as a bacterial inhibitor. Manu (2013), in "Antibacterial activity of ethanol extract of beluntas leaves against Staphylococcus aureus, Bacillus subtilis, and Pseudomonas aeruginosa", obtained inhibition zones at a 60% concentration of 15.91 mm against Staphylococcus aureus, 14.32 mm against Bacillus subtilis, and 15.21 mm against Pseudomonas aeruginosa; therefore, the ethanol extract of beluntas leaves has antibacterial activity in the strong category. Bella (2018) showed that the ethanolic extract of beluntas leaves can inhibit Staphylococcus aureus, with inhibition zones of 16.00 mm, 17.22 mm, and 18.12 mm, respectively, also in the strong category.
CONCLUSION
Variations in the concentration of carbopol 940 and HPMC as the gel base for beluntas (Pluchea indica (L.) Less) leaf extract affect the physical properties of the gel preparation. The base composition of 1% carbopol 940 and 1.5% HPMC is the best formula, with strong antibacterial activity against Staphylococcus aureus and Pseudomonas aeruginosa.
| 5,721.8 | 2021-10-23T00:00:00.000 | ["Materials Science"] |
Low-mass planets falling into gaps with cyclonic vortices
We investigate the planetary migration of low-mass planets ($M_p\in[1,15]\,M_\oplus$, where $M_\oplus$ is the Earth mass) in a gaseous disc containing a previously formed gap. We perform high-resolution 3D simulations with the FARGO3D code. To create the gap in the surface density of the disc, we use a radial viscosity profile with a bump, which is maintained during the entire simulation time. We find that when the gap is sufficiently deep, the spiral waves excited by the planet trigger the Rossby wave instability, forming cyclonic (underdense) vortices at the edges of the gap. When the planet approaches the gap, it interacts with the vortices, which produce a complex flow structure around the planet. Remarkably, we find a widening of the horseshoe region of the planet produced by the vortex at the outer edge of the gap, which, depending on the mass of the planet, differs by at least a factor of two from the standard horseshoe width. This inevitably leads to an increase in the corotation torque on the planet and produces an efficient trap to halt its inward migration. In some cases, the planet becomes locked in corotation with the outer vortex. Under this scenario, our results could explain why low-mass planets do not fall towards the central star within the lifetime of the protoplanetary disc. Lastly, the development of these vortices produces an asymmetric temporal evolution of the gap, which could explain the structures observed in some protoplanetary discs.
INTRODUCTION
Protoplanetary discs are composed of gas and a small fraction of dust (about 1% of the total gas density). Remarkably, even with such a low dust density, recent observations of the distribution of dust in several protoplanetary discs around stars of low and intermediate mass (using several techniques and different instruments, e.g. the Atacama Large Millimeter/submillimeter Array (ALMA), the Very Large Array (VLA), and the Keck II and Hubble telescopes) allow us to identify different large-scale structures such as spiral arms (Garufi et al. 2013; Grady et al. 2013; Benisty et al. 2015; Reggiani et al. 2018), gaps (Flock et al. 2015, 2016), bright rings (Quanz et al. 2013) and large central cavities (Andrews et al. 2011; Carrasco-González et al. 2019). Each of these observed structures can be explained from different theoretical approaches, including dust-induced instabilities, secular gravitational instabilities, and hydrodynamic or magnetohydrodynamic turbulence, among others (see Bae et al. (2023) and references therein). In addition, gaps and vortices can also be produced by embedded planets at an early stage of their formation (Keppler et al. 2018; Müller et al. 2018; Pinte et al. 2018, 2019).
These structures could play a crucial role in the formation and evolution of planets. In particular, positive radial gradients in the surface density (or in vortensity) may lead to the formation of migration traps (e.g. Masset et al. 2006; Bitsch et al. 2014; Romanova et al. 2019; Chrenko et al. 2022). Density changes may occur at the edges of dead zones, at the magnetospheric boundary, at the dust sublimation radius, or at the snowline radius (Hasegawa & Pudritz 2011).
Local bumps of high density can be formed at the edges of dead zones because of the differential mass accretion rate between the active zones and the dead zone (e.g. Varnière & Tagger 2006; Regály et al. 2013). A weak gradient in the Ohmic resistivity may lead to a transition in the accretion flow rate (Dzyurkevich et al. 2010; Lyra & Mac Low 2012; Lyra et al. 2015). These local density maxima can trigger the formation of anticyclonic vortices as a consequence of the Rossby wave instability, which are efficient dust traps (e.g. Lyra et al. 2008).
Using hydrodynamical simulations, Ataiee et al. (2014) study the interaction of a planet with a stationary massive anticyclonic vortex created by a density bump. Initially, the planet migrates towards the bump. Later on, the planet interacts with the vortex and becomes locked to it. Faure & Nelson (2016) investigate the interaction of planets with migrating vortices at the inner edge of a dead zone. Interestingly, intermediate-mass planets remain trapped, but low-mass planets may eventually escape and continue their inward migration. Chametla & Chrenko (2022) study the impact of vortices formed after the destabilization of two pressure bumps on planetary migration. They find that the vortex-induced spiral waves may slow down or even halt the migration of the planets.
In this work we study the effect of a large-scale vortex formation at the edges of a surface density gap resulting from an MRI-active zone in the outer parts of a protoplanetary disc (e.g., Flock et al. 2015). We consider r_0 = 5.2 au and M_★ = 1 M_⊙ when scaling back to physical units. Due to the computational cost of 3D resistive magnetohydrodynamic models, we consider purely hydrodynamic 3D models and include a bump in the viscosity of the gas disc to generate a gap in the density profile. Note that unlike the density traps in the inner cavity of protoplanetary discs studied previously (see for instance Romanova et al. 2019, and references therein), the transitions in density generated in our gap can arise at different radial distances, not necessarily in the inner disc. Interestingly, our gaps lead to the formation of cyclonic vortices at their edges. Unlike the elongated and anticyclonic vortices reported for instance in Ataiee et al. (2014), these cyclonic vortices rotate in the same direction as the whole disc and exhibit lower densities and pressures compared to their surroundings. As a consequence, whereas anticyclonic vortices have the capability of dust trapping (e.g., Barge & Sommeria 1995), cyclonic vortices are expected to disperse dust particles. The objective of this work is to study the role of these vortices in the migration of Earth- and super-Earth-mass planets (m_p ∈ [1, 15] M_⊕, with M_⊕ the Earth mass) located outside the gap. In particular, we analyze whether the density gradient can stop migration.
Figure 1 (caption fragment): Top: radial viscosity profile (Eq. 11). Middle: radial profile of the surface density after a time 5000 T_0 (where T_0 denotes the orbital period at r_0), when the disc is viscously relaxed (solid line). The dotted line shows the starting power-law surface density profile and the vertical dashed orange lines represent the position of the edges of the gap, which are determined when the slope of the line tangent to the power-law radial profile coincides with the slope of the line tangent to the evolved density profile (see the parallel gray line segments). Bottom: gas density distribution in the meridional plane when the gap profile reaches steady state. Note that at this time the planet is introduced in the disc (that is, ρ(t = 5000 T_0) ≡ ρ_0,planet).
The paper is laid out as follows.In Section 2, we describe the gap disc-planet model, code and numerical setup used in our 3D simulations.In Section 3, we show the results of our numerical models.We present a discussion in Section 4. Finally, the conclusions are given in Section 5.
DESCRIPTION OF THE NUMERICAL MODEL
In this section, we describe the components of our physical model: the gas disc, the gravitational potential, and we present the code used to solve the set of equations of hydrodynamics.
Governing equations
We consider a 3D non-self-gravitating gas disc whose evolution is governed by the continuity and momentum equations, ∂ρ/∂t + ∇·(ρ v) = 0 and ρ [∂v/∂t + (v·∇)v] = −∇P − ρ∇Φ + f, where ρ, v and f denote the density, velocity of the gas and the viscous force, respectively. Furthermore, Φ denotes the gravitational potential and P is the gas pressure. For the latter, we consider the globally isothermal equation of state P = c_s² ρ, where c_s is the isothermal sound speed.
The aspect ratio of the disc, h ≡ H/r, where H is the vertical height of the disc (H = c_s/Ω_Kep, with Ω_Kep the Keplerian angular frequency) and r the distance to the central star, can be written as h = h_0 (r/r_0)^f, where f is the flaring index and h_0 is the aspect ratio at the radial position r_0 (see Table 1). In the globally isothermal case f = 0.5.
We initialize the density and the gas velocity components in a similar way as described in Appendix A of Masset & Benítez-Llambay (2016) for a globally isothermal disc, where θ is the polar angle and Σ_0 is the surface density¹ at r = r_0.
For the velocity components we assume that v_r = v_θ = 0, while the azimuthal velocity is set by the balance between the stellar gravity, the pressure gradient and the centrifugal force, where M_★ is the mass of the central star.
The gravitational potential Φ is given by the sum of Φ_★ and Φ_p, the stellar and planetary potentials, respectively. In Eq. (10), m_p is the planet mass, r′ ≡ |r − r_p| is the cell-planet distance, φ is the azimuth with respect to the planet, and ε is a softening length used to avoid computational divergence of the potential in the vicinity of the planet. The second term on the right-hand side of Eq. (10) is the indirect term arising from the reflex motion of the star. Our simulations were performed with ε = 0.1, which is comparable to the size of two cells of our numerical mesh (see below). We have done some experiments with a larger ε and found similar results.
1 The surface density and the midplane volumetric density can be related by Σ = √(2π) H ρ_mid.
Code and set-up
We use the publicly available hydrodynamic code FARGO3D on a computational mesh with uniform spacing in the radial, azimuthal and polar directions. We simulate the disc over the radial range r ∈ [0.5, 2.0] r_0, an azimuthal extent φ ∈ [−π, π] and a polar extent θ ∈ [π/2 − 3h_0, π/2]. The number of grid cells is (N_r, N_φ, N_θ) = (768, 3200, 76). For our adopted value of h_0 = 0.05, this corresponds to a grid size of 1.97 × 10⁻³ r_0 in each direction or, equivalently, 26 cells per pressure length-scale. This resolution is similar to that used in Masset & Benítez-Llambay (2016), where the dynamics of the 3D horseshoe region is studied.
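As a quick consistency check (a back-of-the-envelope Python sketch, not part of the simulation setup), the quoted cell size and cells-per-scale-height follow directly from these grid parameters:

r_in, r_out, n_r = 0.5, 2.0, 768      # radial extent in units of r_0 and number of radial cells
h0 = 0.05                             # aspect ratio at r_0

dr = (r_out - r_in) / n_r             # uniform radial spacing
print(f"radial cell size: {dr:.2e} r_0")                          # ~1.95e-03 r_0, close to the quoted value
print(f"cells per pressure scale height at r_0: {h0 / dr:.0f}")   # ~26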
Particular attention was paid to avoiding reflections of the spiral waves excited by the planet at the radial boundaries. To this end, we use damping boundary conditions as in de Val-Borro et al. (2006). The width of the inner damping ring is 3.9 × 10⁻² r_0 and that of the outer ring is 8.54 × 10⁻¹ r_0. The damping timescale at the edge of each damping ring equals 1/20th of the local orbital period. Since we only model one hemisphere of the disc, we use reflecting boundary conditions at the midplane. At the upper boundary of the disc, the gas density and azimuthal velocity component are extrapolated from the initial conditions, whereas for the radial and polar velocity components we apply reflecting boundary conditions.
In Table 1 we present the set of parameters used in our numerical models.
Gap modeling through radial viscosity transitions
To make a gap in the surface density of the disc, we set up an axisymmetric viscosity bump in a ring around r_0. The viscosity bump, which is kept constant with time, is given by Eqs. (11)-(12), where α_0 is the viscosity parameter outside the bump, r_1 and r_2 represent the radial positions of the inner/outer viscosity transitions at the gap edges, and λ is the width of these transitions. The viscosity bump has a width Δr = r_2 − r_1 and it takes a peak value, α_max, at a radius r_m = (r_1 + r_2)/2. Hence, the fractional change of α across the bump, α_max/α_0, decreases with λ if Δr is kept fixed. We start with a disc with a power-law surface density with a slope given by σ ≡ −d ln Σ/d ln r = 1, and then we bring the disc to equilibrium by performing axisymmetric (r, θ) runs without including the planet's gravitational potential. Figure 1 shows the viscosity and the relaxed surface density at 5000 T_0, as a function of r, for α_0 = 10⁻³, r_1 = 0.9, r_2 = 1.1 and λ = 0.05. The density map in the meridional plane is also shown. We see that the gap does not have pronounced bumps at its edges, similar to the gap formed in a turbulent disc (see, for instance, Fig. 2 of the D2G_e-2 model in Flock et al. 2015). Once we know the volume density in the (r, θ) plane, we expand the grid into the azimuthal direction and the planet is included.
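Since the bump function itself (Eqs. 11-12) is not reproduced in this excerpt, the following Python sketch uses a generic tanh-based bump with the qualitative properties described in the text: transitions at r_1 and r_2 of width λ, and a peak contrast that decreases with λ at fixed Δr. The functional form and the amplitude are assumptions for illustration, not the paper's prescription.

import numpy as np

def alpha_profile(r, alpha0=1e-3, r1=0.9, r2=1.1, lam=0.05, amplitude=9.0):
    # Assumed tanh-shaped bump; only its qualitative behaviour matches the text.
    bump = 0.5 * (np.tanh((r - r1) / lam) + np.tanh((r2 - r) / lam))
    return alpha0 * (1.0 + amplitude * bump)

r = np.linspace(0.5, 2.0, 768)
for lam in (0.05, 0.10):
    alpha = alpha_profile(r, lam=lam)
    print(f"lam = {lam}: alpha_max/alpha_0 = {alpha.max() / 1e-3:.2f}")   # decreases as lam grows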
For clarity of presentation, we will focus on simulations with the values of α_0, r_1, r_2, and λ quoted above throughout the main body of the paper. However, the results for a shallower gap (a larger value of λ) are described in the Appendix. We also show in the Appendix that a reduction in the height of the bump (i.e., a lower value of α_max/α_0) can generate a gap where vortex formation is not viable.
The adopted value for the background viscosity, α_0 = 10⁻³, falls within the upper range of the observed values (Pinte et al. 2016; Flaherty et al. 2020; Jiang et al. 2024). Nevertheless, when adopting a smaller value of α_0 (keeping r_1, r_2 and λ fixed), the relaxed surface density profile remains essentially unaltered, as the new α(r) is just a rescaled profile (e.g., Lynden-Bell & Pringle 1974). However, a reduction of α_0 by a factor of, say, 10 implies that the stationary surface density is reached on a timescale 10 times longer. To avoid this additional computational cost, we chose α_0 = 10⁻³. If vortices survive for α_0 = 10⁻³, it is likely that they will also survive for smaller values of α_0 because their lifetimes typically increase with lower values of α (e.g., Regály et al. 2017).
Stability analysis
Pressure bumps and gaps can be unstable to axisymmetric perturbations. Since our disc is globally isothermal (barotropic), the Rayleigh condition for local axisymmetric stability is κ² ≥ 0, where κ is the epicyclic frequency. Note that the angular velocity of the disc Ω differs from Ω_Kep because of the thermal pressure gradient. The top panel of Figure 2 shows that the minimum value of κ² is 0.26 Ω²_Kep and, therefore, the gap is stable to axisymmetric perturbations. Additionally, we mention that although the gas flow is not completely Keplerian in the disc, it is dynamically stable except at the edges of the gap, where the shear parameter q_shear → 2 (see lower panel in Fig. 2) and nonlinear instabilities can arise (Hawley et al. 1999).
On the other hand, the Rossby wave instability (RWI) may occur (though not necessarily) if L ≡ Σ/(2ω_z) has a maximum (Lovelace et al. 1999; Li et al. 2000). Here ω_z is the vertical vorticity, ω_z = (∇ × v)_z. The central panel of Fig. 2 shows that L presents two maxima in our disc, one at each edge of the gap. Thus, our gap is potentially prone to the RWI. Moreover, Chang et al. (2023) find empirically that the RWI takes place if the condition κ² + N² ≲ 0.6 Ω²_Kep is satisfied somewhere in the disc. In our case, as the Brunt-Väisälä frequency N is zero, this condition implies κ² ≲ 0.6 Ω²_Kep, which is fulfilled at the edges of the gap (see Fig. 2). Hence, we expect the gap to be unstable to the RWI.
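The two diagnostics used here can be evaluated from azimuthally averaged 1D profiles; the Python sketch below (with a placeholder Keplerian rotation curve and an ad hoc Gaussian gap, not the simulation data) computes κ², the key function L = Σ/(2ω_z), and the Chang et al. (2023) ratio κ²/Ω²_Kep for a barotropic disc with N = 0.

import numpy as np

def rwi_diagnostics(r, sigma, omega):
    j = r**2 * omega                                  # specific angular momentum
    kappa2 = np.gradient(j**2, r) / r**3              # epicyclic frequency squared
    omega_z = np.gradient(r**2 * omega, r) / r        # vertical vorticity of the mean flow
    L = sigma / (2.0 * omega_z)                       # RWI key function (N = 0 here)
    omega_kep = r**-1.5                               # G M_star = 1 in code units
    return kappa2, L, kappa2 / omega_kep**2

r = np.linspace(0.5, 2.0, 768)
sigma = r**-1 * (1.0 - 0.6 * np.exp(-((r - 1.0) / 0.1) ** 2))   # placeholder gapped profile
kappa2, L, chang_ratio = rwi_diagnostics(r, sigma, r**-1.5)
# With the actual, pressure-supported rotation of the gapped disc this ratio drops below
# 0.6 at the gap edges; with the purely Keplerian placeholder used here it stays near 1.
print(chang_ratio.min())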
RESULTS
In this section we present the results of our 3D simulations of a planet migrating in our gapped protoplanetary disc. We are mainly interested in the orbital evolution of the planet and in the torques that drive its migration. We introduce the planet on a circular orbit with an initial radius of 1.25 r_0, from which it migrates towards the gap. To avoid any artificial disturbance due to the sudden introduction of the planet, we have introduced the planet using a mass-taper function, where t_mt is the time-scale over which the mass of the planet grows to its constant value, which we set to five orbital periods in all our simulations.
Cyclonic vortices
Fig. 3 shows the density perturbation in the midplane of the disc, (ρ − ρ_0,planet)/ρ_0,planet, where ρ_0,planet is the unperturbed volume density, at 5000 T_0 (just when the planet is inserted). We also show the residual vortensity at the same time. Contrary to classical gaps, where the vortices are formed at the pressure maxima of the gap edges (e.g., Li et al. 2000), we see that vortices are formed within the gap. It is remarkable that the vortices formed at the edges of the gap have a cyclonic circulation. As a consequence, the surface density in the vortex and its immediate vicinity is lower than that of the surrounding regions (see Fig. 3). This gives rise to a novel interaction between the vortices in protoplanetary discs and planetary bodies, which is the focus of the present work. It is usually argued that cyclonic vortices in protoplanetary discs are rapidly destroyed by the shear flow (e.g., Godon & Livio 1999). However, Lovelace et al. (2009) envisage the potential formation of cyclonic vortices as a consequence of the RWI in locally non-Keplerian discs in regions where dΩ/dr > 0. Our simulations indicate that the condition dΩ/dr > 0 is not a necessary condition (see bottom panel in Figure 2). The formation of cyclonic vortices is an interesting feature found in our study, and will be analysed in detail in a follow-up paper currently in preparation.
Orbital evolution
The less massive planets are still migrating towards the gap by the end of the simulations. However, the two most massive planets rapidly migrate down to r = 1.18 r_0, and then halt their inward migration. In fact, they remain within the outer edge of the gap, which is located at r = 1.25 r_0 (see Fig. 1).
Note that prior to and after reaching the stalling radius, the semimajor axes of the planets exhibit oscillations. As we will see in Section 4.1, these oscillations are due to the interaction of the planet with the vortices formed at the edges of the gap (see Fig. 3). On the other hand, we mention that for the range of planetary masses studied here, the eccentricity of the planet does not grow considerably: the maximum value of the eccentricity that we find is e = 5.12 × 10⁻⁴, which is damped quickly after ≈ 100 T_0.
The torque acting on the planet
Figure 5 shows that the torque on the planets with 1 M_⊕ ≤ m_p ≤ 5 M_⊕ exhibits a clear periodic pattern from t = 100 T_0. This pattern stretches as time goes by. For planets with m_p ≥ 7 M_⊕, at some point it changes and a new, different pattern begins. The pattern transition occurs approximately at 460 T_0 for m_p = 7 M_⊕, at 200 T_0 for m_p = 10 M_⊕, and at 110 T_0 for m_p = 15 M_⊕. Inspection of Figure 4 indicates that these transition times roughly coincide with the times at which the planets stop migrating. For m_p = 15 M_⊕, the planet migrates inward to the stalling radius so fast that the first pattern appears only twice.
Figure 6 shows the radial torque distribution, dΓ/dr, for planetary masses of 1 M_⊕ and 3 M_⊕ at different times. The shape of the profile differs from that in the classical problem of a planet in a disc, where dΓ/dr is antisymmetric with respect to the location of the planet. Here, the vortex formation causes the maximum and minimum values of the radial torque distribution to occur very close to the edges of the gap, which are located at r_ie = 0.75 r_0 and r_oe = 1.25 r_0, respectively. We emphasize that the peak values of the radial distribution of the torque occur over time periods similar to the time it takes for the vortex to align azimuthally with the planet. For instance, in the case of 1 M_⊕ at t = 500 T_0, dΓ/dr has not yet reached its maximum amplitude, and the main vortex is about 70 degrees behind the planet. On the other hand, at t = 502 T_0, the radial distribution of the torque reaches its maximum value, which occurs when the main vortex and the planet are close to aligning in azimuth. This behaviour is also reflected in the total torque: its maximum value occurs when the planet is almost at its closest distance to the main vortex, as can be observed in the cases of m_p = 1 M_⊕, m_p = 7 M_⊕ and m_p = 15 M_⊕ at t = 500 T_0 (see red dashed lines in Fig. 5 and the respective density maps in Fig. 3).
A simple model for the torques acting on the planet in the migrating phase
In order to gain physical insight into the origin of the temporal behaviour of the torque, the migration rate, and the stalling radius, we consider a simplified semi-analytical 2D model. The total specific torque exerted on the planet has three components, Γ = Γ_L + Γ_CR + Γ_v, where Γ_L is the Lindblad torque, Γ_CR the corotation torque and Γ_v the torque arising from the underdense vortices. For the Lindblad torque, we use the formula derived by Tanaka et al. (2002) for 3D isothermal discs, where Γ_0 is the reference torque Γ_0 = (q/h_p)² Σ_p r_p⁴ Ω_p², with Ω_p the angular frequency of the planet, h_p the disc aspect ratio at r_p, and q ≡ m_p/M_★ the planet-to-star mass ratio. The index σ as a function of r for the initial surface density radial profile used in our simulations is shown in the upper panel of Figure 7.
For the corotation torque we use the unsaturated value; the validity of this assumption is discussed in Section 4.3. The unsaturated corotation torque is expressed in terms of x_s, the half-width of the horseshoe region (e.g., Ward 1991; Paardekooper et al. 2010). The bottom panel of Figure 7 shows the specific torque Γ_CR/m_p, together with Γ_L/m_p, in this model as a function of the orbital radius of the planet r_p. Note that the corotation torque is positive for orbital radii larger than 1. We warn that the expressions for the torques are only valid for masses smaller than the thermal mass. Finally, the torque component Γ_v arises from the two underdense banana-shaped regions, representing the low-density vortices, rotating with different angular velocities around the central star. In this analytical approach, we assume that the underdense regions themselves do not feel the differential rotation; they preserve their shape with time. We denote by Σ_v the decrement in the surface density of the disc associated with the vortices, i.e. after subtracting the axisymmetric part (which does not contribute to the torque). Figure 8 shows Σ_v(r, φ) in this model at two different times. We have assumed that the inner and outer underdensities rotate in the same direction as the planet, with constant angular frequencies Ω_in = 1.20 and Ω_out = 0.78, respectively. For illustration, if we assume that the planet is located at r_p = 1.236, the orbital frequencies in the frame corotating with the planet (synodic frequencies) are Ω̃_in = 0.47 and Ω̃_out = 0.052, respectively.
The torque Γ_v acting on the planet is obtained by summing the contributions of all grid cells containing the surface density decrement Σ_v, where Δr and Δφ are the grid spacings in the radial and azimuthal directions and ε = 0.6 H(r) is the smoothing length in our 2D semi-analytical model. Note that the specific torque Γ_v/m_p does not depend on m_p. As Σ_v is more negative at the core of the vortex, the force acts to increase the planet's angular momentum when the planet is in front of a vortex. The evolution of the semi-major axis a of the planet follows from the rate of change of its orbital energy, da/dt = 2 a² v F_∥ / (G M_★), where v is the velocity of the planet and F_∥ is the disc force (per unit planet mass) tangential to the orbit (e.g., Burns 1976; Murray & Dermott 1999). Since the planetary eccentricities remain small, we will approximate v ≃ Ω_p r_p and F_∥ ≃ Γ/(m_p r_p).
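A minimal numpy version of this cell-by-cell torque sum is sketched below. The grid, the surface-density decrement and the smoothing prescription are illustrative stand-ins (the authors' grid spacing and Σ_v maps are not reproduced here); the routine simply sums the softened gravitational pull of every cell and returns the z-component of r_p x F.

import numpy as np

def vortex_torque(r, phi, dsigma, r_p, phi_p, eps, G=1.0):
    # Specific torque on the planet from a surface-density decrement dsigma(r, phi).
    rr, pp = np.meshgrid(r, phi, indexing="ij")
    dm = dsigma * rr * (r[1] - r[0]) * (phi[1] - phi[0])      # (negative) mass of each cell
    x, y = rr * np.cos(pp), rr * np.sin(pp)
    xp, yp = r_p * np.cos(phi_p), r_p * np.sin(phi_p)
    dx, dy = x - xp, y - yp
    d3 = (dx**2 + dy**2 + eps**2) ** 1.5                      # softened distance cubed
    fx = G * np.sum(dm * dx / d3)
    fy = G * np.sum(dm * dy / d3)
    return xp * fy - yp * fx                                  # z-component of r_p x F

# Illustrative call with a single Gaussian underdensity, not the simulation data:
r = np.linspace(0.5, 2.0, 384)
phi = np.linspace(-np.pi, np.pi, 768, endpoint=False)
rr, pp = np.meshgrid(r, phi, indexing="ij")
dsig = -0.3 * np.exp(-((rr - 1.15) / 0.05) ** 2 - ((pp - 0.5) / 0.3) ** 2)
print(vortex_torque(r, phi, dsig, r_p=1.2, phi_p=0.0, eps=0.03))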
When the planet enters the edge of the gap, the local gradient of the disc surface density increases (σ decreases) and the positive corotation torque becomes larger (see Eq. (18); Masset et al. 2006; Romanova et al. 2019). As a consequence, the migration rate of the planet decreases. In our simplified model, we can estimate the stalling radius. We first note that, as the vortex structures are assumed to maintain their shapes, the average of Γ_v over synodic periods is zero. Consequently, the torque Γ_v does not contribute to a net radial migration. Thus, planet migration stalls when Γ_L + Γ_CR = 0. Assuming that x_s ≃ 1.1 √(q/h), this implies σ = −0.864, regardless of the mass of the planet. For the unperturbed surface density in our simulations, an index of −0.864 occurs at r = 1.205.
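Given a tabulated relaxed profile Σ(r), the stalling location can be found by scanning for the radius where the local index σ = −d ln Σ/d ln r crosses −0.864; the short Python sketch below uses an ad hoc gapped profile purely for illustration.

import numpy as np

def stalling_radius(r, sigma_profile, slope_crit=-0.864):
    slope = -np.gradient(np.log(sigma_profile), np.log(r))   # local index sigma(r)
    for i in range(len(r) - 1, 0, -1):                        # scan inward, as the planet migrates
        lo, hi = sorted((slope[i - 1], slope[i]))
        if lo <= slope_crit <= hi:
            return r[i]
    return None

r = np.linspace(0.5, 2.0, 2000)
sigma_gap = r**-1 * (1.0 - 0.9 * np.exp(-((r - 1.0) / 0.12) ** 2))   # placeholder profile
print(stalling_radius(r, sigma_gap))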
Figures 9 and 10 illustrate how the different components of the torque vary in time in our semi-analytical model, for a planet initially at r = 1.25, with m_p = 1 M_⊕ (Figure 9) and m_p = 5 M_⊕ (Figure 10). The temporal evolution of the semi-major axis is also shown. We took Ω_in = 1.20 and Ω_out = 0.78.
We see that Γ has the same shape as obtained in the simulations. Γ follows a pattern that is repeated every 1/Ω̃_out (in units of T_0). Note that the synodic frequency Ω̃_out increases with time as the planet migrates inward. Γ also shows high-frequency oscillations of small amplitude, with a period of 1/Ω̃_in (in units of T_0), which are produced by the interaction with the inner vortex.
The model reproduces the general features found in the simulations: the amplitude and shape of Γ, the migration rate, and the radial incursions of the planet. Note that the amplitude of Γ is dominated by Γ_v because it is much larger than the amplitudes of Γ_CR and Γ_L.
The amplitude of the specific torque Γ_v/m_p hardly changes when we vary the mass of the planet. However, the migration rate does depend on the mass of the planet, because the radial migration is driven by (Γ_L + Γ_CR)/m_p. In fact, the 5 M_⊕ planet has almost reached the stalling radius after 900 orbits. As can be seen in the middle panel of Figure 10, Γ_CR + Γ_L approaches zero (with some oscillations) at later times.
The rapid switch in the temporal pattern of the torque seen in Fig. 5 for m_p ≥ 7 M_⊕, when the planet is close to the stalling radius, occurs because the planet-vortex interaction is able to break up the outer vortex, forming two vortices. The planet with m_p = 1 M_⊕ at t = 500 T_0 is still migrating towards the gap and is still unable to break up the outer vortex. In the map for the planet with m_p = 1 M_⊕ in Fig. 3, only two vortices are visible in the disc. However, the planets with m_p = 7 M_⊕ and 15 M_⊕ have already halted their radial migration and, in fact, three vortices are present in the disc. These vortices survive until the end of the simulations. The innermost vortex has the largest orbital frequency about the central star and drives the small peaks in the torque. The two outer vortices rotate around the star at similar orbital frequencies.
To summarize this section, we find that this simple model can qualitatively account for the general behaviour of the torque and planetary orbital radius observed in the simulations, as long as the backreaction of the planet on the vortices is not important. However, we note that as the planet approaches the vortices (especially the outer one), the vortices can no longer be treated as rigid low-density structures rotating at constant frequency around the central star. As shown in Figures 3 and 11, the planets can even pass through the vortices. In the next sections we discuss different aspects of the vortex–planet interaction when the planets are close to the vortices.
Vortex-planet interaction: Comparison with previous work
Masset et al. (2006) studied the migration of low-mass planets at the edge of a cavity (or surface density transition) and showed that migration is halted owing to the positive contribution of the corotation torque. Damped radial incursions of the planet (modulations) were also observed when the planet is close to the trapping radius, due to the interaction with the anticyclonic vortex formed at the top of the pressure bump.
Ataiee et al. (2014) studied the gravitational interaction of an anticyclonic vortex with a planet. They found that the planet is captured at a triangular equilibrium point of the three-body system formed by the star, the planet, and the vortex. This capture is possible because anticyclonic vortices have a larger density and behave as a blob of matter. In our case, the vortices are underdense and produce a repulsive effect on the planet in the azimuthal direction. It is easy to show that there are no triangular equilibrium points in the circular restricted three-body problem composed of the central star, the planet, and a cyclonic vortex. If trapped in a corotation resonance, the only possibility is that the planet lies at a collinear Lagrangian point. Although this is not the case in the simulations described in Section 3, we show a case where the planet and the vortex are collinear with the central star in the Appendix.
Very small, nearly unnoticeable, radial modulations are observed in the experiments of Ataiee et al. (2014). The reason is that, since the vortex sits at the top of a dense ring, the Lindblad and corotation torques make the planet migrate rapidly inwards and become locked in corotation with the vortex. Thus, the interval of time during which the planet can be pushed to a larger orbital radius becomes very short.
In the experiments of Fig. 4, the radial incursions are much larger because the planet stops its migration at a radius slightly larger than the orbital radius of the core of the outer vortex.In the Appendix, we show a case where the radial incursions damp with time because the planet is swallowed by the vortex.
Horseshoe region analysis
The libration timescale, over which a fluid element orbiting at radius r = a(1 + x_s/a) executes two orbits in the frame corotating with the planet, is given by τ_lib = 8πa/(3Ω_p x_s) (see Masset 2001, 2002). If the viscous evolution of the disc can renew the material of the horseshoe region on a timescale shorter than this one, the corotation torque will stay unsaturated; in that case, fresh fluid elements from the librating region are pushed into the horseshoe region.
Other possible mechanisms that may give rise to new fluid elements within the horseshoe region are the migration of the planet itself (Paardekooper et al. 2010;Paardekooper 2014;Romanova et al. 2019), and the disturbances generated by a vortex and the spiral waves that it emits (Chametla & Chrenko 2022;Chametla et al. 2023).
Here, τ_U-turn ≈ h τ_lib is the horseshoe U-turn time. Eq. (23) may be recast in terms of the pressure scale height of the disc evaluated at r = a. Remarkably, in all our simulations, the outer vortex leads to a widening of the horseshoe region (see, for instance, Fig. 11). This implies that the half-width of the horseshoe region may differ from the value inferred in the absence of any vortex (Jiménez & Masset 2017). Note that the width of the horseshoe region does not depend substantially on the surface density profile (Masset et al. 2006). In our simulations, the widening of the horseshoe region is mainly caused by the low pressure generated in the cyclonic vortex. The low-pressure region inside the vortex produces an asymmetrical distortion of the streamline pattern close to the planet and an azimuthal shift of the stagnation points (see Fig. 11). The widening factor of the horseshoe region, ξ ≡ x_s^sim/x_s, is not constant for the different planets but is a function of the planet mass. When the vortex and the planet are aligned azimuthally (that is, at null angular separation) or trapped in corotation resonance (as in the case shown in the Appendix), we find a simple scaling of ξ with planet mass, where q_0 = 0.5 M_⊕/M_★. For instance, for planetary masses of m_p = 1 M_⊕ and 15 M_⊕, we find ξ = 8 and 2.5, respectively. Fig. 12 shows the range of viscosity that satisfies inequality (24) for different planet masses, using the half-width given in Eq. (25) (shaded region). For m_p = 7 M_⊕, the range of α that keeps the corotation torque unsaturated spans from 10^−3 to 10^−2. Since these values of α are similar to the values in the gap of our simulations, the corotation torque would remain unsaturated if the half-width were given by Eq. (25). However, if instead of Eq. (25) we use the half-width measured in the simulations, x_s^sim, the viscosity needed to keep the corotation torque unsaturated would have to be larger than α = 3.3 × 10^−2 for m_p = 1 M_⊕ and larger than α = 9.3 × 10^−2 for m_p = 15 M_⊕ (see the curve with crosses in Fig. 12). However, the viscosity in our simulations is below these values. Since the corotation torque must not be saturated in order to balance the differential Lindblad torque, we argue that it remains unsaturated (or partially unsaturated) owing to the action of the cyclonic vortex in the horseshoe region of the planet (see Fig. 11).
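For orientation, the short sketch below evaluates these timescales with the standard estimates quoted above; the α value, the planet mass, and the rough U-turn estimate are illustrative assumptions, and the double-sided condition encoded here is the usual unsaturation window discussed by Masset (2001, 2002), not necessarily the exact form of Eq. (24).

```python
# Back-of-the-envelope check of the corotation-torque saturation window,
# assuming: half-width x_s ~ 1.1*a*sqrt(q/h), libration time
# t_lib = 8*pi*a/(3*Omega*x_s), viscous diffusion time across the horseshoe
# t_nu = x_s**2 / nu with nu = alpha*(h*a)**2*Omega, and a rough U-turn time
# t_uturn ~ h*t_lib. Numbers are illustrative, not taken from the paper.
import numpy as np

def saturation_window(q, h=0.05, a=1.2, alpha=1e-3, GM=1.0):
    omega = np.sqrt(GM / a**3)
    x_s = 1.1 * a * np.sqrt(q / h)                 # horseshoe half-width
    t_lib = 8.0 * np.pi * a / (3.0 * omega * x_s)  # libration timescale
    nu = alpha * (h * a) ** 2 * omega              # alpha viscosity
    t_nu = x_s**2 / nu                             # viscous diffusion across x_s
    t_uturn = h * t_lib                            # rough U-turn timescale
    unsaturated = t_uturn < t_nu < t_lib
    return x_s, t_lib, t_nu, unsaturated

# Example: a 7 Earth-mass planet around a solar-mass star, q ~ 2.1e-5.
print(saturation_window(q=7 * 3.0e-6))
```

For α = 10^-3 this toy estimate indeed falls inside the unsaturated window, consistent with the range quoted above for the 7 M_⊕ case.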
Effects on planet formation
Some of the potential effects of the planet trapping scenario presented here are the following.
The radial excursions of the planet due to the interaction with vortices in the gap, as illustrated in Fig. 4, can lead to an expansion of the so-called feeding zone of the migrating planet, the region of the protoplanetary disc from which solids can be gravitationally captured by the planet. It is typically taken to comprise the area within a few Hill radii of the planet's semi-major axis (Armitage 2009). An estimate of this effect for the planet masses under consideration, 1–15 M_⊕, around a 1 M_⊙ star indicates that the feeding-zone width, and hence the final planet mass, is generally enlarged by less than 50% relative to the standard value. In addition, our results indicate that the amplitude of the radial excursions does not depend strongly on the planet mass, so this is most probably not a runaway scenario.
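To see how such excursions translate into a feeding-zone enlargement, the short sketch below compares an excursion amplitude with a feeding zone of a few Hill radii; the prefactor of 4 Hill radii and the excursion amplitude of 0.01 (in units of the reference radius) are illustrative assumptions, not values measured in the simulations.

```python
# Rough estimate of the feeding-zone widening caused by radial excursions.
# Assumptions (illustrative): feeding zone = +/- 4 Hill radii; excursion
# amplitude delta_a = 0.01 in reference-radius units; 1 solar-mass star.
def hill_radius(a, mp_earth, mstar_msun=1.0):
    q = mp_earth * 3.0e-6 / mstar_msun            # planet-to-star mass ratio
    return a * (q / 3.0) ** (1.0 / 3.0)

def fractional_widening(a, mp_earth, delta_a, n_hill=4.0):
    """Fractional increase of the feeding-zone width for excursions of +/- delta_a."""
    base_width = 2.0 * n_hill * hill_radius(a, mp_earth)
    return 2.0 * delta_a / base_width

for mp in (1.0, 5.0, 15.0):
    print(f"{mp:5.1f} M_earth -> +{100 * fractional_widening(1.2, mp, 0.01):.0f}%")
```

With these assumed numbers the widening stays in the 10–20% range, below the 50% upper bound quoted above.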
In the classical picture, anticyclonic vortices form at the pressure maxima at the edges of gaps. These anticyclonic vortices are thought to trap solid particles efficiently (Barge & Sommeria 1995; Ataiee et al. 2014). On the contrary, cyclonic vortices are characterized by a central pressure minimum and hence do not trap solid particles. This implies that, if gaps in protoplanetary discs are inhabited by vortices like those studied here, planet formation is not favoured in such regions, and dust would preferentially be driven to the edges of the gap and/or dispersed azimuthally. In this sense, cyclonic vortices would operate as expelling eddies that could help prevent the radial drift of large dust particles. However, the enhancement in the α-viscosity parameter within the gap will increase the turbulent diffusion of dust particles. As a result, we expect the formation of wider dust rings in gaps with cyclonic vortices than in the classical case of a pressure bump with anticyclonic vortices.
An additional interesting possibility, at the moment a speculation to be explored in future studies, is the consequence of trapping additional Earth- and super-Earth-mass planets as they migrate inward from radial locations further out in the disc. In the presence of a gaseous disc, such planets may become trapped in mean-motion resonance with the inner planet, as in the scenario studied by Pierens & Nelson (2008).
Observational implications
Since the vortices formed at the edges of the gap are underdense and cyclonic (see Fig. 3), they can prevent dust accumulation, so it may be very difficult to observe them through standard techniques. However, the effects they produce can be identified, since they generate asymmetries at the edges of the gap that evolve over time. Therefore, the vortices reported in this study can be considered candidates to explain the various asymmetries observed in the gaps of protoplanetary discs (Andrews et al. 2018; Isella et al. 2019; Stephens et al. 2023).
Another interesting consequence of these vortices is that they could be used as tracers of the stalling orbital radius of the planets. As can be seen in Fig. 3, the planet stops its radial migration at the same radius as the orbital radius of the vortex. In fact, the planet passes through the vortex (whether trapped inside it or not) where the asymmetry in the gap occurs (see, for instance, Fig. 13). We therefore expect to find low-mass planets in regions near the edge of the gap, which can guide observational searches for them. Since these vortices should have a strong impact on the local kinematics of the gas (see Fig. 14), it should be possible to detect and identify them through the analysis of perturbations in the velocity field, similar to those used in the indirect detection of massive accreting planets in gaps (see Pinte et al. 2023, for a review).
So far, only a few massive planets embedded in discs have been detected observationally (e.g., Benisty et al. 2021). A search for low-mass planets is even more challenging. Our finding that low-mass planets can generate cyclonic vortices therefore offers an important indirect route to detecting them in protoplanetary discs that exhibit asymmetric gaps.
CONCLUSIONS
We have performed 3D hydrodynamical simulations of the migration of low-mass planets embedded in a globally isothermal disc with a gap in its surface density. The gap is sustained by a viscosity bump. When the planet is introduced into the disc, it excites density waves that promote the formation of underdense cyclonic vortices at the edges of sufficiently deep gaps. While the increasing density gradient at the outer edge of the gap should lead to an enhancement of the positive corotation torque that counteracts inward migration, the vortices formed in the gap modify the flow structure in the coorbital region of the planet and therefore alter the corotation torque. Our main aim has been to investigate the interaction of the planet with the cyclonic vortices as the planet migrates towards the gap.
Initially, when the planet is far away from the gap, the vortices can be treated as rigid underdense entities.In this stage, the total torque acting on the planet varies periodically over the mutual (vortex and planet) synodic period, and its amplitude is dominated by the gravitational interaction with the main vortex.Afterwards, the overall inward migration of the planet stops at the outer edge of the gap.However, in some cases, the planet executes radial incursions even in the trapped state, due to the "repulsive" interaction with the underdense main vortex.In other cases, the planet and the vortex are locked in corotation and, thus, the flow reaches a steady state in the frame corotating with the planet.In this steady state, the flow around the planet presents a front-back asymmetry, which is responsible for a positive torque on the planet that counteracts the negative Lindblad torque.
The formation of this type of underdense cyclonic vortex can have important observational implications, since it generates asymmetries in the gas density within the gap that could explain similar asymmetries observed in different protoplanetary discs (see Andrews et al. 2018).
APPENDIX A: PLANETS TRAPPED IN SHALLOWER GAPS
For the gap parameters considered in Sections 2 and 3, we have shown that the planet continuously interacts with the cyclonic vortices formed in the gap. When the planet is in the migrating phase (far enough from the vortices), the vortices can be treated as "rigid" underdense structures. However, when the planet approaches the orbital radius of the vortices, the vortex–planet dynamics becomes very complex: the vortices break up into smaller vortices and the planet can pass through the outer vortex repeatedly.
The number, strength, and circulation of the vortices are expected to depend on the depth and width of the gap. The probability of forming cyclonic vortices likely decreases as the gap becomes shallower. In order to explore whether the formation of cyclonic vortices is still feasible in a shallower gap, we have reduced the amplitude of the viscosity bump. In a shallower gap, we expect less intense vortices and, therefore, a weaker interaction between the planet and the vortices.
We have adopted a value of 0.058 for this parameter (keeping the same values r_1 = 0.9 and r_2 = 1.1), implying that the viscosity at r = 1 is a factor 4/7 smaller than for the reference value of 0.05. Figure A1 shows the viscosity and surface density profiles.
We now present the results when a planet of 7 M_⊕ is inserted in the disc. Figure A2 shows the density perturbation δρ = ρ − ρ_0,planet and the velocity perturbations v_r′ ≡ v_r − ⟨v_r⟩ and v_φ′ ≡ v_φ − ⟨v_φ⟩ in the equatorial plane, at three different times (t = 250 T_0, 400 T_0 and 1000 T_0). Here the brackets ⟨...⟩ denote azimuthally averaged values. In the maps of the density perturbation (left panels), the outer spiral wave excited by the planet plus two arc-shaped regions (one overdense and one underdense, the latter corresponding to a cyclonic vortex) are visible. The signature of the vortex is also seen as a two-lobed structure in the maps of the radial velocity perturbation and as two stripes in the maps of the azimuthal velocity perturbation. In the vortex, the radial velocity perturbation changes sign along the azimuthal direction, whereas the azimuthal velocity perturbation changes sign along the radial direction.
At t = 250 T_0, the vortex is approaching the planet, the angular separation between the two being ∼40°. At t = 400 T_0, even though the vortex and the planet are in conjunction, the maps of the velocity perturbations show a complex pattern around the planet. At t = 1000 T_0, the planet has been captured by the vortex and both corotate in a steady state. A zoom of the flow around the planet is shown in Fig. A3 and the streamlines are shown in Fig. A4.
Figure A5 shows the total (specific) torque on the planet as a function of time. Between t = 70 T_0 and 300 T_0, the amplitude of the total torque on the planet, Γ, is dominated by the gravitational interaction with the underdense arc-shaped region plus the overdense region. The combined effect of both contributions makes Γ/m_p oscillate between ∼ −9 × 10^−5 and ∼ 2 × 10^−5. After t = 300 T_0, the arc-shaped overdensity weakens. At about 400 T_0, Γ undergoes a transition in its behaviour and a new phase starts, in which the planet will eventually become locked to the cyclonic vortex. After t = 400 T_0, Γ displays a classical damped oscillation in time, until it becomes strictly zero in the stationary state (see the upper panel of Figure A5).
The lower panel of Fig. A5 shows the temporal evolution of the semi-major axis of the planet. We see that between t = 0 and 400 T_0, d²a/dt² < 0; in other words, the migration rate increases until migration stops suddenly at 400 T_0. The stalling radius is 1.17 r_0, which is slightly smaller than for the reference value of 0.05. After 400 T_0, the planet and the vortex rotate in tandem, with a small, damped librational motion (visible as oscillations in the semi-major axis between t = 400 T_0 and t = 750 T_0). After t = 900 T_0, the libration is totally damped and the flow becomes stationary in the frame rotating with the planet. In this stationary configuration, the planet and the vortex are aligned in opposition (see the lower panels of Figure A2). Figs. A3 and A4 clearly show that the vortex modifies the flow around the planet, producing a front–back asymmetry, which is responsible for a positive torque on the planet that balances the differential Lindblad torque. The origin of this asymmetry can be explained as arising from the radial shift between the planetary orbital radius (r = 1.17 r_0) and the vortex centre (at r = 1.04 r_0; this corresponds to the radius at which v_φ′ changes sign). Finally, we have carried out a simulation with a value of 0.085 and r_1 = 0.88 and r_2 = 1.12. In this case, the viscosity parameter is a factor of 3 smaller than in our reference model. A snapshot of the density and velocity perturbations at t = 50 T_0 is shown in Figure A6. No vortex is formed. The velocity pattern around the planet is similar to the pattern in Figure A2 at t = 400 T_0. This indicates that the flow around the planet at t = 400 T_0, when the vortex is in conjunction in the simulation of Figure A2, has had time to relax to the same configuration as if there were no vortex.
Figure 1. Top: Radial profile of the kinematic viscosity ν, parameterized by Eq. (11). Middle: Radial profile of the surface density after a time of 5000 T_0, when the disc is viscously relaxed (solid line). The dotted line shows the starting power-law surface density profile, and the vertical dashed orange lines mark the positions of the edges of the gap, determined where the slope of the line tangent to the power-law radial profile coincides with the slope of the line tangent to the evolved density profile (see the parallel gray line segments). Bottom: Gas density distribution in the disc plane when the gap profile reaches steady state. Note that at this time the planet is introduced in the disc (the density at t = 5000 T_0 defines the reference state ρ_0,planet).
Figure 2. Radial profiles of some quantities along the gap before the planet is inserted. From top to bottom: (1) squared epicyclic frequency relative to the squared Keplerian frequency, (2) inverse of the potential vorticity, L, (3) dΩ/dr as a function of radius, and (4) the shear parameter q_shear ≡ −(r/Ω) dΩ/dr.
Figure 4. Temporal evolution of the semi-major axis for different planetary masses.
Figure 5. Temporal evolution of the total specific torque on the planets. The vertical red dashed lines indicate t = 500 T_0, which corresponds to the time at which the snapshots of Figure 3 were taken.
Fig. 4 shows the semi-major axis versus time for planets with masses m_p ∈ [1, 15] M_⊕. At the end of the simulation, the two less massive …
Figure 6. Radial torque distribution (in code units) for planetary masses of m_p = 1 M_⊕ and 3 M_⊕ at different times, including those at which the torque reaches its maximum and minimum values. The blue dot shows the radial position of the planet, and the colored area denotes the relative radial extent between the inner and outer edges of the gap.
Figure 8. Non-axisymmetric decrement in the surface density, δΣ (in code units), representing two underdense vortices, at t = 0 (left) and at t = 5.5 T_0 (right). The two vortices and the planet rotate counterclockwise around the central object. The frame corotates with the planet (white dot). In this illustrative example, the planet was assumed to be fixed at r = 1.236. For reference, a unit circle has been drawn (solid line). The dashed circle indicates the planetary orbital radius.
Figure 11. Midplane perturbed density, δρ = ρ − ρ_0,planet, around the corotation region for planetary masses of m_p = 7 M_⊕ (top panel) and m_p = 15 M_⊕ (bottom panel), at t = 500 T_0 after inserting the planet. Note that the gas streamlines (solid blue lines) execute a U-turn at the azimuthal position where the underdense vortex is located. The dot-dashed lines show the half-width of the horseshoe region predicted by Eq. (25).
Figure 12. Viscosity domain according to Eq. (24) as a function of planet mass, using the horseshoe half-width given in Eq. (25) (shaded region). The lower bound given in Eq. (24), but now using the half-width measured in the simulations at t = 500 T_0, is also shown (blue curve with crosses).
Figure 13. Asymmetric gas density distribution in the gap region produced by a cyclonic vortex when a low-mass planet of m_p = 3 M_⊕ is embedded in the disc, at t = 500 T_0.
Figure A1. Top: Radial profile of the kinematic viscosity ν, parameterized by Eq. (11). Bottom: Radial profile of the surface density when the planet is introduced into the disc.
Figure A3. Zoom of the flow pattern around the planet at t = 1000 T_0. Left: perturbations in density. Middle: radial velocity perturbations. Right: azimuthal velocity perturbations.
Figure A4. Midplane perturbed density, δρ = ρ − ρ_0,planet, around the corotation region for a planet of mass m_p = 7 M_⊕. As in the previous cases, the gas streamlines (solid blue lines) execute a U-turn at the azimuthal position where the underdense vortex is located.
Table 1. Initial conditions and main parameters of our simulations (r_0 denotes a reference radius*). | 11,250.4 | 2024-06-18T00:00:00.000 | [
"Physics"
] |
Review Article Cryptographic Accumulator and Its Application: A Survey
Since the concept of cryptographic accumulators was first proposed in 1993, it has received continuous attention from researchers, and its applications have become increasingly extensive. This paper provides a systematic summary of cryptographic accumulators. First, descriptions and characteristics of cryptographic accumulators are given, and the one-way accumulator, collision-free accumulator, dynamic accumulator, and universal accumulator are introduced in turn. Cryptographic accumulators can be divided into two types: symmetric and asymmetric. Among asymmetric accumulators, schemes are classified according to three security assumptions. Finally, the paper summarizes the applications of cryptographic accumulators in ring signatures, group signatures, encrypted data search, anonymous credentials, and cryptographic commitments.
Introduction
The concept of cryptographic accumulators was first proposed in 1993 by Benaloh and de Mare [1], who developed a one-way accumulator encryption protocol that could be used for timestamping and membership testing through a hash function with quasi-commutativity and the one-way property. That is to say, for all x ∈ X and y₁, y₂ ∈ Y, this one-way hash function h: X × Y → X satisfies quasi-commutativity: h(h(x, y₁), y₂) = h(h(x, y₂), y₁).   (1)
The cryptographic accumulator scheme allows the accumulation of the elements of a finite set X = {x₁, . . . , xₙ} into a concise value acc_X of constant size, known as the cryptographic accumulator. Because the accumulator satisfies quasi-commutativity, the accumulated value acc_X does not depend on the order of the accumulated elements. Choosing g ∈ G as the base, the original cryptographic accumulator is defined as acc_X = h(h(. . . h(h(g, x₁), x₂) . . . , xₙ₋₁), xₙ).   (2) The witness wit_{x_i} of each element x_i ∈ X in the set is calculated so that verifying h(wit_{x_i}, x_i) = acc_X effectively proves the membership of the element x_i. At the same time, it is not feasible to find a membership witness for any unaccumulated element y ∉ X, because of the collision resistance of the one-way hash function. The cryptographic accumulator has several important characteristics, such as dynamism, robustness, universality, its security assumption, and compactness, as shown in Table 1.
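To make the quasi-commutativity and witness mechanics concrete, here is a minimal Python sketch that instantiates the one-way function with the classic modular exponentiation h(x, y) = x^y mod N used by the RSA-style constructions discussed later; the modulus, base, and accumulated values are toy choices for illustration only, not secure parameters.

```python
# Minimal sketch of a quasi-commutative accumulator, instantiated with the
# classic h(x, y) = x^y mod N construction. Toy parameters, NOT secure.
p, q = 1009, 1013        # illustrative primes; a real modulus uses large safe primes
N = p * q
g = 65537 % N            # public base

def h(x, y):
    """One-way, quasi-commutative function h(x, y) = x^y mod N."""
    return pow(x, y, N)

def accumulate(base, values):
    """Fold all values into a single accumulated value; the order is irrelevant."""
    acc = base
    for v in values:
        acc = h(acc, v)
    return acc

X = [3, 5, 11, 17]                             # elements to accumulate
acc_X = accumulate(g, X)
assert acc_X == accumulate(g, reversed(X))     # quasi-commutativity in action

# Membership witness for x_i: accumulate every element except x_i, then anyone
# can check h(wit, x_i) == acc_X without learning the other elements.
x_i = 11
wit = accumulate(g, [x for x in X if x != x_i])
assert h(wit, x_i) == acc_X
```

The collision resistance of this particular instantiation rests on the RSA-type assumptions reviewed later in the paper.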
Although cryptographic accumulators were broadly described in the review published by Ozcelik et al. [6], that summary is not comprehensive. Therefore, this paper gives a more comprehensive and detailed account. The roadmap of this paper is organized as follows: Section 2 introduces descriptions of cryptographic accumulators. Section 3 classifies cryptographic accumulators into symmetric and asymmetric accumulators. In Section 4, cryptographic accumulators based on various security assumptions are introduced in detail. Section 5 describes accumulator schemes in hidden-order and known-order groups. In Section 6, the applications of cryptographic accumulators are introduced. Section 7 gives a summary.
One-Way Accumulators.
The concept of the cryptographic accumulator originated from the one-way accumulator first proposed by Benaloh and de Mare [1]. A one-way accumulator is defined as a family of one-way hash functions with quasi-commutativity.
One-Way Hash Functions [1,7]. A family of one-way hash functions is an infinite sequence of families of functions {H_λ}_{λ∈ℕ}, where H_λ = {h_k : X_k × Y_k → Z_k} (k is a security parameter), with the following properties: (1) For any integer λ and any h_k ∈ H_λ, h_k(·, ·) is computable in time polynomial in λ.
(2) Any probabilistic, polynomial-time algorithm A satisfies
Pr[h_k ←R H_λ; x ←R X_k; y, y′ ←R Y_k; x′ ← A(1^λ, x, y, y′) : h_k(x, y) = h_k(x′, y′)] < negl(λ),   (3) where the probability is taken over the random selection of h_k, x, y, y′ and the random output of A.
From the above description, it is seen that the one-way hash function is efficiently computable and one-way; that is, given x and y, the calculation of z = h(x, y) can be completed in polynomial time, while given x, y, and y′, the probability of finding x′ satisfying h_k(x, y) = h_k(x′, y′) is small enough to be ignored; that is, collisions between outputs generated by different inputs occur only with negligible probability.
If a one-way hash function satisfies quasi-commutativity, then, first, the forward calculation is easy while the reverse calculation is difficult, by the one-way property. Second, quasi-commutativity means that, for a given initial value (seed), the result of multiple hash operations does not change with the order of computation.
A one-way hash function with quasi-commutativity can be used to verify whether a value y_i is in a specified set Y = {y_i}. Specifically, the accumulated result z of Y can be calculated by folding all elements of Y into the accumulator with the one-way accumulation function h ∈ H. The accumulated value of the subset Y_i = {y | y ∈ Y, y ≠ y_i} (called the partial accumulated value z_i) can be calculated with the one-way accumulation function in the same way. The construction is sound because, if an attacker does not know y_i, then, by the description of the one-way function, it faces the computational difficulty of constructing a y′ such that z = h(z_i, y′). Hence, (y_i, z_i) can be regarded as a witness of y_i ∈ Y. In the following discussion, Z_N represents the set of all positive integers, and Z_n represents the set of positive integers of length at most n.
One-Way Accumulators [1,7]. (y_i, z_i) is a witness of y_i ∈ Y, meaning that it satisfies the condition h(z_i, y_i) = z.   (6) However, there is an obvious problem with the above analysis: it assumes that the attacker can only select candidate values y′ from the given set Y. In fact, it is entirely possible for an attacker to easily find a y′ outside the value domain Y satisfying z = h(z_i, y′), thus breaking the above notion of a witness. A stronger definition is obtained if the attacker's range of candidate values is extended beyond the specified set Y.
Strongly One-Way Hash Functions [7]. A family of strongly one-way hash functions is an infinite sequence of families of functions {H_λ}_{λ∈ℕ}, where H_λ = {h_k : X_k × Y_k → Z_k} (k is a security parameter), having the following properties: (1) For any integer λ and any h_k ∈ H_λ, h_k(·, ·) is computable in time polynomial in λ. (2) Any probabilistic, polynomial-time algorithm A satisfies Pr[h_k ←R H_λ; x ←R X_k; y ←R Y_k; (x′, y′) ← A(1^λ, x, y) : h_k(x, y) = h_k(x′, y′)] < negl(λ),   (7) where the probability is taken over the random choice of h_k, x, y and the random output of A.
Table 1: Important characteristics of cryptographic accumulators.
Dynamic [2]: The cryptographic accumulator has efficient algorithms for adding, deleting, witnessing, and updating elements.
Robustness [1]: The administrator of the cryptographic accumulator does not need to be trusted, and trapdoor information cannot be used to forge witnesses.
Universality [3]: The cryptographic accumulator can provide not only membership proofs but also nonmembership proofs.
Security assumption [4]: Under the stated security assumption, the membership verification function of the cryptographic accumulator is not affected by attackers.
Compactness [5]: The cryptographic accumulator maps a large set to an accumulated value of a much smaller order of magnitude, which is reflected in the small storage space required for the accumulated value and witnesses, as well as the low time complexity of the update algorithm.
Collision-Free Accumulators.
The strongly one-way property does not completely solve the problem of ensuring security when an adversary actively participates in the selection of the values to be accumulated (i.e., x and y in the above description are no longer chosen at random but carefully chosen by the adversary). In order to fill this gap, Baric and Pfitzmann [5] proposed the concept of collision-free accumulators.
Baric and Pfitzmann [5] argued that the cryptographic accumulator needs to satisfy a stricter notion when building FSS (fail-stop signature) mechanisms. Even under the strongly one-way property, an attacker may still carefully forge member values (y₁′, y₂′, . . . , yₙ′) to construct a witness acc′ for some y′. Therefore, the collision-free accumulator was introduced; on top of the strongly one-way property, the member values (y₁′, y₂′, . . . , yₙ′) need not be given to the attacker.
Cryptographic Accumulator Scheme [5,7]. The scheme of a cryptographic accumulator is a 4-tuple of polynomial-time algorithms (Gen, Eval, Wit, Ver): (1) Gen (key generation algorithm): a probabilistic algorithm for generating initial parameters. Gen receives two parameters, a security parameter 1^λ and an accumulator threshold N (an upper bound on the total number of values that can be securely accumulated), and returns an accumulator key k ∈ K_{λ,N}. (2) Eval (evaluation algorithm): a probabilistic algorithm for computing accumulated values. It accumulates all values in the set L = {y₁, y₂, . . . , y_{N′}}, N′ ≤ N, where y_i ∈ Y_k and k ∈ K_{λ,N}. Eval takes (k, y₁, y₂, . . . , y_{N′}) as input and outputs an accumulated value z ∈ Z_k and some auxiliary information aux, which is used as input to the other algorithms. Note that Eval outputs the same accumulated value for the same input, while the auxiliary information may differ. (3) Wit (witness extraction algorithm): a probabilistic algorithm for generating membership witnesses from the relevant information. Wit takes an accumulator key k ∈ K_{λ,N}, a value y_i ∈ Y_k, and the auxiliary information aux output by Eval(k, y₁, y₂, . . . , y_{N′}); if y_i is in L, a witness w_i ∈ W_k is output to prove that y_i is accumulated within z; otherwise, the symbol ⊥ is returned. (4) Ver (verification algorithm): a deterministic algorithm for verifying the membership of a value via a witness. Ver takes (k, y_i, w_i, z) as input, checks that y_i is accumulated into z, and outputs Yes or No according to the witness w_i.
N-Times Collision-Freeness [5,7]. A cryptographic accumulator scheme is said to be N-times collision-free if, for any integer λ and for any probabilistic, polynomial-time algorithm A, the probability that A produces a value outside the accumulated set together with a witness that verifies against the accumulated value is negligible, where the probability is taken over the random coins of Gen, Eval, and A.
Collision-Freeness [5,7]. A cryptographic accumulator scheme is collision-free if it is N-times collision-free for every value of N polynomial in λ.
Dynamic Accumulators.
Applications to membership authentication require that the chosen cryptographic accumulator not only allow the verifier to authenticate efficiently but also ensure security. When the member set changes (members are added or deleted), the accumulated value and the witness of each member should be updated efficiently; otherwise, whenever members are added or deleted, all members need to recalculate the current accumulated value and their respective witnesses, and the accumulator cannot operate efficiently enough to meet practical requirements when the member set changes dynamically. For this reason, researchers put forward the concept of the dynamic accumulator, which adds addition, deletion, and update operations to the original 4-tuple.
Dynamic Accumulator Scheme [2,7]. A dynamic accumulator scheme is a seven-tuple of polynomial-time algorithms (Gen, Eval, Wit, Ver, Add, Del, and Upd), where Gen, Eval, Wit, and Ver are the same as in the cryptographic accumulator scheme:
(1) Add (element addition algorithm): given an accumulator key k, an accumulated value z obtained as the accumulation of some set L of fewer than N elements, where L ⊆ Y_k and z ∈ Z_k, and a value y′ ∈ Y_k to be added, it returns a new accumulated value z′ corresponding to the set L ∪ {y′}, along with a witness w′ ∈ W_k for y′ and some update information aux_Add which will be used by the Upd algorithm.
(2) Del (element deletion algorithm): it is usually a deterministic algorithm. Given an accumulator key k, an accumulated value z obtained as the accumulation of some set L, where L ⊆ Y_k and z ∈ Z_k, and a value y′ ∈ L to be deleted, it returns a new accumulated value z′ corresponding to the set L ∖ {y′}, along with some update information aux_Del which will be used by the Upd algorithm.
(3) Upd (witness update algorithm): it is a deterministic algorithm used to update the witness w ∈ W_k of an element y ∈ Y_k that remains in the set after elements are added to or deleted from L. Upd takes k, y, w, op, and aux_op as input (where op is either Add or Del) and returns an updated witness w′ proving that y has been accumulated into z′; a minimal code sketch of the Add and Del operations follows below.
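The sketch below (toy parameters, Python) illustrates the division of labour in a dynamic RSA-style accumulator in the spirit of [2]: addition is a public operation, whereas deletion uses the trapdoor (the factorization of the modulus); witness updates are shown later, in the sketch accompanying the strong-RSA construction. The specific numbers and helper names are illustrative assumptions, not part of the cited schemes.

```python
# Dynamic RSA-style accumulator sketch: public Add, trapdoor-based Del.
# Toy parameters only; accumulated values should be primes coprime to phi(N).
p, q = 1009, 1013
N = p * q
PHI = (p - 1) * (q - 1)      # trapdoor information, kept by the manager
g = 65537 % N                # initial (empty) accumulator value

def eval_acc(values, base=g):
    acc = base
    for v in values:
        acc = pow(acc, v, N)
    return acc

def add(acc, y_new):
    """Add: public; anyone can raise the accumulator to the new value."""
    return pow(acc, y_new, N)

def delete(acc, y_del):
    """Del: requires the trapdoor to take the y_del-th root of the accumulator."""
    return pow(acc, pow(y_del, -1, PHI), N)

acc = eval_acc([5, 13, 17])
acc = add(acc, 19)                         # now accumulates {5, 13, 17, 19}
assert acc == eval_acc([5, 13, 17, 19])
acc = delete(acc, 13)                      # back to {5, 17, 19}
assert acc == eval_acc([5, 17, 19])
```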
Universal Accumulators.
Universal accumulators are dynamic and support (non)membership proofs [3]. Cryptographic accumulators that support membership proof are called positive accumulators, those that support nonmembership proof are called negative accumulators, and those that support both are called universal accumulators [9].
Assuming that k is a security parameter, a secure universal accumulator for the input family {χ_k} is a family of functions {F_k} with the following properties [3]: (i) Efficient generation: there is an efficient probabilistic polynomial-time algorithm G that generates a random function f ∈ F_k on input 1^k. Moreover, G also outputs some auxiliary information about f, denoted aux_f. (ii) Efficient evaluation: each f ∈ F_k is a polynomial-time function that outputs a value h ∈ U_f when given input (g, x) ∈ U_f × χ_k, where U_f is the input domain of the function f and χ_k is the domain of the elements to accumulate. (iii) Quasi-commutativity: for all f ∈ F_k, all g ∈ U_f, and all x₁, x₂ ∈ χ_k, f(f(g, x₁), x₂) = f(f(g, x₂), x₁). A family with these properties yields a secure universal accumulator scheme. Table 2 provides descriptions of the different types of cryptographic accumulators.
Symmetric Accumulators.
The symmetric cryptographic accumulator is a trapdoor-free construction and does not require witness verification. In the random oracle model, the existing constructions are secure. The symmetric accumulator [14] basically consists of a one-way function f: Y → X and a vector x ∈ X of length l, initialized to the zero vector. A set of values y₁, y₂, . . . , yₙ is accumulated as the vector z: z = x ∨ f(y₁) ∨ f(y₂) ∨ . . . ∨ f(yₙ), where ∨ denotes the bitwise OR. Given the accumulated vector z and a value y_i, verifying membership in the accumulated vector consists of calculating v = f(y_i) and checking that, for all k ∈ [0, l − 1], v_k = 1 implies z_k = 1. A symmetric accumulator does not need to compute witnesses, but it suffers from long accumulator outputs. In fact, the length of the accumulator also depends on the number of values added to it and not only on the security parameters.
Nyberg [15] proposed a symmetric accumulator. The idea is to use a hash function to generate hash values for the values to be accumulated. Each hash value is considered to consist of r blocks of size d bits, h₁, h₂, . . . , h_r. Then, by mapping each block to one bit, such a code is mapped to an r-bit string. The accumulated value z is calculated as the coordinate-wise bit product of the strings to be accumulated. To verify membership of a value y, the corresponding bit string y′ of length r is computed, and one checks that, for all 1 ≤ i ≤ r, y_i′ = 0 implies z_i = 0. A Bloom filter [16] can also be used as a cryptographic accumulator, and Yum et al. [17] proved that it is superior to other symmetric accumulators. A secure Bloom filter consists of k hash functions f_i: Y → X; these functions belong to a hash family, and each hash function returns a uniformly distributed vector index. To add a value to the accumulator, it is fed to each hash function to obtain k indexes, and the bits of x at these indexes are set to 1. To verify that a given value is accumulated, the k hash functions are applied again to obtain the vector indexes. If any bit of the accumulated vector is 0 at these indexes, then the value is definitely not accumulated; if all the bits at these indexes are 1, then a false positive may be obtained. Another variant of the Bloom filter has been studied in which the hash functions are replaced by a hash-based message authentication code (HMAC).
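As an illustration of this symmetric, witness-free style of accumulator, the following Python sketch implements a small Bloom-filter accumulator; the filter length, the number of hash functions, and the salted-SHA-256 construction of those functions are illustrative choices rather than parameters from [16, 17].

```python
# Minimal Bloom-filter accumulator sketch (symmetric, no witnesses needed).
import hashlib

M_BITS = 1024            # filter length l
K_HASHES = 4             # number of hash functions

def _indexes(value: bytes):
    """Derive K_HASHES bit positions for a value via salted SHA-256."""
    for i in range(K_HASHES):
        digest = hashlib.sha256(bytes([i]) + value).digest()
        yield int.from_bytes(digest, "big") % M_BITS

def add(acc: int, value: bytes) -> int:
    """Return the accumulator with the value's bits OR-ed in."""
    for idx in _indexes(value):
        acc |= 1 << idx
    return acc

def maybe_member(acc: int, value: bytes) -> bool:
    """False => definitely not accumulated; True => accumulated or false positive."""
    return all(acc >> idx & 1 for idx in _indexes(value))

acc = 0
for v in [b"alice", b"bob", b"carol"]:
    acc = add(acc, v)
assert maybe_member(acc, b"bob")
# maybe_member(acc, b"mallory") is almost surely False for these parameters.
```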
It can be noted that, in the case of symmetric accumulators, the size l increases as the number of elements in the filter grows or as the target false-positive rate is lowered.
Asymmetric Accumulators.
The first cryptographic accumulator proposed is asymmetric and requires witness verification [1].
This construction takes the modular exponentiation f(x, y) = x^y mod N as its one-way and quasi-commutative function, because it satisfies f(f(x, y₁), y₂) = x^{y₁y₂} mod N = f(f(x, y₂), y₁). For the exponentiation-based one-way accumulator, the modulus is chosen as the product of two safe primes p and q of equal size; a prime p is safe if (p − 1)/2 is also prime. A malicious attacker who knows the accumulated value z might try to forge a witness w for a randomly selected value y by finding an initial value x satisfying x^y mod N = z. However, this is not feasible under the RSA assumption. Table 3 shows the development of symmetric and asymmetric accumulators. Table 4 shows the evolution of the different types of security assumptions.
Accumulator Based on Hash Tree
A hash tree, in cryptography and computer science, is a tree data structure in which every leaf node is labeled with the hash of a data block, while every non-leaf node is labeled with the cryptographic hash of its children's labels. Hash trees can efficiently and securely validate the contents of large data structures. A prime-resolution algorithm can be selected to build a hash tree [20].
Consecutive primes starting at 2 are selected to build a ten-level hash tree. The node of the first layer is the root node, and there are two nodes under the root node. In the second layer there are three nodes under each node, and so on; that is, the number of children per node at each layer is a consecutive prime. By the tenth level, there are 29 nodes under each node.
Table 2: Descriptions of the cryptographic accumulator.
One-way accumulator [10]:
— One-way hash function [11]: A family of one-way hash functions is an infinite sequence of families of functions {H_λ}_{λ∈ℕ}, where H_λ = {h_k : X_k × Y_k → Z_k}, with the following properties: ① for any integer λ and any h_k ∈ H_λ, h_k(·, ·) is computable in polynomial time in λ; ② for any probabilistic, polynomial-time algorithm A, (3) is satisfied, where the probability is taken over the random choice of h_k, x, y, y′ and the random coins of A; and (4) is satisfied.
— One-way accumulator: A one-way accumulator is defined as a family of one-way hash functions with quasi-commutativity. This description is elegant and simple, but the basic guarantee of a secure accumulator — that a set L is accumulated into a small value whose membership can be proved only for elements y ∈ L — is not fully captured: the one-way property imposed by the second requirement is often too weak for applications where the attacker can choose some of the values to accumulate.
— Strongly one-way hash function: A family of strongly one-way hash functions is an infinite sequence of families of functions {H_λ}_{λ∈ℕ}, where H_λ = {h_k : X_k × Y_k → Z_k}, having the following properties: ① for any integer λ and any h_k ∈ H_λ, h_k(·, ·) is computable in polynomial time in λ; ② for any probabilistic, polynomial-time algorithm A, (7) is satisfied, where the probability is taken over the random choice of h_k, x, y, y′ and the random coins of A.
Collision-free accumulator [5]:
— Cryptographic accumulator scheme: The cryptographic accumulator scheme is a 4-tuple of polynomial-time algorithms (Gen, Eval, Wit, and Ver).
— N-times collision-freeness: A cryptographic accumulator scheme is said to be N-times collision-free if, for any integer λ and for any probabilistic, polynomial-time algorithm A, the corresponding probability, taken over the random coins of Gen, Eval, and A, is negligible.
— Collision-free: When a cryptographic accumulator scheme is N-times collision-free for any value of N polynomial in λ, it is called collision-free.
Dynamic accumulator [12]: Dynamic accumulators consist of a 7-tuple of polynomial-time algorithms (Gen, Eval, Wit, Ver, Add, Del, and Upd).
Universal accumulator [13]: Universal accumulators are dynamic and support membership and nonmembership proofs.
The children of the same node, from left to right, represent the different remainder values. For example, a node in the second layer has three children; from left to right they correspond to remainder 0, remainder 1, and remainder 2 upon division by 3. The remainder of the modulo operation with respect to the prime at each level determines the processing path.
Accumulator Based on Hash Tree. In a hash tree, values are associated with the leaves of a binary tree. The values of sibling nodes are hashed together in order to compute the value associated with their parent node, and so on, until the value of the tree root is obtained. The root value of the tree is defined as the cryptographic accumulator of the set of values associated with the leaves of the tree [20]. The hash tree cannot be used directly to obtain the functionality of universal and dynamic accumulators. In fact, accumulated sets need to support adding and removing elements (tree node values, if a hash tree is used) while also generating nonmembership proofs. So, instead of associating single values with the leaves of the tree, pairs of consecutive accumulated set elements are associated with them. Proving that an element x is not in the accumulated set then amounts to showing that a pair (x_α, x_β) with x_α < x < x_β belongs to the tree, whereas the pairs (x_α, x) and (x, x_β) do not.
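A minimal Python sketch of the hash-tree idea follows: the root plays the role of the accumulated value, and a membership witness is the list of sibling hashes on the leaf-to-root path. Duplicating the last node on odd-sized levels and the choice of SHA-256 are implementation conveniences made here for brevity, not details prescribed by the schemes cited above.

```python
# Merkle-tree accumulator sketch: root = accumulated value; a membership
# witness is the list of sibling hashes along the leaf-to-root path.
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_levels(leaves):
    """Return all tree levels, from leaf hashes up to the root."""
    level = [H(leaf) for leaf in leaves]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level = level + [level[-1]]
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def membership_witness(levels, index):
    """Sibling hashes (with position flags) proving leaf `index` is in the root."""
    path = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        sibling = index ^ 1
        path.append((level[sibling], index % 2 == 0))  # (hash, sibling-is-right?)
        index //= 2
    return path

def verify(root, leaf, path):
    node = H(leaf)
    for sibling, sibling_is_right in path:
        node = H(node + sibling) if sibling_is_right else H(sibling + node)
    return node == root

leaves = [b"x1", b"x2", b"x3", b"x4", b"x5"]
levels = build_levels(leaves)
root = levels[-1][0]
wit = membership_witness(levels, 2)
assert verify(root, b"x3", wit)
```

For the sorted-pairs variant described above, each leaf would hold a pair of consecutive elements instead of a single value, so that the same membership proof doubles as a nonmembership proof for any value lying strictly between the two.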
Development Process of the Accumulator Based on Hash Tree. Buldas et al. [18,19] proposed the first universal dynamic accumulator satisfying nonrepudiation (called the nonrepudiable certifier and formalized in the context of cryptographic accumulators). Its construction is based on collision-resistant hashes and hash trees. Then, a universal accumulator construction based on hash trees was proposed that satisfies a notion similar to nonrepudiation (the scheme is called a strong universal accumulator). Recently, another hash-tree-based cryptographic accumulator was introduced, which uses a commitment to modular operations on RSA composite moduli, based on binary polynomials, as a collision-resistant hash function.
Accumulator Based on RSA Assumption
4.2.1. RSA Assumption. The RSA hard problem is the following: given y, z, and n, find x ∈ Z_n such that z = x^y mod n. The RSA assumption states that this problem is computationally infeasible for every polynomial-time algorithm A [5]; that is, the probability that A, on input (y, z, n), outputs such an x is negligible. According to the RSA assumption, first, the function z = x^y mod n satisfies the one-way property. Second, the function z = x^y satisfies quasi-commutativity; that is, for all y₁, y₂: f(f(x, y₁), y₂) = f(f(x, y₂), y₁). When the modulus N is large enough and generated at random, and the exponent y and value z are given, it is difficult to compute x satisfying x^y mod N = z. However, as informally noted in [1] and later recognized by Nyberg [15], the one-way property imposed in this description may not suffice for applications where certain adversaries have access to the list of values to be accumulated. As a remedy, a stronger property, the strongly one-way property, should be considered, in which, as for strongly one-way hash functions, the choice of y′ is not imposed on the attacker.
Strong RSA Assumption.
The strong RSA hard problem is the following: given z and n, find x ∈ Z_n and a prime y ∈ Z_p such that z = x^y mod n, where Z_p denotes the set of prime numbers.
Table 3: Development process.
Symmetric accumulator — A Bloom filter can be used to construct a cryptographic accumulator; a symmetric accumulator was also proposed that uses hash functions to generate hash values for the values to be accumulated.
Asymmetric accumulator — 1993 [1]: the first cryptographic accumulator, which is asymmetric and requires witness verification, is proposed.
[20]: a universal accumulator construction based on the hash tree is proposed, satisfying a notion similar to nonrepudiation, called a strong universal accumulator.
RSA assumption — 1996 [15]: the one-way property alone is shown to be insufficient (applications where some adversaries can access the list of values to accumulate may not be secure).
The strong RSA assumption states that the strong RSA hard problem is computationally infeasible for every polynomial-time algorithm A [2]; that is, the probability that A, on input (z, n), outputs x ∈ Z_n and a prime y ∈ Z_p such that z = x^y mod n is negligible.
In contrast to plain RSA, the strong RSA hard problem allows a free choice of the pair (x, y); that is, the attacker can choose not only the base of the exponentiation but also the exponent. In addition, the strong RSA assumption requires that the exponent be prime, while the plain RSA assumption places no special requirement on the exponent. There is no rigorous proof of the hardness of the strong RSA problem; its intractability, like that of plain RSA, remains an assumption.
When the modulus N is large enough and generated at random, and given a value z, it is difficult to find x and y satisfying x^y mod N = z, as discussed above; collision resistance is obtained under the strong RSA assumption only if the values to be accumulated are primes.
Cryptographic accumulators without trapdoors should also be constructible, since trapdoors are not necessary for the functionality of the scheme. The party that provides N during system setup also knows the trapdoor primes p and q. Unfortunately, a party that knows p and q can completely bypass the security of the system: knowing p and q, it can recover the initial value, independently accumulate additional values, and generate false witnesses. A trapdoor-free solution would not rely on trusted online or offline services. A trapdoor-free accumulator was therefore introduced and proven secure in the standard model. The authors suggest the use of generalized RSA moduli with unknown complete factorization, called RSA-UFOs. A number N is an RSA-UFO if N has at least two large prime factors p and q such that no participant, including those who generated N, is able to split N into factors N₁ and N₂ with p | N₁ and q | N₂. A probabilistic algorithm to generate such numbers is also proposed. In the standard model, security is proved under a new assumption called the "strong RSA-UFO assumption." This assumption is very similar to the strong RSA assumption, with the only difference being that the modulus N is an RSA-UFO.
Accumulator Based on Strong RSA Assumption.
All schemes in this setting are extensions of [1,5]. The accumulator acc_X is defined as acc_X ← g^{∏_{x∈X} x} mod N, where N is an RSA modulus formed from two large safe primes p and q, and g is drawn at random from the cyclic group of quadratic residues modulo N. Here, sk_acc = (p, q) and pk_acc = N, and the witness of a value x_i is given by wit_{x_i} ← acc_X^{1/x_i} mod N (equivalently, g^{∏_{x∈X, x≠x_i} x} mod N); computing such a root for a value that is not accumulated would break the strong RSA assumption. Because of the product relation of the accumulated values in the exponent, the domain of the accumulated values is limited to primes. Note that, given a witness wit_a for a composite value a = b · c, one can derive a witness for each of its factors (e.g., wit_b ≡ wit_a^c (mod N)); therefore, to accumulate sets from more general domains, an appropriate mapping from these domains to the primes is required (see [27]).
Certain cryptographic accumulator schemes in this setting [2] also provide dynamic functionality. Values can be added to the accumulator without any secret simply by raising the accumulator and the witnesses to the power of the new value. On the contrary, if a value x_j is to be deleted, the x_j-th root of the accumulator must be computed, which is hard under the strong RSA assumption without sk_acc. However, after removal of the value, membership witnesses can still be updated publicly using arithmetic techniques. To update the witness of a value x_i given the new accumulated value acc_{X∖{x_j}}, find a, b ∈ Z such that a·x_i + b·x_j = 1 and compute the new witness as wit′_{x_i} ← wit_{x_i}^b · (acc_{X∖{x_j}})^a mod N, where wit_{x_i} is the original witness.
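These public witness-update rules can be checked mechanically; the sketch below does so for toy parameters (the values, modulus, and helper egcd are illustrative), covering both the addition of a value and the Bezout-coefficient update after a deletion.

```python
# Sketch of public witness updates for the strong-RSA accumulator; a witness
# wit_i satisfies wit_i^{x_i} = acc (mod N). Toy parameters only.
p, q = 1009, 1013
N = p * q
g = 65537 % N

def egcd(a, b):
    """Extended Euclid: returns (d, s, t) with s*a + t*b = d = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    d, s, t = egcd(b, a % b)
    return d, t, s - (a // b) * t

# Accumulated set and the witness of x_i = 13.
X = [5, 13, 17]
acc = pow(g, 5 * 13 * 17, N)
wit_13 = pow(g, 5 * 17, N)                 # g^(product of the other members)
assert pow(wit_13, 13, N) == acc

# (a) A new value 19 is added publicly: acc' = acc^19, and the existing witness
#     is simply raised to the same power.
acc_add = pow(acc, 19, N)
wit_13_add = pow(wit_13, 19, N)
assert pow(wit_13_add, 13, N) == acc_add

# (b) The value 17 is deleted by the manager (new value recomputed directly
#     here). The remaining member 13 updates its witness publicly with Bezout
#     coefficients a*13 + b*17 = 1:  wit' = wit^b * acc''^a (mod N).
acc_del = pow(g, 5 * 13 * 19, N)
d, a, b = egcd(13, 17)
assert d == 1
wit_13_del = (pow(wit_13_add, b, N) * pow(acc_del, a, N)) % N
assert pow(wit_13_del, 13, N) == acc_del
```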
Moreover, the accumulator scheme provides universal functionality because it supports nonmembership witnesses: let acc_X be the accumulator for the set X and let y_j ∉ X. Then gcd(∏_{x∈X} x, y_j) = 1, or equivalently there exist a, b ∈ Z with a·∏_{x∈X} x + b·y_j = 1. Therefore, d ← g^{−b} mod N is computed, where g is the initial value of the empty accumulator, and the nonmembership witness is formed as wit_{y_j} ← (a, d).
Then, verification of a nonmembership witness is completed by checking whether acc_X^a ≡ d^{y_j} · g (mod N) holds. Similarly to membership witnesses, nonmembership witnesses can also be updated publicly (see [24]).
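The nonmembership mechanism can likewise be exercised end to end; in the sketch below (toy parameters again), the extended Euclidean algorithm yields the coefficients a and b, d = g^{−b} is published, and the check acc_X^a ≡ d^{y} · g (mod N) passes exactly because y is coprime to the accumulated product.

```python
# Sketch of a nonmembership witness for the universal RSA accumulator:
# for y not in X, find a, b with a*u + b*y = 1 where u = prod(X), publish
# (a, d = g^{-b}), and check acc^a == d^y * g (mod N). Toy parameters.
p, q = 1009, 1013
N = p * q
g = 65537 % N

def egcd(x, y):
    if y == 0:
        return x, 1, 0
    d, s, t = egcd(y, x % y)
    return d, t, s - (x // y) * t

X = [5, 13, 17]
u = 5 * 13 * 17
acc = pow(g, u, N)

y = 19                                   # value that is NOT accumulated
d_gcd, a, b = egcd(u, y)
assert d_gcd == 1                        # y is coprime to the accumulated product
d = pow(g, -b, N)                        # d = g^{-b}; g is invertible mod N

# Verification: acc^a == d^y * g (mod N)
assert pow(acc, a, N) == (pow(d, y, N) * g) % N
```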
t-SDH Assumption.
Given (p, G, P), where p is prime and G is a cyclic group generated by P, together with a tuple of values (P, sP, s²P, . . . , s^t P), where s ∈ (Z/pZ) ∖ {0} [8], the t-SDH problem is to output a pair (c, 1/(s + c) · P) for some c ∈ Z/pZ. The t-SDH assumption states that, for any probabilistic polynomial-time algorithm A, the probability of solving this problem is negligible. Tartary et al. [28] examined the collision-resistance requirements of the scheme, thereby refuting previous claims about the accumulator. The attack is based on an improperly defined security model in which adversaries have access to the functions f and g. The proposed patch consists of providing the adversary with the composed function g(f(·)) instead of the functions f and g separately. However, the patches proposed by the authors cannot prevent other types of attacks, and the scheme was shown to be insecure. Camenisch et al. [25] proposed another dynamic, pairing-based cryptographic accumulator, which provides a more efficient witness update algorithm.
Fazio and Nicolosi [7] pointed out in their survey of cryptographic accumulators that the original construction makes the time to update a witness after m changes to the accumulator proportional to m. They raised the question of whether batch updates are possible, that is, whether it is possible to build an accumulator in which the time to update a witness is independent of the number of changes to the accumulated set. Wang et al. [29] designed an accumulator with batch update and then made improvements to address this problem. The scheme is based on the Paillier cryptosystem and was claimed to be secure under a new assumption called the extended strong RSA assumption, a variant of the strong RSA assumption with modulus N². However, contrary to this claim, Camacho and Hevia [30] exhibited an attack and further demonstrated that the time to update a witness must, in the worst case, be at least Ω(m). This provides an impossibility result for accumulators with batch update capabilities.
Previous works produced only membership witnesses, but in some cases nonmembership witnesses may be unavoidable. The authors present a dynamic accumulator that supports short witnesses for both membership and nonmembership, which they call the universal accumulator. The initial value of the accumulator must be public so that nonmembership witnesses can be verified.
This construction is based on the RSA function, so only prime numbers can be accumulated.
Karlof et al. [23] used elliptic curves to construct cryptographic accumulators. To accumulate the values (scalars), they are multiplied by the public key (i.e., the scalars multiply the base point of the curve). Witness generation follows the same algorithm but omits the corresponding value. Verification is simple: one checks whether the product of the witness and the value equals the accumulated value.
Accumulator Based on t-SDH Assumption. Nguyen [22] proposed a t-bound accumulator. The accumulator uses a group G of prime order p generated by g and has a bilinear map e: G × G → G_T. Here, pk_acc = (g, g^s, g^{s²}, . . . , g^{s^t}, u) and sk_acc = s. The accumulator acc_X of a set X = {x₁, x₂, . . . , xₙ} ⊂ Z_p (n ≤ t) is defined as acc_X ← g^{u·∏_{x∈X}(x+s)}, and the membership witness of x_i is wit_{x_i} ← g^{u·∏_{x∈X, x≠x_i}(x+s)}. One then checks whether acc_X contains the value x_i by verifying whether e(acc_X, g) = e(g^{x_i}·g^s, wit_{x_i}) holds. The scheme allows public evaluation of the accumulator; that is, g^{h(s)} is obtained by expanding the polynomial h(Z) = ∏_{x∈X}(x + Z) ∈ Z_p[Z] and evaluating it in G through pk_acc. The public computation of the witness of x_i works analogously on the set X ∖ {x_i}. Furthermore, these witnesses can be updated in constant time without knowledge of the secret key (see [22]).
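Since a full pairing implementation is beyond a short example, the following sketch works purely "in the exponent": group elements g^z are represented by z modulo a toy prime, which suffices to show how the accumulator and witnesses are evaluated publicly from the powers of s and why the pairing check corresponds to the identity acc = wit^{x_i+s}. The prime, trapdoor, and accumulated values are illustrative assumptions; a real deployment uses a pairing-friendly group and never exposes s.

```python
# Algebra-only sketch of the t-SDH accumulator: a group element g^z is
# represented by its exponent z mod a toy prime p. In the real scheme only
# g, g^s, ..., g^(s^t) are public and the final check uses a bilinear pairing.
p = 999_983                        # toy prime group order
s = 123_457                        # trapdoor (known only to the setup party)
t = 8
powers_of_s = [pow(s, j, p) for j in range(t + 1)]   # exponents of g^(s^j)

def poly_from_roots(roots):
    """Coefficients of prod_i (x_i + Z) over Z_p, lowest degree first."""
    coeffs = [1]
    for x in roots:
        lower = [(x * c) % p for c in coeffs] + [0]   # x * f(Z)
        upper = [0] + coeffs                          # Z * f(Z)
        coeffs = [(a + b) % p for a, b in zip(lower, upper)]
    return coeffs

X_set = [11, 22, 33]
u = 7                                                 # public random value

# Public evaluation: exponent of acc = u * sum_j c_j * s^j, built only from
# the published powers of s.
c = poly_from_roots(X_set)
acc_exp = u * sum(cj * sj for cj, sj in zip(c, powers_of_s)) % p

# Sanity check against the direct (trapdoor) evaluation u * prod (x_i + s).
direct = u
for x in X_set:
    direct = direct * (x + s) % p
assert acc_exp == direct

# Membership witness for x_i = 22: accumulate everything except x_i; the
# pairing check e(acc, g) == e(g^{x_i} g^s, wit) corresponds, in the
# exponent, to acc_exp == (x_i + s) * wit_exp (mod p).
x_i = 22
c_wit = poly_from_roots([x for x in X_set if x != x_i])
wit_exp = u * sum(cj * sj for cj, sj in zip(c_wit, powers_of_s)) % p
assert acc_exp == (x_i + s) * wit_exp % p
```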
Nguyen's scheme was extended with nonmembership witnesses, and the random value u was eliminated [31,32]. Previous work also showed how to publicly update nonmembership witnesses in constant time; note that these adjustments can also be applied to the latter scheme [31]. The computation of a nonmembership witness for a value y_j ∉ X uses the fact that the polynomial h(Z) = ∏_{x∈X}(x + Z), divided by (y_j + Z), leaves a nonzero remainder d. Such a witness takes the form (A, d) = (g^{(h(s)−d)/(y_j+s)}, d) and may be validated by checking whether e(acc_X, g) = e(A, g^{y_j}·g^s) · e(g, g)^d.
Accumulator Based on t-DHE Assumption
Diffie-Hellman Exponent (DHE) Assumption. The t-DHE problem in a group G of prime order q is defined as follows: on input {g, g₁, g₂, . . . , g_t, g_{t+2}, . . . , g_{2t}} ∈ G^{2t}, where g_i = g^{(γ^i)} for a random exponent γ, output g_{t+1}. The t-DHE assumption states that this problem is hard to solve.
Camenisch et al. [25] gave a t-bound accumulator scheme based on the t-DHE assumption which, like the accumulator in the t-SDH setting, uses a group G of prime order p generated by g and a bilinear map e: G × G → G_T. In addition, it needs a signature scheme with corresponding key pair (sk_sig, pk_sig). Here, sk_acc = sk_sig and the public key is pk_acc = (g₁, . . . , g_t, g_{t+2}, . . . , g_{2t}, z, pk_sig) = (g^{c}, . . . , g^{c^t}, g^{c^{t+2}}, . . . , g^{c^{2t}}, e(g, g)^{c^{t+1}}, pk_sig), with c ←R Z_p^*. A set X = {x₁, . . . , x_m} with m ≤ t can be accumulated by computing acc_X ← ∏_{i=1}^{m} g_{t+1−i} and signing g_i together with x_i using sk_sig, thereby binding the value x_i to g_i. The witness wit_{x_j} of x_j ∈ X is wit_{x_j} ← ∏_{i=1, i≠j}^{m} g_{t+1−i+j}. Membership of x_j can be verified by checking whether e(g_j, acc_X) = z · e(g, wit_{x_j}) holds and by verifying the signatures on g_j and x_j under pk_sig.
This scheme allows public updates of witnesses and public deletion of values from the accumulator, as these operations require only pk_acc. However, if a value x_i is to be added to the accumulator, the secret signing key sk_acc is required to create signatures on g_i and x_i that link the value x_i to this parameter. Therefore, public addition would require that a signature for every potential value be stored in the public parameters. Obviously, this seems impractical except for small accumulation domains.
Cryptographic Accumulator Schemes in the Hidden Order Group and Known Order Group
Since the introduction of the cryptographic accumulator, many accumulator schemes with different characteristics have been proposed. Broadly, the constructions fall into two families: schemes in hidden-order groups and schemes in known-order groups [33].
Hidden Order Group.
The original RSA-based schemes were developed by Baric and Pfitzmann [5], who strengthened the original concept to collision-free security. Sander [21] suggested using RSA moduli of unknown factorization to construct trapdoor-free accumulators. Camenisch and Lysyanskaya [2] extended the earlier scheme with the ability to dynamically add and delete values, which constitutes the first dynamic accumulator scheme; their scheme also supports public updates of existing witnesses, that is, updates without knowledge of any trapdoor. After that, support for nonmembership witnesses was added, yielding a universal dynamic accumulator; an optimization for updating nonmembership witnesses more efficiently was also proposed, but shortcomings were later found [34,35]. Lipmaa [36] generalized the RSA accumulator to modules over Euclidean rings. In all the above schemes, the accumulation domain is limited to primes to ensure that there are no collisions. Tsudik and Xu [37] proposed a variant that allows the accumulation of semiprimes: assuming the semiprimes used are hard to factor and their factorization is publicly unknown, a collision-free accumulator is obtained. In addition, an accumulator scheme was proposed that allows arbitrary integers to be accumulated and supports batch updates of witnesses; however, that scheme was eventually broken.
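A minimal sketch of such a hidden-order (RSA-style) dynamic accumulator, in our own illustrative Python rather than any cited scheme's exact specification: values are primes, adding a value exponentiates the accumulator, and witnesses update publicly. The modulus below is tiny and the code is not secure.

```python
# Toy RSA accumulator: acc = g^{prod of accumulated primes} mod N.
N = 104723 * 104729        # stand-in modulus; real schemes need an RSA modulus
                           # whose factorization is unknown (trapdoor-free
                           # variants use moduli of unknown factorization)
g = 3                      # public base
primes = [5, 7, 11]        # accumulated values (restricted to primes)

def accumulate(values):
    acc = g
    for x in values:
        acc = pow(acc, x, N)       # adding x: acc <- acc^x mod N
    return acc

acc = accumulate(primes)
wit_7 = accumulate([x for x in primes if x != 7])
assert pow(wit_7, 7, N) == acc     # membership check: wit^x == acc

# Public witness update when a new value y is added (no trapdoor needed):
y = 13
acc_new = pow(acc, y, N)
wit_7_new = pow(wit_7, y, N)
assert pow(wit_7_new, 7, N) == acc_new
```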
Known Order Group.
Nguyen [22] proposed a dynamic accumulator scheme suitable for pairing-friendly groups of prime order p. It is secure under the t-SDH assumption and allows up to t values from the domain Z_p to be accumulated. Later, Damgard and Triandopoulos [31] and Au et al. [32] extended Nguyen's scheme with general functions. Recently, Acar and Nguyen [38] removed the upper limit t on the number of elements accumulated by the t-SDH accumulator; to do this, they used a collection of accumulators, each of which contains a subset of the entire set to be accumulated. Camenisch et al. [25] introduced another accumulator scheme for pairing-friendly groups of prime order; it supports public updates of witnesses, and its security rests on the t-DHE assumption. Table 5 shows the development of cryptographic accumulator schemes.
Application of the Cryptographic Accumulator in Digital Signature
6.1.1. Ring Signature. In anonymous authentication on trusted platforms, the length of a ring signature grows with the number of ring members, so large rings lead to low efficiency. Therefore, Xu et al. [40] proposed a ring-signature anonymous-authentication method based on the one-way accumulator and constructed its solution in detail. In the signing phase, the ring size is controlled by a one-way accumulator that accumulates the information of all members, so the ring does not become too large even for a considerable number of members. During verification, efficiency is improved: the hash, encryption, and decryption computing times are all reduced. Compared with a typical ring signature, the new solution has lower time and space complexity; at the same time, it preserves anonymity and validity, remedying a weakness of the traditional ring signature while remaining efficient under the same security premise.
Group Signature.

Tsudik and Xu [37] develop a novel, efficient, and provably secure group signature scheme based on a dynamic accumulator for composite values together with an efficient protocol for proving knowledge of the factorization of a committed value; accumulating composites allows authorization and ownership to be proved at the same time. It enables a group member to perform a lightweight authorization proof, so that the complexity of proving and verification is independent of the number of current or deleted members. Because a dynamic accumulator is used to facilitate authorization, the group manager must propagate certain information, such as the values deleted from the accumulator, whenever a member (or group of members) joins or leaves the group.
Encrypted Search.
The dynamic accumulator has been introduced into encrypted search schemes [41,42], improving existing blockchain-based decentralized-storage search schemes. The new scheme takes advantage of the efficient verifiability of witnesses in the dynamic accumulator and of the dynamic addition and deletion of elements in the accumulated value, balancing efficiency and flexibility. In [43], a dynamic accumulator is introduced into the encrypted search scheme of Hahn et al. (CCS'14) and adapted to the blockchain-based decentralized-storage scenario.
Revoking Anonymous Credentials.
The dynamic accumulator can be used to revoke ordinary credentials (and certificates): first, a unique value is added to each credential; then, the accumulator value over the unique values of all currently valid credentials is published [44]. Users can now convince the verifier that a credential is still valid by providing a witness for the unique value contained in the credential. To check a credential, the verifier therefore checks the issuer's signature, obtains the current accumulator value, and uses the witness provided by the user to verify that the unique value contained in the credential is included in the accumulator value.
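Schematically, in code (reusing the toy RSA accumulator from the earlier sketch; the issuer's signature on the accumulator value is elided, and all serials are illustrative):

```python
N = 104723 * 104729
g = 3

def accumulate(serials):
    acc = g
    for x in serials:
        acc = pow(acc, x, N)
    return acc

valid_serials = [101, 103, 107]            # unique (prime) values in credentials
published_acc = accumulate(valid_serials)  # issuer publishes (and signs) this

# Holder of the credential with serial 103 presents its witness:
wit = accumulate([x for x in valid_serials if x != 103])
# Verifier: check the issuer's signature on published_acc (omitted), then:
assert pow(wit, 103, N) == published_acc   # credential 103 is still valid

# Revocation: the issuer re-publishes the accumulator without serial 103,
# and the old witness no longer verifies.
published_acc = accumulate([101, 107])
assert pow(wit, 103, N) != published_acc
```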
For anonymous credentials, the same method can be used. However, the witness and the value contained in the credential can no longer be disclosed to the verifier, because this would completely compromise anonymity. Instead, the user applies a zero-knowledge proof to convince the verifier that the value contained in the credential is also contained in the cryptographic accumulator. Therefore, given an efficient protocol for proving that a committed value is contained in the accumulator, any anonymous credential scheme can be efficiently equipped with revocation.
Cryptographic Accumulator in Vector Commitment.
Catalano and Fiore [45] proposed a black-box construction of a cryptographic accumulator from vector commitments.
A vector commitment allows a concise commitment C to be formed for a vector X = (x_1, ..., x_n) such that it is computationally infeasible to open position i of C to a value x_i' different from x_i. The accumulation domain in the black-box construction is the set D = {1, ..., t}. The cryptographic accumulator is modeled as a commitment to a binary vector of length t; that is, bit i represents the presence or absence of element i ∈ D in the accumulator. The (non)membership of a value i can then be proved by opening position i, which is committed to 1 or 0, respectively.
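As a concrete, non-authorial instantiation of this idea, the sketch below uses a Merkle tree as the vector commitment to the t-bit characteristic vector of the set; opening position i to 1 or 0 then proves membership or nonmembership of i.

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def commit(bits):
    """Merkle root over the bit vector (t a power of two for simplicity)."""
    layer = [H(bytes([b])) for b in bits]
    while len(layer) > 1:
        layer = [H(layer[k] + layer[k + 1]) for k in range(0, len(layer), 2)]
    return layer[0]

def open_path(bits, i):
    """Sibling hashes needed to open position i."""
    layer = [H(bytes([b])) for b in bits]
    path = []
    while len(layer) > 1:
        path.append(layer[i ^ 1])
        layer = [H(layer[k] + layer[k + 1]) for k in range(0, len(layer), 2)]
        i //= 2
    return path

def verify(root, i, bit, path):
    node = H(bytes([bit]))
    for sib in path:
        node = H(node + sib) if i % 2 == 0 else H(sib + node)
        i //= 2
    return node == root

t = 8
members = {2, 5}                                 # subset of D = {0, ..., t-1}
bits = [1 if i in members else 0 for i in range(t)]
root = commit(bits)
assert verify(root, 5, 1, open_path(bits, 5))    # membership of 5
assert verify(root, 3, 0, open_path(bits, 3))    # nonmembership of 3
```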
Other Applications.
The applications of the cryptographic accumulator are shown in Figure 1.
Cryptographic accumulators can be applied to membership testing, distributed signatures, accountable certificate management, and authenticated dictionaries; they can also be used as building blocks for redactable and sanitizable signatures [46,47], homomorphic signatures [48,49], and privacy-preserving data outsourcing and authenticated data structures [50,51]. In addition, accumulator schemes can be used to prove knowledge of (non)membership witnesses for undisclosed values in zero knowledge [52,53], which is now widely used to revoke group signatures and anonymous credentials [54,55]. Recently, cryptographic accumulators have also been used in Zerocoin [56,57], an anonymous extension of the Bitcoin cryptocurrency. The cryptographic accumulator can therefore be applied in many settings, and readers can consult the literature above for the specific applications.

Table 5: The development of cryptographic accumulator schemes.

Hidden order group (accumulation domain limited to primes):
1997 [5] Improved the original 1993 RSA scheme and strengthened the original concept of collision-free safety.
1999 [21] Recommended using RSA moduli of unknown factorization to construct trapdoor-free accumulators.
2002 [2] Extended the 1997 scheme with the ability to dynamically add/delete values; the first dynamic accumulator.
2007 [24] Added support for nonmembership witnesses to the 2002 scheme, yielding a universal dynamic accumulator, and proposed an optimization to update nonmembership witnesses more efficiently.
2012 [34] Generalized the RSA accumulator to modules over Euclidean rings.

Hidden order group (accumulation domain limited to semiprimes):
2003 [37] Allowed semiprimes to be accumulated.

Other constructions:
2007 [29] Allowed arbitrary integers to be accumulated and supported batch updates of witnesses.
2019 [39] Hash-based dynamic accumulator that greatly reduces storage space.

Known order group:
2005 [22] A dynamic accumulator suitable for pairing-friendly groups of prime order p; secure under the t-SDH assumption; accumulates up to t values from the domain.
2008 [32] Extended the 2005 scheme with general functions.
2011 [38] Removed the upper limit t on the number of elements accumulated by the t-SDH accumulator.
Conclusion
The cryptographic accumulator is a basic and important tool in the field of cryptography and has been widely used in many settings.
This paper first introduced the types of cryptographic accumulators. Second, within the class of asymmetric accumulators, three different accumulator schemes were distinguished by the three security assumptions they rest on.
Third, several cryptographic accumulators based on these security assumptions were introduced. Fourth, the paper presented accumulator schemes with different characteristics. Finally, the applications of cryptographic accumulators were summarized. With the rapid development of big-data security and blockchain, cryptographic accumulators are being used ever more widely, and there is still much room for development in the future.
Data Availability
All data supporting the findings of this study are included within the article.
Bubble Baryogenesis
We propose an alternative mechanism of baryogenesis in which a scalar baryon undergoes a percolating first-order phase transition in the early Universe. The potential barrier that divides the phases contains explicit B and CP violation and the corresponding instanton that mediates decay is therefore asymmetric. The nucleation and growth of these asymmetric bubbles dynamically generates baryons, which thermalize after percolation; bubble collision dynamics can also add to the asymmetry yield. We present an explicit toy model that undergoes bubble baryogenesis, and numerically study the evolution of the baryon asymmetry through bubble nucleation and growth, bubble collisions, and washout. We discuss more realistic constructions, in which the scalar baryon and its potential arise amongst the color-breaking minima of the MSSM, or in the supersymmetric neutrino seesaw mechanism. Phenomenological consequences, such as gravitational waves, and possible applications to asymmetric dark-matter generation are also discussed.
I. INTRODUCTION
The standard model is incomplete: it does not accommodate the observed baryon asymmetry and therefore new physics is required. Substantial effort has been devoted to constructing theories that dynamically generate this asymmetry and some prominent contenders include GUT baryogenesis [1], electroweak baryogenesis [2], thermal leptogenesis [3], and Affleck-Dine baryogenesis [4,5]. This paper proposes a new mechanism, which we dub 'bubble baryogenesis'.
Like the Affleck-Dine mechanism, our setup employs a complex scalar baryon φ, represented in a polar decomposition as φ(x) = R(x) e^{iθ(x)}, (1) where R(x) and θ(x) are four-dimensional real scalar fields. Under baryon-number transformations U(1)_B, φ rephases, R is invariant, and θ shifts. The charge density of φ, n_B = R²θ̇, (2) is identified with the number density of baryons, so a baryon asymmetry is present in field configurations that have 'angular momentum' in field space. Constraints on B violation today imply that φ is currently at the origin of field space, so as not to spontaneously break B, and that the potential there has approximate U(1)_B, so as not to explicitly break it. In the early Universe, however, we take φ to be displaced from this minimum, to a place in the potential where B violation is more substantial. The observed baryon asymmetry is dynamically generated during the field's journey towards the origin.
In the Affleck-Dine mechanism φ evolves classically, relaxing uniformly towards the B-symmetric minimum. B-violating potential terms torque φ during its evolution, and so instead of moving in a straight line through field space, φ takes a curved trajectory; φ develops non-zero θ̇ and consequently non-zero B. The phase transition from the B-violating vacuum in the past to the B-symmetric vacuum today is second-order or higher-order, and the end result is a spatially homogeneous condensate carrying a non-zero baryon asymmetry.
But what if there is no classical trajectory connecting φ to the symmetric minimum? Bubble baryogenesis occurs when φ evolves via bubble nucleation, either through quantum tunneling or thermal excitation. Spherical bubbles of true, B-symmetric vacuum nucleate inside the false, B-violating background. The bubbles expand, collide, and eventually percolate; the phase transition completes when the entire Universe is in the B-symmetric phase. During this process, baryons are produced through two distinct mechanisms. First, just as φ receives a torque in Affleck-Dine, the instanton that mediates bubble nucleation also receives a torque from B-violating interactions. Consequently, the bubble wall takes a curved trajectory through field space, and it therefore accumulates B as it expands. Second, when the bubble walls collide, φ can be excited back into a region of the potential where B-violating terms are large, generating additional baryon asymmetry. In bubble baryogenesis, the phase transition is first-order, and the end result is a spatially inhomogeneous distribution of baryons. After percolation, the baryon asymmetry is assimilated into the thermal plasma of the early Universe.
Like bubble baryogenesis, electroweak baryogenesis also relies on a first-order phase transition in the early Universe. In that case, however, the tunneling scalar is the Higgs field, and baryon number is generated indirectly through scattering off bubble walls.
In Sec. II, we outline the basic elements of bubble baryogenesis and present a general analysis of the vacuum structure, nucleation rate, asymmetry generation, bubble collisions, and washout. In Sec. III, we define an explicit toy model, outline its cosmological history, and ascertain the final baryon asymmetry. Some more realistic examples, involving the neutrino seesaw mechanism and color-breaking minima, are then presented in Sec. IV. We discuss phenomenological signatures in Sec. V and conclude in Sec. VI.

[FIG. 1: A potential that yields a percolating first-order phase transition; the radial part of our potential has this behavior. We define a 'clock' parameter z that indexes the shape of the potential. At early times, z > 1 and the true minimum is B-violating and at φ ≠ 0. At late times, z < 0 and the only minimum is B-symmetric and at φ = 0. Vacuum decay is possible in the interval 1 > z > 0.]
II. GENERAL CONSIDERATIONS
Of the potential for the scalar baryon V (φ), we require two features: first, that it vary with time in such a way as to yield a percolating first-order phase transition in the early Universe; and second, that the potential accommodate explicit B-and CP -breaking dynamics. We discuss how these two criteria can be accommodated in Sec. II A and Sec. II B, respectively. Afterwards, we tackle the dynamics of baryon production, which occurs at two times: first, at nucleation, which we discuss in Sec. II C; and second, at collision, which we discuss in Sec. II D. Once generated, the baryon asymmetry must persist, and migrate into the standard-model sector; we discuss the washout and decay of φ particles in Sec. II E.
A. Vacuum Structure and Tunneling
A potential that achieves a first-order phase transition must evolve with time as in Fig. 1. In the early Universe, the stable minimum is B-violating and at φ ≠ 0. As time evolves, the energy density of this B-breaking vacuum increases until it is no longer the true vacuum; the lowest energy vacuum is now B-symmetric and at φ = 0, and bubble nucleation begins. At even later times, the energy density in the B-breaking vacuum increases so much that the minimum disappears entirely, ensuring that no region of the Universe remains stuck there. For the transition to be first-order, the bubbles must percolate before this time.
It is convenient to introduce a dimensionless 'clock' parameter z that characterizes the evolution of the potential. The two most important events, when the minima become degenerate and when the B-breaking minimum disappears, are taken to be at z = 1 and z = 0, respectively, as in Fig. 1. In bubble baryogenesis, z decreases monotonically; it is a reparameterization of time. Bubble nucleation occurs in the window 1 > z > 0. Tunneling is mediated by an instanton, a solution to the Euclidean equations of motion with a single negative mode. The rate per unit volume per unit time is given by Γ = K e^{−ΔS}, (4) where K is a determinant factor of order the characteristic mass scale of the potential to the fourth power, and ΔS is the difference in Euclidean action between the instanton and the false vacuum [6]. At z = 1, tunneling is forbidden, ΔS is divergent, and Γ is zero. At z = 0, ΔS is zero and semiclassical tunneling through the barrier is overcome by classical evolution down the potential. After nucleation, the energy difference across the bubble wall causes it to accelerate outward, rapidly approaching the speed of light. Bubbles nucleate, expand, and collide until, at percolation, the entire Universe is in the true vacuum. When does percolation occur?
Pick a point in space at time z. The expected number of bubbles N that have overlapped this point is N(z) = ∫ dz' (dt/dz') Γ(z') V(z, z'), where V(z, z') is the three-volume at time z' of the past lightcone that emerged from our point in space at time z, and dt/dz' is a Jacobian factor. The integrand therefore is the probability that a bubble nucleates at a time z' in the right position to convert our point to the true vacuum, and it is integrated over all past z'. Percolation occurs at a time z_* satisfying N(z_*) = 1, (6) so that at least one bubble has nucleated in the past lightcone of every point in space. As long as z is changing slowly enough with time, then N(z_*) ∼ Γ_* H_*^{−4}, where throughout, the * subscript denotes a quantity evaluated at z = z_*. The integral is dominated by values of z' close to z_*, and the percolation condition can be approximated by Γ_* ∼ H_*^4. This means that bubbles typically nucleate one Hubble time before percolation, with roughly one bubble per Hubble volume at percolation, as shown in Fig. 2. Though some bubbles do nucleate before this time, the rate is too small to induce percolation.
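To make the percolation criterion concrete, here is a minimal numerical sketch. The decay-rate model below (the prefactor K ∼ m⁴, the toy bounce action ΔS(z), and the linear clock z(t)) is invented for illustration and is not taken from the text; only the criterion Γ_* ∼ H_*⁴ is.

```python
import math
from scipy.optimize import brentq

m = 1.0e10                       # characteristic mass scale (arbitrary units)
K = m**4                         # determinant prefactor ~ m^4

def dS(z):                       # toy bounce action: diverges at z=1, zero at z=0
    return 300.0 * z / (1.0 - z)

def z_of_t(t):                   # toy clock: z decreases linearly in time
    return max(0.0, 1.0 - t)

def H(t):                        # matter-dominated expansion after inflation
    return 2.0 / (3.0 * t)

def log_ratio(t):                # log of Gamma(t) / H(t)^4
    return math.log(K) - dS(z_of_t(t)) - 4.0 * math.log(H(t))

t_star = brentq(log_ratio, 1e-3, 0.999)    # root of Gamma = H^4
print(f"percolation at t* = {t_star:.3f}, z* = {z_of_t(t_star):.3f}")
```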
A time-varying potential of the form in Fig. 1 can arise naturally in two ways, depending on whether percolation completes before or after reheating. If the phase transition occurs before reheating, then a direct coupling of the scalar baryon φ to the inflaton field will give rise to a time-dependent effective potential. This is the same type of coupling that is used to generate the evolving potential in the Affleck-Dine mechanism [7]. From an effective-field-theory standpoint, such couplings are mandatory unless forbidden by a symmetry, and while they are often non-renormalizable they can nonetheless play an essential role in the physics. In this scenario, the phase transition takes place between the end of inflation (so as not to dilute the baryons) and reheating. During this matter-dominated period the inflaton is oscillating about its minimum, but has not yet decayed to standard model particles.
If the phase transition occurs after reheating, then a direct coupling of φ to the big bang plasma will give rise to a time-dependent thermal correction to the effective potential. The same couplings that allow φ to decay-so that the baryon asymmetry can migrate to the standard model sector-can generate such terms. Note that similar thermal effects give rise to the first-order phase transition in electroweak baryogenesis [8], with the crucial difference that the relevant scalar field there, the Higgs boson, is not charged under U (1) B . In electroweak baryogenesis the purpose of the first-order phase transition is merely to provide an out of equilibrium environment for particle and anti-particle scattering processes.
While bubble baryogenesis can occur in either scenario, the models we study in Sec. III and Sec. IV are in the former category, where it is easier to suppress thermal washout.
B. B and CP Violation
The Sakharov conditions [9] state that successful baryogenesis requires both B- and CP-violating dynamics. Under B and CP transformations the angular field component transforms as θ → θ + constant, (8) and θ → −θ, (9) respectively. By Eq. (8), B violation requires a potential that violates the shift symmetry on θ, i.e. carries explicit dependence on θ. Such terms are necessary for asymmetry generation because in their absence the field has no reason to move in the θ-direction of field space, so by Eq. (2) no asymmetry is produced. In the Affleck-Dine mechanism, these B-violating terms torque the field in the θ-direction on its journey back to the origin; in bubble baryogenesis, they force the instanton, which solves the Euclidean equations of motion, to arc in the θ-direction as a function of spacetime. By Eq. (9), CP violation requires either potential couplings with complex phases or spontaneous breaking by an initial φ localized at a CP-odd minimum. CP violation is necessary for asymmetry generation because in its absence, though the potential can exert a torque, φ's trajectory is just as likely to curve in the +θ-direction as it is to curve in the −θ-direction. That is, in a CP-conserving theory, two CP-conjugate instantons contribute equally to the path integral. The percolating transition would therefore be comprised of an equal number of bubbles with positive and negative B, which average out to a B-symmetric Universe.
Explicit CP violation breaks the degeneracy between these two CP -conjugate instantons. For example, one of them may disappear entirely if it is no longer a saddle point of the action. Alternatively, both CP -conjugate instantons can persist, but the one with a larger associated Euclidean action will be exponentially subdominant to the process of vacuum decay. This will be true in the models we consider here, so we will only be concerned with the dynamics of the dominant instanton contribution.
In general, it is useful to characterize the degree of B- and CP-violating effects with a dimensionless 'efficiency' parameter ε, which is proportional to the B- and CP-violating parameters in such a way that n_B ∝ ε. From an effective-field-theory perspective, ε ≪ 1 is technically natural, but ε ∼ 1 is also allowed. Bubble baryogenesis generates baryon asymmetry in two ways. First, the instanton itself is asymmetric, which manifests itself as a surface density of baryons on the bubble walls. Second, bubble collisions excite the field back into the B-violating region of the potential. The net number density of baryons is given by a sum n_B = n_B,instanton + n_B,collision. (11) We will discuss the two contributions in detail in Sec. II C and Sec. II D, respectively, and show that, for a broad class of models, both of these contributions scale as n_B ∼ ε R_F² H_*, (12) where ε is the dimensionless measure of B and CP violation, R_F = |φ_F| in the false vacuum, and H_* is Hubble at the time of percolation.
C. Asymmetry Generation: Instanton
In the presence of B- and CP-violating potential terms, the instanton will arc in the θ-direction. We are interested in computing both the net torque, which fixes the baryon asymmetry in the walls, and the bubble nucleation rate, which sets the percolation time z_*. For both reasons, we need to find the instanton, since it characterizes the most likely bubble configuration to nucleate, and gives the rate via Eq. (4). Assuming SO(4) symmetry of the instanton, the field components R(ρ) and θ(ρ) are functions of the Euclidean radial variable ρ alone, and the equations of motion for the instanton take the form R'' + (3/ρ)R' = Rθ'² + ∂V/∂R and θ'' + (3/ρ)θ' + (2R'/R)θ' = (1/R²)∂V/∂θ. Boundary conditions are regularity at the origin, so R'(0) = θ'(0) = 0, and that far from the bubble the fields settle into their false vacuum values, so R(∞) = R_F and θ(∞) = θ_F. Here, we are assuming that bubble nucleation happens by quantum tunneling through the potential barrier, as would be the case if percolation occurs before reheating. If instead bubble nucleation occurs primarily by thermal activation over the potential barrier, then the Euclidean time coordinate is periodic, the SO(4) symmetry becomes SO(3) × U(1), and the equations of motion change correspondingly [10].
The field value at the center of the bubble is near, but not exactly at, the true vacuum. Solutions are found by adjusting the field value at ρ = 0 so that the boundary conditions are satisfied as ρ → ∞; that is, we apply Coleman's overshoot/undershoot algorithm generalized to two scalar field directions. A sample instanton, and its curved trajectory through field space, are shown in Figs. 3 and 4.

[FIG. 5: The total B that results from a numeric simulation of an expanding 1+1-dimensional domain wall. At nucleation, the bubble wall has B = 0; but as the wall accelerates, B rapidly asymptotes to μ², indicated by the dashed gray line. In higher dimensions than 1+1, total B would grow with the surface area of the bubble, but in 1+1 dimensions the 'surface area' is constant. The simulation was run in an expanding FRW background; gravitational expansion does not affect the value of μ².]
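The shooting procedure is easy to illustrate in one field dimension. The sketch below is a minimal, hedged implementation, not the authors' code: it assumes an illustrative tilted double-well potential with the false vacuum at φ = −1 and the true vacuum at φ = +1 (the orientation of the vacua is immaterial for the algorithm), and bisects on the release point φ(0); the two-field generalization shoots on R(0) and θ(0) simultaneously.

```python
# Overshoot/undershoot solver for an SO(4)-symmetric bounce:
# f'' + (3/rho) f' = dV/df, with f'(0) = 0 and f(inf) = f_false.
# Illustrative potential: V = (f^2-1)^2/4 + (delta/2)(f^3/3 - f),
# with false vacuum exactly at f = -1 and true vacuum at f = +1.
from scipy.integrate import solve_ivp

delta = 0.2
dV = lambda f: (f * f - 1.0) * (f + 0.5 * delta)

def shoot(f0, rho_max=60.0):
    """Release the field at rest at f(0) = f0 and classify the trajectory."""
    def rhs(rho, y):
        f, fp = y
        return [fp, dV(f) - 3.0 * fp / rho]
    def overshoot(rho, y):   # f passes beyond the false vacuum at -1
        return y[0] + 1.0
    def undershoot(rho, y):  # f' crosses zero from below: turned around
        return y[1]
    overshoot.terminal = undershoot.terminal = True
    undershoot.direction = 1.0   # ignore the f'(0) = 0 starting condition
    sol = solve_ivp(rhs, (1e-6, rho_max), [f0, 0.0],
                    events=(overshoot, undershoot), rtol=1e-10)
    return 'overshoot' if sol.t_events[0].size else 'undershoot'

lo, hi = 0.0, 1.0       # bracket f(0) between the barrier and the true vacuum
for _ in range(60):     # bisect: more energy -> overshoot, less -> undershoot
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if shoot(mid) == 'undershoot' else (lo, mid)
print(f"bounce release point f(0) ~ {0.5 * (lo + hi):.6f}")
```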
To estimate the extent of the curving, consider a simple potential set by only two parameters: m, the characteristic mass scale of the potential in the R-direction, and ε, a dimensionless parameter that characterizes the degree of the B violation. Because m is the only dimensional parameter, R_F ∼ m, and the instanton solution varies in ρ on scales of order ρ ∼ m^{−1}. The parametric scaling of Eq. (13) is then such that the trajectory arcs in field space by Δθ ∼ ε. (15) To determine the O(1) factors here requires finding the instantons numerically as above. Given instanton profiles R(ρ) and θ(ρ), the evolution of the bubble post nucleation follows from analytic continuation of the instanton from Euclidean to Minkowski signature. The typical size of a bubble at the time of nucleation is much smaller than Hubble, so we can ignore the expansion of the Universe and simply continue ρ → √(r² − t²), where r is the radial coordinate away from the center of the bubble, and t is time. At nucleation, t = 0 and so ρ = r, and the field profile that nucleates is a slice through the center of the instanton.
Because the bubble nucleates at rest,θ = B = 0. However, as the wall accelerates outwards, spacetime points in the wall traverse an angle in field space ∆θ in less and less time, so the baryon density inside the wall grows and grows. At the same time, as the bubble expands, the thickness of the wall becomes Lorentz contracted, so the baryon density is supported on a smaller and smaller region. As we will now show, these two effects cancel at late times, and the accelerating bubble wall asymptotes to a constant surface density µ 2 of baryons.
To compute the baryon asymmetry contained in a single bubble wall, we integrate Eq. (2) on a fixed time slice t = τ long after nucleation, B(τ) = ∫ d³x R²θ̇, where we plug in the analytically continued classical instanton profiles R(ρ) and θ(ρ); we are working in the semiclassical approximation, where loop corrections to this formula are small. Using spherical symmetry, this becomes B(τ) = 4π ∫_τ^∞ dr r² R²θ̇ = −4πτ ∫_0^∞ dρ √(ρ² + τ²) R²θ', where the prime indicates a derivative with respect to ρ.
In the first expression, the integral is taken from τ to ∞ because analytic continuation of the instanton only gives the field profile outside of the light-cone; inside the light-cone, the field relaxes towards the B-symmetric minimum, producing negligible baryons. The second expression is obtained by changing integration variable from r to ρ = √(r² − τ²). Lastly, we add the approximation that τ is a long time after nucleation, considerably bigger than the size of the bubble at nucleation. Since R²θ' dies off exponentially at large ρ, τ ≫ ρ over the region where the integrand has support, and B(τ) ≈ 4πτ² μ², (19) which is the bubble surface area at late times (4πτ²), multiplied by the number of baryons per unit surface area μ² ≡ −∫_0^∞ dρ R²θ'. (20) Because it is a line integral along the instanton field trajectory, μ² is constant in time. In the spirit of Eq. (15), a parametric estimate is that μ² ∼ ε R_F². (21) Note that the sign of μ² depends on the direction in which θ arcs, which in turn depends on the imaginary phases in the potential. Fig. 5 shows the total baryon number of a 1+1-dimensional expanding bubble. The stationary bubble wall carries zero baryon asymmetry, but as it accelerates up to the speed of light the number of baryons per surface area of the bubble wall grows and quickly asymptotes to μ². The integrated number of baryons scales with the surface area; in 1+1 dimensions the 'wall' is point-like, and its 'surface area' is constant. Though we derived Eq. (20) in flat space, it remains true in an FRW background. Expansion acts globally on the bubble, affecting the growth of proper surface area with time, but it does not act locally: because the bubble wall is much thinner than H^{−1}, the field profile is not significantly affected, and so μ² remains as in Eq. (20). Though the scale factor a(t) does modify the equation of motion, its effect is far smaller than the gradient and potential terms, which set the shape of the wall [11].
Because the baryon asymmetry scales with the surface area of the bubble wall rather than the volume, the baryon number density from a single bubble dilutes with time, vanishing as τ → ∞. The baryon asymmetry that is produced is carried away to infinity by the accelerating bubble walls. Thus, to explain the observed asymmetry, there must be many bubbles, and the bubbles must percolate-not only to ensure that φ is in the B-symmetric minimum today, but also to preserve the asymmetry we have generated from escaping to infinity.
At percolation, there is on average one bubble wall stretched across each Hubble volume. The expected number of baryons per Hubble volume at percolation, therefore, is the surface area of that wall (∼ H_*^{−2}) times the baryon surface density (μ²). The number density of baryons right before collisions, therefore, scales like n_B,instanton ∼ μ² H_*. (22) In the next subsection, we will argue that these baryons are approximately conserved by the collision, so that this n_B,instanton contributes directly to the total n_B in Eq. (11).
D. Asymmetry Generation: Collisions
To gain insight into the dynamics of bubble collisions, we have run numeric simulations for a variety of different models. Specifically, we have studied the collisions of 1+1-dimensional bubble walls in an expanding, matter-dominated FRW background with scale factor a(t), for an array of different potentials. The equation of motion for φ, ignoring gravitational backreaction, is φ̈ + 3Hφ̇ − a^{−2}φ'' = −∂V/∂φ̄. (23) As initial conditions, we used the exact 1+1-dimensional instanton profiles. The dynamics of Eq. (23) are complicated, but the moment of collision is simple. The colliding walls are relativistic: they are moving very fast and are very thin, by Lorentz contraction. This means that the time scale on which they cross is far smaller than both H^{−1} (the time scale on which FRW expansion acts) and m^{−1} (the time scale on which the potential acts). Gravity and the potential, therefore, can both be ignored, and the field approximately obeys the free wave equation φ̈ − φ'' = 0, where we have rescaled x by a(t) at the time of collision. Linear superposition of waves is an exact solution to the free wave equation, so the impinging walls merely pass through one another, and the field in between is deposited at −φ_F, as shown in Fig. 6. This behavior is generic: as long as the walls are moving fast enough, the field value at the intersection of the walls is −φ_F, independent of the precise shape of the bubble wall or the structure of the potential.
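As a cross-check of this superposition argument, the following sketch (ours; the tanh wall profile and all parameters are invented for illustration) adds two boosted walls, each interpolating from φ_F outside the bubbles to 0 inside, and subtracts φ_F; the field at the collision point passes from +φ_F through 0 to −φ_F as the walls cross. A rigidly moving profile solves the free wave equation only in the limit v → 1, which is the relevant relativistic regime.

```python
import numpy as np

phi_F, v = 1.0, 0.95
gamma = 1.0 / np.sqrt(1.0 - v**2)
wall = lambda u: 0.5 * phi_F * (1.0 + np.tanh(gamma * u))  # phi_F ahead, 0 behind

def phi(x, t):
    # Superposition ansatz: right-moving wall + left-moving wall - phi_F.
    wall_R = wall(x - v * t + 5.0)   # right edge of the left bubble
    wall_L = wall(5.0 - x - v * t)   # left edge of the right bubble
    return wall_R + wall_L - phi_F

for t in (0.0, 5.0 / v, 20.0):
    print(f"t = {t:5.2f}   phi(0, t) = {phi(0.0, t):+.4f}")
# Output: +phi_F before the collision, 0 mid-crossing, -phi_F afterwards.
```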
On longer time scales, of order m −1 , linear superposition is no longer a good approximation and the field begins to evolve under the force of the potential. The field in between the crossed walls rolls down the potential and begins to oscillate around a local minimum. There are two behaviors, depending on which local minimum.
I. Oscillation about the true minimum. The field between the walls, deposited on the other side of the potential at −φ F , is no longer in a vacuum state; under the force of the potential, it evolves back towards the origin, as shown in the bottom panel of Fig. 6. By the time the field reaches the true minimum, it has lost enough kinetic energy to gradients and Hubble friction that it cannot escape; it oscillates, Hubble friction damps the oscillations, and eventually the field settles into the true minimum. Fig. 7 shows a collision that illustrates this behavior, and Fig. 8 depicts the corresponding baryon number density.
As the field at the collision site evolves back towards the origin, it moves through a B-violating region of the potential, so a second wave of baryon generation takes place. Fig. 9 shows the integrated baryon number as a function of time, which in d+1 dimensions is B(t) = ∫ d^d x R²θ̇. (24) Before the collision B(t) was constant, as in Fig. 5, but it makes an abrupt jump upwards at the moment of collision. The evolution of the field from −φ_F generates new baryons inside the collision lightcone, visible in Fig. 8. A simple estimate can be made for the baryon number generated during this evolution. The process in which the field in the collision region evolves from −φ_F to the origin can be thought of as a localized Affleck-Dine condensate forming and dissolving at the collision site. If the field takes a time Δt to evolve from −φ_F to 0, then the spatial width of this condensate is of order Δt, since the bubble walls propagate at nearly the speed of light. The Affleck-Dine mechanism, were it to occur in this potential, would generate a number density of baryons which scales like R²θ̇ ∼ ε R_F² m. Multiplying this by the width of the condensate and by the surface area of the collision site per unit volume at percolation, H_*, gives the expected number density of baryons generated by the collision as n_B,collision ∼ ε R_F² m Δt H_* ∼ ε R_F² H_*, (25) where Δt ∼ 1/m since m is the characteristic scale of the potential. The contribution to n_B from bubble collisions thus has the same parametric dependence as the contribution from the instanton.
After the collision, the energy in the field and the walls dissipates, and the asymmetry spreads; the way in which this happens is potential-dependent. Potentials which grow more slowly than quadratically at the origin admit two related non-topological solitons that can temporarily trap energy and baryon number [12,13]. Field excitations that move solely in the θ-direction, locally orbiting the origin, are called Q-balls [14,15]; their charge contributes a centrifugal term to their effective potential that makes them absolutely stable. Field excitations that move solely in the R-direction, locally oscillating along a line through the origin, are called oscillons [16]; these excitations are long-lived, but not eternal. Our collisions produce a hybrid: it oscillates predominantly in the R-direction, and can therefore be seen in Fig. 7, but it also carries non-zero baryon number, and can therefore be seen in Fig. 8. In other words, the field is locally executing very elliptical orbits about the origin. A number of these hybrids are visible: a stationary one emerges from the collision site, and several boosted ones fall off the wall as it propagates. Because these non-topological field configurations probe larger field amplitudes, they are still sensitive to U(1)_B violation, as we discuss in the next subsection.

[FIG. 9: The integrated baryon number B(t) for the collision of Fig. 7. Superimposed, in red, is a plot of Re(φ(x = 0)) during the collision. B is flat before the collision, as in Fig. 5. At the collision, the field is deposited at −φ_F and evolves back towards the origin; during that evolution, B surges.]
II. Pockets of a false vacuum. For certain potentials, instead of oscillating around the true vacuum, the field at the collision site ends up in a false vacuum. This can happen in two ways. First, if the potential at −φ_F is very sloped, then the field can overshoot the true minimum and land back in the original false vacuum, as was noticed in the earliest simulations of bubble collisions [17]. Second, if the potential happens to have an additional local minimum at −φ_F, then the field never has to evolve anywhere, since it is already in a false vacuum [18]. The presence of an additional minimum at −φ_F does not necessarily require tuning; potentials with approximate U(1)_B have this feature automatically.
The false vacuum provides a locally stable state for the field, around which it can execute small oscillations. Though locally stable, the field does not remain in the false vacuum forever. The walls, moving away from the collision, now have true vacuum on the outside and false vacuum on the inside. This induces a pressure that pushes the walls back towards the collision site, so the walls eventually slow, turn around, and re-cross on a time scale of order H −1 * , which is far longer than the time scale of oscillations about the minimum. The formation and collapse of long-lived pockets of false vacuum is shown in Fig. 10. While the field lingers in the false vacuum, B is not conserved, and the fact that the field is oscillating around the false vacuum can yield large fluctuations in the baryon asymmetry. In this case, the asymmetry is presumably still non-zero, but it is difficult to get an analytic handle on it, and numerical simulations are required.
[FIG. 10: The formation and collapse of a long-lived pocket of false vacuum. The field oscillates about this vacuum, while the bubble walls move outward, slow, turn around, and re-cross. This process can occur several times.]

For potentials with broken U(1)_B, there is no reason for −φ_F to be a local minimum, and the steepness of the potential at −φ_F determines which behavior occurs. For potentials with approximate U(1)_B, the physics depends on the nature of the false vacuum. When −φ_F has a strong basin of attraction, which happens at larger z_*,
collisions tend to exhibit behavior II; when −φ F has a weak basin of attraction, which happens at smaller z * , collisions tend to exhibit behavior I. In the toy model of Sec. III, percolation tends to happen at smaller z * .
E. Washout and Decay
After percolation, a baryon asymmetry density of order ∼ ε R_F² H_* is inhomogeneously distributed throughout the Universe in the form of the φ field. In order to explain observation, this asymmetry must persist and it must migrate to the standard-model sector.
For the asymmetry to persist, we must avoid two types of washout: classical and thermal. Classical washout refers to depletion of the asymmetry by evolution under the classical equations of motion from B-violating operators present in the early Universe. The dynamics of classical washout depend on the dimensionality of these operators. In the case of higher-dimension operators, washout can only be effective at large field values. But, since the expansion of the Universe and the growth of the bubble both tend to damp field excitations toward the origin, classical washout from higher-dimension interactions is typically evaded, as is true in the Affleck-Dine mechanism. In the case of marginal or super-renormalizable operators, classical washout may be active even as the fields damp to the origin. Consider, for instance, the case in which the potential for φ near the origin is of the form m²|φ|² + ε m²(φ² + h.c.), where ε is a dimensionless measure of B violation. The ε term induces an 'ellipticity' in the potential that splits the mass eigenstates, causing the field to precess as it orbits the origin. The total baryon asymmetry, therefore, oscillates around its initial value. The period of these oscillations scales like ∼ (εm)^{−1}. Whether the baryon number is spread out evenly or localized in non-topological solitons, the effect of classical washout is to make it oscillate.
For small ε, this precession frequency, of order εm, is far lower than the characteristic oscillation frequency of the field, m; the field must sit at the origin through many oscillations if the asymmetry is to change appreciably. As long as the φ condensate decays before this time, the asymmetry is preserved. Besides, even if it does not decay in time, the asymmetry oscillates around its initial value; it does not damp. Unless there is a conspiracy between the decay time and the oscillation time, the final asymmetry will be an O(1) fraction of the initial asymmetry. Alternatively, in certain models the B violation is O(1), in which case numerical simulation is necessary to evaluate the degree of washout.
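The precession is easy to exhibit numerically. The following minimal sketch (with an invented ε and unit mass) evolves φ̈ = −m²φ − 2εm²φ̄, the equation of motion from the potential above, and tracks n_B = Im(φ̄φ̇); the charge oscillates on the slow timescale of order (εm)^{−1}, while the field itself oscillates on the much shorter timescale m^{−1}.

```python
import numpy as np
from scipy.integrate import solve_ivp

m, eps = 1.0, 0.05

def rhs(t, y):
    phi, phidot = y[0] + 1j * y[1], y[2] + 1j * y[3]
    acc = -m**2 * phi - 2.0 * eps * m**2 * np.conj(phi)  # B-violating force
    return [phidot.real, phidot.imag, acc.real, acc.imag]

y0 = [1.0, 0.0, 0.0, m]          # circular orbit: maximal initial charge
ts = np.linspace(0.0, 4 * np.pi / (2 * eps * m), 400)   # two precession periods
sol = solve_ivp(rhs, (ts[0], ts[-1]), y0, t_eval=ts, rtol=1e-9)
phi = sol.y[0] + 1j * sol.y[1]
phid = sol.y[2] + 1j * sol.y[3]
nB = np.imag(np.conj(phi) * phid)                       # baryon charge density
print(f"n_B(0) = {nB[0]:+.3f}; over the run n_B swings between "
      f"{nB.min():+.3f} and {nB.max():+.3f}")
```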
Thermal washout refers to depletion of B that occurs after the asymmetry is absorbed into the plasma of the early Universe, through scattering processes which involve B-violating interactions. Such scattering tends to restore chemical equilibrium and therefore deplete the asymmetry. Even if these interactions arise from higherdimension operators, they can still be significant since the associated scattering rates grow with temperature. However, as we will discuss in Sec. III, our models evade thermal washout because B-violation is sourced by interactions between φ and the inflaton. B-violation occurs when the temperature is small, before the inflaton has decayed; when the inflaton decays and the temperature becomes large, the B-violating interactions have shut off and the asymmetry is frozen in. Thermal washout is avoided because B-violating interactions and the thermal plasma are never present at the same time.
Finally, for the baryon asymmetry to migrate to the standard-model sector, the φ condensate must decay. A direct coupling of φ to some operator comprised of standard-model fields suffices to transfer the asymmetry; the decay rate of φ depends on the strength of this coupling. For homogeneous fields, the decay rate of a condensate is more or less the same as the decay rate of φ quanta in the vacuum [19]. In bubble baryogenesis, however, the field configuration is highly inhomogeneous. In particular, because the bubble walls are boosted to near the speed of light shortly after nucleation, one might worry that these field fluctuations will be long-lived due to a Lorentz boost factor. However, as discussed earlier, after the collision these boosted field configurations are not solutions to the equations of motion, so they broaden and dissolve. As the quanta become softer, they can decay. In certain models of bubble baryogenesis, the decay of φ can be very fast, much faster than the Hubble parameter at the time of percolation. In such cases, the φ condensate decays to particles nearly instantaneously after the nucleation event. Afterwards, there is no classical field, meaning classical washout is straightforwardly evaded.
III. TOY MODEL
Our discussion in Sec. II was framed rather broadly, so in this section we now study a concrete setup. A working model of bubble baryogenesis must accommodate the following criteria: • Percolation. The Universe must efficiently transit from the false to true vacuum.
• Asymmetry. The theory parameters must accommodate the observed baryon asymmetry.
• Perturbativity. The effective couplings cannot blow up and all energy scales are bounded by the cutoff.
• No Washout. B-violating effects, classical or thermal, must be under control.
In this section we present a toy model which satisfies these criteria. As we will see, despite its simplicity the toy model may actually be phenomenologically viable.
A. Model Definition
Our toy model is defined by a potential of the form V = V₀ + ε V₁, where ε is a small parameter characterizing B- and CP-violating interactions and V₁ contains the B-breaking terms. The B-symmetric part V₀ is built from a mass term m²|φ|², a cubic A-term, and a stabilizing quartic λ|φ|⁴. In general, the shape of the potential will vary in time due to couplings between φ and the inflaton χ, whose vacuum expectation value is time-dependent. This occurs in Affleck-Dine models of baryogenesis, where such couplings induce Hubble-dependent parameters in the action. For simplicity, we take λ and A to be constant, but m² = m̃² − ρ/Λ², where ρ = 3H²m_Pl² is the energy density of the Universe. Here we require Λ² > 0 and m̃² > 0, so that B is spontaneously broken at early times but restored in the present day. In a supersymmetric context, such a ρ-dependence would originate from |χ|²|φ|²/Λ² in the Kahler potential.
If the couplings between φ and χ are B- and CP-violating, then V₁ will also contain time-dependent terms. At least two B-breaking operators are required; otherwise all CP phases can be removed by a field redefinition. As discussed in Sec. II E, if B violation is higher-dimensional, e.g. φ⁵ and φ⁶, then explicit breaking is localized far from the origin and classical washout is ameliorated by Hubble damping of the field. In contrast, renormalizable B-violating operators, e.g. φ² and φ³, typically mediate classical washout with model-dependent effects. Importantly, we assume that B-violating interactions are sourced by the inflaton alone, so B is restored after reheating; thus, thermal washout is evaded. This setup can easily be engineered by invoking additional symmetries under which both φ and χ transform. Let us outline the cosmological history of this model. We begin during inflation, when H and therefore all model parameters are constant in time. During this epoch, m² < 0 and the field resides at φ ≠ 0. After inflation ends, the inflaton starts to oscillate around its minimum and the Universe shifts from vacuum-energy domination to matter domination. As H decreases, m² eventually becomes positive, growing monotonically until the φ ≠ 0 vacuum becomes metastable. Once the nucleation rate rises sufficiently, percolation occurs. The B- and CP-violating interactions in the potential cause the nucleated bubble walls to accumulate baryons; subsequent bubble collisions yield an additional asymmetry component.
In the subsequent sections, we will analyze the vacuum structure, nucleation rate, and asymmetry generation in this model.
B. The Instanton
Neglecting effects proportional to ε, the Euclidean action for this theory is S_E = ∫ d⁴x_E (|∂φ|² + V₀). While the model parameters vary in time, they do so on timescales H^{−1} ≫ m^{−1}, so we treat them as constant in our analysis of the instanton. It is convenient to transform to dimensionless variables, rescaling φ and x_E by appropriate powers of m and λ; as a result of this change of variables, the rescaled Euclidean action Ŝ = λS is a function of the single dimensionless combination  = A²/(λm²) alone.
As discussed in Sec. II A, the variable z is a convenient reparametrization of time in which 1 > z > 0 corresponds to the epoch in which tunneling is allowed. We can express  in terms of z, with 1 > z > 0 mapping onto the range 4 >  > 32/9. Within this interval, there is a true vacuum at the origin and a false vacuum at R ≠ 0. Next, we evaluate ΔŜ by solving the associated Euclidean equation of motion for an SO(4)-symmetric ansatz R̂(ρ̂), subject to the initial condition ∂R̂/∂ρ̂ = 0 at ρ̂ = 0. Solving for ΔŜ numerically, we find that for z ≲ 0.25, ΔŜ is very well fitted by the function ΔŜ = 431.5 z^{0.679} + 8139.4 z^{2.27}. (34) As z → 0 the phase transition shifts from first-order to second-order. As z → 1 the bubble becomes thin-walled. From Fig. 11, it is clear that our numerics agree with analytic expressions in this regime. The determinant prefactor can also be straightforwardly estimated: from [20], the K factor in Eq. (4) is a ratio of functional determinants evaluated on the instanton solution φ(ρ) and on the false vacuum, with zero modes removed; using the fact that the instanton varies in ρ on scales of order m^{−1}, the determinant factors give K of order m⁴.
Up till now we have neglected B- and CP-violating effects proportional to ε. Employing our numerical code, we have generated instanton profiles for the potential at finite ε. Given these numerical solutions, we can compute μ², defined in Eq. (20) as the measure of the baryon asymmetry in the bubble wall. As shown in Fig. 12, at small ε these numerical results match the simple estimate for μ² described in Eq. (21). Here we used V₀ corresponding to z = 0.2 and V₁ = R² cos(2θ + π/4) + R³ cos(3θ).
C. Before Percolation
Consider the cosmological history of this model leading up to percolation. As shown in Sec. III B, the vacuum structure of the theory depends solely on Â, which varies in time with the energy density ρ. Shortly after inflation, m² < 0 and the potential has a single minimum at large field values. As the Universe cools, eventually m² > 0 and an additional local minimum forms at the origin; at this point  is divergent. As m² continues to increase,  monotonically decreases, and a first-order phase transition becomes possible in the window 4 >  > 32/9.
Plugging ρ = 3H²m_Pl² into Eq. (32) and Eq. (28), we obtain expressions for the important physical quantities as functions of z, where Â(z) is defined in Eq. (32). Given the parametric dependences in Eq. (38), it is clear that during the first-order phase transition the dimensionful parameters H, m, and A are all well below the cutoff Λ, so the effective-theory description remains valid. Eventually, the Universe cools and Hubble decreases enough that the first-order phase transition proceeds. Percolation occurs when Eq. (6) is satisfied, which is roughly when Γ_* ≈ H_*⁴; in the context of our model this approximation is accurate to 15%. Recall that Γ_* implicitly depends on z_* through the model parameters in Eq. (38). The criterion for percolation can be rewritten as K_* e^{−S_*} ≈ H_*⁴. The exponential factor essentially fixes the solution to this equation, and so the prefactors are only logarithmically important. Solving for S_* yields S_* ≈ log(K_*/H_*⁴) ≈ 4 log(m_*/H_*) + ..., where the ellipses denote logarithms of O(1) numbers. If S_* ≲ 1, then the nucleation rate is very high at the onset of percolation, indicating that the phase transition is bordering on second-order. While this is not necessarily bad in and of itself, in this regime φ evolves like a slow-roll Affleck-Dine condensate. Of course, we will be interested in the first-order regime, whereby tunneling is the dominant mode of the phase transition. To achieve this, we require a modestly sized instanton action, so Λ ≪ m_Pl, which is to say that the higher-dimension operators coupling the inflaton to φ cannot be Planck-slop operators, and must be suppressed by a lower scale.
In this toy model z_* is typically small and we are in the weakly first-order regime. Allowing both A and λ to vary with time as well can yield any value of z_* between 0 and 1.
D. After Percolation
Once percolation occurs, bubbles of true vacuum nucleate and soon fill the volume of space. Using Eq. (21), which applies for ε ≪ 1, we find that the surface density of baryons on the walls is μ² ∼ ε R_F². The total baryon number density n_B arising from the initial instanton plus the subsequent bubble collisions is given in Eq. (11). To compute the observed baryon asymmetry today, we need to consider the remainder of the cosmological history. Because the inflaton sources the B- and CP-violating interactions of φ, we require that the decay of the inflaton, and thus reheating, occur after the percolating phase transition. Both n_B and the inflaton energy density dilute as a^{−3} between percolation and reheating, so the asymmetric yield at the time of reheating is η_B ∼ n_B (H_R/H_*)² / T_R³ ∼ ε R_F² H_R² / (H_* T_R³), (44) where H_R and T_R are the Hubble parameter and the temperature, respectively, at the time of reheating. The above estimates neglect the effects of classical washout, but as discussed in Sec. II E these effects are expected to change the asymmetry by an O(1) fraction and are model-dependent.
To accommodate the present-day observed baryon asymmetry, we require that η_B ∼ 6 × 10^{−10}. The energy density of the Universe at reheating is bounded from above by the energy density at percolation, so T_R⁴ ∼ ρ_R ≲ ρ_*. In turn, this places an absolute upper limit on η_B from Eq. (44). Given the observed baryon asymmetry, this upper bound can be rephrased as a lower bound on m̃ in terms of the other fundamental parameters, where for simplicity the O(1) factors from the z_*-dependence have been dropped. This limit is a substantial constraint on m̃ in this toy theory, and is depicted in the blue region of Fig. 13. Next, we briefly discuss how the asymmetric yield in Eq. (44) is actually transferred from φ into standard-model fields. In a supersymmetric context this is achieved, for example, by the operator UDD/M in the superpotential, where M is the mass scale of some connector field which has been integrated out. This is the lowest-dimension operator which can link standard-model B to a gauge-singlet field φ. Given this operator, the field φ has a decay rate that scales parametrically as Γ_φ ∼ m̃³/M² up to phase-space factors, which can be much greater than H_*, the Hubble parameter at the time of percolation. In this case the φ field within each bubble of nucleated true vacuum decays very shortly after percolation. Classical washout from B-violating interactions is thus minimized, since φ decays so fast into particle quanta. Alternatively, fast decays can occur if φ decays promptly to additional non-gauge-singlet particles which in turn decay to standard-model fields. Finally, while our discussion thus far has been framed within the context of B, the asymmetry can of course be converted into L via the operators φLH_u, φLLE, or φQLD, or into a dark-matter asymmetry via similar interactions.
IV. MORE REALISTIC MODELS
In this section we discuss possible realizations of bubble baryogenesis in more realistic contexts. The examples here are supersymmetric, but we emphasize that this is not a requirement for bubble baryogenesis.
A. Neutrino Seesaw
Consider the MSSM augmented by a supersymmetric neutrino seesaw. The superpotential is W = W_MSSM + λ L H_u N + (M/2) N N, where N denotes the sterile neutrinos and we have suppressed all flavor indices. Explicit violation of U(1)_L is present in the form of the mass parameter M. Integrating out the heavy N fields yields the active neutrino masses, whose sum is bounded by cosmological measurements to be less than 0.17 eV [21]. Requiring that λ ≲ 1 implies that M ≲ 10^{14} GeV. The supersymmetric F-term and D-term contributions to the potential follow from the superpotential in the standard way. If supersymmetry breaking approximately preserves L, then the corresponding contributions to the potential are of the form Ṽ = Ṽ₀ + ε Ṽ₁ (Eq. (49)), where ε is small. Because the parameters in Ṽ acquire contributions from both zero-temperature supersymmetry breaking and Hubble-induced supersymmetry breaking, they are in general time-dependent. Which specific contributions arise depends on the symmetry structure of the ultraviolet theory, which dictates the coupling between the MSSM fields, the inflaton, and the supersymmetry-breaking sector. For instance, if the inflaton carries R-parity, then L-violating terms like LH_u or N³ could be present in Ṽ₁. The scalar potential is complicated, and contains a large number of fields and parameters. However, it is clear that all the required features of bubble baryogenesis are present. First, the A parameter may be large in the early Universe and produce global minima far from the origin of field space. Second, these B-breaking vacua are stabilized by the quartic λ, which can be O(1) in order to achieve a sufficiently large nucleation rate. Third, L- and CP-violating interactions can torque the Euclidean instanton solution during a first-order phase transition. Hence, we do not expect bubble baryogenesis in this potential to differ in any qualitative way from the toy model in Sec. III.
A complete analysis of the multi-field potential of the supersymmetric neutrino seesaw would be non-trivial. However, we take note of certain simplifications which can reduce the potential to a solvable one. In particular, the lepton flavor indices serve largely to complicate the analysis, so the potential is greatly simplified by taking a single flavor of L and N to be the only fields active in the dynamics. Furthermore, the D-term contribution in Eq. (48) tends to fix |L| = |H_u|, which effectively eliminates another set of field directions.
B. Color-Breaking Minima
Bubble baryogenesis may also be possible within the context of tunneling from color-breaking minima [22] to the electroweak vacuum within the MSSM. If supersymmetry-breaking A-terms are too large, then deep minima can form at field values away from the origin, inducing an instability for the electroweak vacuum. The vacuum dynamics are dominated by the field directions of the top squark and the Higgses, for which the superpotential is dominated by the usual top-quark Yukawa coupling, W ⊃ y_t Q₃ H_u U₃. Given approximately B-symmetric supersymmetry breaking, the corresponding terms in the potential are as in Eq. (49), with φ replaced by these fields. Considering the D-flat field direction |Q₃| = |U₃| = |H_u|, the absence of color-breaking minima implies the familiar constraint A_t² ≲ 3(m_{Q₃}² + m_{U₃}² + m_{H_u}²) (Eq. (53)) if the electroweak vacuum is to be absolutely stable. For our purposes we take the opposite tack: we want the electroweak vacuum to be unstable in the early Universe. Couplings to the inflaton can easily be arranged so that Eq. (53) fails in the early Universe, so that the fields reside in the true, color-breaking vacuum. As the Universe cools, Eq. (53) is eventually satisfied, and the fields can tunnel from the color-breaking minimum to the electroweak vacuum.
Of course, to generate a baryon asymmetry, this phase transition requires B- and CP-violating interactions. The natural candidate for this is the Hubble-induced cubic term, U_i D_j D_k, whose coupling can carry nonzero CP phases. Because this operator involves multiple squark flavors, we are then required to understand tunneling from the color-breaking minimum in squark directions other than the stop. We leave a proper analysis of this scenario for future work.
V. OBSERVATIONAL CONSEQUENCES
The byproduct of bubble baryogenesis is a frothy mixture of standard model baryons, inhomogeneously distributed in the early Universe. This inhomogeneity is a potentially dangerous relic, since the observed baryon density is known to be homogeneous at the epoch of big bang nucleosynthesis (BBN). On average, inhomogeneities are small at BBN: the percolating bubbles have a length scale set by H_*^{-1}, which is far smaller than (a_*/a_BBN) H_BBN^{-1}, the size of an inhomogeneity at percolation that would grow to be Hubble-sized at BBN. The potentially dangerous relic doesn't come from the average bubble; it comes from the rare bubble that nucleates early enough, and avoids enough collisions, to be large by percolation. Constraints from big bubbles were studied in [24] in the context of extended inflation [25], which also features a first-order transition around the end of inflation. Though the context is different, the constraints are purely geometrical, so the analysis in [24] carries over. Big bubbles constrain the decay rate well before percolation to be small, so that big bubbles are exceedingly unlikely. Bubble baryogenesis is aided in avoiding this constraint by the fact that the nucleation rate can be completely shut off at early times.
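Under radiation domination (H ~ T²/M_Pl, a ∝ 1/T), the scale comparison in this paragraph reduces to a ratio of temperatures. The sketch below makes that explicit; the transition temperature T_STAR is a purely illustrative assumption, not a value taken from the analysis.

```python
# Sketch: compare the typical percolation bubble size H_*^{-1} with the size
# at percolation that would grow to a Hubble patch at BBN, assuming radiation
# domination (H ~ T^2 / M_pl, a ~ 1/T). T_STAR is an illustrative assumption.

M_PL = 2.4e18    # reduced Planck mass [GeV]
T_BBN = 1e-3     # BBN temperature [GeV] (~1 MeV)
T_STAR = 1e9     # assumed transition temperature [GeV]

H_star = T_STAR**2 / M_PL   # Hubble rate at the transition [GeV]
H_bbn = T_BBN**2 / M_PL     # Hubble rate at BBN [GeV]

typical_size = 1.0 / H_star                 # H_*^{-1}
dangerous_size = (T_BBN / T_STAR) / H_bbn   # (a_*/a_BBN) * H_BBN^{-1}

# The ratio reduces to T_*/T_BBN >> 1: only rare, early-nucleating bubbles
# that avoid collisions can approach the dangerous size.
print(f"dangerous / typical = {dangerous_size / typical_size:.1e}")
```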
Constraints aside, the first-order phase transition also opens up new observational signatures, such as gravitational waves. Bubble collisions are an efficient producer of gravitational waves; numerical estimates in [26] show that as much as 0.1% of the energy released in the transition can end up in gravitational waves. Unfortunately, the available energy here is small, since most of the energy density of the Universe is in the inflaton, but the gravitational waves have a distinct signature which may make observation feasible. Because the colliding bubbles at percolation have roughly the same size, the gravitational wave spectrum has a spike at the frequency set by H_* [27]. This observational signature is distinct from that of Affleck-Dine baryogenesis.
Additionally, black holes might form at the collision sites. This is an intriguing possibility, because a current tension in the observed mass distribution of early quasars would be alleviated by a source of primordial 'seed' black holes [28]. Also, because these black holes form with a characteristic size, there could be a second bump in the gravitational wave spectrum from their coincident evaporation.
Lastly, as we discussed in Sec. II D, bubble collisions can spawn non-topological solitons, like oscillons and Q-balls. Oscillons, though long-lived, are typically not stable on present Hubble time-scales [29][30][31]; they radiate energy and dissociate. Q-balls, on the other hand, can be stable and persist, since they are charge-stabilized [32,33]. In Affleck-Dine baryogenesis, Q-balls typically form in gauge-mediated theories [34], and may or may not form in gravity-mediated theories, depending on the supersymmetric spectrum [35,36]. Analogous statements are likely true for bubble baryogenesis. If Q-balls form, their subsequent evolution is model-dependent: Q-balls that carry B will be absolutely stable if their mass-to-baryon-charge ratio is smaller than that of a proton; Q-balls that carry L, however, typically decay, since the mass-to-lepton-charge ratio of the neutrino is so small.
VI. FUTURE DIRECTIONS
Bubble baryogenesis is a novel scheme for the generation of the cosmological baryon asymmetry. A scalar baryon undergoes a first-order phase transition in the early Universe, and a baryon asymmetry is generated by the process of bubble nucleation and the subsequent bubble collisions. We have presented an explicit toy model to illustrate the basic features of the mechanism, and introduced a handful of realistic models. In addition to fleshing out these more realistic theories, there exists a variety of interesting directions for future work.
First, in Sec. II C, we argued that a single bubble could never explain the observed baryon asymmetry because the baryons are carried away to infinity. This is not strictly true: a loophole is provided by theories with large extra dimensions, the same loophole that boom-and-bust inflation [37] exploits to provide a graceful exit to old inflation. If a bubble nucleates smaller than the size of the extra dimensions, then as it grows, it wraps the extra dimension and collides with itself on the other side. The bubble wall no longer runs off to infinity; the self-collision preserves the baryons in the wall and distributes them uniformly throughout the interior of the bubble.
Second, as we discussed in Sec. II A, there is the possibility of bubble baryogenesis after reheating. Thermal effects from the big-bang plasma can induce time-dependent couplings which give rise to a first-order phase transition; the thermal plasma also assists the transition by enhancing the nucleation rate. Achieving such a scenario requires engineering the appropriate thermal potential and avoiding thermal washout.
Third, the models considered in this paper had only a single instanton mediating decay, which imposed a relationship between the efficiency of the phase transition and the resulting asymmetry. This need not be the case. Consider a true and false vacuum connected by two instantons: a dominant one that is purely radial, so that it generates zero baryons, and a subdominant one that arcs, so that it alone is responsible for generating baryons. Physically, this would correspond to a percolating phase transition in which the vast majority of nucleated bubbles are B-symmetric, but some small fraction are asymmetric. The smallness of the asymmetry in such an example would arise not from small ε, but from the exponential suppression of the subdominant instanton. Such a model may suffer from fine-tuning issues, because the baryon asymmetry would be exponentially sensitive to the Euclidean action of the subdominant instanton.
Finally, in recent years there has been a resurgence of interest in so-called asymmetric dark matter, where the dynamics of baryogenesis and dark matter genesis are linked [38]. Such a linkage can arise naturally in hidden-sector theories in which dark matter carries a U(1)_DM charge, and it can have phenomenological signatures distinct from standard weakly interacting dark matter. A modification of bubble baryogenesis can achieve simultaneous generation of baryons and dark matter by extending the symmetry structure to U(1)_B × U(1)_DM. | 12,633.6 | 2012-05-15T00:00:00.000 | [
"Physics"
] |
Interfacial Charge Transfer and Ultrafast Photonics Application of 2D Graphene/InSe Heterostructure
Interface interactions in 2D vertically stacked heterostructures play an important role in optoelectronic applications, and photodetectors based on graphene/InSe heterostructures already show promising performance. However, studies of the nonlinear optical properties of the graphene/InSe heterostructure are insufficient. Here, we fabricated a graphene/InSe heterostructure by mechanical exfoliation and investigated the optically induced charge transfer between graphene and InSe by photoluminescence and pump–probe measurements. The large built-in electric field at the interface was confirmed by Kelvin probe force microscopy. Furthermore, owing to the efficient interfacial carrier transfer driven by the built-in electric potential (~286 meV) and broadband nonlinear absorption, the application of the graphene/InSe heterostructure in a mode-locked laser was realized. Our work not only provides a deeper understanding of the dipole orientation-related interface interactions in the photoexcited charge transfer of graphene/InSe heterostructures, but also enriches the saturable absorber family for ultrafast photonics applications.
Introduction
Two-dimensional (2D) heterostructures, which are held together by van der Waals (vdWs) forces, have attracted great attention for next-generation optoelectronic devices because of their excellent physical properties, such as strong light-matter interactions and ultrafast interfacial charge transfer [1][2][3][4][5]. Graphene, as a well-known 2D material, has been widely used in optoelectronics due to its ultrafast electron relaxation time [6,7]. However, the main shortcoming of graphene for optoelectronic applications is its relatively low absorption [8,9]. Recently, a III-VI group layered semiconductor, InSe, has also shown great potential in optoelectronics, with high carrier mobility [10], a tunable bandgap [10,11] and a high nonlinear absorption coefficient [12,13]. For example, InSe has been demonstrated as a broadband photodetector [14,15] and as a saturable absorber (SA) in ultrafast fiber lasers [16][17][18][19] and solid-state bulk lasers [20]. Photodetectors based on graphene/InSe vdWs heterostructures operating in the visible to near-infrared (NIR) wavelength range have been reported [21][22][23][24] and demonstrated to possess high photodetection performance for phototransistor applications, owing to the suitable band alignment and interfacial charge transfer; however, all the measurements presented in previous works were obtained under quasi-static conditions. To investigate the role of dipole orientation-related interface interactions in the dynamic photoexcited charge transfer process in graphene/InSe heterostructures, ultrafast pump-probe optical spectroscopy studies are required. In addition, 2D graphene/InSe-based heterostructures have not been studied for nonlinear photonic applications, yet the heterostructure is expected to combine the advantages of ultrafast relaxation and a large effective nonlinear absorption coefficient for higher performance.
In this work, we prepared a graphene/InSe (G/InSe) heterostructure (HS) by mechanical exfoliation (ME) and investigated the intrinsic interlayer charge transfer process of G/InSe HS by steady-state photoluminescence (PL), Kelvin probe force microscopy (KPFM) and transient absorption (TA) pump-probe measurements. We further demonstrated the G/InSe HS as an SA for near-IR mode-locked laser application. Stable traditional soliton pulses were obtained with a central wavelength of 1566.65 nm and 192 fs pulse duration. These results indicate that G/InSe HS is very attractive for ultrafast nonlinear photonic applications.
Preparation and Characterization
Our Bridgman-method-grown InSe crystals were obtained commercially (from SixCarbon Technology, Shenzhen, China). The XRD pattern of bulk InSe is shown in Figure 1a, demonstrating that the crystal structure is the β phase, which belongs to the space group P6₃/mmc, with lattice parameters a = 4.05 Å, b = 4.05 Å and c = 16.93 Å. Raman scattering measurement on the InSe bulk single crystal was performed with a commercial Raman spectrometer (Witec alpha300, D-89081 Ulm, Germany) using a λ = 532 nm laser for excitation. As shown in Figure 1b, there are three Raman vibration modes, namely A1g¹ (115 cm⁻¹), E2g¹ (176 cm⁻¹) and A1g² (226 cm⁻¹), similar to the results of ref. [25], which proves that the sample is β phase with high single-crystalline quality. Figure 1c shows a typical surface topographic image taken via scanning electron microscopy (SEM); the corresponding elemental ratio of InSe is obtained by energy-dispersive X-ray (EDX) spectroscopy, which reveals an In:Se ratio of 1.16, as shown in Figure 1d. The elemental mapping of In and Se is shown in Figure 1e,f, respectively, indicating a uniform elemental spatial distribution.
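As a consistency check on the quoted lattice parameter, the (0 0 l) Bragg angles implied by c = 16.93 Å can be computed directly. The sketch below assumes a Cu Kα source (λ = 1.5406 Å), which is not stated in the text.

```python
# Sketch: expected (0 0 l) Bragg angles for beta-InSe from the quoted
# lattice parameter c = 16.93 Å, assuming a Cu K-alpha source (assumption).
import math

WAVELENGTH = 1.5406   # Cu K-alpha wavelength [Å] (assumed)
C_AXIS = 16.93        # c lattice parameter [Å] from the XRD refinement

for l in (2, 4, 6, 8):            # (0 0 l) reflections of the layered phase
    d = C_AXIS / l                # interplanar spacing for (0 0 l)
    two_theta = 2 * math.degrees(math.asin(WAVELENGTH / (2 * d)))
    print(f"(0 0 {l}): d = {d:.3f} Å, 2-theta ≈ {two_theta:.2f}°")
```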
The fabrication of the graphene/InSe heterostructure can be divided into three steps. Firstly, a thin flake of graphene was prepared by the ME method using Nitto tape [26] and transferred to a quartz substrate. Secondly, a thin flake of InSe was exfoliated on PDMS [27] and then transferred onto part of the graphene flake, assisted by a transfer station, under a Nikon microscope. Thirdly, the sample was thermally annealed in an Ar atmosphere at 200 °C for 2 h to remove chemical residues and improve the interfacial contact. Figure 2a shows an optical image of the G/InSe, and its Raman spectrum is shown in Figure 2b. All the Raman peaks related to InSe and graphene (113 cm⁻¹, 174 cm⁻¹, 226 cm⁻¹, 1581 cm⁻¹ and 2716 cm⁻¹) are observed in the heterostructure region, indicating the successful formation of the G/InSe heterostructure, and no Raman peaks are shifted relative to individual InSe and graphene. PL measurements were also performed on the G/InSe sample with the same Raman spectrometer, using a 532 nm laser for excitation at a power of 700 µW. As shown in Figure 2c, the PL is quenched in the heterostructure region, indicating that separation of photoexcited electron-hole pairs occurs at the G/InSe interface.
To determine the work function difference between graphene and InSe, Kelvin probe force microscopy (KPFM) was used. According to the equation V = (W_sample − W_tip)/e, the difference in work function (W) determines the measured surface potential (V) of the sample. Figure 2d shows the KPFM curve of graphene and G/InSe. The result indicates that InSe is n-type doped, while graphene is heavily p-type doped, leading to the formation of a p-n junction with a large built-in electric potential (the potential difference is ~0.286 eV). Figure 2e shows the surface work function mapping image of graphene and G/InSe. Thus, electron transfer occurs from graphene to InSe, enhancing the p-type carrier concentration in graphene and the n-type concentration in InSe. The band alignment of the heterostructure, deduced from the work functions, is illustrated in Figure 2f. The unique band alignment of the G/InSe HS allows a possible transition from graphene to InSe under low photon-energy excitation, which can be exploited for an SA in C/L-band pulsed lasers. In addition, the large built-in electric field in the G/InSe heterostructure can inhibit electron-hole recombination and reduce the recombination rate, leading to a fast electron relaxation time, which is beneficial for achieving ultrafast laser pulses.
Carrier Dynamics
To probe the carrier transport process across the G/InSe interface and reveal the ultrafast nonlinear optical properties of the G/InSe in the visible-near-infrared (IR) spectral region, a micro-area pump-probe technique is employed. The TA measurement setup is shown in Figure 3a. Specifically, the output fundamental beam (λ = 1030 nm, ~170 fs pulse duration) of a Yb:KGW laser (Light Conversion Ltd. Pharos) is split into two paths: one is sent into a noncollinear optical parametric amplifier (OPA) to produce a pump pulse at near-ultraviolet, visible and near-IR wavelengths, and the other is focused onto a YAG crystal after a delay line to generate a probe pulse of white-light continuum (λ = 500-950 nm) or near-IR (λ = 1425-1600 nm) light. The pump and probe beams are recombined and focused on the sample through a reflective 50× objective. The spot size of the focused femtosecond laser is approximately 2 µm.

The TA dynamics are shown in Figure 3b,c, in which the A exciton (844 nm) and B exciton (502 nm) of InSe are selected as the probe wavelengths, respectively. Fitting their dynamics, the G/InSe HS shows a biexponential decay with an intra-band relaxation time (τ1 = 48 fs) and inter-band relaxation time (τ2 = 140 fs) that is closer to graphene (τ1 = 96 fs and τ2 = 583 fs) and much faster than InSe (τ1 = 1.77 ps and τ2 = 216.11 ps) at a probe wavelength of 844 nm, as seen in Figure 3b. The A exciton of InSe is associated with the transition across the principal bandgap, which has a distinct electric dipole-like character coupled to out-of-plane polarized photons [28], so it absorbs weakly in our configuration, in which the laser polarization is in-plane (as shown in the inset of Figure 3b). In contrast, the B exciton is related to the transition that is coupled to in-plane polarized light. At a probe wavelength of 502 nm, as seen in Figure 3c, the TA signal is attributed to two-photon absorption, and the relaxation times of the G/InSe HS are determined to be τ1 = 12.6 ps and τ2 = 12.6 ps, also faster than those of individual InSe (τ1 = 3.8 ps and τ2 = 191.3 ps). Benefiting from graphene's fast charge transfer and relaxation channel, the carrier recombination in InSe is suppressed, resulting in a short relaxation time. Furthermore, the TA dynamics of InSe and the G/InSe HS under a pump photon energy of 3.30 eV, ~2 µJ/cm², are shown in Figure 3d. Compared to InSe, the relaxation of the G/InSe HS is faster than that of InSe itself, indicating fast charge transfer at the interface of the G/InSe HS. In addition, the TA signal is stronger than the signal under 2.06 eV, ~130 µJ/cm², at the probe wavelength of 502 nm (B exciton); therefore, it can be confirmed as two-photon absorption at the pump photon energy of 2.06 eV.
Overall, the ultrafast carrier dynamics reveal the dipole orientation-related interface interactions in the G/InSe HS, and the results not only provide a deeper understanding of the dynamic photoexcited charge transfer process, but also suggest a high optical modulation speed for nonlinear photonics applications.
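The τ1/τ2 values above come from biexponential fits of the TA traces. A minimal sketch of such a fit is given below; the synthetic arrays stand in for measured data and merely mimic the quoted G/InSe time constants at the 844 nm probe.

```python
# Sketch: biexponential fit of a TA decay trace, of the kind used for the
# tau_1/tau_2 values quoted above. The data arrays are synthetic stand-ins.
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, tau1, a2, tau2):
    """Two-component decay: intra-band (tau1) + inter-band (tau2) relaxation."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

delays_ps = np.linspace(0.0, 1.0, 200)              # pump-probe delay [ps]
delta_A = biexp(delays_ps, 0.7, 0.048, 0.3, 0.140)  # mimic 48 fs / 140 fs
delta_A += np.random.default_rng(0).normal(0, 0.01, delays_ps.size)

popt, _ = curve_fit(biexp, delays_ps, delta_A, p0=(1.0, 0.05, 0.5, 0.2))
print(f"tau1 = {popt[1] * 1e3:.0f} fs, tau2 = {popt[3] * 1e3:.0f} fs")
```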
Nonlinear Saturable Absorption and Mode-Locked Fiber Laser Applications
To measure the saturable absorption properties of the G/InSe HS, a micro I-scan with a balanced twin-detector measurement method is used, as illustrated in Figure 4a. The pulsed laser source is operated at 1550 nm with a 300 fs pulse duration and 100 kHz repetition rate. The sample is located at the focal point of the laser behind a 50× objective lens, and an adjustable attenuation filter is used to modulate the laser power. As shown in Figure 4b, the obtained transmittance data are fitted using the two-level saturable absorption model α(I) = α_ns + α_s/(1 + I/I_s) [29]. The saturation intensity and modulation depth obtained are 1.33 GW/cm² and 12%, respectively. It should be noted that the large modulation depth and low saturation intensity are attributed to the lower recombination rate caused by the large built-in electric potential [30].
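A minimal sketch of the I-scan fitting step follows, using the two-level model quoted above in its transmittance form T(I) = 1 − α_ns − α_s/(1 + I/I_s); the data arrays are synthetic placeholders seeded with the reported parameter values.

```python
# Sketch: fitting I-scan transmittance with the two-level saturable-absorber
# model. Intensity/transmittance arrays are placeholders for measured data.
import numpy as np
from scipy.optimize import curve_fit

def transmittance(I, alpha_ns, alpha_s, I_s):
    """Two-level model: the saturable part alpha_s bleaches as I approaches I_s."""
    return 1.0 - alpha_ns - alpha_s / (1.0 + I / I_s)

I = np.logspace(-2, 1, 50)                # peak intensity [GW/cm^2]
T = transmittance(I, 0.4, 0.12, 1.33)     # synthetic stand-in data
T += np.random.default_rng(1).normal(0, 2e-3, I.size)

popt, _ = curve_fit(transmittance, I, T, p0=(0.3, 0.1, 1.0))
print(f"modulation depth ≈ {popt[1]:.0%}, I_s ≈ {popt[2]:.2f} GW/cm^2")
```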
To evaluate its ultrafast nonlinear optical response, the G/InSe HS SA was inserted into a ring fiber cavity. Figure 4c shows the schematic of the all-fiber G/InSe HS ring laser.
To pump a 0.4-m-long erbium-doped gain fiber (EDF; Liekki Er110-4/125, Camas, WA, USA), a commercial continuous-wave (CW) 980 nm laser diode (LD, Nozay, France) with a maximum power of 800 mW was used. A 980/1550 nm wavelength division multiplexer/isolator hybrid (WDM + ISO) was used to couple the pump laser into the cavity and prevent back reflection, ensuring unidirectional laser operation. Two polarization controllers (PCs) were used to tune the laser polarization state in the cavity, and the cavity also comprised a 5.2-m-long single-mode fiber (SMF-28e). The dispersion parameters of the EDF and SMF are 12 ps²/km and −23 ps²/km, respectively; therefore, the calculated net cavity dispersion is approximately −0.106 ps². The laser was output through a 10% optical coupler (OC), and its characteristics were monitored in real time by a 1 GHz photodetector (Thorlabs DET01CFC, Newton, NJ, USA) connected to a 3 GHz digital oscilloscope (LeCroy WavePro7300, Chestnut Ridge, NY, USA). Moreover, a commercial optical spectrum analyzer (Yokogawa AQ6370D, Tokyo, Japan) was used to record the output optical spectrum. The laser pulse width was measured by a commercial autocorrelator (APE Pulsecheck-USB-50, Berlin, Germany), and the radio frequency spectrum was measured by an RF spectrum analyzer (Rigol DSA1030, Suzhou, China).
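The net dispersion and repetition rate follow directly from the quoted fiber parameters. The sketch below assumes an effective index of 1.468 for silica fiber; the small offsets of the computed values from the quoted −0.106 ps² and 40.96 MHz presumably reflect component pigtails not itemized in the text.

```python
# Sketch: net cavity dispersion and repetition rate from the quoted fiber
# parameters. N_EFF is an assumed effective index of silica fiber.
C = 3e8                           # speed of light [m/s]
N_EFF = 1.468                     # assumed effective index

L_EDF, BETA2_EDF = 0.4, 12e-3     # length [m], GVD [ps^2/m] (12 ps^2/km)
L_SMF, BETA2_SMF = 5.2, -23e-3    # length [m], GVD [ps^2/m] (-23 ps^2/km)

net_gvd = L_EDF * BETA2_EDF + L_SMF * BETA2_SMF   # net GVD [ps^2]
f_rep = C / (N_EFF * (L_EDF + L_SMF)) / 1e6       # ring-cavity rate [MHz]
print(f"net GVD ≈ {net_gvd:.3f} ps^2, f_rep ≈ {f_rep:.1f} MHz")
```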
By carefully controlling the laser polarization state through the PCs, self-starting mode locking is achieved when the pump power exceeds 40 mW, due to the saturable absorption of the G/InSe HS SA. The SA damage threshold is around 650 mW. The maximum output laser power is around 2.53 mW, corresponding to a single pulse energy of 0.06 nJ. The output pulse characteristics under a pump power of 110 mW are shown in Figure 4d,e. Figure 4d shows the typical output optical spectrum with several pairs of Kelly sidebands, the signature of conventional soliton operation. The spectrum shows a 3-dB bandwidth of 7.43 nm centered at 1566.65 nm. Figure 4e presents the typical output pulse trains. The time interval of the mode-locked pulses is approximately 24.41 ns, well matched with the total cavity length. The RF spectrum around the fundamental repetition rate of 40.96 MHz, with a signal-to-noise ratio of ~27 dB, is shown in the inset of Figure 4e, indicating the good stability of the G/InSe HS mode-locked laser. The G/InSe HS SA is stable over two weeks in an ambient environment, and the mode-locking operation can be stable for over 10 h. The output mode-locked pulses are amplified by a home-made erbium-doped fiber amplifier (EDFA) and compressed by a piece of dispersion-compensating fiber. As illustrated in Figure 4f, the obtained mode-locked pulse duration is approximately 270 fs; fitting the autocorrelation (AC) trace with a Gaussian function, the actual pulse duration is estimated to be approximately 192 fs.
Furthermore, we compared the mode-locked laser results with a bare graphene SA (before being stacked with InSe), which showed a mode-locked pulse width of around 292 fs (Figure S1), slightly longer than that of the G/InSe HS, due to the slower electron relaxation time confirmed by the TA measurement at a probe wavelength of 1566 nm (Figure S2). The underlying mechanism is the reduced recombination rate and fast interlayer electron transfer due to the large built-in electric field in the G/InSe HS. The thicknesses of the thin graphene and InSe flakes determined by atomic force microscopy (AFM) are 5 nm and 7 nm, respectively (Figure S3). Moreover, InSe shows a large effective nonlinear absorption coefficient (β_eff ≈ −2.8 × 10² cm/GW) [13], while the heterostructure possesses an even larger nonlinear absorption coefficient [31]. Furthermore, ultrafast electron transfer from graphene to other 2D semiconductors occurs from the visible to the mid-infrared region [32]. Benefiting from the ultrafast relaxation time and strong broadband nonlinear absorption, we believe that the heterostructure could be used for ultrafast broadband nonlinear optical applications, not limited to the C/L band.
Conclusions
High-quality G/InSe HS material was prepared by ME and the dry-transfer method, and the carrier transport across the G/InSe HS interface was systematically investigated by PL, KPFM and TA measurements. A relatively low saturation intensity (~1.33 GW/cm²) and a large modulation depth (~12%) were obtained by the nonlinear absorption measurement. | 4,914 | 2022-12-28T00:00:00.000 | [
"Physics"
] |
Error Estimates for Solutions of the Semilinear Parabolic Equation in Whole Space
Introduction
In this study we consider the Cauchy problem for the following three-dimensional semilinear parabolic equation:

u_t − Δu + |u|^p u = 0, u(x, 0) = u_0(x). (1)

Here p > 5, u(x, t) is the unknown function at the point (x, t) ∈ R³ × (0, ∞), and u_0 is the initial data.
As an important class of partial differential equations, the well-posedness and asymptotic behavior of solutions of the semilinear parabolic equation have attracted more and more attention, and many important results have been obtained (see [1][2][3][4] and references therein). The mathematical model (1) can be seen as the heat equation with damping and friction effects. From the mathematical point of view, the nonlinear damping |u|^p u in (1) may increase the regularity of the weak solutions. However, it is the main obstacle to describing the asymptotic behavior of the solutions of the semilinear parabolic equation (1). Consider the n-dimensional linear heat equation

u_t − Δu = 0, u(x, 0) = u_0. (2)

The fundamental solution is the Gaussian heat kernel, and the solution of (2) is expressed as the convolution of the heat kernel with u_0. In particular, the solution u(x, t) of the linear heat equation (2) obeys the L^p-L^q estimates (5) (see [5]). Compared with the behavior of the heat equation (2), it is an interesting problem to consider the influence of the nonlinear damping |u|^p u in the semilinear parabolic equation (1). Motivated by the asymptotic results on some nonlinear differential equations in [6][7][8][9], in this study we investigate the asymptotic error estimates between the solutions of the semilinear parabolic equation (1) and the linear parabolic equation (2). Let us give an outline of this question. On one hand, taking p = q = 2 in (5), we only derive boundedness of the solution; that is, ‖u(t)‖_{L²} ≤ C‖u_0‖_{L²}. On the other hand, from the definition of a weak solution of the semilinear parabolic equation (1) (see the definition in the next section), we also only get the L² bound of the weak solution of the semilinear parabolic equation (1). By direct computation, we then only get an L² bound for the error u − e^{tΔ}u_0. It is obviously important to explore explicit error estimates as time tends to infinity. In order to overcome the main difficulty raised by the nonlinear damping |u|^p u, we make full use of the Fourier analysis technique to exploit the low-frequency effect of the nonlinear damping |u|^p u. Fortunately, we can control the low-frequency part of the nonlinear term, and this observation allows us to derive the explicit error estimates.
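For reference, the standard formulas invoked here are the Gaussian heat kernel representation of the linear flow and the L^p-L^q smoothing estimates; the displays below state them in their textbook form (the paper's estimate (5) is of this type).

$$
\tilde u(t) = e^{t\Delta}u_0 = G(\cdot,t)\ast u_0,
\qquad
G(x,t) = (4\pi t)^{-n/2}\, e^{-|x|^2/4t},
$$
$$
\|e^{t\Delta}u_0\|_{L^p(\mathbb{R}^n)} \;\le\; C\, t^{-\frac{n}{2}\left(\frac{1}{q}-\frac{1}{p}\right)}\, \|u_0\|_{L^q(\mathbb{R}^n)},
\qquad 1 \le q \le p \le \infty.
$$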
The remainder of this paper is organized as follows. In Section 2, we recall some fundamental preliminaries and state our main results. In Section 3, we establish the explicit error estimates between the solutions of the semilinear parabolic equation (1) and the linear heat equation (2).
Preliminaries and Main Results
In this paper, we denote by C a generic positive constant which may vary from line to line.
Theorem 2. Suppose u_0 ∈ L²(R³) and let u(x, t) be a weak solution of the Cauchy problem for the semilinear parabolic equation (1); then one has an explicit decay estimate for the difference u(t) − ũ(t) in L²(R³), where ũ(t) is the weak solution of the linear heat equation, namely ũ_t − Δũ = 0, ũ(x, 0) = u_0 (16), with the same initial data u_0.
Remark 3. The result above seems inspiring. According to the L^p-L^q estimates for the linear heat equation (16), we have only the L² bound of the solution of the linear equation; no asymptotic decay of the solution of the linear equation can be derived. Compared with previous results on the time decay of nonlinear partial differential equation models [12][13][14][15], where the initial data satisfy some additional conditions such as u_0 ∈ L¹(R³), here, for the nonlinear parabolic equation (1) with the same initial data u_0, the nonlinear damping term |u|^p u is obviously not helpful for the asymptotic behavior of the semilinear parabolic equation (1). Therefore, it seems impossible to derive the asymptotic behavior of the difference between the semilinear parabolic equation (1) and the linear heat equation (16). Fortunately, we find a new trick, different from the L^p-L^q estimates, to deal with the nonlinear term. This trick is mainly based on Fourier analysis, which allows us to successfully exploit the low-frequency part of the nonlinear damping term |u|^p u.
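The "new trick" described here is a Fourier-splitting device in the spirit of Schonbek's method. The following is a hedged sketch of the mechanism, assuming an energy inequality of the form d/dt‖w‖²_{L²} + 2‖∇w‖²_{L²} ≤ N(t) and choosing ρ(t)² = 3/(2(1+t)), consistent with the (1+t)³ weight used in Section 3. By Plancherel,

$$
2\|\nabla w\|_{L^2}^2 \;\ge\; 2\rho(t)^2\|w\|_{L^2}^2 \;-\; 2\rho(t)^2\!\int_{|\xi|\le \rho(t)}|\hat w(\xi,t)|^2\,d\xi,
$$

so that multiplying the energy inequality by (1+t)³ yields

$$
\frac{d}{dt}\Big[(1+t)^3\|w(t)\|_{L^2}^2\Big]
\;\le\; 3(1+t)^2\!\int_{|\xi|\le \rho(t)}|\hat w(\xi,t)|^2\,d\xi \;+\; (1+t)^3 N(t),
$$

where N(t) collects the nonlinear contribution; the low-frequency integral is then estimated through the Duhamel formula for ŵ.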
Error Estimates
We are now in a position to investigate the explicit error estimate in this section. It should be mentioned that the global existence of solutions of the nonlinear parabolic equation can be proved by the standard contraction mapping principle (refer to [16]); hence we only prove the error estimates. To carry out this task, we develop some new tricks which mainly borrow ideas from [17][18][19][20]. Denote the difference w(t) = u(t) − ũ(t), where u(t) and ũ(t) are the solutions of the semilinear parabolic equation (1) and the linear heat equation (16), respectively. Thus w(t) satisfies the following system:

w_t − Δw + |u|^p u = 0, w(x, 0) = 0, (18)

in the weak sense. It is worth noting that the following derivation should be stated rigorously for smooth approximate solutions, passing to the limit to obtain the results for the weak solution of the semilinear parabolic equation (18). For convenience, we argue directly with weak solutions.
Multiplying both sides of (18) by w and integrating over R³, it follows that

(1/2) d/dt ‖w(t)‖²_{L²} + ‖∇w(t)‖²_{L²} = −∫_{R³} |u|^p u · w dx, (19)

where the right-hand side is controlled by means of the Hölder inequality and the L^p-L^q estimates (5). Inserting the resulting inequalities into (19) and taking the Plancherel identity into consideration, one obtains a differential inequality for the Fourier transform of w (22). Now multiplying both sides of (22) by (1 + t)³ and computing directly, we arrive at a bound whose first term on the right-hand side involves the low-frequency part of ŵ (27). In order to estimate this term, we take the Fourier transformation of (18),

ŵ_t + |ξ|² ŵ = −F(|u|^p u),

and the solution of the above ordinary differential equation is written as

ŵ(ξ, t) = −∫_0^t e^{−|ξ|²(t−s)} F(|u|^p u)(ξ, s) ds.

On one hand, since u is a weak solution of the semilinear parabolic equation (1), the definition of a weak solution together with an interpolation inequality yields the required bounds on the nonlinear term, which completes the proof of Theorem 2. | 1,629.6 | 2014-07-08T00:00:00.000 | [
"Mathematics"
] |
Introducing the interpretation of medieval Hindī texts into the Hindī curriculum: an alternative approach
The author has been trying, for several years now, to apply and further expand the method of detailed morphological analysis of Old Hindī texts first developed by the Czech Indologist Vladimír Miltner in his Old Hindī Reader and to test it in the courses given at the Institute of Indology at Charles University, Prague. This paper demonstrates the possibilities this still little-used descriptive approach offers to students who have basic knowledge of Modern Standard Hindī and wish to gain an insight into the grammatical structure of Old Hindī literary dialects. Careful use of this method helps highlight, among other things, the high degree of homonymy of grammatical morphemes and the consequent frequent ambiguity of meaning. A continuous text segmented into basic morphological units can be processed by concordancing software and further analysed with the help of methods developed in the field of corpus linguistics. An appendix to the paper shows the method as applied to the analysis of one short pad (poem) of a medieval Hindī poet, Sant Kabīr. The use of this method in classes helps the students read and interpret a greater quantity of texts in a relatively short time. This can serve as an incentive, on the one hand, to work with the literary material in the source language and, on the other, to pay closer attention to distinctive features of written and oral traditions in their wider social contexts.
Most scholars engaged in teaching courses of modern Indo-Aryan languages at the university level are familiar with one specific problem that invariably turns up after students finish the basic course of a modern written and spoken language and are encouraged to turn their attention toward wider cultural contexts and deeper historical perspectives behind modern literary works. Students certainly know that the literary tradition of, e.g., Hindī does not begin in the first decades of the 19th century and may be curious about its older phases. After all, it is difficult to imagine studying English as a subject at the university level without being able to read and analyse texts of Chaucer or Shakespeare. Therefore, it should be only natural for a student of Hindī to read and enjoy texts written by or ascribed to Tulsīdas, Gorakhnāth, Kabīr or Mīrā.
Many scholars and teachers of Hindī would perhaps agree with another proposition, namely, that introducing students to the language (or languages) of the aforementioned Hindī poets is more difficult than encouraging them to interpret Shakespeare's sonnets. Basically, teachers of Hindī can choose between two different approaches: those more philologically oriented would explain basic grammatical features of the main Hindī literary dialects, Avadhī, Braj and possibly Diṅgal, and then students would proceed to read specimens of selected works and authors. The advantage of this approach lies in the firm grammatical framework mastered by students well in advance, which makes them aware of the position of these dialects in the broad historical development of the New Indo-Aryan languages. The disadvantage is that this preparatory stage, undoubtedly worth the effort in itself, is relatively time-consuming and in the general plan of a curriculum often leaves relatively little time for reading larger sections of texts. Specimens read in class may be selected primarily with the intention to illustrate grammatical phenomena rather than to show literary creations in their own right as works of art and of intellectual or spiritual depth. Moreover, this systematic approach based primarily on acquaintance with the standard grammars of literary dialects (Braj or Avadhī) does not work well with texts ascribed, e.g., to Gorakhnāth, Nāmdev or Kabīr, which show many irregularities and aberrations from the standard forms found in grammars.
The other approach, perhaps more direct and intuitive, is best represented by readers working with texts in the original language and their parallel translations; an excellent example is Rupert Snell's The Hindī Classical Tradition: A Braj Bhāṣā Reader, published by SOAS, London (1991), and used in several university courses across Europe and in the USA. Here, grammatical explanations, which form the first chapter of the book, are kept to a minimum and the student is invited to delve quickly into the texts themselves and to look for additional information in copious and informative notes accompanying the text. The text itself is arranged as a mirror: the left page contains the Hindī original in Devanāgarī, and the right its close translation in English, accompanied by copious explanatory notes. Students thus can proceed more quickly, acquaint themselves, in a limited amount of time, with a greater quantity and variety of texts, and therefore get a better chance to appreciate them as works of art. A slight disadvantage of this method is that it may encourage a somewhat superficial attitude towards the purely grammatical, morphological and syntactical aspects of these texts: once a correct meaning, sometimes perhaps arrived at by looking at the English translation, is established, one ceases to worry about this or that grammatical peculiarity or irregular feature. Students using this particular reader get a good introduction to a literary tradition and a literary dialect, but their ability to analyse grammatical forms correctly will be exposed to a hard test, especially when they encounter texts composed in non-standard dialects. Here problems arising from frequent homonymy, especially of grammatical morphs, which can sometimes lead to different interpretations of meaning, are aggravated by the occasional incidence of forms that can be described either as archaisms or as borrowings from some other dialect. Kabīr's language, for example, has often been characterized by modern Hindī scholars as khicaṛī bhāṣā, a mixture of forms coming from various dialects of the western and eastern parts of the Hindī area.¹ The purpose of this paper is to offer still another approach to the study of Old Hindī literary dialects and texts, a method developed during the 60s and 70s of the last century by an eminent Czech Indologist, the late Dr Vladimír Miltner. Miltner looked at the language from the point of view of descriptive linguistics and subjected selected texts to detailed and rigorous analysis of their morphological and syntactical structure. He demonstrated his method of morphological analysis of an Old Hindī text for the first time in the 1960s in his short study called Early Hindī Morphology and Syntax (Miltner 1966) and developed it further in his Old Hindī Reader, which was ready for publication in the 1970s. Due to the adversity of those times, however, it could be published only as late as 1998 (Miltner 1998).
In this latter work, Miltner takes specimens of texts written by or ascribed to thirteen Hindī authors of the pre-modern era (Roḍā, Joindu, Dāmodar, Gorakhnāth, Cand, Kabīr, Vidyāpati, Jāyasi, Sūr, Tulsī, Mīrā, Gokulnāth and Biharī Lāl). He analyses each word into its constituent morphs, lexical and grammatical, orders them into an alphabetical sequence and thus obtains a detailed index of all morphological elements occurring in the texts in question. The alphabetical ordering shows a great degree of homonymy, a feature encountered frequently, particularly in the case of grammatical morphs: all homonyms are marked by index numbers. Further, as elements of a system, the occurrence of all morphs is co-determined by their immediate context, i.e., by the morphs that precede and follow the morph in question. For example, the grammatical morph -i can mean three different things when found at the end of a verbal base (as in kah-i: 3rd pers. sg. pres., 2nd pers. sg. imper., and absolutive) and has still other possible meanings when found at the end of a nominal base (dir. sg. f., as in khabar-i, 'report'; obl. sg. m., as in ghar-i, 'house'; or obl. sg. f., as in dis-i, 'side', 'direction'). Each morph entered into the index is therefore furnished with information about all other morphs, lexical as well as grammatical, that immediately precede and follow it in the texts that were excerpted and included in the reader.
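Miltner's index can be approximated computationally: the toy sketch below segments words into base + grammatical morph and records each morph's immediate neighbours, which is exactly the information that exposes homonymy. The suffix list and the segmentation rule are illustrative stand-ins, not Miltner's actual rules.

```python
# Sketch of Miltner-style indexing: split words into base + grammatical
# morph and record, for every morph, its immediate left/right neighbours.
from collections import defaultdict

SUFFIXES = ("-i", "-a", "-u")   # hypothetical grammatical morphs

def segment(word):
    """Split a word into (base, suffix) using a toy longest-suffix rule."""
    for suf in SUFFIXES:
        if word.endswith(suf[1:]) and len(word) > len(suf):
            return word[: -len(suf) + 1], suf
    return word, ""

def build_morphemicon(text):
    index = defaultdict(list)   # morph -> list of (previous, next) contexts
    morphs = [m for w in text.split() for m in segment(w) if m]
    for i, m in enumerate(morphs):
        prev_m = morphs[i - 1] if i > 0 else "<start>"
        next_m = morphs[i + 1] if i < len(morphs) - 1 else "<end>"
        index[m].append((prev_m, next_m))
    return index

# e.g. build_morphemicon("kahi ghari disi") records that the morph "-i"
# co-occurs with the bases kah-, ghar- and dis-, exposing its homonymy.
```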
In the introduction to his book, Miltner assures his readers that the 'process of interpretation is very simple and requires minimal brainwork'.

¹ Probably an extreme example of the interpenetration of one medieval dialect or language by another can be found in the corpus of Hindī pads of the medieval mystical poet Nāmdev, originating in Mahārāṣṭra. W.M. Callewaert and M. Lath, in their edition of Nāmdev's songs, give a good specimen of this phenomenon when they draw attention to song no. 165 of Nāmdev's Hindī corpus. The song is composed largely in Marāṭhī. 'To a Rājasthānī audience the song would have made sense, to whatever extent it did make sense, only with "Rājasthānī meanings". This implies that phrases with a particular meaning in Marāṭhī probably meant something totally different in Rājasthānī' (Callewaert, Lath 1989, 401-2 [commentary], 352-3 [the song]).
Practical experience in Old Hindī courses has convinced me that this assessment is more or less realistic. Of course, students have to master the grammatical terminology used for the description of Indo-Aryan languages, but, at this more advanced level of training, meeting this requirement poses no particular problem.
An important part of the task of preparing such a reader is the correct decision concerning the quantity and variety of texts selected for inclusion. Processing just one or two short poems may suffice for the purposes of basic instruction but leaves the student with little scope for his or her own interpretation: as each morphological element turns up only once or twice, the interpreter is left with the very simple task of reassembling the jigsaw puzzle of the segmented text into its original form. The student scarcely meets any real ambiguity, e.g., a case when he or she would have to ponder whether the correct form to select in a particular case is the 2nd pers. sg. imperative or the 3rd pers. sg. indicative: there is a relatively high probability that in a morphemicon based on too little morphologically segmented material, the verbal base in question will appear in combination with only one of these two homonymous morphs. The larger the corpus of analysed texts, the greater the probability that the verbal base will be found to coexist with the other morph too. In such a case, an interpreter will be faced with the dilemma of which morph to select; and the very awareness of this possibility of choice will induce him or her to look at the wider context of the word and the sentence. In some cases, the student may come to the conclusion that there is a real ambiguity, implying the possibility of two different readings and meanings of one piece of text. Thus, at the very beginning of the course, the student is, so to speak, thrown into the water and made to swim. It is, of course, advisable to consult some grammatical overview where the main outlines of the dialect in question are presented in a more systematic form (students may get such materials at the beginning of the course), but the main point of this method is that the student starts with genuine texts and is able to see them in their complexity, with all their ambiguities, morphological irregularities and other peculiar features, which are brought into sharp relief in the process of morphological analysis.
An important aspect that merits attention and that has been hinted at above is the language variability of the selected texts. As is obvious from the list of authors chosen by Miltner for his Old Hindī Reader, the Avadhī of Tulsī and Jāyasī is presented side by side with the purely Braj works of Bihārī Lāl and Gokulnāth and with the language of Madhyadeśa, represented by Gorakhnāth and Kabīr, which is often closely related to but not identical with Braj. The Morphological Key, or Morphemicon, as Miltner prefers to call it, therefore includes a wide variety of morphological forms that occur in Eastern as well as in Western Hindī; such a key cannot be used as a catalogue of forms belonging to one particular dialect or author. However, as the occurrence of eastern forms in western dialects and vice versa is not an uncommon feature of many authors (or, to be more exact, of their texts as extant in existing manuscripts, printed editions, or oral traditions), this is not necessarily a drawback.
However, the minimal amount of brainwork promised by Miltner to the student has to be more than compensated for by hard intellectual labour on the part of the compiler of the Morphemicon. The morphological segmentation of words found in texts that cover a time span of more than half a millennium and an area as wide as Western Europe is certainly a very difficult task, the more so as Miltner tries to conform to a Pāṇinian ideal of maximum consistency and economy of description. An interesting problem that must have occupied him at that stage of analysis was how to solve one particular dilemma turning up time and again in the process of building up the repertoire of morphemes: when constituting a particular morph, should precedence be given to historical considerations, or should the primary requirement be systemic clarity and economy of description? There is probably no clear-cut answer to this question; in my opinion, Miltner succeeded admirably in striking the middle course - most grammatical morphs constituted by him and found in the texts can be discussed as results of historical development, even if some cases are debatable.2 With Miltner's Morphemicon at hand, and with its usefulness for the practical task of analysing and translating Old Hindī texts tested in university courses, it was possible to apply his method to a larger corpus of texts ascribed to one particular author. After some deliberation, I decided to analyse a greater number of pads of the medieval Hindī poet and mystic Kabīr. Several reasons have led me to this decision. Probably the most important one was a recently published edition of the pads of Kabīr based on several relatively old manuscripts, the oldest dating back to A.D. 1614 (Callewaert 2000). The editor, the Belgian scholar Winand M. Callewaert, selected ten manuscripts containing Kabīr's pads (short poems sung to a particular rāga) and organized his edition in such a way that one and the same pad (or what can still be counted as a version of one and the same pad) is presented in all its manuscript variants on the same page (or on the following pages). This synoptic presentation admirably shows the variability inherent in oral performance at the time when it gradually became fixed in written form and began to undergo further changes and corruptions characteristic of the transmission of the written word. For the purposes of morphological analysis, the great advantage of this type of presentation lies in the fact that one can work with and analyse one particular document, one single manuscript produced by one copyist at one time and possibly one place. Thus, the language subject to analysis has greater internal coherence than, e.g., the so-called critical editions that attempt to present the text in an - as far as possible - 'original form' and are full of various emendations.3 A morphemicon based on the analysis of a single manuscript - in this case, the Jaipur manuscript from the Sanjay Sharma Sangrahālaya dated 1614 (the oldest Pañc-vāṇī manuscript found and published by Callewaert in his edition) - may then reveal a specific language variant whose features can be subsequently quantified (processed by a concordancer) and, if found statistically significant in comparison with other manuscripts, can form a basis for further studies of the oral as well as textual transmission of a given work.4 In this one respect, the attitude of the present author differs from Miltner's: whereas his wide selection of texts yields a very rich harvest of morphs and morphological variants, concentration on a single
manuscript should help sort out regular features and exceptions or linguistic borrowings from other dialects.
Another consideration which led to my decision to concentrate on Kabīr was my feeling that this poet and thinker has so far been relatively neglected in modern studies that focus on medieval Hindī literary traditions. Great and certainly deserved attention has been paid especially to the Vaishnava authors - as a glance into bibliographies clearly shows. On the other hand, traditions of the so-called nirguṇī branch of Hindī poetry - with the significant exception of the Sikh tradition - have so far received relatively less attention. Moreover, Kabīr's words and ideas have been, in modern times, used so often for various ideological purposes that they can scarcely escape the process of simplification or even misinterpretation.5 With these considerations in mind, I set about analysing some of Kabīr's pads, closely following Miltner's method of morphological segmentation. The result has been, so far, a corpus of about one hundred pads, edited provisionally with footnotes, supplemented occasionally with modern Hindī commentaries, and followed by a morphemicon that can be used as a key for their translation and interpretation. Constant referral to Miltner's Morphemicon as presented in his Old Hindī Reader, and application of the morphological units he collected to new textual material, convinced me of the viability of the majority of his decisions. Very few minor changes were necessary in the structure of the Morphemicon itself: one thing which seemed desirable for a better understanding of the texts was the inclusion of fixed phrases or idiomatic expressions - collocations of words with special meanings, which necessarily lie beyond the process of morphological segmentation (expressions such as pĩḍ par- 'to chase', 'to go after'; pnī birol- lit. 'to churn water', fig. 'to engage in useless activity', etc.).
However, there was still another concern that gradually pressed itself into the foreground with growing urgency as a clear desideratum: in order to make the morphological analysis of the grammatically often complex forms more comprehensible, and the process of segmentation itself more transparent, a kind of grammatically and historically grounded justification for establishing a particular segment as a morph was felt to be necessary. Such a morphological commentary should explain the reason that a particular segment was deemed a morphological unit, discuss possible alternative solutions, and give a short exposition of the historical background. (Vladimír Miltner gave a short outline of this grammatical explanation in his Early Hindī Morphology and Syntax, but omitted it altogether in his Old Hindī Reader.) Properly researched and cross-referenced, such grammatical explanations will constitute a kind of historical grammar, a 'Textgrammatik' based on the language of one particular manuscript. As such, it could serve not only as a grammatical supplement to a proposed Kabīr reader, but also as a useful tool for future comparative studies of Old Hindī dialects.
The purpose of the appendix that follows this paper, taking one of Kabīr's pads as an example, is to show the basic structure of the work discussed in the preceding paragraphs:
1. The original text of the pad is presented with footnotes pointing out possible ambiguities of grammar and meaning. Alternative readings found in other manuscripts edited by Callewaert in his Millennium Kabīr Vāṇī are included for comparison. The basic text that has been subjected to further morphological analysis has been taken directly from the Jaipur manuscript.
2. Morphs that have been arrived at by the process of segmentation are presented in the form of an appropriate, alphabetically arranged morphemicon (the indexing of morphs included in the list has been taken from the general Morphemicon, which was compiled on the basis of much more extensive material and which therefore contains a greater number of homonymous morphological units; that is why several morphs are marked with higher index numbers).
3. Historical and functional explanations of selected grammatical morphs are provided; in the Morphemicon, cross-references to these explanations are included in square brackets following the morph (in our examples, V designates the section that deals with verbs and the following digit marks the subsection devoted to a particular type of suffix or ending).
4. The text of the pad is morphologically segmented and presented in a form suitable for inclusion in an electronic corpus; such a corpus of morphologically structured text can be further analysed with the help of appropriate concordancing software (in this particular case, I have used WordSmith Tools, version 2.0, developed by Mike Scott at the University of Liverpool). As some software concordancers may have problems showing letters with diacritics correctly, the text has been converted into ASCII codes. In most cases, I hope, the ASCII equivalents of letters with diacritics will be self-evident.
5. An example of a concordance showing the occurrence of the sigmatic future in the present corpus: sentences which form the immediate natural context of such forms can be used at later stages of the research as illustrative material for a paragraphed grammatical treatment of the language in question. A minimal illustration of steps 4 and 5 is sketched below.
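To make steps 4 and 5 more concrete, a small key-word-in-context (KWIC) sketch in Python is given below. It is only an illustrative stand-in for the WordSmith Tools concordancer actually used; the segmentation convention (morphs joined by hyphens), the ASCII-converted sample verse and the morph tag "s3" are hypothetical choices, not the corpus's real encoding.

```python
# Minimal KWIC concordance sketch, standing in for the concordancer
# described above.  The sample verse and segmentation are invented.

def kwic(lines, target_morph, width=4):
    """Return KWIC rows for every token containing the target morph."""
    rows = []
    for ref, line in lines:                      # ref = pad/verse number
        tokens = line.split()
        for i, tok in enumerate(tokens):
            if target_morph in tok.split("-"):   # match a segmented morph
                left = " ".join(tokens[max(0, i - width):i])
                right = " ".join(tokens[i + 1:i + 1 + width])
                rows.append(f"{ref:>7}  {left:>30}  [{tok}]  {right}")
    return rows

# Hypothetical, ASCII-converted, morphologically segmented verse (338.0).
corpus = [("338.0", "kAyA mAMj-i-s3 kauna guna je ghaTa bhItari hai malana")]

for row in kwic(corpus, "s3"):
    print(row)
```

Run on a full segmented corpus, the same routine would list every sigmatic future form together with its immediate context, which is exactly the kind of material item 5 envisages.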
Explanatory grammatical notes
Section V1: -i₃- connecting (auxiliary) vowel
A. Origin and function
In Miltner's Old Hindī Reader, we find -i₃- described as a 'verbal thematic vowel' occurring before the morphs of the perfective participle (-0₂-), the verbal substantive (-b₁- and -v₂-) and the verbal adjective (-k₃- in choḍika:u in the text of Rāur-vel by Roḍā). The same designation, 'verbal thematic vowel', is also given to the morph -a₃- (followed by a greater number of different morphs, i.e. inter alia by -b₂-, a verbal adjective, in the texts of Vidyāpati, Gorakh and Tulsī), to the morph -u₂- (followed by -b₂- in Tulsī), and to the morph -o₂- (again followed by -b₂- in the text of Vidyāpati). It is obvious that under one common designation we have here a group of morphs which share similar (not always identical) functions but are of different origin.
The contexts in which the morph -i₃- occurs in Kabīr's pads from Rājasthān allow us to devise a more specific name for it: in all quotable instances it appears to be a descendant of the OIA connecting vowel (Bindevokal) used in the formation of, e.g., past participles and gerundives of the so-called seṭ-verbal roots (for a list of forms in which this Bindevokal is used in Sanskrit, see, e.g., Morgenroth 1989, 125, §167). Of course, the use of the term 'connecting vowel' for both the OIA -i- and the NIA -i₃- does not imply that the rules of its occurrence with particular verbal roots are identical. What we should see as significant is the origin of this NIA -i₃-, which is traceable to the OIA Bindevokal -i-, and a common function of both these suffixes in the formation of the future tense and participles. This is perhaps enough to justify the change in the designation of this morph, in the context of the language of Kabīr's pads, from 'verbal thematic vowel' to the more specific 'connecting vowel', a term used by Western grammarians to describe its Sanskrit predecessor (see, e.g., Burrow 1973, 331; in ibid., 369, however, he also uses the term 'auxiliary vowel', following perhaps the usage of W.D. Whitney). Thiel-Horstmann 1983, 42, appears to see the intermediate -i- in the perfect participles merely as a matter of alternative spelling: 'After root-final consonants -y- is often inserted. The writing may also be -iy- in such cases, thus dekhiyā'. Kellogg 1965, 295, §497a notes the insertion of -i- before the perfect participle termination in the dialects of Rājpūtāna, 'often inserted in the "Plays"', quoting as examples sūraja ugīyā 'the sun has risen', rāja tākiyā '(I) have forsaken (my) kingdom' and kāgada le hũ āviyo 'I have brought a paper (i.e., a letter)'. Tessitori 1914-1916, §126 gives several examples of this intermediate -i- with roots ending in consonants from Old Mārvāṛī texts (kar-iu, kah-iu, ūḍ-iu, āp-iu; and the less frequent strong form -ia:u in jaṇ-ia:u and pūj-ia:u). Narottamadāsa Svāmī, in his Saṃkṣipta Rājasthānī-vyākaraṇa 1960, 83, introduces the -i- forms as regular alternatives of the sāmānya bhūta: both phiriyo and phiryo in the sg. and phiriyā as well as phiryā in the plural. The -i₃- in perfect participles in the analysed pads of Kabīr therefore appears to be due to the influence of western dialects. The same can be said about the sigmatic future joined to the root by the same connecting -i₃-: apart from Rājasthānī dialects, it appears also in Old Gujarātī and Old Marāṭhī.
B. Examples of the -i₃- connecting vowel
2nd pers. sg. of the -s₃- future (< OIA -i-ṣya-): 338.0 kāyā mãjisi kaũna gun 'why will you rub [your] body'
1st pers. sg. of the -h₂- future (< OIA -i-ṣya-): 257.3 kahai kabīra mai sala raci marih 'says Kabīr: having built a funeral pyre I shall die'
-0₂- perfective participle (< OIA -i-ta-): 188.2 ākāse phala phaliyā 'in the sky the fruit has ripened'
-b₁- verbal substantive (< OIA -i-tavya-): 315.0 iba mohi nācibau na āvai lit. 'now, dancing does not come to me (i.e., I do not feel like dancing)'
-b₂- verbal adjective (< OIA -i-tavya-): 113.0 jau pai rasanā rāma na kahibau, tau upajata binasata bhrãmata rahibau lit.: 'if one will not pronounce the name of Rām (lit.: "if the name of Rām is not to be pronounced ...") with one's tongue, then one has to keep being born, perishing, wandering around' (or: 'one will have to be born, to perish, to wander around again and again')
Section V2: -s₃- future
A. Origin and function
The origin of the sigmatic suffix used in the formation of the future tense in several NIA languages and dialects can be traced back to the OIA sigmatic future suffix -sya- / -i-ṣya- added to a strengthened verbal root. In the MIA stage (Śaurasenī, Māgadhī), the suffix -ssa- / -issa- is added predominantly to the present stem; the ending of the 1st pers. sg. is that of the secondary conjugation (-m instead of -mi) (Pischel 1900 GPS, 362, §520). In Apabhraṃśa, this final -m was, together with the preceding -a- of the stem, transformed into -u: Skt. kariṣyāmi > Pkt. karissaṃ > Apa. karīsu (with the reduction of the double consonant and compensatory lengthening of the preceding vowel). The known Apabhraṃśa terminations coincide with the corresponding forms in Old Mārvāṛī or Old Western Rājasthānī, as L.P. -asī; 1. pl. -asy, 2. pl. -asyo, 3. pl. -asī (Tessitori 1914-1916, 80-81, §121). Similarly, Kellogg 1965, 297-8, §502, brings further examples of use and adds that 'in Bundá, Kotah, along the river Chambal, and northward to Jaipúr, the future in स्यूं, etc., is the usual colloquial form'. In Kabīr's pads, occurrences of this type of future are not numerous; in the three forms of the 2nd pers. sg., we see the suffix of the future tense connected with the root by the connecting vowel -i₃-, and in the single instance of the 3rd pers. sg. the suffix is connected by the vowel -a₃-. In 138.0 we meet the form cīnhyasi, the affix -ya- being probably a graphic representation of a weakened pronunciation of the connecting vowel -i₃- (cf. McGregor 1968, 114, §2.9).
Since the sigmatic forms in the analysed pads of Kabīr are probably due to some influence of western dialects, still another possibility of their interpretation should be taken into account. Tessitori 1914-1916, §117, mentions as an alternative ending of the 2nd pers. pres. the form -si, although this is, according to him, very rare and limited to certain works written by Jainas, possibly due to the influence of the Prākrit used by them. He notes that before this -si, 'thematic a is optionally substituted by i or e' and quotes as examples sah-a-si, anubhav-i-si, kar-e-si, lahe-si, rāc-e-si. Some forms in the pads of Kabīr could perhaps be interpreted in this way: e.g., mãjisi in 338.0, for which several other Rājasthānī MSS have the variant reading mãjasi. Finally, still another possibility is to see in the form mãjisi, and according to M.P. Gupta and V. Sĩha also in cīnhyasi (in verse 138.0), a 2nd or 3rd pers. sg. of the Avadhī past tense: for a discussion of this possibility, see, in the present work, the section devoted to the inflected perfect.
On the basis of the few examples given below, it is impossible to make a comprehensive statement about the uses of the s-future. The future action is either understood as certain (deṣisi, binasasī) or it is put in an interrogative sentence. Metzger's observation about the use of the s-future in the letters of vakīls of 18th-century Rājasthān may be pertinent to our texts too (Metzger 2003, 105; translated from the German): 'It does, however, appear that Future 1 [i.e., the s-future] is used exclusively to denote states of affairs that will definitely occur (or rather, states of affairs that are expected to occur ...), while Future 2 [i.e., the l-future] often seems rather to express a possibility or probability.'
4. Entry for the corpus (processed by the concordancer built into the WordSmith 2.0 program for linguistic analysis).
Examples of the -s₃- future
2nd pers. sg.:
138.0 kā nāgaũ kā bdhai cma, jau nah cīnhyasi ātamarma 'what [is the use] of [going around] naked, what [is the use] of binding animal skin (around your body), when you will not recognize God in / as the Self?' (or: '... when you have not recognized the God in the Self?')
367.2 kah vai loga kah pura paṭãṇa; bahuri na deṣisi āi 'where are those people, where is the quarter, the city [you lived in]? You will not see [them] again having come [back]'
408.3a aba nah bhajisi bhajisi kaba bhāī; āvaigā ãta bhajyau nahĩ jāī 'if you will not worship now, when will you worship? [When] the end comes, it will be impossible to worship'
3rd pers. sg.:
92.2 je upajyā so binasasī 'what[ever] has been born will perish'
338.0 kāyā mãjisi kaũna gun; je ghaṭa bhītari hai malan 'why (lit.: "for what benefit / profit") will / do you rub [your] body, when the pot is dirty inside?' (or: '... have you rubbed [your] body, ...')
| 8,716.4 | 2008-01-01T00:00:00.000 | ["Linguistics"] |
Hydrocarbon reservoir characterization of "Otan-Ile" field, Niger Delta
Otan-Ile field, located in the transition zone of the Niger Delta, is characterized by complex structural deformation and faulting, which lead to high uncertainties in reservoir properties. These high uncertainties greatly affect the exploration and development of the Otan-Ile field and thus require proper characterization. Reservoir characterization requires the integration of different data, such as seismic and well log data, which are used to develop a proper reservoir model. Therefore, the objective of this study is to characterize the reservoir sand bodies across the Otan-Ile field and to evaluate their petrophysical parameters using 3-dimensional seismic and well log data from four wells. Reservoir sands were delineated using a combination of resistivity and gamma ray logs. The estimation of reservoir properties, such as gross thickness, net thickness, volume of shale, porosity, water saturation and hydrocarbon saturation, was done using standard equations. Two horizons (T and U) as well as major and minor faults were mapped across the 'Otan-Ile' field. The results show that the average net thickness, volume of shale, porosity, hydrocarbon saturation and permeability across the field are 28.19 m, 15%, 37%, 71% and 26,740.24 md, respectively. Two major faults (F1 and F5) dipping in the northeastern and northwestern directions were identified. The horizons are characterized by structural closures which can accommodate hydrocarbons. Amplitude maps superimposed on the depth-structure maps also validate the hydrocarbon potential of the closures identified on them. This study shows that the integration of 3D seismic and well log data with seismic attributes is a good tool for proper hydrocarbon reservoir characterization.
Introduction
The Niger Delta basin is ranked among the most prolific basins in the world, and its productivity has made it the fulcrum of the Nigerian economy. The 'Otan-Ile' field is located in the transition zone of the Niger Delta (Fig. 1). The basin is often characterized by complex structural deformation and faulting, which could lead to high uncertainties in reservoir properties (Doust and Omatsola 1990). These high uncertainties greatly affect the exploration and development of fields within the basin, such as the 'Otan-Ile' field. Improper interpretation of the heterogeneity of reservoir properties in the field has led to poor reservoir performance during hydrocarbon production. Nonlinearity, natural heterogeneity and uncertainty of reservoir parameters make problems related to hydrocarbon characterization difficult (Koneshloo et al. 2018). Thus, it is problematic to clearly quantify spatial relationships among variable reservoir properties. In resolving this problem, well logs and seismic data can be used to generate useful petrophysical parameters, maps and seismic attributes (Edigbue et al. 2015), which could provide a detailed description of reservoir properties and assist in optimal well placement. A concise geometric description of the stratigraphic and structural aspects of a reservoir can be achieved using well log and seismic data (Adelu et al. 2016; Sanuade et al. 2018; Akanji et al. 2018). (Fig. 1b: Base map of the 'Otan-Ile' field.)
Several studies have been carried out in the Niger Delta Basin using 3D seismic and well log data to properly characterize hydrocarbon reservoirs (Ibe and Ezekiel 2019), for stratigraphic analysis (Emujakporue and Eyo 2019; Dim et al. 2019), volumetric analysis (Adelu et al. 2016; Akanji et al. 2016; Okpogo et al. 2018; Ukuedojor and Maju-Oyovwikowhe 2019), pore pressure prediction (Chiazor and Beka 2019; Tanko et al. 2019; Umoren et al. 2019) and structural analysis (Akanji et al. 2018; Adeoti et al. 2014; Ibe and Ezekiel 2018; Soneye and Osinowo 2019). Furthermore, seismic attributes (measures of seismic data that help to visualize or quantify structures of interpretational interest; Marfurt and Chopra 2007) are very important tools in analyzing and interpreting hydrocarbon reservoirs, as they provide detailed information. Seismic attribute maps have also been used by several researchers to enhance hydrocarbon characterization (Adelu et al. 2016; Sanuade et al. 2018; Akanji et al. 2018; Ibe and Ezekiel 2019).
Therefore, the objective of this study is to characterize the complex reservoir bodies in the Otan-Ile field and to evaluate the petrophysical parameters so as to provide detailed geological information about the field.
Structure and Sedimentology of Niger Delta
The structure of the Niger Delta basin is comparable to a colossal rollover, with paralic shales and sands as the principal sediments present. The paralic sequence prevails at 1930-2050 m subsea (Short and Stauble 1967; Whiteman 1982; Doust and Omatsola 1990).
The three key formations within the Niger Delta basin are the bottom Akata, the middle Agbada and the topmost Benin Formations (Fig. 2). The first, the youngest, coastal-plain, Miocene to Recent Benin Formation (Short and Stauble 1967; Evamy et al. 1978; Ejedawe 1986; Doust and Omatsola 1990) largely consists of alluvial deposits and non-marine sandstones laid down in a continental, fluvial setting, extending from the western flank of the Niger Delta complex across the basin to the southern region of the shoreline. Bands of gravel, lignite, wood fragments, minor intercalations of shale and coarse-grained sandstones are the prime deposits observed. The formation is also associated with only negligible hydrocarbon accumulation. Its thickness varies widely, exceeding 1820 m in some places.
The underlying Agbada Formation is the chief petroleum-bearing, Eocene to Pliocene formation, deposited in environments ranging from coastal brackish and/or marine to fluvial. There are equivalent layers and quantities of sandstone and shale in the basal region (Short and Stauble 1967), with more sand in the upper layers. Furthermore, in the basal layer, shales result from a gradual variation of unconsolidated to marginally consolidated sandstones which are well graded, with varying degrees of roundness. The sands host the hydrocarbons, acting as reservoirs, with the shales offering effective closures and seals (Bustin 1988; Corredor et al. 2005; Adeoti et al. 2014). Here, there are several orientations of the belts hosting massive quantities of oil, and a maximum thickness of 4500 m is observed (Evamy et al. 1978; Doust and Omatsola 1990; Dieokuma et al. 2014).
The oldest, bottom, marine Akata Formation is about 7000 m thick and, with an age range from Eocene to Recent, exhibits signs of excess overburden pressure and resembles a diapir on the offshore continental slope. It is made up of a dense series of potential hydrocarbon source rock (shale), turbidites and insignificant amounts of clay and silt (Doust and Omatsola 1990). The shales also host local siltstones and sandstones as interbeddings (Haack et al. 2000).
(Fig. 3: Examples of Niger Delta oil field structures and associated trap types (Doust and Omatsola 1990).)
Northwest to southeast and northeast to southwest trending growth faults - antithetic, flank, regional, crestal, structure-building and listric faults - as well as ridges, rollover anticlines and shale diapirs are associated with the Niger Delta basin (Hosper 1971). The Niger Delta comprises depobelts/megaunits, which are distinct units with respect to hydrocarbon distribution, stratigraphy and structure-building (Evamy et al. 1978). The development and style of the discrete megaunits are connected to the equilibrium between the rates of sediment subsidence and supply (Knox and Omatsola 1989). Doust and Omatsola (1990) described a variety of structural trapping elements, including those associated with simple rollover structures, clay-filled channels, structures with multiple growth faults, structures with antithetic faults and collapsed crest structures (Fig. 3).
Database
The data used for this study include 3D seismic data (covering an area of 51 km²), well logs from four wells (sonic, density, gamma ray and resistivity) and checkshot data.
Lithology delineation
The methodology used in this study includes identifying lithology using the gamma ray log. Lithology was identified by defining the shale baseline at 65 API (American Petroleum Institute) units (Asquith and Krygowski 2004), a constant line drawn against the shale formations across the entire 'Otan-Ile' field. A deflection from the shale baseline to the left, even a slight one, indicates a sand formation. Delineation of petroliferous zones was done using a combination of gamma ray and resistivity logs. Where a sand formation coincides with a relatively high resistivity response, it is recognized as a petroliferous zone.
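As a small illustration only, the cutoff logic just described could be coded as below. The depth samples are hypothetical, and the 2 ohm-m resistivity threshold is an assumption (the text only speaks of a "relatively high" resistivity response); the 65 API shale baseline follows Asquith and Krygowski (2004) as cited above.

```python
import numpy as np

# Sketch of the sand / pay flagging rule described above.
depth = np.array([2500.0, 2500.5, 2501.0, 2501.5])   # m, hypothetical samples
gr    = np.array([  78.0,   52.0,   48.0,   90.0])   # gamma ray, API
res   = np.array([   1.1,    6.5,    8.2,    0.9])   # deep resistivity, ohm-m

SHALE_BASELINE_API = 65.0   # cited shale baseline
RES_CUTOFF_OHMM    = 2.0    # assumed "relatively high resistivity" cutoff

is_sand = gr < SHALE_BASELINE_API            # deflection left of the baseline
is_pay  = is_sand & (res > RES_CUTOFF_OHMM)  # sand coinciding with high resistivity

for d, s, p in zip(depth, is_sand, is_pay):
    print(f"{d:7.1f} m  sand={bool(s)}  petroliferous={bool(p)}")
```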
Petrophysical evaluation
Standard equations were used for the estimation of reservoir properties, which involves calculation of reservoir parameters such as gross thickness, net thickness, volume of shale, porosity, water saturation and hydrocarbon saturation of the field.
Volume of Shale
The data obtained from the gamma ray log were used to achieve this. The volume of shale (Vsh) was mathematically computed from the gamma ray index using Eq. 1 (Asquith and Krygowski 2004), where IGR can be estimated using Eq. 2 as IGR = (GR_log - GR_min) / (GR_max - GR_min),
where GR_max is the gamma ray maximum (shaly sand); GR_min is the gamma ray minimum (clean sand); GR_log is the gamma ray log reading (shaly sand); and IGR is the gamma ray index. Porosity: Porosity is a measure of the amount of internal pore space capable of holding fluid and is expressed as a percentage (%) (Asquith and Krygowski 2004). Porosity was estimated from the sonic log using Eq. 3, the time-average relation φ_sonic = (Δt_log - Δt_ma) / (Δt_f - Δt_ma), where φ_sonic is the sonic-derived porosity; Δt_ma is the interval transit time of the matrix; Δt_log is the interval transit time of the formation; and Δt_f is the interval transit time of the fluid in the well bore (fresh mud = 620 µs/m, salt mud = 607 µs/m). The sonic log records only matrix porosity rather than fracture or secondary porosity. Water saturation (Sw): Water saturation was estimated using Eqs. 4-6 (Archie 1942), which combine to give Sw = (Ro / Rt)^(1/n) with Ro = F × Rw,
where Sw = water saturation; F = formation factor; Rw = formation water resistivity at formation temperature; Ro = resistivity of the formation at 100% water saturation; Rt = true formation resistivity; and n = saturation exponent, which is usually 2.
Hydrocarbon saturation
Hydrocarbon saturation (Sh) was calculated using Eq. 7, i.e. Sh = 1 - Sw.
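Because the bodies of Eqs. 1-7 are not reproduced above, the following Python sketch shows one way the described workflow could be implemented. The linear Vsh = IGR response, the sandstone matrix transit time of 182 µs/m and the Archie constants a = 1, m = 2 are assumptions made for illustration only, and the input readings are hypothetical.

```python
def gamma_ray_index(gr_log, gr_min, gr_max):
    """Eq. 2: IGR = (GRlog - GRmin) / (GRmax - GRmin)."""
    return (gr_log - gr_min) / (gr_max - gr_min)

def shale_volume(igr):
    """Eq. 1, assuming a simple linear response: Vsh = IGR."""
    return igr

def sonic_porosity(dt_log, dt_matrix=182.0, dt_fluid=620.0):
    """Eq. 3 (Wyllie time average), transit times in microseconds per metre.

    dt_matrix = 182 us/m is an assumed sandstone value; dt_fluid = 620 us/m
    is the fresh-mud figure quoted in the text.
    """
    return (dt_log - dt_matrix) / (dt_fluid - dt_matrix)

def water_saturation(phi, rw, rt, a=1.0, m=2.0, n=2.0):
    """Eqs. 4-6 (Archie 1942): F = a / phi**m, Ro = F * Rw, Sw = (Ro / Rt)**(1/n)."""
    formation_factor = a / phi ** m
    ro = formation_factor * rw
    return (ro / rt) ** (1.0 / n)

def hydrocarbon_saturation(sw):
    """Eq. 7: Sh = 1 - Sw."""
    return 1.0 - sw

# Hypothetical readings for a single reservoir interval.
igr = gamma_ray_index(gr_log=55.0, gr_min=30.0, gr_max=120.0)
vsh = shale_volume(igr)
phi = sonic_porosity(dt_log=330.0)
sw = water_saturation(phi, rw=0.03, rt=12.0)
sh = hydrocarbon_saturation(sw)
print(f"Vsh={vsh:.2f}  porosity={phi:.2f}  Sw={sw:.2f}  Sh={sh:.2f}")
```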
3D seismic interpretation
Fault identification on the seismic sections was centered on reflection discontinuity, vertical displacement of reflections, mis-closures in tying reflections around loops, sudden termination of events and changes in the shape of events across the faults. The tops of the petroliferous zones were tied to the seismic section to identify two horizons (T and U) through seismic-to-well ties, based on continuity, event strength, amplitude, coherency and prospectivity.
(Fig. 6: Inline 6704 displaying the mapped horizons, well tops and well log.)
Two horizons were mapped across the 3D seismic volume, and time, depth and attribute contour maps were generated for the horizons. Seismic attributes give an idea of the vertical and lateral variations of the reservoirs in the subsurface. Having completed the horizon and fault mapping, an attempt to complement the conventional interpretation was undertaken using attribute analysis. The two attributes utilized were the root mean square and maximum amplitudes.
Well log data interpretation
Two reservoir sand bodies (Sands T and U) were delineated across the 'Otan-Ile' field from four wells: OW-1, OW-2, OW-3 and OW-4 (Fig. 4). The general stratigraphy comprises intercalations of sand and shale layers. The shale strata increase in thickness with depth, while the sand layers decrease in thickness with depth. This is typical of the rock sequence within the Agbada Formation (Anthony and Aurelius 2013; Akanji et al. 2018). The sand bodies thin towards the southeastern part of the 'Otan-Ile' field, implying the direction of deposition or erosion, as highlighted by Catuneanu (2006). The gross thicknesses of the reservoir sand bodies are summarized in Table 1. The average gross thickness of the reservoir sand bodies across the wells is 31.76 m. The average effective porosity (31%) and permeability (26,740.24 md) of the field show that it is viable in terms of porosity and permeability (Buller et al. 1970; Ibe and Ezekiel 2019). This plays a vital role in the release of hydrocarbons from the reservoir. The petrophysical parameters of this field are similar to those reported in other works carried out in the Niger Delta (Adeoti et al. 2014; Adelu et al. 2016; Sanuade et al. 2018; Akanji et al. 2018). The porosity and permeability values of the field satisfy the qualitative reservoir description criteria of Rider (1986) (Table 2). The field varies from very good to excellent in terms of porosity and permeability.
Seismic data interpretation
Faults F1, F2, F5, F17 and F21 are the major growth faults delineated in the 'Otan-Ile' field, while the minor faults are F3, F4, F11, F16, F18, F19 and F34 (Fig. 5). Both antithetic (F34 and F41) and synthetic (F1) faults were identified in the field (Fig. 5). The throw of the major faults increases with depth, so they may serve as migration paths, while the minor faults are characterized by small throws and could be acting as seals in the field. The faults could serve as traps, and most of them form potential sites for thick sediment accumulation in their downthrown regions, as described by Short and Stauble (1967). Figure 6 shows well OW-4 projected onto the seismic section, which validates that the proper horizons were mapped and shows the extent of the petroliferous layers. The primary seal rocks in the Niger Delta are the interbedded shales within the Agbada Formation; the juxtaposition of reservoir sands against shale beds due to faulting creates good seal integrity (Doust and Omatsola 1990). The shale provides seals in the form of clay smears along these syn-sedimentary faults and vertical fault seals in a compressive stress setting (Weber and Daukoru 1975).
The sealing capability of the faults depends on the amount of throw and the shale/clay smeared along the fault planes (Busch 1975; Weber and Daukoru 1975). According to Weber and Daukoru (1975), faults can be sealing if either the throw is less than 492 ft (150 m) or the amount of shale/clay smeared along the fault planes is greater than 25%. The average throws of the major faults F1 and F5 are 134.88 ft (41.11 m) and 125.4 ft (38.22 m), respectively (Table 3). Therefore, based on the amount of throw, faults F1 and F5 are sealing, which is in agreement with the work by Weber and Daukoru (1975); in the Niger Delta, the soft and over-pressured Akata Shale in most cases rises up to fill the fault zones, thus enhancing their sealing capabilities.
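A minimal sketch of this sealing screen, applied to the two throw values quoted above, might look as follows; the function name and interface are illustrative only.

```python
# Sketch of the fault-sealing screen described above (Weber and Daukoru 1975):
# a fault is treated as sealing if its throw is below 150 m (492 ft) or the
# shale/clay smear along the fault plane exceeds 25%.

def is_sealing(throw_m, shale_smear_fraction=None):
    if throw_m < 150.0:
        return True
    return shale_smear_fraction is not None and shale_smear_fraction > 0.25

# Average throws reported for the two major faults in the field (Table 3).
print("F1 sealing:", is_sealing(41.11))   # True
print("F5 sealing:", is_sealing(38.22))   # True
```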
The time-structure map of Horizon T in Fig. 7 shows that the anticlinal structure observed on this surface is heavily faulted. The major faults associated with this structure are faults F1 and F5. Faults F22 and F11 are antithetic to fault F1. Minor faults include faults F2, F3, F6, F7 and F8.
(Fig. 9: Relative acoustic impedance attribute displaying high amplitudes (bright spots) as an indicator of hydrocarbons.)
The probable structures accumulating oil and gas in this field are fault-assisted closures. On the depth-structure map (Fig. 8), the two major closures identified in the southeastern region are structurally controlled four-way closures. These two major structural closures possess efficient traps suitable for hydrocarbon accumulation. The trapping potential of the 'Otan-Ile' field can be attributed to faults and/or anticlines, acting as fault-aided closures, anticlinal closures, or both.
The relative acoustic impedance attribute, a volume seismic attribute, shows the areal extent of the bright spots (sweet spots) among the several seismic attributes determined for the reservoir sand bodies (Fig. 9). The strong reflections detected are suggestive of reservoir rocks and may be due to the presence of hydrocarbons in the delineated sand layers. It is observed that the existing wells target the observed bright spots for production, which supports the effectiveness of seismic attributes in optimal well placement. Figures 10 and 11 show the root mean square (RMS) and maximum amplitude maps. The high-amplitude patterns observed around the existing well positions on the attribute maps indicate bright spots, which may be produced by a locally greater-than-normal velocity contrast between two strata, or by a decrease in acoustic impedance from the overlying shale to the hydrocarbon-saturated sand reservoir. The high-amplitude areas occur in patches on the attribute maps and are most distinct in the northern part of the maps. Well OW-2 lies directly on a high-amplitude zone. The pattern of amplitude distribution on the maximum-amplitude attribute conforms to the RMS map, as distinct zones of anomalous amplitude were observed in the northern and central parts of the map. Figures 12 and 13 show the depth-structure maps superimposed on the RMS attribute maps of the horizons. The bright spots (high amplitudes) observed on the upthrown side of one of the main structure-building major growth faults (F1) and on the downthrown side of the synthetic fault (F9) support the interpretation from the depth maps overlaid on the amplitude maps. The closures observed on the depth-structure maps conform to the high amplitudes.
Conclusion
The reservoir properties of the 'Otan-Ile' field in the transition zone of the Niger Delta have been characterized using 3D seismic and well log data. Two reservoir sand bodies were delineated across the available wells. The estimated petrophysical properties show that the field is viable in terms of hydrocarbon production. The growth fault F1 nearly cuts across the field, increases in throw with depth, and could serve as a trap for hydrocarbons in the field. The time and depth structure maps revealed the geometry of the subsurface and the nature of the hydrocarbon traps in the field, which are fault-assisted structural and anticlinal closures. The attribute maps generated conform to the structural highs observed on the time and depth maps, which validates the integrity of the interpretation. This study has shown that the integration of 3D seismic and well log data can be used for hydrocarbon reservoir characterization, and the information from this study will aid proper management of the reservoirs in the Otan-Ile field.
Compliance with ethical standards
Conflicts of interest The authors declare that they have no conflict of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
| 4,277 | 2021-01-11T00:00:00.000 | ["Geology"] |
Earnings quality and the cost of debt: A case study of Vietnam
Abstract This paper examines the impact of earnings quality (EQ) on the cost of debt (COD) of Vietnamese companies. We use data from companies listed on the Vietnam stock market from 2010 to 2019. In this paper, we develop a model to investigate the influence of audit quality and foreign ownership on COD. We find that EQ has a negative relationship with COD. However, when a firm is financially distressed, EQ has a positive relationship with COD. As an intermediate variable, EQ also has a negative relationship with COD.
Introduction
Businesses' earnings quality (EQ) is an essential factor in minimizing information asymmetry and thus promoting the development of financial markets. EQ reflects the potential for earnings growth of the business, or the probability that the business will achieve the expected earnings growth in the future. In the relationship between businesses and creditors, and in setting the terms of credit contracts, lenders often rely mainly on the figures presented in financial statements. Therefore, information asymmetry due to the low quality of information presented in financial statements will affect funding decisions. Firms with a high level of information asymmetry will find it more difficult to access external funding sources, or will encounter strict loan terms, shortened loan tenors and a high COD, which is the price protection set by the creditor to limit credit risk.
According to Richardson et al. (2001), EQ was assessed based on the stability of future sales (Beneish & Vargus, 2002), and it was also assumed that the stability of the firm's revenue would be evidence for the EQ of the business.
ABOUT THE AUTHORS
Van Vu Thi Thuy is a lecturer at the School of Banking and Finance, National Economics University, Vietnam. Her work focuses on corporate finance and the stock market. Hung Dang Ngoc is an associate professor at Hanoi University of Industry and has held a Ph.D. since 2011. He teaches and researches accounting and finance. Tram Nguyen Ngoc is a lecturer and research student at the National Economics University. She teaches and researches corporate finance and the stock market. Hoang Anh Le is a lecturer and research student at the National Economics University. He teaches and researches corporate finance and the stock market.
PUBLIC INTEREST STATEMENT
Using the OLS estimation method and data comprising 3,800 observations of listed companies over 2010-2019, we find empirical evidence that firms with high EQ reduce their own COD. The study also identifies evidence on the relationship between COD and the factors of audit quality and foreign ownership. Besides, the study finds that when EQ interacts with financial distress, earnings quality has a positive relationship with COD.
Penman and Zhang (2002) defined EQ as the predictability of the future earnings of a firm. Investors and other stakeholders in the capital market often rely on the information presented in financial statements to assess the future cash flows of a business and thereby estimate expected returns (Francis et al., 2004). As such, in order to be capable of predicting future cash flows better, the earnings published in the financial statements should be of good quality.
Several empirical studies have produced quite similar results in terms of the negative relationship between EQ and the cost of debt, such as Francis et al. (2005), Gao (2010), Lambert et al. (2007), Carmo et al. (2016), Beltrame et al. (2017), Orazalin and Akhmetzhanov (2019), and Houcine and Houcine (2020). However, the EQ measure followed various earnings management measurement models in different contexts and markets.
In Vietnam, a recent study by Le et al. (2021) examined the impact of accrual quality on the cost of debt with a sample of 889 observations from 2012 to 2017 and showed that EQ had a negative relationship with COD. We continue in the research direction of Le et al. (2021), with a larger sample size, and at the same time add other independent factors, namely independent audit quality and the impact of foreign ownership on the cost of debt. We also extend the model by considering the impact of earnings quality when it interacts with financial distress on COD; the research findings are interesting when this interaction variable is considered. Simultaneously, we also consider whether or not there is an influence of EQ as an intermediate variable on COD. According to this study, providing high-quality information in financial statements will reduce the cost of borrowing and open up opportunities for businesses to gain access to loans, survive and develop in today's volatile market economy.
Earnings quality
Dechow et al. (2010) and Francis et al. (2004) summarized previous studies, provided criteria for EQ evaluation, and divided these criteria into two groups based on the basis researchers adopted to assess whether accounting profit honestly reflects the business performance of the entity. The EQ classification can be based on the following criteria:
Firstly, the accounting-based measures, including earnings management, accrual quality, earnings persistence, predictability, and smoothness, are built on the assumption that accounting profit results from efficient cash flow allocation into the reporting periods through accrual accounting. As reported earnings reflect the actual performance of the reporting entity, a relationship will exist among earnings, cash flow, and other accounting information.
Secondly, market-based attributes are determined based on the view that earnings reflect economic profits, and that stock return reflects economic profits. This group consists of two criteria: (1) value relevance, or the extent to which reported earnings can account for the fluctuations in a company's stock price and the return that investors receive from the company's stocks, and (2) Timeliness, which focuses on evaluating whether losses are recognized on time and in the same period as they are incurred.
Vander Bauwhede et al. (2015) used accrual quality to measure EQ, a method which allows direct testing of the impact of EQ on COD. Their research revealed that higher information quality reduces information asymmetry, and that banks reward firms with higher EQ by offering lower interest rates. The results indicated that firms with high EQ receive economic benefits, as high EQ also minimizes the cost of debt financing. Choi and Pae (2011), in a study on the relationship between business ethics and financial reporting quality in Korea, measured EQ in three ways: earnings management (using accrual variables), accounting conservatism, and the ability of accruals to predict future operating cash flows. The results indicated that businesses with a higher commitment to business ethics had higher EQ. Such firms also showed less earnings management, more consistency in financial statements, and more accurate cash flow forecasts than other firms. In addition, these businesses also tended to maintain EQ in the future. Schipper and Vincent (2003) believed that EQ plays an essential role for those who use earnings data in contracting and investment decisions. In other words, EQ is a critical decision-making factor for private and institutional investors and for partners setting up contracts. Furthermore, the previous research literature suggests that higher EQ helps reduce adverse selection (Lambert et al., 2007) and lowers COD (Vander Bauwhede et al., 2015).
The cost of debt
The cost of debt is the portion of the cost of capital payable on external funds. This cost covers both short-term and long-term loans. In the case of a company issuing shares, the issuance cost should also be added. One of the methods adopted by professionals in the financial management industry is to calculate the cost of capital as a weighted average of the costs of the various capital sources. In debt financing, the interest rate, or the cost of debt, is the main economic benefit the lender can derive from the borrowing business. Lenders lend money to businesses in exchange for principal and interest, while control of the business and the alignment of shareholders and managers are preserved through contract terms such as covenants (Kothari et al., 2010). As such, lenders look for information on the earnings power of a firm, for example its periodic business performance, as an indicator of loan repayment capacity and of the avoidance of default. Debt contract theories consider the interest rate an essential mechanism in establishing a debt contract, and previous studies have shown a significant impact of, and trade-off between, COD and the debt contract terms used.
Studying earnings quality and the cost of debt
Recently, researchers have focused on EQ in relation to COD. Anderson et al. (2004) collected a sample of 252 industrial companies for the period 1993 to 1998. The findings revealed that board size and the full independence of the auditors have a positive relationship with the reliability of EQ and therefore significantly reduce COD. Lambert et al. (2007) developed a model based on the CAPM and showed that EQ impacted COD both directly, through investors' perceptions of the allocation of future cash flows, and indirectly, through the decisions made by businesses on future cash flows. Given this direct influence, EQ is not a separate information risk factor; in other words, this influence is not diversified away in large economies. The quality of accounting information thus impacts the cost of debt both directly and indirectly. Yee (2006) noted that, given uncertainty about future dividend payouts, limited EQ would increase the risk premium. The author also argued that as long as there was uncertainty about portfolio diversification (future dividend payouts being uncertain) and investors were not wholly calm, limited EQ would increase the equity risk premium. In a study by Kim and Qi (2010) on businesses in the US, samples were collected from January 1970 to December 2006. After examining low-priced securities, the authors measured EQ using the accruals model and found a significant pricing impact of accrual quality on stock returns. Furthermore, the authors also revealed that the accrual quality risk premium was associated with fundamental risks related to macroeconomic conditions and the business activities of enterprises. Gao (2010) found that the negative correlation between EQ and COD exists only under certain conditions: in an economy where investor competition is perfect, the author provided conditions under which the negative correlation between EQ and COD is unlikely. The quality of financial statement disclosure improves investor benefits by reducing COD. The research results of Choi and Pae (2011) revealed that businesses with a higher commitment to business ethics have higher EQ. Such firms also show less earnings management, more consistent financial statements, and more accurate cash flow forecasts than other firms. In addition, such enterprises also influence the quality of future financial statements. More recently, Van Caneghem and Van Campenhout (2012) provided evidence that financial leverage has a positive relationship with the EQ of firms. The authors used a series of EQ variables based on the auditor's verification to test EQ's influence on enterprises' financial leverage.
Furthermore, studies using auditors' confirmations assumed that verification from auditors would increase the quality of the information in the financial statements, and considered two issues: (1) whether the auditor's verification improved EQ, and (2) whether EQ affected debt availability and use. The findings showed that EQ impacted enterprises' access to credit, thereby affecting the cost of debt. Vander Bauwhede et al. (2015) used accrual quality to represent the EQ of firms from 1997 to 2010 and found evidence that the EQ of a firm had a negative impact on its interest costs.
Studying the audit quality and the cost of debt
Previous studies have examined various aspects of the relationship between audit quality and the cost of debt (Dhaliwal et al., 2008; Kim et al., 2013; Li, Xie & Zhou, 2010; Pittman & Fortin, 2004). DeAngelo (1981) defined audit quality as the probability that the auditor will detect serious material misstatements in a client's financial statements. According to this definition, audit quality is determined by the auditor's professional competence, independence, and the other resources allocated to the audit (such as time and the audit team). DeAngelo (1981) suggested that audit quality is positively related to audit firm size: auditors of large audit firms are expected to be motivated by their greater reputation at stake to maintain independence from clients. Francis and Wilson (1988) argued that the audit firm's brand is a proxy for audit quality. As such, previous studies on audit quality focused on examining its effects by comparing audits conducted by the largest audit firms (currently the Big4) with those conducted by smaller ones. These studies, usually based on samples of listed companies, showed that audits performed by the Big4 were indeed associated with higher audit quality. In particular, integrated studies on audit quality revealed that audits performed by the Big4 had lower litigation rates (Palmrose, 1988), higher audit fees (DeFond et al., 2000), and a reduced level of earnings management through accruals (Becker et al., 1998). Besides, Big4 auditors were more likely to put forth unfavourable opinions (Francis & Krishnan, 1999) and to provide passages that emphasized financial distress signals.
Regarding the cost of debt, Blackwell et al. (1998) showed that auditing impacted US firms' cost of bank loans. Other related empirical studies applied indirect measures of the cost of debt. Using data on firms in Korea, Kim et al. extended the findings of Blackwell et al. (1998) to show that audit quality, through the audit firm's reputation, had an impact on reducing the cost of borrowing. Empirical evidence from Spain indicated that audits performed by the Big4 impacted the debt valuation of firms, while the audit opinion did not have such a correlation (Cano Rodríguez et al., 2008). Furthermore, Illueca Muñoz and Gill-de-Albornoz (2006) suggested that a negative relationship exists between accrual quality and the cost of debt based on a sample of Spanish listed firms audited by Big4 audit firms. Clients of Big4 auditors also had larger stock trading volumes in initial public offerings (IPOs; Jang & Lin, 1993) and higher earnings ratios (Teoh & Wong, 1993).
Earnings quality
To measure earnings quality, we follow the approach of McNichols (2002), which was developed based on the model of Dechow and Dichev (2002). This measure takes into account the change in revenue (∆REV) and property, plant, and equipment (PPE):
WCA_it = β0 + β1·CFO_i,t-1 + β2·CFO_i,t + β3·CFO_i,t+1 + β4·∆REV_it + β5·PPE_it + ε_it   (1)
where WCA_it is the accrued working capital of enterprise i in year t, calculated as the change in current assets (∆CA) minus the change in cash and cash equivalents (∆Cash), minus the change in current liabilities (∆CL), plus the change in short-term bank debt (∆Debt): WCA = ∆CA - ∆CL - ∆Cash + ∆Debt. CFO_i,t-1, CFO_i,t and CFO_i,t+1 are the operating cash flows in year t-1, year t and year t+1, respectively. All variables are divided by total assets (A_it, total assets); ∆REV_it is the change in revenue of company i in year t; and PPE_it is the gross cost of fixed assets of company i in year t. All variables in Equation (1) are summarized in Table A1 (Appendix).
To estimate the quality of accruals, we used the model developed by McNichols (2002), which represents accrual quality by regressing working capital accruals on the operating cash flows of the previous year, the current year, and the immediately following year, together with the change in revenue and fixed assets, all divided by total assets at the beginning of the period. To measure EQ, an AQ variable was generated as the standard deviation of the residuals ε_i,t of Equation (1) after performing the regression.
A higher accrual quality (AQ) value indicates poorer accrual quality, as cash flows then account for only a minor part of the variation in current accruals. Since earnings are the sum of accruals and cash flows, and the cash flow component is generally considered objective and unmanipulated, EQ depends on the quality of the accruals. Therefore, poorer accrual quality implies lower EQ. The variable EQ is measured from accrual quality as EQ = AQ × (-1).
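A minimal sketch of how Equation (1) and the AQ/EQ construction could be estimated is given below. It assumes a firm-year panel DataFrame with columns WCA, CFO_lag, CFO, CFO_lead, dREV, PPE and firm_id, with the accrual and cash flow variables already scaled by total assets; the pooled (cross-sectional) estimation and the column names are illustrative choices rather than the paper's exact procedure.

```python
import pandas as pd
import statsmodels.formula.api as smf

def earnings_quality(df: pd.DataFrame) -> pd.Series:
    """Return firm-level EQ = -(std of Equation (1) residuals)."""
    # Equation (1): WCA_it = b0 + b1*CFO_{t-1} + b2*CFO_t + b3*CFO_{t+1}
    #               + b4*dREV_it + b5*PPE_it + e_it
    model = smf.ols("WCA ~ CFO_lag + CFO + CFO_lead + dREV + PPE", data=df).fit()
    resid = model.resid

    # AQ: standard deviation of the residuals per firm (higher AQ means
    # poorer accrual quality); EQ = AQ * (-1) reverses the sign so that
    # higher values mean better earnings quality.
    firm = df.loc[resid.index, "firm_id"]
    aq = resid.groupby(firm).std()
    return (-aq).rename("EQ")

# Usage (df loaded elsewhere):
# eq_by_firm = earnings_quality(df)
```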
Earnings quality and the cost of debt
Based on the study overview, we have established the following baseline model (Model 1):
COD_it = β0 + β1·EQ_it + β2·SIZE_it + β3·LEV_it + Industry dummies + ε_it
The cost of debt (COD)
When examining the cost of debt, each study has a different way of measuring it. Borisova and Megginson (2011) and Borisova et al. (2015) determined the cost of debt based on the yield spread between corporate and government bonds of matching maturities. This measurement suits developed countries with long-standing corporate bond markets and strong corporate capital mobilization channels. However, it becomes difficult to implement in emerging markets, where loans are obtained mainly through banks (Shailer & Wang, 2015). In order to standardize the distribution of interest expenses and fit the characteristics of bank financing in emerging markets, Shailer and Wang (2015), Bliss and Gul (2012), Francis et al. (2005), and Gray et al. (2009) calculated the cost of debt as the logarithm of the ratio of interest expense to total current liabilities plus long-term liabilities. Stanišić et al. (2016) and Francis et al. (2005) measured the cost of debt differently: the cost of capital was determined based on the pre-tax interest expense, computed as total interest expense divided by the average interest-bearing liabilities for the year.
In this study, we measure the cost of debt following Stanišić et al. (2016), Francis et al. (2005), Persakis and Iatridis (2015), and Pittman and Fortin (2004), to fit the debt characteristics of Vietnamese enterprises: COD = interest expense / (interest-bearing long-term liabilities + interest-bearing current liabilities).
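As a small illustration of this measure, the following sketch computes COD from the stated components; the figures and names are hypothetical.

```python
# Sketch of the cost-of-debt measure adopted above:
# COD = interest expense / (interest-bearing long-term + current liabilities).

def cost_of_debt(interest_expense, lt_interest_bearing, st_interest_bearing):
    return interest_expense / (lt_interest_bearing + st_interest_bearing)

# Example: 12 bn VND of interest on 150 bn VND of interest-bearing debt.
print(round(cost_of_debt(12.0, 100.0, 50.0), 4))   # 0.08
```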
Earnings quality (EQ)
Based on the studies by Anderson et al. (2004), Yee (2006), Lambert et al. (2007), Kim and Qi (2010), Gao (2010), Choi and Pae (2011), Vander Bauwhede et al. (2015), Hung and Van (2020), Dang and Tran (2020), and Dang et al. (2021), high-quality information facilitates transparency, which helps reduce the problem of information asymmetry and satisfies the requirements set by investors and shareholders. A series of advantages of providing high-quality information have been mentioned by researchers: improving EQ reduces information and liquidity risks, restricts managers from using their rights for their own interests, and enables firms to make more effective investment decisions. Enhancing EQ requires firms to provide more information of higher quality, in order to ensure that market participants have adequate information to make investment and credit decisions. The authors also concluded that higher EQ makes it more likely that the problem of information asymmetry between firms and creditors will be eased.
Therefore, we form the following hypothesis: H1: Earnings quality has a negative relationship with the cost of debt.
In Model 1, we add two control variables: firm size (SIZE) and financial leverage (LV). Firm size (SIZE) is measured as the logarithm of total assets, and financial leverage (LV) equals the ratio of liabilities to equity. We also use dummy variables to represent industry-specific characteristics.
Size.
Size allows lenders to gauge a company's market power and, indirectly, to estimate its bankruptcy risk. Greater assets, earnings, revenue, or number of employees imply greater financial autonomy; such companies tend to diversify their business activities more, and the result is lower risk and a lower likelihood of bankruptcy. Large firms tend to use long-term debt, while small ones access short-term loans. Large firms often enjoy economies of scale (Berger & Udell, 2006) in accessing long-term debt and even have the capacity to negotiate loan terms, preferential interest rates, and credit limits with banks and creditors. Moreover, large firms tend to diversify their businesses and have more stable cash flows, so their risk of bankruptcy is relatively lower than that of smaller firms. Information asymmetry theory also points to a negative relationship between size and COD. We therefore hypothesize: H2: Size has a negative relationship with the cost of debt.
LV.
A positive relationship between financial leverage and COD was found in previous studies (Berger & Udell, 2006; Pittman & Fortin, 2004), reflecting the fact that higher financial leverage means the company faces more risk. Therefore, a positive relationship between leverage and COD is expected. Moreover, the trade-off theory also suggests a positive relationship between COD and financial leverage: firms with higher levels of financial leverage face higher costs of financial distress, which can lead to bankruptcy. If the lender is aware of this risk, for whatever reason, it will ask for a higher interest rate on the loan to offset the risks the company is facing. Previous studies measure financial leverage as the ratio of debt to the company's total assets. We suggest the following hypothesis: H3: Financial leverage has a positive relationship with the cost of debt.
Audit quality, foreign ownership and cost of debt
Model 2:
Audit quality (AUQ)
Recent studies have shown that being audited by a Big4 firm is related to a lower cost of borrowing in US-listed companies (Pittman & Fortin, 2004). Using an indirect measure of the cost of debt, Pittman and Fortin (2004) found a lower cost of debt for newly listed US firms audited by Big4 auditors. Similarly, Kim et al. indicated that companies audited by Big4 firms had lower interest rates on bank loans than those audited by non-Big4 firms. Mansi et al. (2004) argued that audits performed by Big4 firms were related to falling bond yields and high risks. However, Piot and Missonier-Piera (2007) adopted an indirect measure of the cost of debt yet found no empirical evidence for an influence of Big4 audits in the context of French listed companies. In this study, we measure audit quality (AUQ) as a dummy indicating whether the financial statements are audited by a Big4 firm: AUQ equals 1 for enterprises audited by a Big4 firm and 0 otherwise.
Audit quality impacts the cost of debt by increasing the reliability of financial information, thereby reducing information asymmetry and lenders' debt monitoring costs (Jensen & Meckling, 1976; Watts & Zimmerman, 1986). In other words, reliable financial information reduces banks' need to rely on alternative information in debt contracts. We therefore form the following hypothesis: H4: Audit quality has a negative relationship with the cost of debt.
Foreign ownership (OWFORE)
When studying medium-sized manufacturing enterprises in the US, Rahaman and Al Zaman (2013) found that foreign-owned enterprises often had better operational management and more transparent disclosure of information than other ownership types; therefore, the cost of debt of foreign-owned enterprises is lower. In addition, foreign-owned firms are more reputable borrowers, so they often attract sources of capital with a low cost of debt (Boubakri et al., 2013). The findings of Stanišić et al. (2016) for 4,710 Serbian enterprises from 2008 to 2013 also showed that increased foreign ownership helps reduce the cost of debt. In Vietnam, the wave of foreign investment in domestic enterprises has increased since the introduction of the stock market, and Vietnamese enterprises have attracted foreign investment to improve management practices and reduce operating costs, including the cost of debt. So, does foreign ownership reduce the cost of debt for firms? We form the following hypothesis: H5: Foreign ownership has a negative relationship with the cost of debt.
Financial distress, earnings quality, and cost of debt
Model 3: Financial distress refers to a company's difficulty in paying its debts or meeting other financial obligations (Ghazali et al., 2015). In the event of severe financial distress, the company might go bankrupt. Binti and Ameer (2010) defined financial distress as a situation in which contractual arrangements with creditors cannot be fulfilled because of a company's financial difficulties. Recently, many studies have demonstrated that managers of companies in a period of financial distress tend to adjust the recognition of revenue, expenses, liabilities, and receivables; in other words, they have a motive for earnings management, which reduces their earnings quality. The purpose of earnings management here may be to cover up the financial distress in order to mobilize additional sources of funding, thereby reducing the likelihood of bankruptcy (Rosner, 2003). Rogers and Stocken (2005) argued that managers generally worry about losing their jobs if the company goes into financial distress; they therefore manage earnings to provide optimistic forecasts, promising to restore the firm's financial status in order to protect their job, salary, and reputation. In addition, via earnings management, companies can also avoid breaches of contractual terms with related parties when they fall into financial distress (Dechow & Dichev, 2002). As such, we develop a model that considers the impact of EQ combined with financial distress on COD and form the following hypothesis: H6: Earnings quality interacting with financial distress has a positive relationship with the cost of debt.
Currently, there are many ways to measure financial distress, each with its advantages and disadvantages. Ghazali et al. (2015) stated that the Altman Z-score can be considered the most popular method for measuring the financial status of a company and has been used to identify financial distress in various studies. Thus, in this study, we define financial distress based on the Z-index (Altman, 1968), calculated as Z = 1.2·X1 + 1.4·X2 + 3.3·X3 + 0.6·X4 + 1.0·X5, where X1 is current assets minus current liabilities divided by total assets; X2 is retained profit divided by total assets; X3 is profit before tax and interest divided by total assets; X4 is the book value of equity divided by total liabilities; and X5 is revenue divided by total assets. All variables in Equation (5) are summarized in Table A2 (Appendix).
If the Z-index is below 1.81, the company is in financial distress and the financial distress variable takes the value 1; otherwise, it takes the value 0. Persakis and Iatridis (2015) examined the impact of earnings quality and audit quality on the cost of equity and debt under the influence of the 2008 financial crisis. Leuz (2010) used linear regression analysis, in which 137,091 observations of businesses from 18 countries worldwide were classified into three study groups according to the level of investor protection based on country classification. The findings showed that the 2008 global financial crisis positively impacted the cost of debt for groups 1 and 2, and that firms audited by Big4 auditors in group 1 had a negative relationship with the cost of debt.
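For illustration, the Z-score and financial distress dummy described above can be computed as in the sketch below. The weights are the standard Altman (1968) coefficients, the inputs are hypothetical placeholders for the balance-sheet items, and X4 uses the book value of equity as in the text.

```python
def altman_z(working_capital, retained_earnings, ebit,
             book_equity, total_liabilities, revenue, total_assets):
    """Altman (1968) Z-score with book value of equity in X4."""
    x1 = working_capital / total_assets
    x2 = retained_earnings / total_assets
    x3 = ebit / total_assets
    x4 = book_equity / total_liabilities
    x5 = revenue / total_assets
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

def financial_distress(z: float) -> int:
    """FD dummy: 1 if Z < 1.81 (distressed), 0 otherwise."""
    return 1 if z < 1.81 else 0

z = altman_z(50, 120, 40, 300, 200, 500, 600)
print(round(z, 2), financial_distress(z))
```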
Earnings quality as an intermediary and the cost of debt
In Vietnam, the financial statements of listed companies are all audited. Therefore, audited financial information can serve as a reliable source of information in a contract. Banks may then rely less on alternative information sources when assessing the credit risk of borrowers and monitoring existing debt contracts, making monitoring based on accounting information more effective and easier to establish. Consistent with this view, Niskanen and Niskanen (2004) argued that corporate debt covenants in Finland aimed at protecting interests against the discretion of the bank managers. Therefore, the information quality of financial statements can be of great importance in valuing debt for listed companies in Vietnam. In this study, we investigate how the EQ factor, as an intermediate variable of audit quality and foreign ownership, affects the cost of debt in Vietnam. Model 4: The quality of audit activities has a positive impact on the transparency and reliability of financial statements. Although there are various criteria for quantifying audit quality, an audit of the financial statements by a Big4 firm is considered a commitment to conformity with accounting practices. Previous studies (Becker et al., 1998; Francis & Yu, 2009; Krishnan, 2003) showed that financial statements audited by Big4 auditors exhibit less earnings management than those audited by non-Big4 auditors. Thus, it is expected that a company audited by a higher-quality auditor (Big4) has higher earnings quality than one audited by a non-Big4 auditor. Based on the above arguments and studies, we develop the following hypothesis: H7: Audit quality has a positive relationship with earnings quality, and earnings quality has a negative relationship with the cost of debt.
The foreign ownership ratio represents the ownership structure of the organization. According to Widigdo (2013), the foreign ownership ratio, the percentage of shares held by foreign investors, is considered to play a positive role in creating an effective governance mechanism. Foreign investors have various skills in detecting fraud in the company and are not easily deceived by the company's management board, thereby limiting earnings management practices. Sharma (2004) found that as the percentage of independent foreign ownership increased, the likelihood of fraudulent financial information decreased. These findings suggest that foreign institutional investors can actively monitor and control financial statements, reduce management board fraud, and promote more honest managerial disclosure of financial information. Therefore, the hypothesis related to the foreign ownership ratio is as follows: H8: Foreign ownership has a positive relationship with earnings quality, and earnings quality has a negative relationship with the cost of debt.
All of our models and hypotheses are summarized in Figure 1.
Research data and samples
This study uses data collected from the Vietnamese stock exchange over 2010 to 2019. The data are collected from the audited financial statements of listed companies after excluding those in the banking, securities, and insurance sectors. The final dataset consists of 3,800 observations, presented in Table 1 by year and industry. We first test for autocorrelation and heteroskedasticity. The test results show that the p-values equal 0.000 < α (5%), implying that the null hypotheses are rejected at the 5% significance level. Therefore, we address these defects of the regression model by using robust standard errors and industry fixed effects regression.
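A minimal sketch of how such a specification could be estimated in practice, assuming a firm-year panel with columns named cod, eq, size, lv, and industry (these names, the file name, and the HC1 covariance choice are illustrative assumptions, not the authors' code):

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("vietnam_listed_firms.csv")  # hypothetical firm-year panel

# Model 1: cost of debt on earnings quality plus controls,
# industry dummies via C(industry), heteroskedasticity-robust errors.
model1 = smf.ols("cod ~ eq + size + lv + C(industry)", data=df).fit(cov_type="HC1")
print(model1.summary())
```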
Results and discussion
The statistics in Table 2 show that the average COD is 7.9%, with a maximum of 38.6% and a standard deviation of 5.6%. EQ has a mean value of −0.017, a minimum of −0.102, a maximum of 0.000, and a standard deviation of 0.017. Foreign ownership (OWFORE) accounts for 9.4% on average, with a standard deviation of 13.4%. Size, measured as the logarithm of total assets, averages 27.388, and financial leverage (LV) averages 53.6%. Out of the 3,800 observations in our sample, 1,078 (28.37%) are audited by Big Four firms. By industry (Figure 3), firms in the agricultural sector have the lowest cost of debt at 7.1%, while the highest is for healthcare businesses, with an average cost of debt of 9.3%. Table 3 shows descriptive statistics of the cost of debt at various quantiles: at the lowest quantile (Q1), the cost of debt is 2%, whereas at the highest quantile (Q5) it is 17%, and a test confirms that the difference between the lowest and highest quantiles is statistically significant. Table 4 provides the correlation coefficients among variables; testing the correlations between independent and dependent variables serves to rule out factors that could lead to multicollinearity before running the regression model. No correlation coefficient among the independent variables exceeds 0.8, so multicollinearity is not a concern. After performing descriptive statistics and correlation matrix analysis, we estimate the model using least squares regression. The cost of debt has a negative, statistically significant relationship with the independent variables in the model.
The results in Table 5 indicate that EQ has a negative relationship with COD at the 1% level of statistical significance, suggesting that higher EQ means a lower COD. This finding is consistent with hypothesis H1 and agrees with the findings of Carmo et al. (2016), Beltrame et al. (2017), Orazalin and Akhmetzhanov (2019), Houcine and Houcine (2020), and Le et al. (2021).
Similar to the impact of EQ on COD, size also has a negative impact on COD at the 1% significance level, revealing that larger firms face a lower COD. This finding is consistent with the empirical evidence of previous studies (Berger & Udell, 2006) and with hypothesis H2. The reason is that large firms encounter lower information asymmetry than small ones; at the same time, such firms have relatively low cash flow volatility and tend to diversify their businesses, so they can negotiate lower lending rates. In other words, the COD of large companies is relatively lower than that of small ones. When the firm uses high financial leverage, the cost of debt also decreases, at the 1% significance level, which is contrary to hypothesis H3 and to the previous empirical evidence of Berger and Udell (2006) and Pittman and Fortin (2004). However, our finding is consistent with Yen et al. (2018) in the context of Vietnam, where most businesses use limited borrowing.
The results in Table 6 indicate that audit quality (AUQ) affects the cost of debt at the 1% level of statistical significance. According to the findings, audit quality has a positive relationship with the cost of debt; that is, audit quality affects debt valuation. This positive impact is not consistent with hypothesis H4 and contrasts with previous empirical evidence from Spain and Korea (Illueca Muñoz & Gill-de-Albornoz, 2006). However, our findings agree with empirical evidence from the US (Fortin & Pittman, 2007). Legal systems in common-law developed countries prioritize investor protection at a higher level, which implies that the monitoring role, or signal, of a Big4 audit is even more important in civil-law environments with lower investor protection (Porta et al., 1998). The empirical evidence from the United States is limited to the specific context of bond prices (Fortin & Pittman, 2007), where investors can rely on supervisory practice; the supervisory role of the audit firm may therefore be less important in that context, and the same may apply to Vietnam. In Vietnam, audit firms' reputations have not yet been regarded as a reliable signal. Based on these arguments, the findings indicate that audit quality, expressed through the audit firm's reputation, has not contributed to how lending institutions evaluate the financial information of listed companies. To better understand why we obtained results different from hypothesis H4, we compare and test the difference between the cost of debt of two groups, i.e., firms audited by Big Four auditors and the rest. Table 7 reveals differences in both earnings quality and the cost of debt: firms audited by Big Four auditors have lower debt expenditure than firms audited by non-Big Four auditors, so the cost of debt of the non-Big Four group is higher, implying higher financial risk. Besides, the earnings quality of the Big Four-audited group is lower, and the difference is statistically significant; therefore, the financial statement quality of this group is considered more reliable.
Also according to Table 6, the foreign ownership ratio (OWFORE) has a negative relationship with the cost of debt, significant at the 1% level: in terms of corporate governance, increasing foreign ownership reduces the cost of debt for businesses. Higher foreign ownership means more openness and transparency about governance structure, production, and business activities, which minimizes the risk of information asymmetry for creditors, so these firms also obtain a lower cost of debt. Table 8 presents the findings of Model 3, which examines the impact of earnings quality for financially distressed firms. The financial distress variable is measured through the Z-index and takes the value 1 if Z < 1.81 and 0 otherwise. The results reveal that earnings quality under financial distress (EQ*FD) correlates positively with the cost of debt. As such, under financial distress, earnings quality relates positively to the cost of debt, whereas for the entire research sample, earnings quality relates negatively to the cost of debt. Dutzi and Rausch (2016) and Xu and Ji (2016) argued that financially distressed firms tend to conduct earnings management in response to poor financial performance. Under financial distress, managers tend to manage earnings to show that the company is still meeting its creditor obligations, to avoid an increasing cost of debt and thus maximize its benefits (Moreira & Pope, 2007). Ghazali et al. (2015) argued that the pressure from financial distress can be detrimental to the company, whereby investors and creditors may suffer significant losses. If the company goes into financial distress, managers can expect their bonuses to be cut and face the possibility of replacement and damage to their careers and reputations. As such, such managers take advantage of the opportunity to cover up financial distress by choosing accounting methods that increase earnings and mask losses (Habib et al., 2013), with a view to reducing the cost of debt.
For a more comprehensive view of the control variables, firm size and financial leverage, we divide the research sample into sub-groups. Based on the median value of the firm size variable, the sample is divided into small and large firms, and similarly into low- and high-leverage groups based on financial leverage. The findings in Table 9 show that earnings quality has a negative impact on the cost of debt at the 1% significance level, whether the firm is in the small or large group and whether its financial leverage is high or low. Large-scale enterprises have a larger coefficient than small-scale ones, and low-leverage firms have a higher regression coefficient than high-leverage ones. Besides, audit quality has a positive influence only in the small-firm and low-leverage groups, and not in the others.
Based on the regression results of the SEM structural model in Table 10 (the SEM structural model is illustrated in Figure A3 and Table A3), the findings suggest that the audit quality factor (AUQ) has a direct effect on the cost of debt and, at the same time, a negative impact on the intermediate variable, earnings quality. This finding is consistent with hypothesis H7 and agrees with the findings of Becker et al. (1998), Francis and Yu (2009), and Krishnan (2003). Meanwhile, the foreign ownership variable (OWFORE) only directly affects the cost of debt and does not impact the intermediate variable, earnings quality; this is inconsistent with hypothesis H8 and does not agree with the findings of Sharma (2004). As an intermediate variable of audit quality and the foreign ownership ratio, earnings quality (EQ) negatively impacts the cost of debt at the 1% significance level.
All of the model fit indexes (Table 11) satisfy the criteria for the estimation model; however, the explanatory power of earnings quality as an intermediate variable affecting the cost of debt, and of the factors affecting the cost of debt, is as low as 8%.
Conclusion and recommendations
Using the OLS estimation method and data on 3,800 observations of listed companies over 2010-2019, we find empirical evidence that firms with high EQ have a lower COD. The study also identifies evidence on the relationship between COD and the factors of audit quality and the foreign ownership ratio. Besides, the study shows that when EQ interacts with financial distress, earnings quality has a positive relationship with COD.
Besides the obtained results, the study also has some limitations. First, the impact of earnings quality on the cost of debt is possibly affected by endogeneity, which has not been addressed in this research. Second, the study analyzes listed companies from a developing country, Vietnam; since the audit environment varies from country to country, the findings may not be generalizable to other countries in the region. Finally, the explanatory power of the model is low. Future studies should extend other measures of earnings quality in relation to the cost of debt and compare Vietnam with countries in the ASEAN region and worldwide.
Along with the above findings, financial statement information plays an important role, and financial statements audited by large audit firms provide the basis for analyses by investors and banks. However, financial statement information is meaningless without the trust of investors and banks; such trust lays the foundation and provides the driving force for the development of the stock market, especially in young markets like Vietnam's. From the obtained results, we put forth the following recommendations. Firstly, managers should consider improving their firms' EQ so that they can reduce their COD and indirectly improve business performance. To do this, managers and shareholders (who own the capital) must be more prudent in choosing independent audit firms to audit their financial statements. Enterprises need to pay attention to several issues around financial statement disclosure, such as timing, the quality of financial statement information, and the selection of an audit firm.
Secondly, the findings suggest that Vietnamese firms with high foreign ownership or large scale often have a low cost of debt. Listed companies should therefore maintain their foreign ownership ratio and attract foreign investors strategically and at a high level of concentration, enabling them to mobilize debt financing at low cost. In addition, the capacity to service interest payments and to use high financial leverage also helps firms save on the cost of debt. However, firms need to be aware that rising lending interest rates will add pressure to the cost of debt, so they should be cautious before deciding to increase the debt ratio in their capital structure and should closely monitor fluctuations in market lending rates to maintain a reasonable cost of debt.
Thirdly, firms can also negotiate their COD with creditors on terms favorable to the business, and may consider issuing bonds instead of borrowing from banks. Finally, and most practically, firms should consider reducing their leverage ratio where possible to minimize the COD they face.
Fourthly, the common goal of regulators is to stabilize and develop the stock market. This is reflected in managing the market to increase investment efficiency; to attract investors and increase market liquidity; and to manage transparency issues such as audit quality, the time taken to publish audit reports, and the control of negative behaviors, in order to increase market efficiency. | 9,877 | 2022-11-27T00:00:00.000 | [
"Business",
"Economics"
] |
Modeling Tweet Arrival Times using Log-Gaussian Cox Processes
Research on modeling time series text corpora has typically focused on predicting what text will come next, but predicting when the next text event will occur is less well studied. In this paper we address the latter case, framed as modeling continuous inter-arrival times under a log-Gaussian Cox process, a form of inhomogeneous Poisson process which captures the varying rate at which tweets arrive over time. In an application to rumour modeling of tweets surrounding the 2014 Ferguson riots, we show how inter-arrival times between tweets can be accurately predicted, and that incorporating textual features further improves predictions.
Introduction
Twitter is a popular micro-blogging service which provides real-time information on events happening across the world. The evolution of events over time can be monitored there, with applications to disaster management, journalism, and other domains. For example, Twitter has been used to detect the occurrence of earthquakes in Japan through user posts (Sakaki et al., 2010). Modeling the temporal dynamics of tweets provides useful information about the evolution of events. Inter-arrival time prediction is one type of such modeling and has applications in many settings featuring continuous-time streaming text corpora, including event monitoring for journalism, real-time disaster monitoring, and advertising on social media. For example, journalists track several rumours related to an event; predicted arrival times of tweets can be used to rank rumours according to their activity, focusing attention on a rumour with short inter-arrival times over one with longer ones.
Modeling the inter-arrival time of tweets is a challenging task due to the complex temporal patterns they exhibit. Tweets associated with an event stream arrive at different rates at different points in time.
For example, Figure 1a shows the arrival times (denoted by black crosses) of tweets associated with an example rumour around the Ferguson riots in 2014. Notice the existence of regions of both high and low density of arrival times over a one-hour interval. We propose to address the inter-arrival time prediction problem with a log-Gaussian Cox process (LGCP), an inhomogeneous Poisson process (IPP) which models tweets as generated by an underlying intensity function that varies across time. Moreover, it assumes a non-parametric form for the intensity function, allowing the model complexity to depend on the data set. We also provide an approach for incorporating the textual content of tweets into the modeling of inter-arrival times. We evaluate the models using Twitter rumours from the 2014 Ferguson unrest, and demonstrate that they provide good predictions for inter-arrival times, beating baselines such as the homogeneous Poisson process, Gaussian process regression, and a univariate Hawkes process. Even though the central application is rumours, one could apply the proposed approaches to model the arrival times of tweets corresponding to other types of memes, e.g. discussions about politics. This paper makes the following contributions: 1. It introduces the log-Gaussian Cox process for predicting tweet arrival times. 2. It demonstrates how incorporating text improves inter-arrival time prediction.
Related Work
Previous approaches to modeling inter-arrival times of tweets (Perera et al., 2010; Sakaki et al., 2010; Esteban et al., 2012; Doerr et al., 2013) were not complex enough to capture their time-varying characteristics. Perera et al. (2010) modeled inter-arrival times as independent and exponentially distributed with a constant rate parameter. A similar model is used by Sakaki et al. (2010) to monitor tweets related to earthquakes. The renewal process model used by Esteban et al. (2012) assumes the inter-arrival times to be independent and identically distributed. Gonzalez et al. (2014) attempt to model arrival times of tweets using a Gaussian process but assume the tweet arrivals to be independent every hour. These approaches do not take into account the varying characteristics of tweet arrival times.
Point processes such as the Poisson and Hawkes processes have been used for spatio-temporal modeling of meme spread in social networks (Yang and Zha, 2013; Simma and Jordan, 2010). Hawkes processes (Yang and Zha, 2013) were also found to be useful for modeling the underlying network structure. These models capture relevant network information in the underlying intensity function. We use a log-Gaussian Cox process, which provides a Bayesian method for capturing relevant information through the prior. It has been found useful, for example, for conflict mapping (Zammit-Mangion et al., 2012) and for frequency prediction in Twitter (Lukasik et al., 2015).
Data & Problem
In this section we describe the data and we formalize the problem of modeling tweet arrival times.
Data We consider the Ferguson rumour data set (Zubiaga et al., 2015), consisting of tweets on rumours around the 2014 Ferguson unrest. It consists of conversational threads that have been manually labeled by annotators as corresponding to rumours. Since some rumours have few posts, we consider only those with at least 15 posts in the first hour, as these express interesting behaviour (Lukasik et al., 2015). This results in 114 rumours consisting of a total of 4098 tweets.
Problem Definition Let us consider a time interval [0, 2] measured in hours and a set of rumours {E_1, ..., E_n}, where each rumour E_i consists of posts p_i^j = (x_i^j, t_i^j), where x_i^j is the text of the post (in our case a vector of Brown cluster counts, see section 5) and t_i^j is the time of occurrence of post p_i^j, measured as time since the first post on rumour E_i.
We introduce the problem of predicting the exact time of posts in the future, unobserved time interval, which we study as inter-arrival time prediction. In our setting, we observe posts on a target rumour i for one hour and on reference rumours (other than i) for two hours. Thus, the training data set consists of the posts on the target rumour observed during its first hour together with the posts on all reference rumours observed over the full two-hour interval.
Model
The problem of modeling the inter-arrival times of tweets can be solved using Poisson processes (Perera et al., 2010;Sakaki et al., 2010). A homogeneous Poisson process (HPP) assumes the intensity to be constant (with respect to time and the rumour statistics). It is not adequate to model the inter-arrival times of tweets because it assumes constant rate of point arrival across time. Inhomogeneous Poisson process (IPP) (Lee et al., 1991) can model tweets occurring at a variable rate by considering the intensity to be a function of time, i.e. λ(t). For example, in Figure 1a we show intensity functions learnt for two different IPP models. Notice how the generated arrival times vary according to the intensity function values.
Log-Gaussian Cox process We consider a log-Gaussian Cox process (LGCP) (Møller and Syversveen, 1998), a special case of IPP, where the intensity function is assumed to be stochastic. The intensity function λ(t) is modeled using a latent function f (t) sampled from a Gaussian process (Rasmussen and Williams, 2005). To ensure positivity of the intensity function, we consider λ(t) = exp (f (t)). This provides a nonparametric Bayesian approach to model the intensity function, where the complexity of the model is learnt from the training data. Moreover, we can define the functional form of the intensity function through appropriate GP priors.
Modeling inter-arrival time An inhomogeneous Poisson process (unlike an HPP) uses a time-varying intensity function, and hence the distribution of inter-arrival times is not independent and identically distributed (Ross, 2010). In an IPP, the number of tweets y occurring in an interval [s, e] is Poisson distributed with rate ∫_s^e λ(t) dt (1). Assume that the n-th tweet occurred at time E_n = s and we are interested in the inter-arrival time T_n of the next tweet. The arrival time of the next tweet, E_{n+1}, can be obtained as E_{n+1} = E_n + T_n. The cumulative distribution of T_n, which provides the probability that a tweet occurs by time s + u, can be obtained as p(T_n ≤ u) = 1 − p(T_n > u | λ(t), E_n = s) = 1 − exp(−∫_s^{s+u} λ(t) dt) (2). The derivation is obtained by considering the Poisson probability of 0 counts with rate parameter ∫_s^{s+u} λ(t) dt and applying integration by substitution to obtain (2). The probability density function of the random variable T_n is obtained by taking the derivative of (2) with respect to u: p(T_n = u) = λ(s + u) exp(−∫_s^{s+u} λ(t) dt) (3). The computational difficulties arising from the integration are dealt with by assuming the intensity function to be constant within short sub-intervals, which yields a tractable approximation (4) of the inter-arrival time density (Møller and Syversveen, 1998; Vanhatalo et al., 2013). We associate a distinct intensity function λ_i(t) = exp(f_i(t)) with each rumour E_i, as rumours have varying temporal profiles. The latent function f_i is modelled with a zero-mean Gaussian process (GP) prior (Rasmussen and Williams, 2005) whose covariance is defined by a squared exponential (SE) kernel over time, k_time(t, t′) = a exp(−(t − t′)²/l). We consider the likelihood of the posts E_i^O over the entire training period to be a product of Poisson distributions (1) over equal-length sub-intervals, with the rate in a sub-interval [s, e] approximated as (e − s) exp(f_i(½(s + e))). The likelihood of posts in the rumour data is obtained by taking the product of the likelihoods over individual rumours.
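As a minimal numerical sketch of the discretized likelihood just described, assuming the latent GP values at the sub-interval midpoints are given (function and variable names are illustrative, not from the authors' implementation):

```python
import numpy as np
from scipy.special import gammaln

def discretized_lgcp_loglik(f_mid: np.ndarray, counts: np.ndarray, width: float) -> float:
    """Log-likelihood of per-bin tweet counts under a discretized LGCP.

    f_mid  : latent GP values at the midpoints of equal-length sub-intervals
    counts : observed number of tweets in each sub-interval
    width  : sub-interval length in hours (e.g. 0.1 for six-minute bins)
    """
    rate = width * np.exp(f_mid)  # approximate integral of exp(f) over each bin
    # Sum of Poisson log-pmfs over bins.
    return float(np.sum(counts * np.log(rate) - rate - gammaln(counts + 1)))
```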
The posterior distribution p(f_i | E_i^O) is intractable, and a Laplace approximation (Rasmussen and Williams, 2005) is used to approximate it. The predictive distribution of f_i(t_i*) at a time t_i* is obtained using the approximated posterior, and the intensity function value at t_i* is then obtained as λ_i(t_i*) = exp(f_i(t_i*)).

Algorithm 1 Importance sampling for predicting the next arrival time
1: Input: intensity function λ(t), previous arrival time s, proposal distribution q(t) = Exp(t; 2), number of samples N
2: for i = 1 to N do
3:   Sample u_i ∼ q(t).
4:   Obtain weights w_i = p(u_i) / q(u_i), where p(t) is given by (4).
5: end for
6: Predict the expected inter-arrival time as the importance-weighted average ū = Σ_i w_i u_i / Σ_i w_i.
7: Predict the next arrival time as t̂ = s + ū.
8: Return: t̂

Importance sampling We are interested in predicting the next arrival time of a tweet given the time at which the previous tweet was posted. This is achieved by sampling the inter-arrival time of the next tweet using equation (4). We use an importance sampling scheme (Gelman et al., 2003) in which an exponential distribution is used as the proposal density. We set the rate parameter of this exponential distribution to 2, which generates points with a mean value around 0.5. Assuming the previous tweet occurred at time s, we obtain the arrival time of the next tweet as outlined in Algorithm 1. We run this algorithm sequentially, i.e. the time t̂ returned by Algorithm 1 becomes the starting time s in the next iteration. We stop at the end of the interval of interest, for which a user wants to find the times of post occurrences.
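For concreteness, the sketch below implements one step of Algorithm 1. The locally constant approximation p(u) ≈ λ(s+u)·exp(−u·λ(s+u)) is a simple stand-in for the paper's equation (4), and the self-normalised weighting is our reading of step 6; `intensity` can be any callable returning λ(t).

```python
import numpy as np

def predict_next_arrival(intensity, s: float, n_samples: int = 1000,
                         proposal_rate: float = 2.0, rng=None) -> float:
    """One step of Algorithm 1: predict the next arrival time after time s."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.exponential(scale=1.0 / proposal_rate, size=n_samples)  # samples from q
    q = proposal_rate * np.exp(-proposal_rate * u)                  # proposal density q(u)
    lam = np.array([intensity(s + ui) for ui in u])
    p = lam * np.exp(-u * lam)          # locally constant stand-in for density (4)
    w = p / q                           # importance weights
    u_bar = np.sum(w * u) / np.sum(w)   # importance-weighted mean inter-arrival time
    return s + u_bar

# Example with a toy decaying intensity (purely illustrative).
print(predict_next_arrival(lambda t: 5.0 * np.exp(-t), s=1.0))
```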
Incorporating text We consider adding a kernel over the text of posts to the previously introduced kernel over time.
We join the text from the observed posts of each rumour together, so a different text component contributes to the kernel values across different rumours. The full kernel then takes the form k_TXT((t, i), (t′, i′)) = k_time(t, t′) + k_text(x_i, x_i′). We compare text via a linear kernel with an additive underlying base similarity, expressed by k_text(x, x′) = b + c x^T x′.
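A small sketch of how such a combined kernel could be evaluated; the hyperparameter values and the toy Brown-cluster count vectors are illustrative assumptions rather than learned values:

```python
import numpy as np

def k_time(t1, t2, a=1.0, l=0.5):
    """Squared exponential kernel over time."""
    return a * np.exp(-((t1 - t2) ** 2) / l)

def k_text(x1, x2, b=0.1, c=1.0):
    """Linear kernel over joined Brown-cluster count vectors, plus a base similarity."""
    return b + c * float(np.dot(x1, x2))

def k_txt(t1, i1, t2, i2, rumour_text):
    """Combined kernel: time component plus text component of the two rumours."""
    return k_time(t1, t2) + k_text(rumour_text[i1], rumour_text[i2])

# Example: two rumours represented by toy 4-dimensional cluster-count vectors.
rumour_text = {0: np.array([1.0, 0.0, 2.0, 0.0]), 1: np.array([0.0, 1.0, 1.0, 3.0])}
print(k_txt(0.2, 0, 0.5, 1, rumour_text))
```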
Optimization All model parameters (a, l, b, c) are obtained by maximizing the marginal likelihood p(E_i^O) = ∫ p(E_i^O | f_i) p(f_i) df_i over all rumour data sets.
Experiments
Data preprocessing In our experiments, we consider the first two hours of each rumour's lifespan. The posts from the first hour of a target rumour are considered as observed (training data), and we predict the arrival times of tweets in the second hour. We consider observations over equal-sized time intervals of six minutes in the rumour lifespan for learning the intensity function. The text in the tweets is represented using the Brown cluster ids associated with the words, obtained using 1000 clusters acquired on a large-scale Twitter corpus (Owoputi et al., 2013).
Evaluation metrics Let the arrival times predicted by a model be (t̂_1, ..., t̂_M) and let the actual arrival times be (t_1, ..., t_N). We introduce two metrics based on the root mean squared error (RMSE) for evaluating predicted inter-arrival times. The first is the aligned root mean squared error (ARMSE), where we align the initial K = min(M, N) arrival times and calculate the RMSE between the two resulting subsequences.
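As an illustration, ARMSE can be computed directly from this definition (a straightforward sketch, not the authors' evaluation script):

```python
import numpy as np

def armse(predicted, actual) -> float:
    """Aligned RMSE over the first K = min(M, N) arrival times of the two sequences."""
    k = min(len(predicted), len(actual))
    diff = np.asarray(predicted[:k]) - np.asarray(actual[:k])
    return float(np.sqrt(np.mean(diff ** 2)))

print(armse([1.1, 1.4, 1.9], [1.0, 1.5, 1.8, 1.95]))
```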
The second is the penalized root mean squared error (PRMSE). This metric penalizes approaches that predict a different number of inter-arrival times than the actual number; PRMSE is defined as the square root of expression (5).
The second and third terms in (5) respectively penalize an excessive or an insufficient number of points predicted by the model. to 1000 (above the maximum count yielded by any rumour from our dataset), thus reducing the error from this method. We also compare against the Hawkes process (HP) (Yang and Zha, 2013), a self-exciting point process in which the occurrence of a tweet increases the probability of tweets arriving soon afterwards. We consider a univariate Hawkes process where the intensity function is modeled as λ_i(t) = μ + Σ_{t_i^j < t} k_time(t_i^j, t). The kernel parameters and μ are learnt by maximizing the likelihood. We apply the importance sampling procedure of Algorithm 1 to generate arrival times for the Hawkes process. We consider this baseline only in the single-task setting, where reference rumours are not considered.
LGCP settings In the case of LGCP, the model parameters of the intensity function associated with a rumour are learnt from the observed inter-arrival times of that rumour alone. LGCP Pooled and LGCPTXT consider a different setting, where the parameters are additionally learnt using the inter-arrival times of all other rumours observed over their entire two-hour lifespan.
Results Table 1 reports the results of predicting arrival times of tweets in the second hour of the rumour lifecycle. In terms of ARMSE, LGCP is the best method, performing better than LGCPTXT (though not statistically significantly) and outperforming the other approaches. However, this metric does not penalize a wrong number of predicted arrival times. Figure 1b depicts an example rumour where LGCP greatly overestimates the number of points in the interval of interest. Here, the three points from the ground truth (denoted by black crosses) and the initial three points predicted by the LGCP model (denoted by red pluses) happen to lie very close together, yielding a low ARMSE error. However, LGCP predicts a large number of arrivals in this interval, making it a bad model compared to LGCPTXT, which predicts only four points (denoted by blue dots). ARMSE fails to capture this, and hence we use PRMSE. Note that the Hawkes process performs worse than the LGCP approach.
According to PRMSE, LGCPTXT is the most successful method, significantly outperforming all others according to a Wilcoxon signed-rank test. Figure 1a depicts the behavior of LGCP and LGCPTXT on rumour 39, which has a larger number of ground-truth points. Here, LGCPTXT predicts relatively fewer arrivals than LGCP. The performance of the Hawkes process is again worse than that of the LGCP approach. The self-exciting nature of the Hawkes process may not be appropriate for this dataset and setting, where in the second hour the number of points tends to decrease as time passes.
We also note that GPLIN performs very poorly according to PRMSE. This is because the inter-arrival times predicted by GPLIN for several rumours become smaller as time grows, resulting in a large number of predicted arrival times.
Conclusions
This paper introduced the log-Gaussian Cox process for the problem of predicting the inter-arrival times of tweets. We showed how text from posts helps to achieve significant improvements. Evaluation on a set of rumours from the Ferguson riots showed the efficacy of our methods compared to baselines. The proposed approaches generalize to problems other than rumours, e.g. disaster management and advertisement campaigns. | 3,749.2 | 2015-09-01T00:00:00.000 | [
"Computer Science"
] |
Different Patterns of Respiration in Rat Lines Selectively Bred for High or Low Anxiety
In humans, there is unequivocal evidence of an association between anxiety states and altered respiratory function. Despite this, the link between anxiety and respiration has been poorly evaluated in experimental animals. The primary objective of the present study was to investigate the hypothesis that genetic lines of rats that differ largely in their anxiety level would display matching alterations in respiration. To reach this goal, respiration was recorded in high-anxiety behavior (HAB, n = 10) and low-anxiety behavior (LAB, n = 10) male rats using whole-body plethysmography. In resting state, respiratory rate was higher in HABs (85±2 cycles per minute, cpm) than LABs (67±2 cpm, p<0.05). During initial testing into the plethysmograph and during a restraint test, HAB rats spent less time at high-frequency sniffing compared to LAB rats. In addition, HAB rats did not habituate in terms of respiratory response to repetitive acoustic stressful stimuli. Finally, HAB rats exhibited a larger incidence of sighs during free exploration of the plethysmograph and under stress conditions. We conclude that: i) HAB rats showed respiratory changes (elevated resting respiratory rate, reduced sniffing in novel environment, increased incidence of sighs, and no habituation of the respiratory response to repetitive stimuli) that resemble those observed in anxious and panic patients, and ii) respiratory patterns may represent a promising way for assessing anxiety states in preclinical studies.
Introduction
In humans, respiratory dysregulation is characteristic of anxiety and related symptoms are a diagnostic feature of several anxiety disorders. Clinical observations of severe respiratory distress in patients with anxiety disorders have stimulated much research to identify respiratory abnormalities associated with this syndrome. Symptoms of hyperventilation [1], breath-to-breath respiratory instability and frequent sighing [2,3] have commonly been reported in patients with panic disorder, even during panic-free periods [4,5]. Similar respiratory abnormalities have been found, though less consistently, in patients with generalized anxiety disorders [5]. In addition, exaggerated respiratory arousal in response to psychological stress has been documented in individuals with high trait-anxiety [6].
While a link between respiratory abnormalities and anxiety is well described in humans, preclinical research has just started investigating the respiratory function in animal models of anxiety. Early research into the link between respiration and anxiety in animals was hampered mainly by the lack of techniques that offered sufficient precision with regard to the assessment of standard respiratory indices, relatively easy applicability and nonintrusiveness. Among modern techniques, whole-body plethysmography represents a promising method, as it is entirely non-invasive and thus does not introduce any confounding factor. Using this method, a series of elegant studies conducted by Kinkead and colleagues have recently demonstrated that neonatal maternal separation in rats provokes a respiratory phenotype in adulthood that presents many anxiety-related features. Such animals have altered respiratory responses to hypoxia [7] and hypercapnia [8], with the underlying mechanisms involving both alterations in the chemoreflex circuitry in the lower brainstem [9] and descending influences from the hypothalamus [10]. Further evidence of a link between respiration and anxiety in rats comes from other studies documenting that respiratory parameters (especially the respiratory rate) are strongly affected by conditioned and unconditioned aversive stimuli and by novelty [11,12].
Given the centrality of breathing in human anxiety and the availability of adequate techniques for the measurement of respiratory indices, research with valid and reliable animal models can offer new important insights into the link between respiration and anxiety.
In this study, we used whole-body plethysmography for conducting a reliable and sensitive analysis of the respiratory function in two Wistar rat lines selectively bred for either high (HAB) or low (LAB) anxiety-related behavior. The HAB/LAB rats have been proved to be extremely divergent in their level of baseline anxiety, as revealed in a variety of behavioral tests (for a review, see [13] and [14]), the differences being robust, consistent, and reliable [15,16]. Therefore, the use of these psychogenetically selected rats represents, in our view, a valid methodological approach for investigating the respiratory function in animal populations that possess clear differences in their level of baseline anxiety.
Specifically, we tested the hypothesis that in rats different levels of anxiety would be accompanied by matching alterations in respiration. Respiratory function was evaluated in HAB/LAB rats during exploration of the plethysmographic chamber and during exposure to acoustic (predator call), olfactory (cat feces odor) and psychological (restraint) stressful stimuli.
Ethics statement and animals
The experimental protocol described here was approved by the Veterinarian Animal Care and Use Committee of Parma University and carried out in accordance with the European Community Council Directive of 22 September 2010 (2010/63/EU).
Experiments were carried out on 4-month-old male Wistar rats obtained from the animal facilities of the University of Regensburg (Germany). The animals belonged to two lines selectively bred since 1993 for high or low anxiety-like behavior, as described previously in detail [13,17,18]. At their arrival in our laboratory, the HAB (n = 10) and LAB (n = 10) rats used in this study were housed in groups of 3-4 per cage and kept in rooms with controlled temperature (22±2°C) and a reversed light-dark cycle (light on from 19:00 to 7:00 h), with free access to food and water.
Recordings of respiration and gross motor activity
Respiratory movements were detected using a custom-built whole-body plethysmograph [19]. This consisted of a sealed Perspex cylinder (i.d. 95 mm, length 260 mm, volume 2.5 l) with medical air constantly flushed through it at a flow rate of 2.5 l/min. The output flow was divided into two lines using a T-connector. One line was attached to a differential pressure amplifier (model 24PCO1SMT, Honeywell Sensing and Control, Golden Valley, MN, USA), while the other line was open to the room air. For semi-quantitative assessment of the animals' motor activity, a piezoelectric pulse transducer was placed under the plethysmograph. The transducer was sensitive enough to detect even minor movements (e.g. turning the head), while locomotion produced large oscillatory responses.
Experimental protocol
Initially, HAB and LAB rats were tested on the elevated plus-maze to confirm their anxiety-related phenotype. The elevated plus-maze, validated for measuring anxiety [20], consisted of 4 elevated arms (100 cm above the floor, 50 cm long and 10 cm wide) arranged in a cross-like position, with two opposite arms being enclosed (by means of 40-cm high walls) and two being open, including at their intersection a central square platform (10×10 cm) which gave access to the four arms. Each rat was initially placed on the central platform facing one closed arm and allowed to behave freely for 5 min. The behavior during the test was recorded using a video camera positioned above the maze. The following behavioral parameters were calculated: i) number of entries into the open arms (% of total entries), ii) latency to enter an open arm (s), and iii) time spent in the open arms (% of total time).
One week after the behavioral testing, rats were placed into the plethysmographic chamber and allowed to explore the new environment for 40 min [19]. Subsequently, the following stimuli were presented: a) a predator (hawk) call was played back for 50 s and then repeated 5 min later; b) a piece of cat feces was placed in a syringe, and the air carrying the cat odor was quickly injected into the input line (through which the plethysmographic chamber was constantly flushed with medical air). All stimuli were separated by at least 5-min intervals and were presented when animals were in a quiet but awake state (i.e., no motor activity, eyes opened, slow regular breathing). After the last stimulus, animals were removed from the plethysmograph and introduced into a restrainer (wire-mesh tube; inner diameter: 6 cm, length: 180 mm), which was immediately placed back into the plethysmograph for 15 minutes. Subsequently, animals were released from the restrainer and allowed an additional 15 min in the plethysmographic chamber. All experiments were carried out during the dark phase of the light/dark cycle, with just sufficient levels of red light to permit observation of the animals.
Data acquisition and analysis
Analogue respiratory and motion signals were digitized at 1 kHz and acquired using a PowerLab A/D converter and ChartPro 6.0 software (ADInstruments, Sydney, Australia). They were low-pass filtered at 20 Hz to remove noise, using a digital filter. Data analysis was performed as follows.
a) First 40 min in the plethysmographic chamber. We first calculated the respiratory rate from the respiratory signal. The respiratory rate (cycles per minute, cpm) was measured by calculating the rate of pressure fluctuations inside the chamber. Next, we split the 40-min period into 5-min epochs (0-5 min, 5-10 min, etc.) and constructed histograms (bin width 5 cpm) of respiratory rate vs. time. These histograms show how much time rats spent at a given respiratory rate. Histogram mode peak values were then averaged for each 5-min epoch and plotted against time to assess the time course of the dominant respiratory rate during the first 40 min. For each epoch, we then selected 250 cpm as an approximate centre between low-frequency (0-250 cpm) and high-frequency (251-600 cpm) respiratory rate, the latter reflecting sniffing behavior. This allowed us to calculate the time spent by the animals at high-frequency sniffing mode (expressed as % of total time).
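As an illustration of this histogram-based analysis, the sketch below computes the dominant rate (the histogram mode) and the fraction of time spent above the 250 cpm sniffing threshold from an instantaneous respiratory-rate trace; how the trace is extracted from the raw pressure signal is simplified away, and the toy data are purely illustrative.

```python
import numpy as np

def sniffing_fraction(resp_rate_cpm: np.ndarray, threshold: float = 250.0) -> float:
    """Fraction of time spent above the high-frequency (sniffing) threshold."""
    return float(np.mean(resp_rate_cpm > threshold))

def dominant_rate(resp_rate_cpm: np.ndarray, bin_width: float = 5.0) -> float:
    """Mode of the respiratory-rate histogram (centre of the tallest 5-cpm bin)."""
    bins = np.arange(0, 600 + bin_width, bin_width)
    counts, edges = np.histogram(resp_rate_cpm, bins=bins)
    peak = np.argmax(counts)
    return float((edges[peak] + edges[peak + 1]) / 2)

# Example: an instantaneous respiratory-rate trace sampled once per second
# over a 5-min epoch (toy data).
rate_trace = np.random.default_rng(0).normal(loc=90, scale=30, size=300).clip(20, 590)
print(dominant_rate(rate_trace), sniffing_fraction(rate_trace))
```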
Finally, the motion signal was rectified for each 5-min epoch using IGOR Pro 5.0 software (Wavemetrics, Inc., OR, USA). After setting the threshold level (defined as 150% of the signal when there was no motion), the total duration of time during which the signal exceeded this threshold was determined automatically and defined as "motion time". We could not accurately determine the intensity of animals' movements as the amplitude of the motion signal depended on the position of the animal in the plethysmographic chamber. b) Acoustic (predator call) and olfactory (cat odor) stimuli. Mean respiratory rate was calculated from the respiratory signal and expressed as a mean over 5-s intervals during the predator calls (50 s) and cat odor exposure (60 s). In addition, respiratory rate responses to the predator calls were analyzed by calculating the area under the response curve (AUC). We could not perform histogram analysis for these data because of the relatively short periods used for assessment.
c) Restraint test. The respiratory rate was calculated from the respiratory signal before (5 min) and during (15 min) the restraint test. Next, we split the 15-min restraint period into 5-min epochs (0-5 min, 5-10 min, 10-15 min) and constructed histograms (bin width 5 cpm) of respiratory rate vs. time as described above. In each group, histogram mode peak values were averaged for each 5-min epoch and plotted against time to assess the dominant respiratory rate. Finally, we calculated the time spent by the animals at high-frequency sniffing mode (expressed as % of total time). d) Tidal volume. We also determined relative changes in tidal volume provoked by sensory and stressful stimuli. We were unable to assess the absolute values of tidal volume as this required measurements of body temperature and chamber air humidity [21]. However, we assumed that for short-term recordings, as in the case of predator calls (50 s), cat odor exposure (60 s) and the first minutes of restraint, these variables were constant and thus changes in chamber pressure were determined only by inspiratory and expiratory movements. Tidal volume changes were quantified as % variation relative to baseline. e) Sighs. Finally, for each recording period (first 40 min in the plethysmograph, predator calls, cat odor exposure, restraint and post-restraint phases) we quantified the number of sighs ("augmented breaths"). A sigh is a readily identifiable respiratory event: it consists of a deep additional inspiration that starts at or around the peak of a normal respiratory cycle. This superimposition of two inspirations makes a sigh much larger than the preceding and following breaths. A sigh is also usually accompanied by a post-sigh apnea.
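A simple way to flag candidate sighs in such recordings, assuming a per-breath amplitude series has already been extracted, is sketched below; the 2× amplitude criterion and the post-sigh pause check are illustrative choices, not the authors' exact detection criteria.

```python
import numpy as np

def detect_sighs(breath_amplitudes: np.ndarray, breath_intervals: np.ndarray,
                 amp_factor: float = 2.0, pause_factor: float = 1.5) -> list:
    """Return indices of breaths flagged as sighs (augmented breaths).

    breath_amplitudes : peak-to-trough amplitude of each breath
    breath_intervals  : time from each breath to the next one (s)
    """
    sighs = []
    med_amp = np.median(breath_amplitudes)
    med_int = np.median(breath_intervals)
    for i in range(len(breath_amplitudes) - 1):
        big_breath = breath_amplitudes[i] > amp_factor * med_amp   # augmented breath
        post_pause = breath_intervals[i] > pause_factor * med_int  # post-sigh apnea
        if big_breath and post_pause:
            sighs.append(i)
    return sighs
```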
All data are presented as mean ± SEM. Statistical significance was set at p < 0.05. Two-way ANOVA for repeated measures, with 'group' as between-subject factor (two levels: HAB and LAB) and 'time' as within-subject factor, was applied for respiratory data obtained during: (i) the first 40 min in the plethysmographic chamber; (ii) predator calls; (iii) cat odor exposure; (iv) the restraint test. After the ANOVAs, comparisons between the two groups were conducted using Student's t-tests, with a Bonferroni correction for multiple comparisons. Student's t-tests, following a Levene test, were applied for comparisons between HAB and LAB rats on: (i) data obtained from the elevated plus maze, (ii) number of sighs, (iii) AUC values for respiratory rate responses to predator calls.
Behavior on the elevated plus maze
HABs' and LABs' performance on the elevated plus-maze is illustrated in Fig. 1. This test was conducted as the validation criterion for their relative anxiety phenotype. HAB rats spent less time in the open arms (t = −15.4, p < 0.01) and entered them less frequently (t = −4.6, p < 0.01) than LABs. In addition, the latency to enter an open arm was longer in HABs compared to LABs (t = 4.2, p < 0.01). This clearly indicates that HAB rats were more anxious than LABs, as open/unprotected arms are interpreted as more threatening than the closed/protected arms [20].
Behavior and respiration during the first 40 min into the plethysmograph
The behavior of HAB and LAB rats differed over the first 40 minutes inside the plethysmograph. During the first 20 min after entering the chamber ('active phase') animals were clearly engaged in exploratory behavior, which was characterized by periods of motor activity or repeated sniffing (head up, frequent movement of vibrissae) intermingled with periods of relative rest (Fig. 2). Of note, sniffing behavior provoked a marked increase in instantaneous respiratory rate (>250 cpm) and coincided with small body movements (Fig. 2). The total duration of motion time during the first 5 min of the active phase was similar between the two groups (HAB = 49±17 s vs. LAB = 59±9 s). During the following 20 minutes animals were more in a state of quiescence ('resting phase'); sometimes they curled up and closed their eyes, suggesting that they were asleep. The total duration of the animals' motion time during the last 5 min of the resting phase was small, with no differences between the two groups (HAB = 4±2 s vs. LAB = 5±2 s).
The respiratory patterns of HAB and LAB rats during this 40-min period are illustrated in Fig. 3. During the first 5 min of the 'active phase', animals spent time both at low (<250 cpm) and high (>250 cpm) respiratory rate (Fig. 3A), whereas during the resting phase animals spent time almost exclusively at low respiratory rate (Fig. 3B). During the first 15 minutes in the plethysmograph, HAB rats spent less time at high-frequency sniffing mode than LABs (0-5 min: t = −3.2, p < 0.01; 5-10 min: t = −2.4, p < 0.05; 10-15 min: t = −2.7, p < 0.05) (Fig. 3C). Subsequently, no differences between the two groups were found in the amount of time spent at high-frequency sniffing mode (Fig. 3C).
The time course of changes in the dominant respiratory rate (i.e. the mode of the frequency histogram) during the 40 min in the plethysmographic chamber is shown in Fig. 3D. The dominant respiratory rate was higher in HABs than LABs during the first 5 min of the initial testing (t = 2.9, p < 0.01). Subsequently, we observed a progressive reduction in the dominant respiratory rate in both groups that was clearly faster in LABs compared to HABs. At the end of the 40-min period, the dominant respiratory rate was significantly higher in HABs than LABs (t = 6.2, p < 0.01).
Finally, the very low peak (<40 cpm) on the histograms (Fig. 3A, B) was not an experimental error; it originated from apneic periods that followed augmented breaths (sighs) (Fig. 2). The incidence of sighs during the 40 min in the plethysmographic chamber was significantly larger in HABs than LABs (t = 4.6, p < 0.01) (Table 1).
Respiratory responses to predator calls
During the first predator call HABs and LABs spent the same amount of time at high-frequency sniffing mode (HAB = 16±6% vs. LAB = 13±6% of total stimulus duration). In addition, mean respiratory rate during the first predator call was similar between the two groups (Fig. 4A). However, during the first call all ten HAB rats sighed, while only two out of ten LAB rats emitted just one sigh. Consequently, the incidence of sighs was larger in HABs than LABs (t = 4.4, p < 0.01) (Table 1).
When the predator call was played back again five minutes later, HABs and LABs spent the same amount of time at high-frequency sniffing mode (HAB = 15±6% vs. LAB = 11±6% of total stimulus duration). However, mean respiratory rate during the second predator call was somewhat higher in HAB rats than LABs (Fig. 4B), with AUC values being significantly higher in HABs than LABs (t = 2.1, p < 0.05) (Fig. 4B). Also, in HAB rats AUC values in response to the first and second predator call were similar, whereas in LAB rats AUC values were significantly lower in response to the second predator call compared to the first (t = −2.2, p < 0.05) (Fig. 4A, B). During the second predator call, no differences were found in either the number of animals that sighed or the incidence of sighs between the two groups (Table 1). In addition, we observed in the two groups a similar increase of tidal volume compared to the respective baseline levels during both predator calls (first predator call: HAB = +100±18% vs. LAB = +194±24%; second predator call: HAB = +22±14% vs. LAB = +32±20%).
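For reference, the AUC values compared here can be computed along the following lines; this is only a sketch, under the assumption that the AUC is the trapezoidal integral of the baseline-subtracted respiratory rate over the 5-s interval means (the exact definition used for the figure is not spelled out in the text).

```python
import numpy as np

def response_auc(rate_cpm, baseline_cpm, dt=5.0):
    """AUC of the respiratory-rate response relative to a pre-stimulus baseline.

    'rate_cpm' holds the mean rate of consecutive 5-s intervals during the
    stimulus; 'baseline_cpm' is the mean rate of the 60 s before stimulus onset.
    """
    delta = np.asarray(rate_cpm, dtype=float) - baseline_cpm
    return np.trapz(delta, dx=dt)   # cpm x s above (or below) baseline
```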
Respiratory response to cat odor
Prior to stimulus onset HAB rats had a significantly higher respiratory rate than LABs (t = 3.6, p < 0.01) (Fig. 5). Exposing rats to cat odor evoked a high degree of odor-sampling sniffing, with HAB and LAB rats spending a similar amount of time at high-frequency sniffing mode (HAB = 66±6% vs. LAB = 64±7% of total stimulus duration). The peak respiratory rate reached during cat odor exposure was similar between the two groups (Fig. 5), with HABs having higher values of respiratory rate than LABs only during the fourth 5-s interval after stimulus onset (t = 2.7, p < 0.05) (Fig. 5). In addition, during cat odor exposure we observed in HAB and LAB rats a similar increase of tidal volume compared to the respective baseline levels (HAB = +92±13% vs. LAB = +122±23%). All HAB rats sighed during cat odor exposure, whereas only five out of nine LAB rats emitted just one sigh. Consequently, the incidence of sighs was significantly larger in HABs than LABs (t = 2.9, p < 0.05) (Table 1).
Respiratory response to the restraint test
During the 5 min that preceded the test, animals spent time almost exclusively at low respiratory rate (Fig. 6A). However, HAB rats had a significantly higher dominant respiratory rate than LABs (t = 10.2, p < 0.01) (Fig. 6A, D). During the first 5 min in the restrainer, HAB rats spent less time at high-frequency sniffing mode compared to LABs (t = −2.95, p < 0.01) (Fig. 6C). However, the two groups had a similar dominant respiratory rate (i.e. the mode of the low-frequency histogram) during this time (Fig. 6B, D). The magnitude of the stress-induced increase in the dominant respiratory rate compared to pre-restraint values was significantly larger in LABs than HABs (HAB = 17±5 cpm vs. LAB = 44±6 cpm, t = 3.3, p < 0.01). During the following 10 min in the restrainer, HABs and LABs spent a similar amount of time at high-frequency sniffing mode (Fig. 6C). However, the dominant respiratory rate was significantly higher in HABs than LABs (5-10 min: t = 5.3, p < 0.01; 10-15 min: t = 10.2, p < 0.01) (Fig. 6D).
Submitting rats to restraint provoked in HAB and LAB rats a similar increase of tidal volume compared to the respective baseline levels (HAB = +39±10% vs. LAB = +48±18%).
All animals emitted sighs during and after the restraint test, with the incidence of sighs being significantly higher in HABs than LABs in both circumstances (t = 3.8, p < 0.01; t = 4.6, p < 0.01, respectively) (Table 1).
Discussion
In this study we present a detailed description of respiratory function in a unique animal model of anxiety, the HAB/LAB rats. Our major novel finding is that HAB and LAB rats differed markedly in their breathing pattern at rest and during arousal and stress. Compared to LAB rats, HAB rats had an elevated resting respiratory rate, sighed more frequently, showed reduced sniffing in a novel environment and showed no habituation of the respiratory response to repetitive stimuli. These findings support the idea that respiratory measures may provide a promising physiological index of anxiety in rats.
The behavior of HAB/LAB rats on the elevated plus-maze confirms the extensive literature documenting clear differences in the level of anxiety between these two rat lines [13,14]. We found that in HAB/LAB rats respiratory indices discriminate among different anxiogenic situations, thus supporting the idea that this rodent model represents a useful tool for investigating respiratory correlates of anxiety.
In rats, rodents that rely heavily on olfaction to assess their environment, the respiratory pattern consists of normal (eupnoea) or rapid (tachypnoea) breathing (which we term the 'dominant' respiratory rate), intermingled with periods of sniffing of variable intensity and duration. The instantaneous respiratory frequency may thus vary greatly, from 60-80 to more than 500 cpm. Consequently, the mean respiratory rate calculated during a given period is strongly affected by the proportion of time spent by an animal at high-frequency sniffing mode. For this reason, in analyzing our data we employed histogram analysis, which allowed a more accurate characterization of the respiratory pattern.
In interpreting our results obtained during the initial period of recording in the plethysmograph, it must be acknowledged that this was not a true 'resting' or 'basal' state, as animals were removed from their home cages, placed in a new environment and presumably experienced a certain amount of stress. We found that during the first 15 minutes in the plethysmograph HAB rats spent relatively less time at high-frequency sniffing mode than LABs. Previous studies have demonstrated that HAB and LAB rats differ in their coping strategies, with HABs displaying reduced exploratory drive and preferring more passive strategies [14,22]. Our hypothesis is that the reduction in exploratory sniffing that we observed in HAB rats might thus reflect a decreased motivational state in these animals and can be interpreted as a sign of preference for passivity, a behavior that is commonly taken as an indicator of increased anxiety [14]. In addition, HAB rats exhibited a higher dominant respiratory rate (i.e., the mode of the low-frequency peak) than LABs both during the initial testing in the new environment and after animals had settled down (i.e., after about 15 minutes). Clearly, respiratory rate and motor activity are tightly interlinked [19] and it may be speculated that the difference in the dominant respiratory rate observed between the two groups was determined by different somatomotor activity. However, at the end of the 40-min period, when animals were clearly in a resting state and only minor movements could be detected, the dominant respiratory rate was still significantly higher in HABs than in LABs. Thus, physical activity alone cannot be responsible for the difference between HABs and LABs in the dominant respiratory rate, which rather represents a distinctive feature of these animals. Based on our previous observations with cardiovascular responses [19], we consider the values recorded during that period a reasonable approximation of their true basal respiratory rate.
Figure 4. Respiratory rate changes during predator calls. For high-anxiety behavior (HAB, n = 10) and low-anxiety behavior (LAB, n = 10) rats, data are expressed as means (±SEM). Baseline reference value is the mean of the 60 s prior to stimulus onset. During the first (A) and second (B) predator call, each point represents the mean of 5-s intervals. Inner graphs in (A) and (B) represent the area under the response curve (AUC) of respiratory rate during predator calls. Two-way ANOVA yielded a tendency for a group difference in respiratory rate values between HABs and LABs during the second predator call (F = 3.7, p = 0.07). * indicates a significant difference between HAB and LAB rats (Student's t-test, p < 0.05). # indicates a significant difference in AUC values between the first and second predator call in LAB rats (Student's t-test, p < 0.05). doi:10.1371/journal.pone.0064519.g004
Figure 5. Respiratory rate changes after cat odor exposure. For high-anxiety behavior (HAB, n = 10) and low-anxiety behavior (LAB, n = 10) rats, data are expressed as means (±SEM). Baseline reference value is the mean of the 60 s prior to stimulus onset. During cat odor exposure, each point represents the mean of 5-s intervals. Two-way ANOVA yielded a significant effect of time (F = 24.6, p < 0.01). # and * indicate a significant difference between HABs and LABs (p < 0.01 and p < 0.05, respectively; Student's t-test). doi:10.1371/journal.pone.0064519.g005
Interestingly, the dominant respiratory rate was higher in HABs than LABs also prior to cat odor exposure and the restraint test, suggesting that the elevated dominant respiratory rate found in HAB rats throughout the experimental protocol might also be a consequence of a latent fear induced by the preceding stimuli. This is in line with the psychophysiological perspective that the elevated dominant respiratory rate in HAB rats may be part of the increased arousal commonly observed in anxious individuals, independently of specific emotional content [23,24].
During the restraint test we found differences in the respiratory pattern of the two groups that were qualitatively similar to those seen during the initial period in the plethysmograph. Specifically, during the initial confinement of the animals in the restrainer, HAB rats spent relatively less time at high-frequency breathing than LAB rats. In addition, HAB rats exhibited a smaller stress-induced increase in the dominant respiratory rate than LABs. Similar to our argument presented above, we hypothesize that this finding may reflect a difference in the behavioral strategy adopted by the two groups to cope with stress. Several studies have reported, for example, that HAB rats float more and struggle less during a forced swimming test, whereas LAB rats show the opposite and are more active [18,25,26]. Our hypothesis is that the reduced respiratory responsiveness seen in HAB rats during the restraint test may be a consequence of their supposed passive (or reactive) style of coping with a stressor.
HAB and LAB rats showed a similar respiratory response to the first predator call. However, when the acoustic stimulus was repeated 5 minutes later, we observed a habituation-like effect on the respiratory rate in LAB rats that was not found in HAB rats. This suggests that HAB rats did not adapt their respiratory responsivity to an alerting stimulus, even though the stimulus itself was unchanged over time. Persisting, high respiratory reactivity to stress has been described in high-trait-anxious individuals [27,28].
When exposed to cat odor, HAB and LAB rats showed a similarly high degree of odor-sampling sniffing. As a consequence of this threat-detection behavioral pattern, the peak respiratory rate reached during this olfactory stimulus was much higher than those reached during the acoustic stimuli. Similar bouts of high-frequency respiration in response to novel odorants have been found in previous studies, which have highlighted the importance of sniffing behavior in rats for odor detection and identification [12,29,30,31]. The lack of differences between HAB and LAB rats in the respiratory rate response to the cat odor is likely due to a ceiling effect (i.e., the respiratory rate reached its physiological maximum), which may have masked possible subtle differences between the two groups.
Our study is the first to demonstrate that, similar to humans, rats do sigh in stressful situations. Furthermore, and also similar to humans, it appears that this sighing occurs more frequently in animals with higher innate anxiety levels. Indeed, a robust and stable difference between HAB and LAB rats was reflected by the incidence of sighs, which was significantly higher in HABs both during the first 40-min recording and under every stressful condition. Sighing is a fundamental vertebrate behavior that can be facilitated by lower blood O2 in rats [32] and also by increased blood CO2 in other species [33,34], and whose function is to prevent atelectasis in hypoventilated parts of the lungs. Extensive evidence from the human literature has shown that the incidence of respiratory sighing is greater among anxious patients, especially those with diagnoses of panic disorder. Panic subjects sigh more often during the resting state [35,36,37] and during/after challenges [2] than controls. It has been proposed that a hypersensitive suffocation alarm system may explain the high incidence of sighs in panic disorder [38]. According to this theory, panic disorder subjects might have an overly sensitive chemoreceptor activity and thus may be inclined to take periodic deep breaths to lower the pCO2 safely below the threshold level. Evidence that panic disorder subjects might have an overly sensitive chemoreceptor activity comes from clinical studies demonstrating that low, subthreshold concentrations of hypercarbic gas (e.g. 5-7% CO2) provoke panic attacks in the majority of panic disorder patients, but not in healthy controls [39,40,41]. Other studies have failed to provide unequivocal evidence of a specific, dysregulated suffocation alarm system in panic and have hypothesized that frequent sighing may be a compensatory response in an attempt to reduce the sensation of dyspnea [3,42]. Our data do not clarify whether increased sighing in HAB rats acts as a general re-setter of the respiratory system and can be explained by respiratory variables, such as hypoxia or hypercapnia. On the other hand, our results clearly indicate that rats must possess a mechanism linking the perception of stress to the ponto-medullary respiratory pattern generator. We have recently demonstrated in anesthetized rats that pharmacological activation of the dorsomedial hypothalamus (DMH, a crucial 'defense area' that coordinates stress-induced autonomic neural responses) results in a dramatic increase in the number of sighs as well as in tachypnoea [43]. We thus hypothesize that the stress-evoked respiratory responses that we describe here were in fact triggered by the DMH. A previous study has proposed that sighs are rats' expression of relief and may function as a signal of safety [44]. In our rats, however, sighing was more frequent during stressful conditions (i.e., the first minutes in the plethysmograph or restraint) than during periods of relatively reduced perception of danger (i.e., the end of the 40-min period or the post-restraint phase), thus not supporting the relief-signal hypothesis of sighing. As in humans, sighing in rats may have various causes. Whatever mechanisms predominate, frequent sighing is a respiratory behavior that markedly differentiates HAB from LAB animals and resembles what has been observed in panic disorder subjects.
Conclusion and perspectives
The results of this study complement and extend previous animal findings [7][8][9][10][11][12] documenting that the respiratory phenotype can differ considerably between subjects and that such variability can be due to individual levels of anxiety-related behavior. It must be acknowledged that the interpretation of the data in this study is limited by the lack of a quantitative assessment of total somatomotor activity, which may have partially accounted for the differences observed in the respiratory patterns between HAB and LAB rats. Another limitation of this study is that we have not determined whether the respiratory changes in HAB rats can be attenuated with anxiolytic compounds. In this regard, future work is required in order to validate and strengthen the use of respiration as a reliable, locomotor-independent index of anxiety in rats. Nevertheless, the respiratory changes found in high-anxiety behavior rats share similarities with the symptoms observed in patients with anxiety and panic disorders and provide evidence that respiration may represent a promising method for assessing anxiety states in preclinical studies.
"Biology",
"Psychology"
] |
Early Phase of Plasticity-Related Gene Regulation and SRF Dependent Transcription in the Hippocampus
Hippocampal organotypic cultures are a highly reliable in vitro model for studying neuroplasticity: in this paper, we analyze the early phase of the transcriptional response induced by a 20 µM gabazine treatment (GabT), a GABA-A receptor antagonist, by using Affymetrix oligonucleotide microarrays, an RT-PCR based time-course and chromatin immunoprecipitation. The transcriptome profiling revealed that the pool of genes up-regulated by GabT, besides being strongly related to the regulation of growth and synaptic transmission, is also endowed with neuro-protective and pro-survival properties. By using RT-PCR, we quantified a time-course of the transient expression for 33 of the most up-regulated genes, with an average sampling rate of 10 minutes and covering the time interval of 10-90 minutes. The cluster analysis of the time-course disclosed the existence of three different dynamical patterns, one of which proved, in a statistical analysis based on results from previous works, to be significantly related to SRF-dependent regulation (p-value < 0.05). The chromatin immunoprecipitation (ChIP) assay confirmed the rich presence of working CArG boxes in the genes belonging to the latter dynamical pattern and therefore validated the statistical analysis. Furthermore, an in silico analysis of the promoters revealed the presence of additional conserved CArG boxes upstream of the genes Nr4a1 and Rgs2. The ChIP assay confirmed a significant SRF signal in the Nr4a1 CArG box but not in the Rgs2 CArG box.
Introduction
Cognitive processes such as learning and memory originate from plastic modifications in the central nervous system (CNS): these plastic changes affect the structure and the functions of neurons and synapses and lead to experience-dependent alterations in neural network wiring and behavior. The introduction of high-throughput assays and large-scale approaches in neuroplasticity has helped to capture the broad extent of this phenomenon, which involves the cooperative interplay of numerous cellular processes that regulate not only synaptic transmission itself but also cell survival [1], neuronal growth [2] and neurogenesis [3].
The modulation of gene transcription has proven to play a key role in neuroplasticity: increased synaptic activity leads to calcium influx into the post-synaptic spines, dendrites and soma, which activates calcium-dependent signaling pathways that in turn regulate transcription factors within the nucleus [4][5][6]. In our previous work with dissociated rat neuronal cultures [5] we combined transcriptome profiling with electrophysiological recordings in order to describe the role of different calcium sources in the regulation of gene expression changes. The variations of calcium dynamics driven by synaptic activity, as well as the resulting activation/deactivation changes in the associated signaling pathways, have been shown to be tightly regulated both in time [7][8] and space [9][10][11]. For instance, the modulation of the neurotrophin Bdnf (brain-derived neurotrophic factor) gene expression following synaptic activity requires a series of phosphorylation/dephosphorylation steps of the transcription factors CREB, MEF2 and MeCP2 in order to keep Bdnf expression bound to the desired dynamics [12]. The expression level of many other plasticity-related genes is governed by sophisticated controls of dynamics [13]: this result is often achieved thanks to the interplay of a large number of transcription factors and is often related to signaling changes triggered within a time-scale of minutes [14][15][16].
Alterations in the dynamical pattern of activity-induced programs may result in pathological states: for example, the removal of the phosphatase MKP-1/DUSP1 negative feedback loop on the kinase JNK alters the proper JNK-activation dynamics and leads to the inability to form new axonal branches during mouse cortex development [17]. Despite the importance of the dynamical aspects of transcriptional changes, the information currently available is limited to time-courses with low temporal resolution, i.e. a few time points, and/or concerning a reduced number of genes, such as [15][18][19]. The purpose of the present study is to trace with high temporal resolution the early transcriptional dynamics associated with plasticity, using the gabazine treatment of rat organotypic cultures as a hippocampal plasticity model: organotypic culture preparation has the advantage of retaining the general morphological and functional properties of the intact hippocampus [20][21]. Besides, unlike acute slices, organotypic cultures are able, within one week, to remodel the synaptic connections altered by the slicing procedure [22]. In this work we will begin with a preliminary microarray-based assessment of the transcriptional response of hippocampal cultures to a 20 µM gabazine (also known as SR95531, a GABA-A receptor antagonist) treatment: the aim of this step is to obtain a general outline of the cellular activities involved in the response to GABA-A blocking. GABA-A channels are ionotropic channels that, upon binding of GABA molecules, exert an inhibitory effect on neuronal excitability by specifically increasing the chloride conductance. Drugs such as gabazine, bicuculline or picrotoxin (PTX) act as GABA-A antagonists and therefore induce an increase of the overall neuronal excitability: these drugs have been extensively used as models for various types of plasticity (epilepsy, long-term potentiation, homeostatic plasticity etc.), according to the tissue, dosage, duration of the treatment and possible concomitant stimuli. The 20 µM dosage was adopted in accordance with the evidence provided in [5], where we previously studied the electrophysiological effects of a 20 µM GabT in dissociated hippocampal cultures.
Following the microarray assay, we will quantify and analyze a high-temporal-resolution time course comprising a large set (33) of plasticity-related genes, and we will relate the main features of the dynamical profiles to the putative biological functions of the respective genes/proteins. We will then link one cluster of genes to SRF-dependent regulation by means of statistical and in silico analyses, and we will finally carry out a ChIP (chromatin immunoprecipitation) assay in order to gain novel information about the role of SRF in the early phase of activity-dependent regulation of gene expression.
Microarray analysis
A transcriptome profiling of a GABA-A receptor antagonist treatment is still lacking in the case of organotypic hippocampal cultures. Therefore, we decided to start the analysis with a preliminary, microarray-based assessment of the response of rat organotypic hippocampal cultures to a 20 µM gabazine treatment (GabT): the purpose of this step was to obtain a complete profile of the tissue reaction to a prolonged GABA-A receptor blockade, which is strictly associated with a sudden and powerful increase in the tissue synaptic activity and in the intensity of calcium dynamics [6][23][24].
Three independent biological replicates were collected and analyzed on the Affymetrix rat 230.2 chip; for each replicate the expression of the gabazine-treated sample was then compared to the control untreated sample, and the probes/genes of the chip were ranked by up-regulation/p-value score. The results of a GO enrichment analysis, performed considering the genes with an up-regulation value higher than 2, approximately corresponding to a p-value ≤ 0.005, are presented in Table 1. The complete list of probe/gene data used in the present and in the subsequent analysis is provided in table A in file S1.
The sudden increase of synaptic activity induces the up-regulation of a variety of genes involved in several cellular processes and localized in different cellular compartments. A significant component (p-value ≤ 1.90×10⁻⁴, modified Fisher exact p-value) of the up-regulated genes, including for instance the effectors Arc and Rgs2, is involved in the regulation of synaptic transmission itself, acting directly in axon terminals and dendritic spines. Another group of genes (p-value ≤ 1.90×10⁻⁵) consists of a large pool of transcription factors, such as Cfos and Klf4, that is responsible for driving the second wave of cellular responses, possibly related to longer-lasting changes in neuron metabolism, morphology and functions [7]. Interestingly, the same group of transcription factors is highly enriched in the positive regulation of transcription term (p-value ≤ 3.70×10⁻⁷): this indicates that, despite the presence of transcriptional repressors, such as Icer and Nfil3, the longer-lasting changes are mainly based on the activation of not-yet-expressed genes rather than on the suppression of already expressed ones. A substantial (p-value ≤ 2.20×10⁻³) component of genes is involved in the regulation of cell survival: interestingly, according to the GO, they appear to influence survival in both a positive and a negative manner. However, it appears that GabT induces a strong push (p-value ≤ 1.7×10⁻²) towards growth, neurogenesis and neuritogenesis. Finally, it is worth mentioning that the MAPK signaling pathway as well as the small GTPase family are confirmed as the most important mediators of the aforementioned processes (p-value ≤ 2.9×10⁻²).
To verify the up-regulation values observed in the microarray assay, we selected a group of 33 genes among the most up-regulated ones and measured their expression level in gabazine-treated vs. untreated samples by RT-PCR. These 33 transcripts correspond to the top fifty up-regulated probes deprived of those pointing to 'predicted' transcripts and of those characterized by low values of mRNA abundance (i.e. intensity of microarray signal). The latter were excluded mainly because their low amounts of mRNA were causing the RT-PCR data to be excessively noisy. The final list of transcripts whose up-regulation was verified by RT-PCR is presented in Table 1, while the RT-PCR data are presented in table C in file S1.
As a next step, we wanted to validate the previous Gene Ontology analysis. The functions associated with the genes in the Gene Ontology database (www.geneontology.org) are often derived from bioinformatic predictions, such as inference from sequence orthology or from common expression patterns: these kinds of predictions, although likely reliable, have not been verified experimentally. In order to assess the consistency of our GO analysis, we proceeded by creating a manually compiled 'vocabulary' of gene functions for each of the genes belonging to the set confirmed by RT-PCR; this vocabulary was based on an extensive search of the literature and was built by considering only the most reliable results. More precisely, we preferentially considered only functional evidence derived from hippocampal tissues such as organotypic slices, acute slices, dissociated cultures or in vivo conditions. When hippocampus-based studies were lacking, we collected evidence from other types of nervous tissue, such as cortical neurons, dorsal root ganglion cells or glioma tissue. The complete list of gene/protein roles extracted from the literature is available in file S2, while a brief summary is available in Table 2.
Since it is well established that certain genes/proteins listed in Table 2 can exert different roles according to the cellular context [1][110][162] (see file S2 for more details), we also tried to avoid considering functional results obtained from excessive pathological stimuli, which could alter the physiological native role of a gene/protein. For instance, in [60] the neurons were treated with camptothecin to cause DNA damage, and the Cbp/p300-interacting transactivator 2, also known as Cited2, was related to the activation of apoptosis: we found these circumstances too dissimilar from the gabazine treatment of the present work and therefore decided not to consider this as functional evidence. Fig. 1 represents the distribution of the literature-extrapolated functions with respect to the cellular compartments. The similarity between the functions/processes highlighted by GO and those derived from the selected literature appears to be good; nonetheless, we can make at least two considerations:
1) In regard to the equilibrium of pro- and anti-survival genes that emerged from the GO, we must point out that the picture resulting from the literature analysis is quite different: instead of an equilibrium, we actually notice a substantial shift towards pro-survival genes in response to gabazine. This difference arises from a different attribution of functions to the genes Nr4a1, Ptgs2, Arc, Atf3, Gadd45b and Nfil3. More precisely, all of these genes have proven, in past years, to consistently promote neuron survival by protecting neurons from various oxidative, genotoxic and excitotoxic stresses; see file S2 for a complete review. In short, we can confirm that a strong neuroprotective shield is induced by the synaptic activity associated with GABA-A receptor blockage.
2) Fig. 1 depicts more clearly how the effector early genes induced by the GABA-A blockade are mainly involved in the regulation of synaptic transmission and are localized in the synaptic terminals. Vice versa, those genes with growth-, survival- and neurogenesis-promoting effects act mainly in the nucleus as transcription factors; thus their effects will be realized only in conjunction with the subsequent wave of up-regulated genes.
Gene expression time course
To gain better insights into the mechanisms of the transcriptional response to GabT, we decided to investigate whether the up-regulation value found after 1.5 hours (for the genes induced by gabazine) is reached following different temporal dynamics or whether, on the contrary, all genes share the same induction pattern.
Previous studies [50][163][164] have already suggested that, following episodes of synaptic activity or during synaptic plasticity processes, the induced immediate-early genes (IEGs) are characterized by different up-regulation dynamics. Nonetheless, the time-course data collected so far in the literature have mainly been obtained by microarray analysis, such as [7][165], and not by a reliable and accurate RT-PCR analysis: more precisely, the information currently available is limited to time-courses with low temporal resolution, i.e. a few time points, and/or concerning a reduced number of genes, such as [15][18][19]. In all of these cases the time-course measurement was not the main aim of the paper, but was rather an instrument to verify the effects of certain blockers/conditions; therefore, a particularly high temporal resolution was simply not needed.
The rat organotypic hippocampal cultures were subjected to a 20 µM gabazine treatment and the total RNA was collected at 12 different time points spanning from 10 minutes to 95 minutes, with an average inter-sample time (sampling period) of 10 minutes. The procedure was repeated three times, each time with a different pair of twin rats, in order to obtain three independent replicates of the time-course, and RT-PCR was then performed for every gene in order to measure the up-regulation values at the different time points. The genes included in the time-course analysis are those presented in Table 2. The time points of each replicate were then interpolated with a smoothing spline in order to emphasize the major trend underlying the up-regulation process; afterwards, the three interpolations derived from the replicates were combined into an average one, which was considered as the reference trend in all of the subsequent analyses. As an example, the resulting time course for the Bdnf gene (exon IV) is shown in Fig. 2, together with the original and interpolated results for each replicate.
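The interpolation itself was done with Matlab smoothing splines; a comparable sketch in Python, assuming SciPy's UnivariateSpline and an arbitrary smoothing factor and evaluation grid (not the authors' settings), could look as follows.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def average_time_course(time_points_min, replicates, smoothing=1.0, n_grid=100):
    """Fit a smoothing spline to each replicate and average the fitted curves.

    'replicates' is a list of up-regulation arrays, all measured at the same
    'time_points_min'; each fit is evaluated on a common grid before averaging.
    """
    t = np.asarray(time_points_min, dtype=float)
    grid = np.linspace(t.min(), t.max(), n_grid)
    fits = [UnivariateSpline(t, np.asarray(y, dtype=float), s=smoothing)(grid)
            for y in replicates]
    return grid, np.mean(fits, axis=0)   # reference trend = average of the fits
```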
The first step of analysis that we carried out was a clustering of the temporal data, aimed at unveiling the existence of distinct temporal patterns. Given that the measured time series are highly non-stationary, we decided to discard correlation-based methods in favor of a k-means clustering algorithm based on Euclidean distance; after a preliminary normalization, which rescaled all the expression values of each gene to the interval [0:1], the Euclidean-distance method proved able to correctly group together genes sharing a similar temporal pattern, regardless of the absolute values of up-regulation. This methodology is the same applied in [164]. The main drawback of the k-means algorithm is the necessity to manually set k, i.e. the number of desired clusters [166]. The ability of the algorithm to distinguish among potential different temporal dynamics increases as k increases but, on the contrary, the Z-score of the grouping outcome becomes less significant at higher k values, which means that a random grouping would have produced similar results, as illustrated in Fig. 3A.
To further test the consistency of the clustering procedure, we designed four new control primers for the genes Egr1, Cfos, Rgs2 and Nurr1: these alternative primers point to different exons and different exon-exon junctions with respect to the original ones. With k = 2 the control primers were correctly grouped together with their counterparts, as highlighted in Fig. 3. Most importantly, even at higher fragmentation levels, with k = 4, k = 6 and k = 8, the control primers remained associated with the proper original ones: the probability that this correct grouping might be due to chance is p = 6.33×10⁻⁷ when k = 8.
We decided to use the approach described in [167] to determine the optimal value for k in an unsupervised manner; the method is based on the minimization of a function H(N), where N is the number of clusters. Intuitively, the minimum of H(N) coincides with the number of clusters beyond which the addition of a further cluster does not significantly reduce the average intra-cluster distance. More details about this approach are presented in the Materials and Methods section. The final result, presented in Fig. 3B, indicates that k = 3 is the optimal value for the cluster number. In Fig. 3C the outcome of the clustering process with k = 3 is represented in a two-dimensional plane.
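A minimal Python sketch of this clustering step, assuming scikit-learn: each trajectory is rescaled to [0, 1], Euclidean k-means is run for increasing k, and k is chosen by minimizing a penalized average intra-cluster distance. The penalty used below is a simplified stand-in for the H(N) function of [167], whose exact form is not reproduced in this text.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_time_courses(trajectories, k_max=8, alpha=0.05, random_state=0):
    """Cluster [0,1]-normalized expression trajectories with Euclidean k-means."""
    X = np.asarray(trajectories, dtype=float)
    X = (X - X.min(axis=1, keepdims=True)) / np.ptp(X, axis=1, keepdims=True)  # per-gene [0,1]
    fits, scores = [], []
    for k in range(1, k_max + 1):
        km = KMeans(n_clusters=k, n_init=10, random_state=random_state).fit(X)
        mean_intra = km.inertia_ / X.shape[0]        # average squared distance to centroid
        scores.append(mean_intra + alpha * k)        # penalize additional clusters
        fits.append(km)
    best = int(np.argmin(scores))
    return fits[best].labels_, best + 1              # cluster labels and chosen k
```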
Cluster 1, which comprises genes such as Arc, Cfos and Klf4, is characterized by a fast rise in the expression values, which peak at about 50 minutes and subsequently remain steady until the end of the measurement. The Arc gene was reported in several works to be rapidly induced by episodes of synaptic activity, with a peak within the first 60 minutes. Thus, for the Arc gene, our result is coherent with [168], [169] and [170]; furthermore, it extends the results to the other 12 IEGs characterized by the same dynamics as Arc, thus suggesting the existence of a common regulation system responsible for the induction of these faster-rising IEGs. Cluster 2, which comprises genes such as Bdnf, Irs2 and Homer1a, is characterized instead by a slower but constant increase, almost linear up to 90 minutes. The differential dynamics characterizing the Bdnf gene (cluster 2) with respect to the Cfos and Egr1 genes (cluster 1) are coherent with a previous study [15] of Schaffer-collateral HFS-induced LTP: again, here we extend the results to 25 other IEGs whose dynamics resemble those of Cfos/Egr1 or of Bdnf. Besides, the longer-lasting duration of Cited2 (cluster 2) mRNA up-regulation with respect to the faster and shorter up-regulation timings of Cfos (cluster 1) and NOR-1 (cluster 3) also recalls the results obtained in [19] with an ECS stimulation of the dentate gyrus. The last cluster, which is smaller than the previous ones and comprises genes such as NOR-1 and Btg2, presents a marked peak concurrent with the cluster 1 peak, but which is subsequently followed by a pronounced decrease of the expression value.
Relationship between clustering and function
Since previous studies have already supported the notion that temporally clustered genes are likely involved in the same biological functions [165][171], we next wanted to determine whether it was possible to relate the different temporal profiles previously extracted to particular inherent functions. Therefore, for each temporal cluster of gene expression we performed an enrichment analysis of the functional evidence collected in the manually compiled vocabulary introduced in the 'Microarray analysis' section.
The recent developments in the study of hippocampal plasticity have consolidated the idea that episodes of intense physiological synaptic activity strongly promote neurogenesis [3][34][37], growth [2][97][136] and survival [1][36][150]. Our data confirm the up-regulation of numerous genes endowed with these properties already in the early phase (10-90 minutes) of transcriptional regulation (see Fig. 4), indicating that a strong neuroprotective shield is quickly activated by synaptic activity in organotypic cultures, together with an increase in dentate granule cell neurogenesis and an increase in the growth rate of neurons and synaptic connections. However, genes with negative effects on growth (namely Icer, Klf4, Nr4a1 and Mkp-1) are also induced alongside the aforementioned majority of positive regulators (see Fig. 4). Interestingly, these four genes were all grouped in the cluster 1 temporal pattern, thus making cluster 1 significantly enriched in anti-growth properties (p-value ≤ 0.017, Fisher's exact test). Vice versa, cluster 2 comprises only genes with a positive effect on growth.
In the past decade the mechanisms involved in the homeostatic regulation of synaptic strength have emerged as a fundamental complement to Hebbian plasticity [172][173][174]. In the present work we report that in rat organotypic cultures, following chronic blockade of the GABA-A receptor, many genes involved in homeostatic-scaling (weakening) processes, namely Narp/Nptx2, Arc, Rgs2, arcadlin and Plk2 [154], are induced in concert already in the first minutes of synaptic activity, thus suggesting the existence of a sensitive and fast feedback mechanism that is activated almost contextually with the perturbation. As illustrated in Fig. 4, the homeostatic genes are equally spread among the three clusters (p-value = 0.43, Fisher's exact test), indicating that there is no particular relationship between the homeostatic function and the up-regulation timings in the early phase (0-90 min) of the hippocampal response to perturbation. Interestingly, we noticed that the homeostatic genes are tightly associated, in every cluster, with genes exerting the opposite function, i.e. the potentiation of synaptic transmission, as depicted in Fig. 4. Therefore, unlike the survival and growth functions, for the regulation of synaptic transmission we observe a functional equilibrium between homeostatic-plasticity (weakening) genes and Hebbian-plasticity (potentiation) genes.
Another crucial step of the homeostatic response is the re-establishment of the basal level of active MAPKs [15][111]; this process is carried out mainly by means of a negative feedback loop involving the MAPKs themselves, together with the Dusp family of phosphatases [72][76][134]. Here we report that Dusp1, Dusp5 and Dusp6 are induced together by GabT, but with different temporal patterns, since they are grouped into different clusters (see Fig. 4). This result, which is coherent with previous studies [175][176], indicates that each of the DUSPs is dynamically tied to a different group of genes: in this way, each cluster of induced genes is synchronized with its own homeostatic feedback to the MAPKs.
The peculiar distribution of the Dusp family members, as well as the in-cluster balance between homeostatic and Hebbian plasticity genes, led us to notice that, concerning the regulation of synaptic transmission, genes endowed with different, but at the same time complementary/counterbalancing, functions seem to be bound together into the same temporal dynamics in order to favor the global robustness of the system: indeed, in this case the dysregulation of a pathway caused by a pathological state would not create excessive imbalances, since the genes within the cluster compensate each other. It is interesting to point out that the present observation about global stability recalls the conclusions of previous work [177], in which a bioinformatic analysis of the CA1 hippocampal intracellular pathways [178] revealed the existence of robustness, stability and adaptability properties.
Figure 2 caption (continued): ... eventually was the one used for all of the subsequent clustering analysis. The neurotrophin Bdnf, one of the master regulators of learning and memory, will prove in the end to be up-regulated according to a pattern which is representative of almost 50% of the genes of the set under study. doi:10.1371/journal.pone.0068078.g002
Figure 3. Analysis of the clustering quality for the time course data. A) Outcome of the clustering algorithm, with progressive increase in the number of clusters k: the picture represents, at each different k, the grouping of the 4 pairs of alternative primers pointing to the same gene. For k = 2, 4, 6, 8 the alternative primers were correctly grouped together. The 'replicas p-value', on the right, indicates the statistical consistency of the alternative-primer grouping, which reaches its maximum value when the algorithm is forced to split the 33 genes into 8 different clusters. On the left, the Z-value of the global clustering, indicating the consistency of the temporal dynamics discrimination. B) Outcome of the algorithm aimed at determining the optimal value for k. The number of clusters N is plotted against the function H(N): the minimum of H(N), i.e. N = 3, coincides with the optimal value for k. See the Materials and Methods section for further details. C) Visual representation, with k = 3, of the distances between trajectories and cluster centroids for all the 33 genes. For each cluster, the genes are disposed at increasing distances from the centroid, proportionally to their normalized Euclidean distances. The distance of the farthest gene is indicated in the proximity of the outer circle. The orientation of the genes reflects the proximity to the remaining two clusters. The distances between the cluster centroids are also indicated. doi:10.1371/journal.pone.0068078.g003
Relationship between clustering and regulation
In order to investigate the possible relationships between the different temporal patterns of gene induction and the regulators of gene transcription, we performed an extensive literature search aimed at reconstructing the complete network of pathways involved in the regulation of hippocampal gene transcription: the complete survey is available in file S2. From a cross-comparison between pathways and transcription factors on one side and time-course patterns on the other, it emerged that cluster 1, which was characterized by a fast increase in the expression values followed by a flat/stationary state, is particularly enriched in SRF (serum response factor)-dependent regulation (p-value ≤ 0.02, Fisher's exact test). On the contrary, cluster 2 does not present any SRF-dependent regulation (p-value ≤ 0.05). These data indicate that SRF-dependent regulation is consistently biased towards cluster 1, which is the cluster of genes such as Arc, Cfos, Cyr61, Egr1 and Egr2, all of which have been shown to be regulated by Serum Response Factor in various plasticity models [183].
To assess the validity of the above-mentioned SRF regulatory evidence for the genes Arc, Cfos, Cyr61, Egr1 and Egr2 in our model of hippocampal plasticity, i.e. GabT of organotypic cultures, we performed ChIP (chromatin immunoprecipitation) experiments to detect SRF binding levels in their promoters during GabT. Besides, we carried out an in silico analysis of the promoters of the remaining 8 genes belonging to the same cluster in order to detect other possible active CArG boxes, the DNA sequence motif CC[A/T]6GG that has a high affinity for SRF. As a result, CArG boxes conserved among humans, rats and mice were found in the upstream regions of the Rgs2 and Nr4a1 genes, respectively at −5 kb and at −123 to −111; therefore, those genes were included in the ChIP experiment together with the previous ones. The results of ChIP, presented in Fig. 5, show that a strong SRF signal was detected in Arc, Cfos, Cyr61, Egr1, Egr2 and Nr4a1, while no significant signal was found for Rgs2, indicating that the latter gene is likely not regulated by SRF in our plasticity model.
Discussion
The present article identifies three different dynamical patterns in the early phase (10-90 min) of the transcriptional response induced by GabT of organotypic hippocampal cultures and provides novel information about the role of Serum Response Factor. The blockage of GABA-A ionotropic channels by means of gabazine/bicuculline/PTX is a widespread [1][145][184][185] model of plasticity, in which the increased synaptic activity triggered by GabT leads to the up-regulation of a plethora of activity-dependent genes. While the electrophysiology of GABA-A antagonists in organotypic hippocampal cultures has been extensively studied [6][24], the corresponding transcriptome studies have so far been conducted in dissociated cultures [1][5][7][186]. This last aspect prompted us to begin with a preliminary assessment based on microarray transcriptome profiling.
The Gene Ontology analysis of the microarray data revealed that the major functions of the 346 genes up-regulated by GabT (p-value ≤ 0.005) are related to the regulation of synaptic transmission, calcium ion transport, transcription, apoptosis, feeding behavior, learning and memory. With respect to apoptosis regulation, the GO analysis further indicates that both positive and negative regulators of survival are up-regulated in organotypic cultures, and therefore the general effect of GabT on cell fate seems not to be predictable. Nevertheless, a manual annotation of the gene functions actually revealed that GabT promotes a push towards survival, neurogenesis and neuroprotection, confirming the results obtained in dissociated cultures [1][185] and extending them to the case of hippocampal organotypic cultures.
To further investigate the dynamics underlying the early phase of the regulation of activity-dependent genes, we quantified a high-temporal-resolution time course, ranging from 10 to 90 minutes, with an average inter-sample time of 10 minutes. The trajectories of the 33 genes included in the time-course were subjected to an unsupervised k-means clustering, which identified three different dynamical patterns, as depicted in Fig. 3 and Fig. 4. By crossing the cluster grouping with the gene functions listed in the manually compiled vocabulary (see file S2) we found that the group of genes characterized by a fast rise to a plateau value (cluster 1) appears to be significantly (p-value < 0.05) enriched in anti-growth and anti-survival properties. Since this cluster is characterized by the fastest response, peaking already at 50 minutes, these data suggest that a rapid activation of negative regulators of growth, possibly involved in the initial disassembly of existing structures, is subsequently followed by an induction of growth-promoting genes (cluster 2, slow up-regulation).
We also found that cluster 1 is enriched in SRF (serum response factor)-dependent regulation (p-value ≤ 0.02, Fisher's exact test). Interestingly, in a previous work [187] with dissociated cultures we showed that the genes Nr4a1, Arc, Egr1, Egr2 and Egr3, which in the present paper belong to cluster 1 (with the exception of Egr3), are characterized by a marked dependence on MAPK-dependent regulation when compared to Bdnf and Homer1a, which here belong to cluster 2. Moreover, a strong dependence of Dusp1 and Fos, which again belong to cluster 1, on MAPK regulation was previously emphasized in rat neuroendocrine cells [188][189]. These data suggest that this cluster is particularly dependent on SRF/MAPK and motivated us to investigate whether the aforementioned SRF-dependent regulations, which were extrapolated from the literature and derived from different experimental conditions, are still valid in the case of GabT of organotypic cultures.
To this end, we performed ChIP (chromatin immunoprecipitation) to detect SRF binding levels during GabT, and we found that Cyr61, Egr1, Egr2, Fos and Arc present a significant SRF binding signal. While the genes Fos and Egr1 have already been reported to be regulated by SRF in hippocampal organotypic cultures [179], ours is the first report for the genes Cyr61, Arc and Egr2.
To complete the survey of working CArG boxes in cluster 1, we analyzed the sequences upstream of the TSS for the remaining genes and found conserved CArG boxes also upstream of Rgs2 and Nr4a1. Eventually, the ChIP assay revealed that the Nr4a1 CArG box presents a significant SRF signal, while no signal was found for Rgs2. This result is particularly interesting for the Nr4a1 gene, for which the functionality of the aforementioned CArG box has so far yielded mixed evidence. In fact, in serum stimulation of NIH-3T3 fibroblasts [190] and platelet-derived growth factor (PDGF) stimulation of T98G glioblastoma cells [181] the CArG box has proven to be functional, whereas in hippocampal neuronal cultures [191][192], cerebellar cortex [127] and in vivo [193] conditions the general findings are in favor of a determinant role for CREB and MEF2. Therefore, our latter result suggests that, in organotypic cultures, SRF may play a role in the regulation of the Nr4a1 gene during the intense synaptic activity triggered by GabT.
In conclusion, this study provides novel insights into the early dynamics of transcriptional regulation in a plasticity model, showing how a large group of co-expressed activity-dependent genes is characterized by consistently different patterns of induction in the first 90 minutes of the tissue response, and linking these patterns to different inherent functions and regulatory mechanisms. We believe that unveiling the fine tuning of the regulatory dynamics of plasticity is a key step towards a more quantitative understanding of the phenomenon.
Ethics Statement
Rat hippocampi were dissected from Wistar rats (P4-P5), in accordance with the regulations of the Italian Animal Welfare Act, and the procedure was approved by the local authority veterinary service (Dr. R. Zucca). Every possible effort was taken to minimize both the number of animals used and their suffering. The experiments were carried out in accordance with the European Communities Council Directive of 24 November 1986 (86/609/EEC) and formal approval for the experimental procedures was provided by the Ministry of Health (protocol 13/97-A).
Tissue, pharmacology and RNA extraction
Rat hippocampi were dissected from Wistar rats (P4-P5). Organotypic cultures were prepared following the roller tube method [194]. Gabazine was purchased from Tocris (Bristol, UK). Gabazine treatment (GabT) for the microarray samples consisted in treating the cultures for 90 min with 20 µM gabazine, a specific GABA-A receptor antagonist [195]. Gabazine treatment (GabT) for the time-course samples consisted in treating the cultures with 20 µM gabazine for a variable time, with samples ranging from 10 minutes to 90 minutes. The total RNA for the microarray samples and the time-course samples was extracted using the TRIzol reagent (Sigma, Milano, Italy) according to the manufacturer's instructions, followed by a DNase I (Invitrogen, Carlsbad, California, USA) treatment to remove any genomic DNA contamination. The total RNA was further purified using an RNeasy Mini Kit column (Qiagen, Valencia, CA) and subsequently quantified with an ND-1000 Nanodrop spectrophotometer (Agilent Technologies, Palo Alto, CA).
Figure 5. Analysis of SRF binding sites by chromatin immunoprecipitation. Chromatin fragments of hippocampal organotypic cultures were immunoprecipitated with anti-SRF antibody. A) Immunoprecipitation levels normalized to input control; the s.e.m. is calculated over three different replicates. B) Immunoprecipitation of each promoter region, together with input control and IgG antibody, was amplified by PCR. Each sample is derived from three independent replicates. doi:10.1371/journal.pone.0068078.g005
Analysis of Microarray data and P-value calculation
For the microarray data, three biological replicates were collected at 90 min of GabT, and standard Affymetrix protocols were applied for amplification and hybridization. Gene profiling was carried out with the Affymetrix RAT2302 GeneChip containing 31099 probes, corresponding to 14181 probes with a gene symbol. Low-level analysis was performed using the Robust Multi-array Average (RMA) algorithm [196] directly on the scanned images.
Data were organized in m×n matrices (m, number of genes; n, number of replicates). Two samples were considered: an untreated culture (C_ij, with i = 1,…,m and j = 1,…,n) and a culture treated with gabazine (G_ij). Data were analyzed by considering log2 changes of gene expression in each replicate against its own untreated control, that is, log2(G_ij/C_ij). Thus, from the microarray data we obtained an m×n ratio matrix for each treatment. Considering the three replicates as independent variables, this matrix was treated as a multivariate variable in three dimensions. We derived the empirical cumulative distribution function, with upper and lower bounds, of the multivariate variable using the Kaplan-Meier estimator (Kaplan and Meier, 1958), so as to assign a p-value to all the genes and select the most significant ones. The microarray data can be found in the GEO database, accession number GSE46864.
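As a sketch of the first step of this procedure (the per-replicate log2 ratios), assuming genes × replicates NumPy arrays of RMA-summarized intensities; the Kaplan-Meier-based p-value assignment is not reproduced here.

```python
import numpy as np

def log2_ratio_matrix(treated, control):
    """Per-replicate log2 ratios of gabazine-treated vs. untreated expression.

    'treated' and 'control' are m x n arrays (genes x replicates); each
    replicate is compared against its own untreated control.
    """
    G = np.asarray(treated, dtype=float)
    C = np.asarray(control, dtype=float)
    return np.log2(G / C)

# Genes whose mean log2 ratio exceeds 1 (i.e. more than 2-fold up-regulation)
# roughly correspond to the threshold used for the GO enrichment analysis.
```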
GO enrichment analysis
GO enrichment analysis for microarray data was performed with DAVID [197] (http://david.abcc.ncifcrf.gov/). GO analysis for the manually annotated vocabulary was performed according to the following formulas. The probability of having exactly x genes characterized by a certain 'GO term' (for example, 'SRF regulation' or 'positive regulation of synaptic transmission') in a cluster of dimension n is

$P(X = x) = \dfrac{\binom{k}{x}\binom{N-k}{n-x}}{\binom{N}{n}}$,

where N is the total number of genes (elements), n is the dimension of the cluster, and k is the total number of genes (elements) that present the 'GO term' under consideration. The cumulative probability of having a number of such genes equal to or higher than x in a cluster of dimension n is

$P(X \geq x) = \sum_{i=x}^{\min(n,k)} \dfrac{\binom{k}{i}\binom{N-k}{n-i}}{\binom{N}{n}}$.
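The cumulative probability above is the survival function of a hypergeometric distribution, i.e. a one-sided Fisher's exact test; a minimal SciPy sketch, with illustrative argument names:

```python
from scipy.stats import hypergeom

def enrichment_p_value(x, cluster_size, annotated, total):
    """P(X >= x): probability of observing at least x genes carrying a given
    GO term in a cluster of 'cluster_size' genes, when 'annotated' of the
    'total' genes carry that term."""
    # hypergeom.sf(x - 1, M, n, N): M = population size, n = annotated genes,
    # N = number of draws (cluster size).
    return hypergeom.sf(x - 1, total, annotated, cluster_size)
```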
Quantitative RT-PCR and time-course analysis
For the time-course experiment, the expression level of the target mRNA was quantified by RT-PCR. RNA (250 ng) was reverse-transcribed using SuperScript II reverse transcriptase and random hexamers (Invitrogen). qRT-PCR was performed using iQ SYBR Green supermix (Bio-Rad, Munich, Germany) and the iQ5 LightCycler (Bio-Rad). Gene-specific primers were designed using Primer3 [198] (http://frodo.wi.mit.edu/). The thermal cycling conditions comprised 3 min at 95 °C, followed by 45 cycles of 10 s denaturation at 95 °C and 45 s annealing and extension at 58 °C. The expression level of the target mRNA was normalized to the expression of Gapdh mRNA. Fold changes between treated and untreated samples at each time point were calculated using the ΔΔCt method. Three organotypic cultures were used for each sample. The 36 primers used for the time-course analysis are provided in Table B in File S1.
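For reference, a minimal sketch of the ΔΔCt fold-change calculation described above is shown below; the Ct values and the function name are placeholders of our own, not values from the study.

```python
def fold_change_ddct(ct_target_treated, ct_gapdh_treated,
                     ct_target_untreated, ct_gapdh_untreated):
    """Relative expression (treated vs. untreated) by the delta-delta-Ct method,
    normalizing the target gene to Gapdh in each sample."""
    d_ct_treated = ct_target_treated - ct_gapdh_treated
    d_ct_untreated = ct_target_untreated - ct_gapdh_untreated
    dd_ct = d_ct_treated - d_ct_untreated
    return 2.0 ** (-dd_ct)

# Placeholder Ct values for one gene at one time point.
print(fold_change_ddct(23.1, 17.8, 24.6, 17.9))  # ~2.6-fold induction
```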
The resulting time-course data set consists of three biological replicas, each containing 12 time points ranging from 10 min to 90 min. Each raw time-course replica obtained from the RT-PCR data was independently fitted with a smoothing spline (Matlab environment) and normalized to the [0,1] interval. Subsequently, the three replicas were joined together and analyzed by k-means clustering based on the Euclidean distance (same method as [164]). To identify the optimal number of clusters we adopted the approach proposed in [167]. Briefly, a cost function H(N) is computed for every candidate cluster number, where N is the number of clusters, dist(c_i) is the intra-cluster distance, i.e., the scaled average squared distance between shapes in cluster c_i, and a is a parameter controlling the grain of the clustering. The minimum of the function H(N) corresponds to the optimal number of clusters. The enrichment score for the transcription-factor regulatory evidences was computed using the same approach described in the section "Analysis of Microarray data and P-value calculation".
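A minimal sketch of this pipeline (spline smoothing, normalization to [0,1], k-means with Euclidean distance, and a simple cost used to pick the cluster number) is given below. The synthetic time courses, the parameter values and the exact form assumed for the cost H(N) (average intra-cluster distance plus a grain penalty a·N) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
t = np.linspace(10, 90, 12)            # 12 time points, 10-90 min
n_genes = 60
raw = np.sin(rng.uniform(0.5, 2.0, (n_genes, 1)) * t / 90 * np.pi) \
      + 0.1 * rng.standard_normal((n_genes, t.size))   # synthetic courses

def smooth_and_normalize(y):
    s = UnivariateSpline(t, y, s=0.5)(t)               # smoothing spline fit
    return (s - s.min()) / (s.max() - s.min() + 1e-12) # rescale to [0, 1]

profiles = np.array([smooth_and_normalize(y) for y in raw])

def cost(n_clusters, a=0.02):
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(profiles)
    intra = km.inertia_ / len(profiles)    # scaled average squared distance
    return intra + a * n_clusters          # assumed form of H(N)

ks = range(2, 9)
best_k = min(ks, key=cost)
print("optimal number of clusters:", best_k)
```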
Identification of upstream sequences and transcription factor binding sites
The 10-kbp upstream regions for mouse, rat and human of the cluster 1 genes were extracted from Map Viewer (http://www.ncbi.nlm.nih.gov/mapview/). To identify the putative transcription factor binding sites within each upstream sequence, a preliminary verification of the conserved regions among mouse, rat and human was performed by aligning the sequences with blast-bl2seq (http://blast.ncbi.nlm.nih.gov/Blast.cgi), using a word size of 16. To refine the BLAST results a further analysis was carried out with EvoPrinter (http://evoprinter.ninds.nih.gov/) [199]. Finally, conserved domains were analyzed with JASPAR [200] (http://jaspar.cgb.ki.se/), using the MA0083.1 SRF binding matrix with a threshold score of 0.8.
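As an illustration of the last step, the sketch below scores sequence windows against a position weight matrix and keeps hits whose relative score is at least 0.8; the matrix values are placeholders and do not reproduce the actual JASPAR MA0083.1 SRF matrix.

```python
import numpy as np

# Placeholder 4 x L count matrix (rows: A, C, G, T); NOT the real MA0083.1 matrix.
counts = np.array([
    [8, 1, 0, 9, 9, 1],
    [1, 7, 1, 0, 0, 1],
    [0, 1, 8, 0, 0, 7],
    [1, 1, 1, 1, 1, 1],
], dtype=float)
pwm = np.log2((counts + 0.25) / counts.sum(axis=0) / 0.25)  # log-odds vs. uniform background

base_index = {"A": 0, "C": 1, "G": 2, "T": 3}
min_score = pwm.min(axis=0).sum()
max_score = pwm.max(axis=0).sum()

def relative_score(window):
    raw = sum(pwm[base_index[b], i] for i, b in enumerate(window))
    return (raw - min_score) / (max_score - min_score)

sequence = "TTAGCAATGGCATCAATTT"   # placeholder promoter fragment
L = pwm.shape[1]
hits = [(i, round(relative_score(sequence[i:i + L]), 3))
        for i in range(len(sequence) - L + 1)
        if relative_score(sequence[i:i + L]) >= 0.8]
print(hits)
```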
Chromatin immunoprecipitation
The chromatin immunoprecipitation assay was performed using the MAGnify Chromatin Immunoprecipitation System (Invitrogen, Catalog Number 49-2024) according to the manufacturer's instructions with slight modifications. Briefly, organotypic cultures (ten cultures per condition) were cross-linked at room temperature, immediately after the GabT, using a PBS solution with 1% formaldehyde. Shearing was performed with an MSE Soniprep 150 (7 pulses of 5 seconds) to yield an average fragment length of 300 bp. Samples were immunoprecipitated with 10 µg of anti-SRF antibody (Santa Cruz Biotechnology, Heidelberg, Germany, cat. no. sc-335x) and with 1 µg of anti-rabbit IgG as negative control antibody. Promoter-specific primers were used for amplification.
Author Contributions
Conceived and designed the experiments: GI VT. Performed the experiments: GI. Analyzed the data: GI. Contributed reagents/materials/analysis tools: VT. Wrote the paper: GI. Supervised the analysis part: CA.
"Biology",
"Medicine"
] |
Study on Strength and Microstructure of Cement Pastes Containing Limestone Powder under Flowing Acid Solution Condition
Different cement pastes containing limestone powder were prepared and soaked, respectively, in a flowing acetic acid solution with a pH value of 4 and a flowing sulfuric acid solution with a pH value of 2. The strength and microstructure of the pastes after different flowing-acid attack periods were investigated using strength tests, X-ray diffraction (XRD), and scanning electron microscopy (SEM), which reveals the effect of limestone powder on the flowing-acid resistance mechanism of cement paste. The results show that the strength of pastes subjected to flowing acid attack decreases with increasing water-binder ratio and limestone powder content. In the flowing acetic acid solution, calcium hydroxide and calcium carbonate react with acetic acid, causing the deterioration of the pastes to proceed from the exterior to the interior. In the flowing sulfuric acid solution, although calcium hydroxide and calcium carbonate react with sulfuric acid to form gypsum, the flowing liquid dissolves the gypsum away; its crystallization is therefore difficult, which somewhat inhibits the swelling of the pastes.
Introduction
Concrete and mortar are constantly affected by physical and chemical attack under environmental water conditions; their microstructure deteriorates, the strength of the structure decreases, and the structures are finally destroyed [1,2]. There are mainly five kinds of environmental water attack, among which acid attack is the most common. Acidic materials are widely distributed: in the atmosphere, in the soil adjacent to buildings, in industrial wastes, and in marine environments. Cement paste, being alkaline, is easily corroded under acid attack.
Modern cements often incorporate several mineral admixtures, one of which is limestone powder. The use of Portland cement containing limestone powder is a common practice in Europe. European standard EN 197 identifies two types of Portland limestone cement (PLC): type II/A-L, containing 6%-20% limestone, and type II/B-L, containing 21%-35% [3][4][5]. The use of limestone powder can improve the properties of concrete, decrease costs, and reduce CO2 and NOx emissions during cement manufacture [6][7][8]. But there are still problems concerning the application of concrete mixed with limestone powder. The main constituent of limestone powder is calcium carbonate, which is readily attacked by acid. Although studies on this kind of concrete have been carried out, most of the research has been limited to static erosion environments [9][10][11][12]. In practical projects, however, concrete structures such as piers and foundations are often located where groundwater seepage is active or erosion by flowing water is intense. How the flowing-acid attack resistance of cement paste changes after incorporating limestone powder therefore remains uncertain.
In this paper, cement pastes with different mix proportions and different contents of limestone powder were prepared. These pastes were then soaked in a flowing acetic acid solution with a pH value of 4 and a flowing sulfuric acid solution with a pH value of 2, respectively. The changes in strength and microstructure of the pastes after different flowing-acid attack periods were investigated using strength tests, X-ray diffraction (XRD), and scanning electron microscopy (SEM), which indicate the effect of limestone powder on the flowing-acid attack resistance mechanism of cement paste.
Experimental
2.1. Raw Materials. The mixtures were prepared with ordinary Portland cement PO 42.5 (Chinese standard GB175-2007), limestone powder, fly ash, and silica fume. Limestone powder, produced from carboniferous limestone with a very high purity (95% CaCO3 content), was added as filler. The particle size distributions of the Portland cement, limestone powder, and fly ash, measured by laser diffraction, are shown in Figure 1. The particle size of the limestone powder is clearly much smaller than those of the Portland cement and fly ash; its dominant particle size is below 5 μm.
Mix Proportions.
In order to study the effect of limestone powder on the flowing-acid attack resistance of cement paste, two water-binder (W/B) ratios were used: a low value of 0.3 and a high value of 0.5. By changing the content of limestone powder and mixing with fly ash or silica fume, 10 different pastes were prepared. The specific mix proportions are shown in Table 1.
Paste specimens with a size of 20 × 20 × 20 mm³ were cast according to Table 1. After 24 hours of molding, the pastes were demolded and cured under standard conditions until 28 d, when their strength was tested as a reference. Thereafter, they were soaked in two different solutions: the flowing acetic acid solution with a pH value of 4 and the flowing sulfuric acid solution with a pH value of 2. The strength change, XRD, and SEM analyses were then studied. The actual devices simulating the flowing-solution environment are shown in Figure 2.
XRD. XRD measurements were performed on a Philips X'Pert diffractometer equipped with a graphite monochromator, using Cu Kα radiation and operating at 40 kV and 20 mA. Step scanning was performed with a scan speed of 2°/min and a sampling interval of 0.02°/2θ. XRD was used to identify the hydrates in the cement pastes containing limestone powder.
Results and Discussion
3.1. Strength. The compressive strength of the pastes soaked in the flowing acid solutions for different periods is shown in Figures 3 to 6. It can be concluded from the four diagrams that the compressive strength of the pastes first decreases, then increases slightly, and finally decreases again. The strength decreases correspondingly with increasing water-binder ratio and limestone powder content. The strength of pastes incorporating fly ash or silica fume declines more slowly than that of pastes containing only limestone powder.
Figures 3 and 4 show that the early strength of the pastes decreases in the flowing acetic acid solution because calcium hydroxide and calcium carbonate react with the acid and the product, calcium acetate, leaches out. Then, continued hydration of unhydrated cement particles causes the pastes' strength to recover slightly. Finally, as the acetic acid attack proceeds further into the interior, calcium hydroxide is consumed and calcium carbonate is dissolved; other hydration products then decompose and, as a result, the strength decreases again.
Figures 5 and 6 show that the early strength of the pastes decreases in the flowing sulfuric acid solution, possibly because calcium hydroxide and calcium carbonate in the pastes react with the acid and the products leach out. Then, continued hydration of unhydrated cement particles and the swelling effect of the generated gypsum cause the paste strength to recover slightly. Finally, as the sulfuric acid attack proceeds further into the interior, the generated gypsum destroys the pastes' structure and consequently the strength decreases again. In comparison with the flowing acetic acid solution, the strength of pastes soaked in the flowing sulfuric acid solution declines more slowly, possibly due to the suppressing effect of the flowing liquid on gypsum crystallization, which is commonly the main cause of expansion of paste under sulfuric acid attack. As indicated in Figure 2, the turbid sulfuric acid solution, unlike the clear acetic acid solution, suggests that gypsum was dissolved out; the pictures support this explanation.
As for the influence of the water-binder ratio, pastes with a high W/B value are more easily damaged because of their loose microstructure and high porosity. As for the effect of limestone powder, the higher its content, the more the paste strength decreases, because calcium carbonate, the dominant constituent of limestone powder, reacts with the acid and the reaction products leach out. When an additional cementing material, fly ash or silica fume, is included, the descending tendency of the strength is less pronounced because the content of limestone powder decreases. Moreover, both fly ash and silica fume react with calcium hydroxide, and the resulting hydration products help enhance the strength. Consequently, the content of calcium hydroxide decreases, and so does its degree of reaction with the sulfuric acid.
Microstructure.
Figure 7 shows the XRD patterns of hydrates of cement pastes with a W/B value of 0.5 soaked in the flowing acetic acid solution at 28 d. The higher the content of limestone powder, the higher the calcium carbonate peak; the calcium carbonate peak is much higher than the calcium hydroxide peak, and the calcium hydroxide peak is very low because calcium hydroxide is consumed by its reaction with the acetic acid. Figure 8 shows the results of XRD analysis of hydrates of cement pastes with a W/B value of 0.5 soaked in the flowing sulfuric acid solution at 28 d. Calcium carbonate, calcium hydroxide, gypsum and ettringite peaks are present. The calcium carbonate peak is the highest, and the higher the content of limestone powder, the higher the calcium carbonate peak. The calcium hydroxide peak is very low because calcium hydroxide is consumed by its reaction with the sulfuric acid, which leads to the generation of gypsum. The gypsum can then react with C-A-H to generate ettringite. The expansion of the pastes caused by gypsum and ettringite accounts for the destruction of the microstructure and the loss of strength. The figure also indicates that the content of gypsum is relatively low because of the suppressing effect of the flowing solution on gypsum crystallization. This is consistent with the explanation of the strength test results.
Fragments broken off the specimens and washed with acetone were examined by SEM. Figure 9 shows the SEM pictures of sample H-3 soaked in the flowing acetic acid solution at 28 d. C-S-H gel, ettringite and calcium carbonate crystals can be found, but calcium hydroxide crystals are not found because they were consumed in their reactions with the acid. This is consistent with the results of the XRD analysis.
Figure 10 shows the SEM pictures of sample H-3 soaked in the flowing sulfuric acid solution at 28 d. C-S-H gel, calcium hydroxide and ettringite crystals can be found, but little gypsum is present because of the suppressing effect of the flowing solution on gypsum crystallization, which is consistent with the results of the XRD analysis.
Conclusions
(1) The compressive strength of the cement pastes first decreases, then increases slightly, and finally decreases again under flowing acid attack. The strength decreases correspondingly with increasing water-binder ratio and limestone powder content. The strength of pastes mixed with fly ash or silica fume decreases more slowly than that of pastes containing only limestone powder.
(2) Cement pastes with a high W/B value are more easily damaged because of their loose structure and high porosity. Calcium carbonate in the limestone powder reacts with the acid, and the higher its content, the faster the strength decreases. When fly ash or silica fume is added, both react with calcium hydroxide to generate C-S-H, which not only enhances the pastes' strength but also weakens the degree of reaction between the hydration products and the acid.
(3) Microstructural analysis shows that under flowing acetic acid attack, destruction of the microstructure and loss of strength are caused by the reactions between calcium hydroxide and the acid, which make the deterioration proceed from the exterior to the interior. Under flowing sulfuric acid attack, crystallization of gypsum and ettringite leads to expansion of the pastes; the flowing solution environment inhibits the crystallization of gypsum and therefore delays the attack process.
Figure 1: Particle size distributions of cement, limestone powder and fly ash.
Figure 2: Devices of simulating flowing (a) acetic acid solution and (b) sulfuric acid solution.
Figure 3: Compressive strength of pastes with W/B = 0.3 soaked in flowing acetic acid solution over different times.
Figure 5: Compressive strength of pastes with W/B = 0.5 soaked in flowing sulfuric acid solution over different times.
Figure 7: XRD patterns of hydrates of cement pastes with W/B = 0.5 soaked in flowing acetic acid solution at 28 d.
Figure 9: SEM pictures of sample H-3 soaked in flowing acetic acid solution at 28 d.
Figure 10: SEM pictures of sample H-3 soaked in flowing sulfuric acid solution at 28 d.
Table 1: Mix proportion of the cement pastes.
"Materials Science"
] |
Rotation Dynamics of Star Block Copolymers under Shear Flow
Star block-copolymers (SBCs) are macromolecules formed by a number of diblock copolymers anchored to a common central core, with the internal monomers being solvophilic and the end monomers solvophobic. Recent studies have demonstrated that SBCs constitute self-assembling building blocks with specific softness, functionalization, shape and flexibility. Depending on different physical and chemical parameters, the SBCs can behave as flexible patchy particles. In this paper, we study the rotational dynamics of isolated SBCs using a hybrid mesoscale simulation technique. We compare three different approaches to analyze the dynamics: the laboratory frame, the non-inertial Eckart frame, and a geometrical approximation relating the conformation of the SBC to the velocity profile of the solvent. We find that the geometrical approach is adequate when dealing with very soft systems, while in the opposite extreme the dynamics is best explained using the laboratory frame. The Eckart frame, on the other hand, is found to be very general and to reproduce both extreme cases well. We also compare the rotational frequency and the kinetic energy with the definitions of the angular momentum and inertia tensor from recent publications.
Introduction
Polymer solutions play an important role from both the fundamental and the applied points of view. The addition of a small amount of polymer to a liquid can be used to tune the stability and rheological properties of many commercial systems such as paints, pharmaceutical products, food and oils. As a consequence of polymer flexibility, a flow field can provoke large conformational changes, which in turn influence the flow field itself. Understanding the coupling between the conformational and dynamical properties of isolated polymers immersed in a flow field is therefore an important first step toward elucidating the rheological behavior of (dilute and semi-dilute) polymer solutions [1,2]. To date, there has been a considerable amount of work on the response of flexible polymers with different architectures (e.g., linear, ring, hyperbranched and star polymers) to shear stress, which has revealed generic and specific properties of such systems. On top of experimental techniques, the development of simulation methods that efficiently couple the solvent particles and the monomers has uncovered a wide spectrum of behaviors regarding the average deformation and the orientation as a function of the shear rate, as well as multiple dynamic responses [3][4][5][6][7][8][9]. The latter encompass stretching and recoil, tumbling, tank-treading, rupture and collapse of polymers and ultimately determine the (complex) viscoelastic response of dilute bulk phases.
In this work, we consider the dynamics of isolated star block copolymers (SBCs), which can be exploited as versatile building blocks as they self-assemble into structures with one or multiple clusters of their solvophobic segments, i.e., they behave as self-associating patchy particles, featuring tunable softness, functionalization, shape and flexibility [10,11]. Recently, the structural properties of isolated SBCs under (linear) shear flow were analyzed by means of particle-based multiscale simulations for a wide set of parameters, which include the functionality of the star, the amphiphilicity degree, the solvent quality and the shear rate. In particular, the formation of attractive patches on the SBC corona as a function of the shear rate was analyzed. Three mechanisms of patch reorganization under shear were identified, which determine the dependence of the patch numbers and orientations on the shear rate, namely free arms joining existing patches, the fusion of medium-sized patches into bigger ones and the fission of large patches into two smaller ones at high shear rates [12].
Along with these studies, the dynamic behavior of single SBCs must be considered to gain some insights into the influence of these patch rearrangements on the rheology of dilute suspensions. Motivated by a very recent work on the rotational dynamics of star polymers in shear flow [13,14], this work focuses on the dynamics of sheared SBCs analyzed by means of the so-called Eckart frame, which allows one to separate pure rotational and vibrational motions. We show that SBCs display a richer structural and dynamical behavior than athermal star polymers in a shear flow, and therefore, they are also interesting candidates to tune the viscoelastic properties of complex fluids. The rest of the manuscript is organized as follows: In Section 2, we present the model and the employed tools. In Section 3, the simulation results are displayed, and the ensuing dynamic properties are discussed. Finally, in Section 4, we summarize and draw our conclusions.
Coarse-Grained Model for the Star Block Copolymer
As mentioned above, the dynamics of a single SBC immersed in a sheared (Newtonian) solvent is studied by means of a hybrid multiparticle collision dynamics-molecular dynamics (MPCD-MD) method, as described in detail in [11,12]. Briefly, the star polymer and the solvent particles are modeled at a coarse-grained level. Each arm of the SBC is represented as a bead-spring chain having N_A inner and N_B outer monomers, thereby defining the degree of polymerization N_pol = N_A + N_B and the amphiphilicity α = N_B/N_pol. The monomers are represented as soft spheres of diameter σ and mass M interacting through pair potentials V_AA(r) = V_AB(r) = V(r; 0) and V_BB(r) = V(r; λ), built from V_0(r) = 4ε[(σ/r)^48 − (σ/r)^24] with cutoff r_c = 2^(1/24)σ, where r is the monomer-monomer distance and λ is an attraction-coupling constant. The latter allows us to tune the solvent quality for the B-monomers, as explained in [11]. In particular, increasing the value of λ enhances the attraction between the B-monomers. Sufficiently large values of this parameter, λ > 0.92, are equivalent to considering that a homopolymer made of B-monomers is below its θ-temperature. The bonding between connected monomers is introduced by a FENE potential, V_FENE(r) = −(K R_0²/2) ln[1 − (r/R_0)²], where K = 30(ε/σ²) and R_0 = 1.5σ.
Multiparticle Collision Dynamics and Molecular Dynamics
Multi-particle collision dynamics (MPCD) was employed to simulate the solvent mesoscopically [15,16]. The solvent is assumed to be composed of N_s point-like particles of mass m, whose dynamics follows two steps: a streaming step, in which the solvent particles move ballistically, and a collision step, in which the solvent particles exchange linear momentum. To do that, particles are sorted into cubic cells with length a, and their relative velocities with respect to the cell center of mass are rotated by an angle χ around a random axis [6,15,16]. The number of solvent particles per MPCD collision cell is ρ = 5, and their mass is m = M/ρ, serving as the unit of mass of the simulation; a convenient timescale is defined as τ = (mσ²/ε)^(1/2). In what follows, we choose m = σ = ε = 1, setting thereby the units of mass, length and energy, respectively; accordingly, τ serves as the unit of time. For the temperature T, we choose the value k_B T = ε/2, where k_B is the Boltzmann constant. The remaining MPCD parameters were set as follows: the time between collisions is Δt_mpcd = 0.1τ, the rotation angle is χ = 130°, and the cell size is a = σ, making the presence of two monomers in the same collision cell very unlikely. Lees-Edwards boundary conditions were used to generate a shear velocity field v(x_2) = γ̇ x_2 ê_1, characterized by the shear rate γ̇, as schematically depicted in Figure 1. In the MD section of the hybrid technique, the time evolution of the monomers follows the Newtonian equations of motion, which are integrated by means of the velocity-Verlet scheme [17] with an integration time step Δt_md = 10⁻³ τ. The coupling between the monomers of the SBC and the solvent particles is achieved during the collision step, in which the former are included as point particles in the evaluation of the center-of-mass velocity of each cell, and their velocities are also randomly rotated. This interaction is strong enough to keep the monomers at the desired temperature once a thermostat for the solvent particles has been introduced, which in the present case corresponds to a cell-level Maxwell-Boltzmann scaling [18]. During the collision step, mass, momentum and energy are conserved, leading to correlations among the particles and giving rise to hydrodynamic interactions. As a dimensionless measure of the shear rate, we consider the Weissenberg number Wi, which is the product of the shear rate with the longest relaxation time of the polymer. For the latter, we take the longest Zimm relaxation time τ_Z of a polymer with N_pol monomers, given by the expression in [6,19], where η_s is the (MPCD) solvent viscosity and ν = 3/5 is the Flory exponent for self-avoiding chains. We obtain τ_Z ≈ 1.3 × 10⁴ τ for the specific choices of the MPCD collision parameters and the value N_pol = 40 employed here. Although we neglect any dependence of the relaxation time on the star functionality f and the attraction strength λ along the arms, the results justify a posteriori the choice of a common relaxation time, in the sense that we are able to obtain results for the shape parameters that mostly collapse onto one another when plotted against Wi = γ̇ τ_Z.
We performed a total of 14 independent runs with different initial conditions for each set of parameters {f, α, λ} investigated, covering a broad range of Wi, from the linear (Wi ≲ 1) all the way to the strongly nonlinear (Wi ∼ 10³) regime. We focus on the following three particular sets of parameters: {f, α, λ} = {12, 0.3, 1.0} (Case 1), {15, 0.5, 1.1} (Case 2) and {18, 0.7, 1.1} (Case 3). According to our previous study, these parameters represent the typical trends found in regard to the patchiness of the SBCs, namely: no patches are formed; several patches are formed, each with a small population; and few (one or two) bulky patches are formed [12]. For each run, a preparation cycle of 5 × 10⁶ MD steps was executed first, which was long enough for the SBC to reach its stationary state, and then a production cycle of 1.5 × 10⁷ MD steps took place. Depending on the shear rate, the simulation box has dimensions 60σ ≤ D_1 ≤ 110σ and D_2 = D_3 = 60σ. Configuration data were saved every N_save = 2 × 10⁴ MD steps during the production cycle. As this work involves various physical systems, looked at from various frames of reference and at different levels of approximation as regards their rotational dynamics, we use in what follows a number of abbreviations, whose meaning is summarized in Table 1 below.
Rotational Dynamics
Soft colloids and polymers under shear flow deform and undergo a succession of complex motion patterns, such as tumbling and tank-treading, which are hard to decouple from one another and analyze quantitatively. Recent studies aimed at a better understanding of the complex dynamics of (athermal) star polymers in shear flow have demonstrated that Eckart's formalism allows one to separate correctly the different characteristic motions of the polymer, i.e., pure rotation, vibration with no-angular momentum and vibrational angular momentum [13,14]. In the following, a brief description of this formalism is given, which will be subsequently employed to analyze our simulation results.
Laboratory Frame
Here, the frame of reference is fixed in space, and it is customarily and conveniently chosen in such a way that the first axis lies along the flow direction, the second along the gradient direction and the third along the vorticity direction, as shown in Figure 1. Taking r_k and ṙ_k as the position and the velocity of the k-th monomer in the laboratory frame of reference, the total angular momentum of a star polymer with respect to its center of mass is, by definition,

L = M Σ_k Δr_k × Δṙ_k,

with k = 1, ..., N_mon = f N_pol + 1, N_mon the total number of monomers, Δr_k = r_k − r_cm and Δṙ_k = ṙ_k − ṙ_cm. Here, r_cm and ṙ_cm are, respectively, the position and the velocity of the center of mass, i.e., r_cm = N_mon⁻¹ Σ_k r_k and ṙ_cm = N_mon⁻¹ Σ_k ṙ_k. The time evolution of the k-th monomer position can be decomposed as [13,14,20,21]

ṙ_k = ṙ_cm + ω × Δr_k + ṽ_k,

where ṽ_k denotes a purely vibrational motion, which is angular-momentum-free in the laboratory frame, i.e., ṽ_k and Δr_k are parallel (cf. Equation (4)). The angular frequency ω can be expressed as ω = J⁻¹ L, with the components of the moment of inertia tensor J defined as

J_µν = M Σ_k (|Δr_k|² δ_µν − Δr_k,µ Δr_k,ν),

with δ_µν the Kronecker delta and Δr_k,µ the µ-th component of the position vector of the k-th monomer.
In the case of rigid-body motion, ṽ_k = 0 and ω coincides with the rotational angular velocity. The full kinetic energy E_kin of the sheared polymer results from Equation (6) and reads

E_kin = (M_s/2) ṙ_cm² + (1/2) ω · J ω + (M/2) Σ_k ṽ_k²,

where M_s = N_mon M is the total mass of the polymer. The three terms on the r.h.s. of Equation (9) represent the translational, rotational and vibrational contributions to the kinetic energy, respectively. We emphasize, though, that the velocity contribution ṽ_k in the motion of a monomer is not the only vibrational contribution, but just the one that does not contribute to the (instantaneous) angular momentum; there are, in general, additional vibrational contributions included in ω. Therefore, ω is the apparent angular velocity, and it is not possible to separate rotation from vibrational motion carrying angular momentum within the lab frame.
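A minimal numpy sketch of the laboratory-frame quantities defined above (angular momentum, moment of inertia tensor and apparent angular velocity ω = J⁻¹L) is given below; the monomer positions and velocities are random placeholders standing in for a single simulation snapshot.

```python
import numpy as np

rng = np.random.default_rng(2)
M = 1.0                                   # monomer mass (simulation units)
N_mon = 12 * 40 + 1                       # f * N_pol + 1 monomers (Case 1 sizes)
r = rng.standard_normal((N_mon, 3)) * 3.0 # placeholder positions
v = rng.standard_normal((N_mon, 3)) * 0.5 # placeholder velocities

# Center-of-mass frame coordinates.
dr = r - r.mean(axis=0)
dv = v - v.mean(axis=0)

# Total angular momentum about the center of mass.
L = M * np.cross(dr, dv).sum(axis=0)

# Moment of inertia tensor J_mu_nu = M * sum_k (|dr_k|^2 delta_mu_nu - dr_k,mu dr_k,nu).
J = M * ((dr ** 2).sum() * np.eye(3) - dr.T @ dr)

# Apparent angular velocity and rotational energy in the laboratory frame.
omega = np.linalg.solve(J, L)
E_rot = 0.5 * omega @ J @ omega
print("omega =", omega, " E_rot =", E_rot)
```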
Eckart Frame
Eckart's formalism makes use of a non-inertial frame, which co-rotates with the polymer at angular velocity Ω (see Equation (15) below) [22,23]. The first step in building up the Eckart frame is to choose one initial configuration of the SBC as a reference, accompanied by an initial frame of reference spanned by the basis vectors {f_1(0), f_2(0), f_3(0)}. The origin of this frame is located at the center of mass of the chosen reference configuration of the polymer, and as a matter of convenience, the three axes {f_1(0), f_2(0), f_3(0)} also coincide with the orientation of the laboratory frame. Due to the choice of the origin, in this system of coordinates the position vectors of the monomers at time t = 0, {a_k = Δr_k(0); k = 1, 2, ..., N_mon}, satisfy the relation Σ_k a_k = 0. This reference configuration is frozen and co-rotates with the Eckart frame of reference, the latter evolving with time as explained below. In the second step of the process, the unit base (column) vectors {f_1(t), f_2(t), f_3(t)} of the instantaneous Eckart frame are evaluated. To achieve that, the vectors F_µ (µ = 1, 2, 3) are introduced, which are completely defined in terms of the instantaneous positions Δr_k(t) and the Cartesian components a_k,µ of the reference position vectors a_k of each monomer. In what follows, we drop the explicit time dependence from the notation of the various vectors. The right-handed triad of unit vectors {f_1, f_2, f_3} is determined from the F_µ together with the elements of the symmetric (Gram) matrix F_µν = F_µ · F_ν. In this way, the position vector c_k of the k-th monomer in the co-rotating reference configuration, decomposed onto the unit vectors of the rotating Eckart frame of reference, is given by c_k = Σ_µ a_k,µ f_µ, the coefficients a_k,µ being fixed, time-independent quantities set by the reference configuration and the triad {f_1, f_2, f_3} depending on time as explained above. In this way, the c_k are constant vectors when looked at from within the rotating Eckart frame and describe the original, rigid configuration.
Using the initial configuration of the SBC in the production run as the (fixed) reference configuration for Eckart's frame, Figures 2-4 show its time evolution as seen in the laboratory frame for Case 1 and different shear rates. For Wi = 10, the reference configuration is seen in the lab frame as a rigid body rotating mainly around the vorticity axis. As the shear rate increases, the rotation takes place faster and around all three axes in the lab frame, as illustrated by the cases Wi = 100 and Wi = 400. For the latter, Figures 3 and 4 show a significant change of the Eckart frame orientation with respect to the lab frame. The polymer is expected to have a relatively high rotation frequency around the vorticity axis in the lab frame, which is found in the Eckart frame as well (see Appendix A). The angular velocity Ω of the Eckart coordinate system can be determined by starting from the time derivative of the Eckart condition [14,20,21]. Taking into account that the unit vectors of the Eckart frame evolve in time like rotations of a rigid body, ḟ_µ = Ω × f_µ (µ = 1, 2, 3), the Eckart angular velocity is expressed as Ω = J⁻¹ L, where the 'inertia tensor' J and the 'angular momentum vector' L are built from the co-rotating reference vectors c_k and the instantaneous coordinates Δr_k (Equations (16) and (17)). The above equations provide an expression for the (instantaneous) angular velocity Ω of rotation of the Eckart frame. Note that in the case of a truly rigid body, Δr_k = c_k at all times, and thus J and L become a true inertia tensor and angular momentum vector, respectively. In this frame, the kinetic energy of the polymer can be written as in Equation (18) (see Appendix B), where Ĵ is the inertia tensor using the Eckart variables (see Equation (20) below) and u_k represents the angular contribution of the vibrational motion, i.e., the part of the k-th monomer's vibrational motion coupled to the rotations if the angular velocity is calculated by the (lab frame) standard approach. The last four terms of Equation (18) represent the kinetic energy contributions from, respectively, pure rotation, vibrations without angular momentum, vibrations with angular momentum and the Coriolis coupling (see Table 2). As can be seen, application of the Eckart frame formalism allows one to distinguish between vibrations without and with an angular momentum contribution, the latter being displacements with respect to the pure rotation of the reference configuration [14].
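For concreteness, a sketch of this construction is given below, following the standard Eckart recipe: reference vectors a_k, Gram-matrix orthonormalization of the Eckart vectors, co-rotating positions c_k, and Ω from the Eckart angular momentum and pseudo-inertia tensor. The explicit formulas used here are the textbook ones and the input coordinates are placeholders; the sketch illustrates the procedure and is not claimed to reproduce the authors' implementation in every detail.

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(3)
M, N_mon = 1.0, 481
a = rng.standard_normal((N_mon, 3))              # reference configuration (c.o.m. frame)
a -= a.mean(axis=0)
dr = a + 0.1 * rng.standard_normal((N_mon, 3))   # instantaneous positions (placeholder)
dr -= dr.mean(axis=0)
dv = 0.2 * rng.standard_normal((N_mon, 3))       # instantaneous velocities (placeholder)
dv -= dv.mean(axis=0)

# Eckart vectors F_mu = M * sum_k a_k,mu dr_k and their Gram matrix.
F_vec = M * (a.T @ dr)            # rows are F_1, F_2, F_3
gram = F_vec @ F_vec.T

# Orthonormal Eckart triad f_mu = sum_nu (gram^{-1/2})_mu_nu F_nu.
f = np.real(np.linalg.inv(sqrtm(gram))) @ F_vec  # rows are f_1, f_2, f_3

# Co-rotating reference positions c_k = sum_mu a_k,mu f_mu.
c = a @ f

# Eckart angular momentum and (generally non-symmetric) pseudo-inertia tensor,
# obtained from the time derivative of the Eckart condition.
L_eck = M * np.cross(c, dv).sum(axis=0)
J_eck = M * ((c * dr).sum() * np.eye(3) - c.T @ dr)
Omega = np.linalg.solve(J_eck, L_eck)
print("Eckart angular velocity:", Omega)
```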
Hybrid Frame
As mentioned before, the introduction of the Eckart frame allows one to obtain an optimal separation of rotation and vibration. This feature has been employed in the formulation of symplectic integrators for MD simulations, which are applicable to molecules having one equilibrium configuration and which allow the evaluation of internal high-frequency vibrations [24][25][26][27]. Despite its success in describing the vibrational dynamics of small molecules, it is interesting to note that the definition of the inertia tensor for Eckart's frame derived from the Eckart condition and given by Equation (16) does not meet, in general, the symmetry condition Δr_k,µ c_k,ν = Δr_k,ν c_k,µ.
To fulfil this last condition, we further explored a hybrid frame, in which we combine a proper, rigid-body inertia tensor Ĵ [22,23] with the deformable-body angular momentum L resolved into its Eckart-frame components, to define a new angular velocity W. In particular, we define Ĵ (Equation (20)) and the angular momentum L̂, obtained by a transformation between the laboratory and Eckart frames [23]. The angular velocity of the hybrid system is then given by W = Ĵ⁻¹ L̂. In analogy with the expressions in the laboratory and Eckart frames, we also consider here a rotational kinetic energy associated with W.
Geometrical Approach
A last, complementary approach to estimate the rotational frequency of soft colloids under shear is the so-called geometrical approximation (GA). This is based on two assumptions about the behavior of the polymers in linear shear flow [28,29]. First, it is assumed that the velocity of each monomer is entirely defined by the local, undisturbed velocity profile of the flow, i.e., Δṙ_k = γ̇ Δr_k,2 ê_1. Under this assumption, the instantaneous angular momentum of the polymer is given by L = M N_mon γ̇ (G_23 ê_2 − G_22 ê_3), where G_µν = N_mon⁻¹ Σ_k Δr_k,µ Δr_k,ν denotes the µν-component of the gyration tensor, which measures the overall conformation of the SBC. A long-time average is then performed in Equation (25), whereupon the non-diagonal element of the gyration tensor disappears and the average angular momentum has a single component, along the vorticity axis. Finally, it is assumed that the rotation of the SBC takes place mainly around the vorticity axis ê_3, i.e., ω_1 = ω_2 ≈ 0. Within these approximations, ω_3 = ω_G has a constant value, and using Equation (7) it results in ω_G = −γ̇ ⟨G_22⟩/(⟨G_11⟩ + ⟨G_22⟩). Though clear from the construction of the GA, it is worth emphasizing once again that the so-obtained estimate for the angular frequency results from averaging the polymer motion over very long time intervals, while at the same time making the a priori assumption that the instantaneous velocities of the monomers only have a component along the shear direction, dictated by the undistorted solvent velocity profile; see Equation (24). The final result, Equation (26), corresponds to the tumbling (rotation) frequency of a rigid body whose shape is similar to the average shape of the SBC and which also has an angular momentum equal to the value given by the mean flow [13,14]. At the same time, however, due to Equation (24), the estimate ω_G is also valid for a tank-treading (TT) type of motion, in which the SBC does not rotate as a whole, but rather the individual arms tank-tread around the geometrical star center, which remains at rest. This is a different, prototypical type of motion, for which the overall shape of the star remains fixed in time, i.e., no tumbling of the soft colloid as a whole takes place.
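A short sketch of the geometrical estimate is given below: the gyration tensor is accumulated over snapshots and ω_G is obtained from its averaged diagonal components; the snapshot configurations and the shear rate are placeholders.

```python
import numpy as np

rng = np.random.default_rng(4)
gamma_dot = 1.0e-3          # shear rate (placeholder, simulation units)
N_mon, n_frames = 481, 200

G_sum = np.zeros((3, 3))
for _ in range(n_frames):
    # Placeholder configuration: an ellipsoidal cloud standing in for a snapshot.
    dr = rng.standard_normal((N_mon, 3)) * np.array([4.0, 2.0, 2.5])
    dr -= dr.mean(axis=0)
    G_sum += dr.T @ dr / N_mon          # gyration tensor G_mu_nu of this frame

G_avg = G_sum / n_frames

# Geometrical approximation: rotation mainly about the vorticity axis,
# omega_G = -gamma_dot * <G_22> / (<G_11> + <G_22>).
omega_G = -gamma_dot * G_avg[1, 1] / (G_avg[0, 0] + G_avg[1, 1])
print("omega_G =", omega_G)
```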
Global Conformation and Dynamics
As flexible polymers generically do in shear flow, the SBCs are stretched along the shear direction, compressed along the orthogonal (gradient and vorticity) directions, and exhibit a preferred (average) orientation with respect to the flow. These global features are quantified by the average values of the gyration tensor G and the orientational angle χ_G, both of which can be measured experimentally. The latter measures the flow-induced alignment of the polymer and is defined as the angle formed between the eigenvector ĝ_1 associated with the largest eigenvalue of G and the flow direction ê_1; it can be evaluated as tan(2χ_G) = 2⟨G_12⟩/(⟨G_11⟩ − ⟨G_22⟩), defining in this way the orientational resistance m_G = Wi tan(2χ_G) of the stars in shear flow.
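A corresponding sketch for the alignment angle and the orientational resistance is shown below; the averaged gyration tensor and the Weissenberg number are placeholder values, and tan(2χ_G) = 2⟨G_12⟩/(⟨G_11⟩ − ⟨G_22⟩) with m_G = Wi·tan(2χ_G) are the standard definitions assumed here.

```python
import numpy as np

def alignment_angle(G_avg):
    """Flow-alignment angle chi_G from the time-averaged gyration tensor."""
    return 0.5 * np.arctan2(2.0 * G_avg[0, 1], G_avg[0, 0] - G_avg[1, 1])

# Placeholder averaged gyration tensor of a stretched, tilted star (flow = axis 0).
G_avg = np.array([[16.0, 3.0, 0.0],
                  [ 3.0, 4.0, 0.0],
                  [ 0.0, 0.0, 5.0]])

Wi = 50.0                                  # Weissenberg number (placeholder)
chi_G = alignment_angle(G_avg)
m_G = Wi * np.tan(2.0 * chi_G)
print(f"chi_G = {np.degrees(chi_G):.1f} deg, m_G = {m_G:.2f}")
```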
At low values of Wi, the SBCs are hardly distorted, whereas for Wi ≳ 10 they become increasingly anisotropic, expanding in the flow direction and shrinking most strongly in the shear direction and, to a lesser extent, along the vorticity axis, as demonstrated by the diagonal components of the gyration tensor in Figure 5. Similarly, Figure 6 displays the average alignment angle as a function of the shear rate. At low shear rates (Wi < 1), the scaling tan(2χ_G) ∼ Wi^(−0.83) is found, while for Wi > 10 it behaves as tan(2χ_G) ∼ Wi^(−0.3), which is in agreement with previously reported values [6]. The overall (equilibrium) shape of an SBC depends on the number of patches formed and their compactness, which in turn depend on f, N_pol, α and λ. Depending on the values of these parameters, three general cases can be recognized. At low α and λ (α < 0.3 and λ < 1.0), the star block copolymers behave very similarly to athermal stars (α = 0), with no formation of patches or only very weak, breakable ones (Case 1). In the opposite limit, at high α and λ (α ≳ 0.6 and λ ≳ 1.1), the macromolecule acquires cylindrical symmetry around its principal axis, since it self-assembles into dumbbell-like structures with one or two massive patches (Case 3). At intermediate values of α and λ, the SBCs form a number of patches that can break up and/or merge as a consequence of shear (Case 2) [10][11][12]. These three tendencies can also be observed from the dynamical point of view, as displayed in Figure 7, where characteristic snapshots are shown to help visualize the time evolution of the SBCs under shear. As can be seen there, for low amphiphilicity and good solvent, the SBC behaves in a similar way to athermal stars, and the arms perform tank-treading-like (TT) motions. As the contribution of the attractive interaction increases, patches begin to form and TT rotation is also found, but this time the motion is performed simultaneously by all the arms forming the cluster. Finally, for high α and λ, the SBC motion closely resembles that of a rigid dumbbell. We will explore, in what follows, the ways in which these statements, based on impressions from simulation snapshots, acquire quantitative character through the comparison of characteristic quantities among different reference frames and approximations.
Reference Configuration Update
In the original Eckart formalism, the rigid reference configuration of (small) molecules is assumed to be the equilibrium one (all forces on all monomers vanishing), and its dynamics is governed by the time evolution of the positions of the atoms forming the molecule, which are defined by the vectors c_k; see Equation (13). Since thermally fluctuating (star) polymers do not have such a rigid equilibrium configuration, but rather a multitude of typical configurations related to the given conditions (temperature and shear rate), it is plausible that, as the simulation advances, the reference configuration needed to build up the Eckart frame must be updated at regularly spaced numbers of MD steps. The period of updating the reference configuration is denoted as t_Eckart, and it can be varied at will, from a very frequent update of the reference configuration that tries to follow the details of the particle motion to a rare one, for which the average, time-coarsened rotational dynamics of the molecule is captured.
In Figures 8-10, we compare the behavior of the different contributions to the kinetic energy (see Table 2) as a function of the Weissenberg number for different values of t_Eckart. For t_Eckart = 200 τ, the rotational energy grows very slowly with Wi (it is essentially constant), and it coincides with the value obtained in the laboratory frame. In this case, where the reference configuration is updated very frequently, the rotational frequencies ω and Ω in the LF and the EF are very similar, i.e., ω ≈ Ω and also Ĵ ≈ J, resulting in the approximate equality of the rotational energies in the two frames (Equation (28)). Related to this approximate equality is the vanishingly small value of the kinetic energy contribution T_u, which emerges as the sum of the angular-momentum-carrying contributions and the Coriolis coupling, Equation (29) (see Table 2).
The reason for the smallness of this term lies in the fact that the quantity u_k itself is small. Indeed, since u_k = ω × Δr_k − Ω × c_k, the proximity of the angular velocities and of the configurations (Δr_k ≅ c_k) implies the smallness of u_k and of both terms on the right-hand side of Equation (29) above. Another useful way to look at the quantity T_u is through the expression given in Appendix C (see also Table 2).
Evidently, T u is the difference in the rotational energies between the LF and EF, and its small value affirms the similarity of the two for frequent updates of the reference configuration in the Eckart frame.
Upon increasing the time interval between updates of the reference configuration, deviations between the LF and the EF appear in the strongly nonlinear regime, Wi > 10. The EF rotational energy grows much higher than its LF counterpart, signaling significant deviations between the (temporally coarse) EF angular velocity Ω and its LF counterpart ω. This phenomenon is consistently accompanied by an increase in the magnitude of T_u, as well as an increase in the magnitudes of the velocities u_k, leading to a growth of the angular-momentum-carrying vibrational parts of the energy. The second term on the right-hand side of Equation (29) is the Coriolis term E_C, which can be rewritten in the form of Equation (31), defining the partial terms E_C,1 and E_C,2 with the help of the vector ρ_k = Δr_k − c_k, Equation (A1). The behavior of each term of Equation (31) is shown in Figure 11 only for Case 1, as representative of all other cases as well. For t_Eckart = 200 τ, the Coriolis coupling is close to zero, but for t_Eckart = 400 τ, the Coriolis coupling is negative, and the contribution related to ρ_k, the second term on the right of Equation (31), dominates the behavior of the Coriolis coupling. Finally, the vibrational kinetic energy associated with the velocities carrying no angular momentum, E_vib = (M/2) Σ_k ṽ_k · ṽ_k, is very large, and its value is essentially independent of t_Eckart: the stars have a large number of breathing and fast oscillatory modes. Even for the case of short Eckart times, for which the quantities ρ_k and u_k are small, the quantities ρ̇_k = ṽ_k + u_k ≈ ṽ_k are significant and denote fast oscillations of the corresponding displacement variables.
Angular Momentum and Angular Frequency
We now proceed to our results regarding the angular momenta and frequencies of the SBC motions under shear flow. In Figure 12, we compare the component of the total angular momentum around the vorticity direction, L_3, in the laboratory frame from Equation (4) to the value evaluated through the geometric approximation, Equation (25). The velocity of the monomers for intermediate values of Wi is well approximated by Equation (24), i.e., it is mainly determined by the velocity of the fluid, at least in the average sense. Results for the angular frequency as a function of Wi, and the dependence of this function on the frame of reference as well as on the configuration update time t_Eckart, are shown in the right panels of Figures 13-15. According to our analysis, since the block copolymer stars under consideration are very soft systems, the frequency of rotation in the Eckart frame should be closer to the geometrical approach, and therefore one would expect the decay law for high Wi to be the same in both approximations for sufficiently long updating intervals t_Eckart. Our findings confirm that, indeed, the Eckart rotation frequencies lie closer to those from the geometric approximation, with the ones obtained by the laboratory frame analysis as a lower bound. As t_Eckart grows, the Eckart rotation frequencies move from the LF towards and beyond the GA curves, confirming that at coarse time scales the stars, at least for Cases 1 and 2, can be thought of as soft colloids with a tank-treading type of motion of the polymers in their interior. Case 3 seems exceptional, in the sense that the angular frequency evaluated in the EF appears to be almost independent of the parameter t_Eckart and always very close to the GA result. This is an indication that, contrary to the other two cases, these star block copolymers do not behave as tank-treading soft colloids. On the contrary, and consistent with their rather compact, elongated, dumbbell shape, they rotate similarly to rigid prolate ellipsoids under constant shear flow. In particular, the GA assumption of isolated monomers, each of which is carried through the solvent with the local velocity of the streaming solvent, is responsible for giving these molecules the character of rigid-like, stiff objects, as opposed to the very soft and flexible polymers of Case 1, for which associations among the end-monomers are rare and easily breakable. To emphasize the difference between Case 1 and Case 3, in Figure 16 we plot the angular frequencies for the two limiting frames, LF and GA, together with the EF result at the longest Eckart time, t_Eckart = 8000 τ. As can be seen, whereas for Case 1 the EF frequencies exceed both the LF and the GA ones, for Case 3 EF and GA are very close to one another. Differences in the power-law behavior for large values of Wi between the two cases can also be seen.
Conclusions
In this work, we analyzed the rotational dynamics of an isolated star-shaped block copolymer under shear flow for three representative sets of parameters: a very flexible system (Case 1), an intermediate flexible-rigid system (Case 2), and a rather rigid system (Case 3). Motivated by very recent studies on polymer dynamics [13,14], we explored the quantitative predictions emerging from the use of the Eckart frame formalism and compared them with those resulting from two other approaches (the laboratory frame and the geometrical approach). Additionally, we analyzed each term of the kinetic energy and the contributions of the various kinetic terms to it.
In addition to the standard Eckart formalism [22], extended to polymers under flow in [14,20,21], we suggested a "hybrid" definition of the rotation frequency. As a consequence, we obtained different analytical approximations for the total kinetic energy and for the numerical value of the rotational frequency of the SBC, which we express strictly in terms of the Eckart variables. It is important to note that both treatments correctly reproduce the laboratory frame results for small updating times t_Eckart (t_Eckart ∼ 200τ); however, for t_Eckart > 200τ, we found differences between the two treatments, particularly for the rotational energy term. For Wi < 10, we found that the rotational energy is independent of t_Eckart in the hybrid formulation, which is not the case for the rotational energy associated with the Eckart rotational frequency. Additionally, both the rotational energy and the frequency found in [14] are larger than the outcomes of the hybrid treatment.
The main result concerns the behavior of the associated rotational frequency Ω at high shear rates (Wi > 100) for the three different systems. We found that for all cases, Ω is bounded from below by the rotational frequency obtained in the lab frame (ω). For the third case, i.e., the self-assembled, dumbbell-like SBC, Ω ≈ ω_G for sufficiently large values of the updating time t_Eckart, demonstrating that the rotation frequency mainly corresponds to the tumbling motion of the SBC induced by the shear flow. On the other hand, for Case 1, which is closely related to athermal star polymers, the results obtained from the geometrical approximation are consistent with the Eckart frame only for long enough t_Eckart; therefore, the geometrical approximation only captures the average, time-coarsened tank-treading rotational frequency of the polymer. These results agree with those obtained for athermal stars with a smaller polymerization degree (N_pol = 6), for which it was found that the vibrational angular momentum has a larger contribution for softer polymers [14].
The dynamics of Case 2 is richer; although this system features four patches on average [12], the shear causes those patches to break and re-form over and over again. Therefore, here the rotational frequency results from the average of the tank-treading motion of free and clustered arms. It remains to establish a more detailed description of the statistics of the typical times between break-up and rejoining events, which would shed light on their influence on the rheology of semi-dilute suspensions, in particular on the expected shear-thinning behavior and how it can be tuned by the amphiphilicity and the solvent quality [1].
Conflicts of Interest:
The authors declare no conflict of interest. The founding sponsors had no role in the design of the study; in the collection, analyses or interpretation of data; in the writing of the manuscript; nor in the decision to publish the results.
Appendix A. Rotation Frequencies
In Figure A1, we show results for all components of the angular frequency. In general, we find that the angular velocity along the vorticity axis is the dominant component of the angular frequency vector, especially as Wi grows. The vorticity component ω_3 approaches a constant value at high values of Wi, or even shows a decrease there in Case 3.
"Physics"
] |
Activation of d-Tyrosine by Bacillus stearothermophilus Tyrosyl-tRNA Synthetase
Tyrosyl-tRNA synthetase (TyrRS) is able to catalyze the transfer of both L- and D-tyrosine to the 3′ end of tRNATyr. Activation of either stereoisomer by ATP results in formation of an enzyme-bound tyrosyl-adenylate intermediate and is accompanied by a blue shift in the intrinsic fluorescence of the protein. Single-turnover kinetics for the aminoacylation of tRNATyr by D-tyrosine were monitored using stopped-flow fluorescence spectroscopy. Bacillus stearothermophilus tyrosyl-tRNA synthetase binds D-tyrosine with an 8.5-fold lower affinity than that of L-tyrosine (K_d(D-Tyr) = 102 μM) and exhibits a 3-fold decrease in the forward rate constant for the activation reaction (k_3(D-Tyr) = 13 s⁻¹). Furthermore, as is the case for L-tyrosine, tyrosyl-tRNA synthetase exhibits "half-of-the-sites" reactivity with respect to the binding and activation of D-tyrosine. Surprisingly, pyrophosphate binds to the TyrRS·D-Tyr-AMP intermediate with a 14-fold higher affinity than it binds to the TyrRS·L-Tyr-AMP intermediate (K_d(PPi) = 0.043 for the TyrRS·D-Tyr-AMP·PPi complex). tRNATyr binds with a slightly (2.3-fold) lower affinity to the TyrRS·D-Tyr-AMP intermediate than it does to the TyrRS·L-Tyr-AMP intermediate. The observation that the K_d(Tyr) and k_3 values are similar for L- and D-tyrosine suggests that their side chains bind to tyrosyl-tRNA synthetase in similar orientations and that at least one of the carboxylate oxygen atoms in D-tyrosine is properly positioned for attack on the α-phosphate of ATP.
Tyrosyl-tRNA synthetase (TyrRS) catalyzes the transfer of tyrosine to the 3′ end of tRNATyr in a two-step reaction (Fig. 1). In the first step, tyrosine is activated by ATP, forming the enzyme-bound tyrosyl-adenylate intermediate. In the second step, the tyrosyl moiety is transferred to the 3′ end of tRNATyr. The observations that the two steps of the reaction can be run independently of each other, and that formation of the tyrosyl-adenylate intermediate is accompanied by a change in the intrinsic fluorescence of the enzyme, make it possible to use stopped-flow fluorescence to monitor single-turnover kinetics for each step in the reaction (2,3).
Tyrosyl-tRNA synthetase is composed of two identical 47-kDa subunits, each of which consists of a Rossmann fold domain containing the active site, a helical anticodon-binding domain, and a carboxyl-terminal domain that binds the variable loop in tRNATyr. Tyrosyl-tRNA synthetase exhibits an extreme form of negative cooperativity with respect to tyrosine binding, known as "half-of-the-sites" reactivity, in which the unliganded subunit is completely inactivated. This behavior has been rationalized by the observation that, in solution, tyrosyl-tRNA synthetase binds only one molecule of tRNATyr and therefore has no need for two functional active sites. Discrimination between L-tyrosine and other amino acids is achieved solely on the basis of binding affinity (i.e., there is no editing domain in tyrosyl-tRNA synthetase). Surprisingly, Calendar and Berg (4) observed that tyrosyl-tRNA synthetase is able to aminoacylate tRNA with either the L- or the D-stereoisomer of tyrosine, although activation is more efficient for L-tyrosine than it is for D-tyrosine. Hydrolysis of D-Tyr-tRNATyr is catalyzed by D-tyrosyl-tRNA deacylase in vivo, as tyrosyl-tRNA synthetase does not have an editing mechanism to prevent formation of D-Tyr-tRNATyr (5)(6)(7). Recognition of tRNATyr differs between bacteria and eukaryotes, with the bacterial and eukaryotic (or archaeal) tyrosyl-tRNA synthetases being unable to efficiently aminoacylate each other's tRNATyr substrates (8,9). This property has been exploited to introduce unnatural amino acids into proteins in both bacterial and eukaryotic systems. For example, Schultz and co-workers (10) have modified the tyrosyl-tRNA synthetase:tRNATyr pair from Methanococcus jannaschii so that it is completely orthogonal to that of Escherichia coli. By replacing the anticodon in tRNATyr with one that is complementary to a stop codon, they have been able to introduce unnatural amino acids at specific positions in recombinant proteins expressed in E. coli (10). The observation that tyrosyl-tRNA synthetase catalyzes the aminoacylation of tRNATyr by D-tyrosine raises the possibility that tyrosyl-tRNA synthetase variants designed to incorporate unnatural L-amino acids into proteins could be adapted to selectively incorporate the D-analogs of the unnatural amino acids. As a first step toward this goal, we have characterized the binding, activation, and transfer of D-tyrosine to tRNATyr by Bacillus stearothermophilus tyrosyl-tRNA synthetase using pre-steady-state kinetics.
Purification of Recombinant Tyrosyl-tRNA Synthetase-Purification of the wild-type tyrosyl-tRNA synthetase was performed as described previously (1, 11-16). Briefly, the purification consists of the following: 1) expression of tyrosyl-tRNA synthetase in E. coli Tg2 cells (17); 2) lysis of the E. coli cells and incubation of the extract at 56 °C for 40 min, followed by centrifugation to remove contaminating E. coli proteins; 3) dialysis of the remaining supernatant against three changes of 20 mM Tris buffer, pH 7.78, containing 1 mM EDTA, 5 mM β-mercaptoethanol, and 0.1 mM pyrophosphate to remove any tyrosyl-adenylate bound to the tyrosyl-tRNA synthetase, followed by dialysis against 20 mM BisTris, pH 6.0, 1 mM EDTA, 5 mM β-mercaptoethanol; and 4) high pressure liquid chromatography purification of the B. stearothermophilus tyrosyl-tRNA synthetase variants on a Source 15Q-Sepharose anion exchange column using a gradient from 20 mM BisTris, pH 6.0, to 20 mM BisTris, pH 6.0, 1 M NaCl. A peak eluting at 180 mM NaCl was collected and dialyzed overnight against 20 mM Tris, pH 7.78, 1 mM EDTA, 5 mM β-mercaptoethanol. This protein was then repurified on a Source 15Q-Sepharose column using a gradient from 20 mM Tris, pH 7.78, to 20 mM Tris, pH 7.78, 1 M NaCl. A peak eluting at 220 mM NaCl was collected and dialyzed overnight against 20 mM Tris, pH 7.78, 1 mM EDTA, 5 mM β-mercaptoethanol, and 10% glycerol (v/v). Typical yields were 20-30 mg/liter. Purified protein was stored at −70 °C. A single band corresponding to the B. stearothermophilus tyrosyl-tRNA synthetase was observed on SDS-PAGE. The concentration of the tyrosyl-tRNA synthetase was determined using a filter-based active-site titration assay, in which the incorporation of [14C]tyrosine into the enzyme-bound tyrosyl-adenylate intermediate is monitored (18). Comparison of the tyrosyl-tRNA synthetase concentration determined by active site titration with that determined by A280 (19) indicated that >95% of the purified protein was active tyrosyl-tRNA synthetase.
L-Amino Acid Oxidase Treatment of D-Tyrosine-Treatment of D-tyrosine with L-amino acid oxidase was performed as described by Calendar and Berg (4). Briefly, D-tyrosine (2.0 mM) was treated with L-amino acid oxidase (0.1 unit/ml) in 144 mM Tris buffer, pH 7.78, at 37 °C. The reaction was terminated by boiling for 2 min.
Purification of tRNA^Tyr-In vitro transcription of tRNA^Tyr was performed using the procedure described by Xin et al. (20). In vitro transcribed tRNA^Tyr was purified by a modification of the procedure described by Uter et al. (21). The in vitro reaction was loaded onto a 5-ml DE52 (Whatman) column and eluted with elution buffer (100 mM HEPES-KOH, pH 7.5, 12 mM MgCl2, 600 mM NaCl). Fractions containing tRNA^Tyr were pooled and desalted on a NAP-25 column. Fractions from the NAP-25 column that contained tRNA^Tyr were pooled and precipitated by adding 2 volumes of 100% ethanol and incubating at −20 °C overnight. After centrifugation, the tRNA pellet was dried and resuspended in 100 μl of 10 mM MgCl2. Annealing of tRNA^Tyr was achieved by incubation at 80 °C for 10 min, followed by slow cooling overnight. A nitrocellulose filter assay, in which the incorporation of [14C]tyrosine into the Tyr-tRNA^Tyr product is monitored, was used to determine the concentration of tRNA^Tyr (2).
Steady-state Fluorescence Spectra-Steady-state fluorescence emission measurements were performed at 25 °C using a TimeMaster fluorescence spectrometer (Photon Technology International). The intrinsic fluorescence of the B. stearothermophilus tyrosyl-tRNA synthetase was measured in the absence and presence of substrates (λex = 295 nm, λem = 300-400 nm) in 144 mM Tris, pH 7.78, 10 mM β-mercaptoethanol, 10 mM MgCl2, and 1 unit/ml inorganic pyrophosphatase (Buffer A). Specifically, aliquots of either MgATP or D- or L-tyrosine were added to the B. stearothermophilus enzyme (0.5 μM) and 1 unit/ml inorganic pyrophosphatase in either Buffer A alone, Buffer A + 200 μM L-tyrosine, Buffer A + 500 μM D-tyrosine, or Buffer A + 10 mM MgATP. After allowing the reaction to equilibrate for 2 min at 25 °C, the intrinsic fluorescence of the enzyme was determined by exciting the protein at 295 nm, and the relative intensities of the fluorescence emission spectra were determined by integrating the area under the emission curve from 320 to 400 nm. Fluorescence spectra for samples containing MgATP were corrected to eliminate inner filter effects. This was done by multiplying the spectra by a scalar determined from Equation 1, where y is the correction factor, and x is the percent decrease in the total fluorescence that is observed on the addition of an equivalent amount of MgATP to the unliganded enzyme.
Equilibrium Binding Studies-Equilibrium dialysis was performed using a modification of the method previously described by Fersht (22). Briefly, one chamber of each equilibrium dialysis cell contained 40 μM tyrosyl-tRNA synthetase and 1 unit/ml inorganic pyrophosphatase in buffer composed of 144 mM Tris, pH 7.78, 10 mM β-mercaptoethanol, and 10 mM MgCl2 (chamber A). The other chamber (chamber B) of each equilibrium dialysis cell contained concentrations of D-[14C]tyrosine ranging from 40 to 1300 μM in the same buffer. A dialysis membrane with a molecular mass cutoff of 10,000 daltons separated the chambers. After overnight dialysis at 4 °C, the amount of D-[14C]tyrosine present in each chamber was determined by removing 40-μl aliquots, adding each aliquot to 5 ml of Cytoscint scintillation mixture, and counting in a Beckman LS 6500 scintillation counter. The concentration of tyrosine in each chamber was calculated from the specific activity of the stock D-[14C]tyrosine solution.

Kinetic Procedures-All kinetic analyses were performed in 144 mM Tris buffer, pH 7.78, 10 mM β-mercaptoethanol, and 10 mM MgCl2 (Buffer B) at 25 °C unless otherwise indicated. ATP was added as the Mg2+ salt to maintain the free concentration of Mg2+ at 10 mM.
Tyrosine Activation-Formation of the enzyme-bound tyrosyl-adenylate complex is accompanied by a decrease in the intrinsic fluorescence of tyrosyl-tRNA synthetase (3). This allows the kinetics of the tyrosine activation reaction to be monitored using stopped-flow fluorescence methods (2, 3). An Applied Photophysics SX-18.MV stopped-flow spectrophotometer was used to monitor the decrease in the intrinsic fluorescence of B. stearothermophilus tyrosyl-tRNA synthetase on formation of the TyrRS·Tyr-AMP intermediate (λex = 295 nm, λem > 320 nm). The rate constant k3 (where k3 is the forward rate constant for the activation of tyrosine) and the equilibrium constant for the dissociation of tyrosine (K′d^Tyr) from the TyrRS·Tyr·ATP complex were calculated from the variation of k_obs with respect to tyrosine concentration in the presence of 10 mM ATP. Under these conditions ~70% of the enzyme has ATP bound to it. The equilibrium constants for the dissociation of tyrosine and ATP from the TyrRS·Tyr and TyrRS·ATP complexes were calculated in the same manner as K′d^Tyr, except that ATP and tyrosine concentrations were kept at 0.5 mM and 10 μM, respectively. Under these conditions, >90% of the enzyme was present as the unliganded enzyme. For determination of Kd^Tyr and K′d^Tyr, the concentration of tyrosine was varied from 10 to 1200 μM. For determination of Kd^ATP (where Kd^ATP is the equilibrium constant for the dissociation of ATP from the TyrRS·ATP complex), the concentration of ATP was varied from 0.5 to 50 mM.
In general, the experimental setup for determining the rate and equilibrium constants is as follows: syringe 1 contains 0.3-0.5 μM tyrosyl-tRNA synthetase, 1 unit/ml inorganic pyrophosphatase, and the substrate that is not being varied in Buffer B. Syringe 2 contains 1 unit/ml inorganic pyrophosphatase and the substrate whose dissociation constant is being determined in Buffer B. After mixing equal volumes from each syringe, the decrease in the intrinsic fluorescence of the protein was monitored. The addition of inorganic pyrophosphatase prevents the reverse reaction from occurring once the TyrRS·Tyr-AMP complex has formed.
Pyrophosphorolysis and Pyrophosphate Release-The kinetics for pyrophosphorolysis of the ATP moiety were determined by monitoring the reverse reaction for tyrosine activation. The conversion of TyrRS·Tyr-AMP + pyrophosphate to TyrRS + Tyr + ATP is accompanied by an increase in the intrinsic fluorescence of tyrosyl-tRNA synthetase (3). This allows stopped-flow fluorescence methods to be used to monitor the reverse rate constant (k−3) and the equilibrium constant for the dissociation of pyrophosphate from the TyrRS·Tyr-AMP·PPi complex (Kd^PPi). The TyrRS·Tyr-AMP intermediate was prepared by incubating tyrosyl-tRNA synthetase with saturating concentrations of MgATP and tyrosine and 1 unit/ml inorganic pyrophosphatase in Buffer B for 30 min at 25 °C. The TyrRS·Tyr-AMP complex was separated from free tyrosine and MgATP by gel filtration on a NAP-25 column (26). The experimental setup for monitoring the reverse reaction is similar to that described above for the activation of tyrosine, except that syringe 1 contains the TyrRS·Tyr-AMP complex (0.3 μM) in Buffer B and syringe 2 contains 0.1-0.8 mM disodium pyrophosphate.
Pre-steady-state Kinetic Measurement of tRNA^Tyr Aminoacylation-Formation of the TyrRS·[Tyr-tRNA^Tyr·AMP]‡ complex is accompanied by an increase in the intrinsic fluorescence of the protein (2). An Applied Photophysics SX-18.MV stopped-flow spectrophotometer was used to monitor changes in the intrinsic fluorescence of the TyrRS·Tyr-AMP intermediate on the addition of tRNA^Tyr as described by Xin et al. (20). Briefly, the TyrRS·Tyr-AMP intermediate is mixed with various concentrations of in vitro transcribed tRNA^Tyr in the stopped-flow spectrophotometer, and the change in the intrinsic fluorescence of the protein is monitored over time using an excitation wavelength of 295 nm and an emission filter with a cutoff above 320 nm.
Analysis of Kinetic Data-All kinetic data were fit to a single exponential floating end point equation using the Applied Photophysics stopped-flow software package to determine the observed rate constants (k_obs). The Kaleidagraph software was used to plot k_obs versus the substrate concentrations and to fit these plots to the hyperbolic function shown in Equation 4,

k_obs = k3[S]T/(Kd + [S]T)    (Eq. 4)

where k3 is the forward rate constant for the formation of tyrosyl-adenylate, [S]T is the total substrate concentration, and Kd is the dissociation constant for the substrate of interest (27). Goodness of fit was determined from the Eadie-Hofstee transformation of Equation 4 to Equation 5,

k_obs = k3 − Kd(k_obs/[S]T)    (Eq. 5)

where k_obs, k3, [S]T, and Kd are as described above.
The forward rate constant for the transfer of the tyrosyl moiety to tRNA^Tyr (k4) and the equilibrium constant for the dissociation of tRNA^Tyr from the TyrRS·Tyr-AMP·tRNA^Tyr complex (Kd^tRNA) are calculated using equations that are analogous to Equations 4 and 5.
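For readers who wish to reproduce this kind of analysis, the sketch below illustrates the fit of Equation 4 and the Eadie-Hofstee check of Equation 5 in Python with SciPy (used here in place of Kaleidagraph); the concentrations and rate constants are illustrative placeholders, not data from this study.

```python
# Minimal sketch of the hyperbolic fit (Eq. 4) and Eadie-Hofstee check (Eq. 5).
# All numerical values below are synthetic, for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def hyperbola(s_total, k3, kd):
    # Eq. 4: k_obs = k3*[S]T / (Kd + [S]T)
    return k3 * s_total / (kd + s_total)

# Hypothetical D-tyrosine concentrations (uM) and observed rate constants (1/s)
s = np.array([10, 25, 50, 100, 200, 400, 800, 1200], dtype=float)
k_obs = np.array([1.2, 2.7, 4.6, 6.8, 8.9, 10.5, 11.7, 12.2])

(k3_fit, kd_fit), cov = curve_fit(hyperbola, s, k_obs, p0=[13.0, 100.0])
k3_err, kd_err = np.sqrt(np.diag(cov))
print(f"k3 = {k3_fit:.1f} +/- {k3_err:.1f} 1/s, Kd = {kd_fit:.0f} +/- {kd_err:.0f} uM")

# Eadie-Hofstee check: k_obs versus k_obs/[S]T is linear with slope -Kd and
# intercept k3 if the hyperbolic model holds.
slope, intercept = np.polyfit(k_obs / s, k_obs, 1)
print(f"Eadie-Hofstee: slope = {slope:.0f} (~ -Kd), intercept = {intercept:.1f} (~ k3)")
```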
Calculation of Standard Free Energies of Binding-For the activation and transfer of D-tyrosine to tRNA^Tyr, the relative standard free energies for each state along the reaction pathway were calculated from the rate and dissociation constants using Equations 6-12, assuming standard states of 1 M for ATP, tyrosine, pyrophosphate, and tRNA^Tyr, where ΔG0 is the standard Gibbs free energy change; R is the gas constant; T is the absolute temperature; kB is the Boltzmann constant; h is Planck's constant; "·" and "-" represent noncovalent and covalent bonds, respectively; and ‡ denotes the transition state complex.
The activation free energy was obtained from the Eyring relation,

ΔG0‡ = −RT ln(k3·h/(kB·T))

where ΔG0‡ is the activation energy; R is the gas constant; T is the absolute temperature; kB is the Boltzmann constant; h is Planck's constant; and k3 is the forward rate constant for the activation of tyrosine (28).
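As a numerical illustration of these relations (a sketch, not the authors' code), the following evaluates a standard-state binding free energy from a dissociation constant and the Eyring activation free energy from k3, using the D-tyrosine values reported above.

```python
# Binding free energy from a dissociation constant (1 M standard state) and
# activation free energy from a rate constant via transition-state theory.
import math

R  = 8.314          # J/(mol*K), gas constant
T  = 298.15         # K (25 C)
kB = 1.380649e-23   # J/K, Boltzmann constant
h  = 6.62607015e-34 # J*s, Planck constant

def dG0_binding(kd_molar):
    """Standard free energy of binding relative to a 1 M standard state."""
    return R * T * math.log(kd_molar)  # negative for Kd < 1 M

def dG0_activation(k_forward):
    """Eyring activation free energy from a first-order rate constant (1/s)."""
    return -R * T * math.log(k_forward * h / (kB * T))

print(f"dG0(TyrRS.D-Tyr)   = {dG0_binding(102e-6)/1000:6.1f} kJ/mol")  # Kd = 102 uM
print(f"dG0++ (k3 = 13/s)  = {dG0_activation(13.0)/1000:6.1f} kJ/mol")
```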
Ligand Docking-The tyrosyl- and tryptophanyl-tRNA synthetase coordinates used in ligand docking were taken from the 4TS1 and 1MB2 PDB files (30, 31). Only the coordinates for chain A were used in ligand docking. Similarly, the coordinates for L-Tyr and L-Trp were extracted from the 4TS1 and 1MB2 PDB files, respectively. The coordinates for D-Tyr were generated as follows: 1) the L-Tyr coordinates were reflected through the z axis to generate D-Tyr; 2) the four possible rotamers of D-Tyr were generated using Coot (32, 33); 3) LSQKAB (in the CCP4 software suite) was used to superpose the Cβ and side chain atoms for each of the D-Tyr rotamers onto the original L-Tyr coordinates, and the rotamer that superposed best was selected by visual inspection (32, 34). The coordinates for D-Trp were generated from the L-Trp coordinates in an analogous manner.
Ligand docking was performed using Autodock 4.0 (35). Protein coordinates were fixed during docking. For docking of tyrosine and tryptophan, the ligand was flexible and allowed to move on a grid centered on the L-Tyr and L-Trp coordinates from the 4TS1 and 1MB2 PDB files, respectively. An initial population of 300 starting structures was used for energy optimization, and 100 docking runs were performed to find the optimal ligand conformation. A maximum of 2.5 × 10^6 energy evaluations was used. Default settings were used for all other parameters, including the grid spacing, which was set to 0.375 Å. Grid searching was performed using a Lamarckian genetic algorithm. MolProbity was used to analyze atomic clashes between the docked ligand and protein (36).

RESULTS

Formation of the Enzyme-bound D-Tyrosyl-Adenylate Complex Is Accompanied by a Blue Shift in the Intrinsic Fluorescence of the Enzyme-In the absence of substrates, B. stearothermophilus tyrosyl-tRNA synthetase exhibits a relative fluorescence emission maximum at 349 (±3) nm when excited at 295 nm (Fig. 2, solid line in panels A-D). The addition of 500 μM D-tyrosine to the enzyme causes the relative fluorescence emission to be blue-shifted by 8 (±2) nm, resulting in an 8% decrease in the fluorescence emission of the enzyme above 320 nm (Fig. 2, panel C). As is the case for L-tyrosine, inner filter effects due to the presence of D-tyrosine are negligible. The addition of 10 mM MgATP to the enzyme results in a 14% decrease in the total fluorescence emission of the enzyme (Fig. 2, panels B and D). The decreased fluorescence of tyrosyl-tRNA synthetase on addition of MgATP exhibits a linear dependence with respect to the concentration of MgATP, indicating that the fluorescence decrease is because of inner filter effects (data not shown). With the exception of the TyrRS·ATP spectra (Fig. 2, panels B and D), all of the steady-state fluorescence emission spectra in which MgATP is present are corrected for this inner filter effect. The addition of 500 μM D-tyrosine and 10 mM MgATP together produces a 2 (±1) nm enhancement of the blue shift observed in the presence of tyrosine alone. Integrating the areas under the curves indicates that there is an additional 8% (after correcting for the inner filter effect) decrease in the relative fluorescence of the enzyme above 320 nm. These changes in the emission spectrum of B. stearothermophilus tyrosyl-tRNA synthetase upon formation of the TyrRS·D-Tyr and TyrRS·D-Tyr-AMP complexes are similar to the intrinsic fluorescence changes observed for the formation of the TyrRS·L-Tyr and TyrRS·L-Tyr-AMP complexes, suggesting that the conformation of the enzyme-ligand complex is similar in both cases.
If the blue shift in the fluorescence emission spectrum of tyrosyl-tRNA synthetase is because of formation of the TyrRS·Tyr-AMP complex, then the addition of pyrophosphate to this complex should produce a corresponding red shift in the fluorescence emission spectrum. In the absence of pyrophosphate, the purified B. stearothermophilus TyrRS·D-Tyr-AMP complex exhibits a fluorescence emission maximum at 341 (±2) nm. The addition of 0.8 mM disodium pyrophosphate to this TyrRS·D-Tyr-AMP complex produces a 7 nm red shift in the fluorescence emission spectrum (Fig. 2, panel F). Similar changes in fluorescence were observed with the B. stearothermophilus TyrRS·L-Tyr-AMP intermediate (Fig. 2, panel E).
Recognition of D-Tyrosine by Tyrosyl-tRNA Synthetase-The observation that formation of the TyrRS·Tyr-AMP intermediate produces changes in the fluorescence emission spectrum for both L- and D-tyrosine suggests that stopped-flow fluorescence spectroscopy can be used to monitor the activation of D-tyrosine by tyrosyl-tRNA synthetase. To verify that this is the case, the change in the intrinsic fluorescence of tyrosyl-tRNA synthetase with respect to time was monitored using stopped-flow methods. When MgATP is mixed with enzyme that has been preincubated with D-tyrosine, a rapid single exponential decrease in the intrinsic fluorescence of tyrosyl-tRNA synthetase is observed (Fig. 3, panel A). A similar change in the intrinsic fluorescence of tyrosyl-tRNA synthetase is observed when D-tyrosine is mixed with enzyme that had been preincubated with MgATP (data not shown). For the reverse reaction, a rapid single exponential increase in relative fluorescence is observed when disodium pyrophosphate is mixed with the purified TyrRS·D-Tyr-AMP complex (Fig. 3, panel B). This observed increase in fluorescence mirrors the decrease observed when free enzyme was mixed with D-tyrosine and MgATP and corresponds to the conversion of TyrRS·D-Tyr-AMP + pyrophosphate to TyrRS + D-Tyr + ATP. A single exponential increase in the intrinsic fluorescence of tyrosyl-tRNA synthetase is also observed when tRNA^Tyr is added to the TyrRS·D-Tyr-AMP intermediate (data not shown). This increase in intrinsic fluorescence is identical to that observed for the aminoacylation of tRNA^Tyr by the TyrRS·L-Tyr-AMP intermediate (2).
Activation of D-Tyrosine by Tyrosyl-tRNA Synthetase-The equilibrium constants for the dissociation of D-tyrosine from the TyrRS·Tyr and the TyrRS·Tyr·ATP complexes (Kd^D-Tyr and K′d^D-Tyr, respectively) were determined by measuring the observed rate constant for the formation of TyrRS·D-Tyr-AMP as a function of the concentration of D-tyrosine. For determination of Kd^D-Tyr (Fig. 4, panel A), the concentration of ATP was 0.5 mM (~1/10 Kd^ATP), whereas for determination of K′d^D-Tyr (Fig. 4, panel B), the concentration was 10 mM (~3 Kd^ATP). Comparison of the dissociation and rate constants for D- and L-tyrosine indicates that the enzyme has an 8.5-fold lower affinity and 3-fold lower forward rate constant when D-tyrosine is the substrate of the reaction than when L-tyrosine is the substrate (Table 1). Pretreatment of the D-tyrosine stock solution with L-amino acid oxidase does not alter the kinetics, indicating that the observed results are not due to L-tyrosine contamination (data not shown).

[Figure 2 legend (partial): spectra with 500 μM D-tyrosine (panels A and B) and with 500 μM D-tyrosine plus 10 mM MgATP (panels C and D); in panels A and C tyrosine is added first followed by MgATP, and in panels B and D the order of addition is reversed. With the exception of the TyrRS·ATP spectra (panels B and D), all emission spectra in which MgATP is present are corrected for the inner filter effects of MgATP by multiplying the spectra by a factor of 1.19. Panels E and F show changes in the relative fluorescence intensity (λex = 295 nm, λem = 300-400 nm) for the TyrRS·L-Tyr-AMP and TyrRS·D-Tyr-AMP complexes (1 μM) in the presence of 0.8 mM disodium pyrophosphate. To ensure that formation of the enzyme-bound tyrosyl-adenylate complex is stoichiometric, all steady-state emission spectra were determined in the presence of inorganic pyrophosphatase.]
Equilibrium dialysis was used to determine whether the enzyme displays half-of-the-sites reactivity with respect to D-tyrosine binding. The equilibrium dialysis data were fit to Equation 2 to calculate both the dissociation constant for D-tyrosine (Kd^D-Tyr) and the number of D-tyrosine binding sites per tyrosyl-tRNA synthetase dimer (Fig. 5). Analysis of the equilibrium dialysis data indicates that the enzyme binds 1.2 (±0.1) molecules of D-tyrosine per enzyme dimer. This is consistent with the observation that comparing the enzyme concentration determined by A280 measurements with that determined by active site titration also shows formation of a single tyrosyl-adenylate per tyrosyl-tRNA synthetase dimer (data not shown).

[Table 1 (not reproduced): rate and dissociation constants for the activation of L- and D-tyrosine; experimental errors are indicated in parentheses.]

The equilibrium constant for the dissociation of ATP from the TyrRS·ATP complex (Kd^ATP) was determined by measuring the observed rate constant for the formation of TyrRS·D-Tyr-AMP as a function of the ATP concentration. In these experiments, the concentration of D-tyrosine was 10 μM (~1/10 Kd^D-Tyr). Under these conditions, the enzyme displayed typical hyperbolic kinetics with a Kd^ATP of 3.8 (±0.4) mM (Fig. 6). In contrast, measuring the ATP dependence of the observed rate constant for formation of the TyrRS·D-Tyr-AMP complex at saturating concentrations of D-tyrosine gives sigmoidal kinetics (see accompanying paper (45)).
Pyrophosphorolysis and Pyrophosphate Release-The kinetics for cleavage of the scissile bond between the α- and β-phosphates of ATP, and the subsequent release of pyrophosphate, were determined by monitoring the conversion of TyrRS·Tyr-AMP + pyrophosphate to TyrRS + Tyr + ATP (i.e. the reverse of the tyrosine activation reaction). In these experiments, the equilibrium strongly favors formation of the free enzyme, as the total concentrations of D-tyrosine and MgATP released from the TyrRS·D-Tyr-AMP complex are well below their dissociation constants. As discussed previously, the addition of pyrophosphate to the purified TyrRS·Tyr-AMP complex results in a time-dependent increase in the intrinsic fluorescence of the enzyme that can be fit to a single exponential equation (Fig. 4). Surprisingly, pyrophosphate was found to bind to the TyrRS·D-Tyr-AMP complex with a 14-fold higher affinity than it binds to the TyrRS·L-Tyr-AMP complex (Fig. 7 and Table 1). The rate constant for the conversion of TyrRS·Tyr-AMP + pyrophosphate to TyrRS + Tyr + ATP (k−3) is 3.6-fold lower for the TyrRS·D-Tyr-AMP intermediate than it is for TyrRS·L-Tyr-AMP.
Aminoacylation of tRNA^Tyr by D-Tyrosine-Both the forward rate constant for the transfer of the tyrosyl moiety to tRNA^Tyr and the equilibrium constant for the dissociation of tRNA^Tyr from the TyrRS·Tyr-AMP·tRNA^Tyr complex (Kd^tRNA) were determined by measuring the observed rate constant for the transfer of the tyrosyl moiety from the TyrRS·Tyr-AMP·tRNA^Tyr complex to tRNA^Tyr as a function of the tRNA^Tyr concentration (Fig. 8). As shown in Table 1, the forward rate constant for the transfer of the D-tyrosyl moiety to tRNA^Tyr (k4) is not significantly different from the forward rate constant for the transfer of the L-tyrosyl moiety to tRNA^Tyr. In contrast, the binding of tRNA^Tyr to the TyrRS·D-Tyr-AMP intermediate is 2.3-fold weaker than the binding of tRNA^Tyr to the TyrRS·L-Tyr-AMP intermediate (Fig. 8 and Table 1).
Analysis of the Free Energy Profile for the Activation of D-Tyrosine-The Gibbs standard free energy values (ΔG0) for each bound state in the reaction pathway were calculated relative to the free energy of the unliganded enzyme (Fig. 9). Values shown for L-tyrosine are taken from previously published data and were calculated using K′d^ATP and Kd^L-Tyr values (26). It is not possible to calculate the stability of the TyrRS·D-Tyr·ATP complex from the values for Kd^D-Tyr and K′d^ATP because formation of the TyrRS·D-Tyr-AMP complex displays sigmoidal dependence with respect to the concentration of ATP at saturating tyrosine concentrations (accompanying paper, Ref. 45). It is possible, however, to determine the stability of this complex using the lower pathway for TyrRS·D-Tyr-AMP formation shown in Fig. 1. In this case, Kd^ATP and K′d^D-Tyr are used to calculate ΔG0(E·Tyr·ATP). Each of the steps up through formation of the TyrRS·[Tyr-AMP]‡ complex is destabilized when D-tyrosine is substituted for the L-stereoisomer (ΔΔG0(TyrRS·Tyr) = 5.3 kJ/mol, ΔΔG0(TyrRS·Tyr·ATP) = 8.7 kJ/mol, and ΔΔG0(TyrRS·[Tyr-ATP]‡) = 11.3 kJ/mol). This effect is offset, however, by an increase in the affinity of the enzyme for pyrophosphate (Kd^PPi) and a decreased reverse rate constant (k−3) when D-tyrosine is present. The net result is that the stability of the TyrRS·Tyr-AMP intermediate is nearly identical for L- and D-tyrosine (ΔΔG0(TyrRS·Tyr-AMP) = −2.2 kJ/mol).

Modeling of D-Tyrosine Binding-To gain further insight into the physical basis of D-tyrosine activation, Autodock 4.0 was used to dock D-tyrosine to the B. stearothermophilus tyrosyl-tRNA synthetase. As shown in Fig. 10, panel A, D-tyrosine binds in a manner similar to that of L-tyrosine. In particular, the side chain and amino groups of L- and D-tyrosine are located in similar positions. In addition, the carboxylate oxygens of D-tyrosine are located in close proximity to those of L-tyrosine, suggesting that they are in an appropriate position for attack on the α-phosphate of ATP. Analysis of all atom contacts by MolProbity confirmed that there were no steric clashes between tyrosyl-tRNA synthetase and D-tyrosine. The dissociation constant calculated by Autodock 4.0 for the D-Tyr·TyrRS complex is 139 μM. Although this value is very close to that observed experimentally (Table 1), it should be noted that docking of L-tyrosine to B. stearothermophilus tyrosyl-tRNA synthetase resulted in a similar value (Kd^L-Tyr = 133 μM). As a result, it is not clear from the docking results why there is an 8.5-fold difference in the binding of the two stereoisomers. Because tryptophanyl-tRNA synthetase is a structural homolog of tyrosyl-tRNA synthetase (37), we investigated whether modeling the binding of D-tryptophan to B. stearothermophilus tryptophanyl-tRNA synthetase would reveal significant steric clashes between the ligand and enzyme. In contrast to the tyrosine docking simulations, docking of L-tryptophan to tryptophanyl-tRNA synthetase failed to produce a structure that superimposed well on the original coordinates. For this reason, modeling of D-tryptophan binding was done by superimposing the coordinates for D-tryptophan onto those of L-tryptophan from the TrpRS·L-Trp structure (PDB code 1MB2) and then selecting the rotamer that most closely resembled the conformation of D-tyrosine bound to tyrosyl-tRNA synthetase (Fig. 10, panel B).
MolProbity analysis of all atom clashes for the resulting complex indicates that only the carboxyl oxygen of D-tryptophan and the ε-amine of Gln-147 in tryptophanyl-tRNA synthetase display significant steric overlap (0.519 Å). Soutourina et al. (38) have observed that E. coli tryptophanyl-tRNA synthetase is able to catalyze the formation of D-Trp-tRNA^Trp in vitro, suggesting that the above overlap is not sufficient to prevent D-tryptophan from binding to tryptophanyl-tRNA synthetase.
DISCUSSION
Aminoacyl-tRNA synthetases are highly specific enzymes, with misacylation of tRNA occurring less than 1 in 10^4-10^5 turnovers of the enzyme (39). Although there is some selection by EF-Tu to prevent the use of misacylated tRNA during protein synthesis (40), the accuracy of the translation process is primarily dependent on the ability of the aminoacyl-tRNA synthetases to recognize and aminoacylate their cognate tRNAs with the correct amino acid. For this reason, it was particularly surprising when Calendar and Berg (4) demonstrated that the E. coli and Bacillus subtilis tyrosyl-tRNA synthetases can aminoacylate tRNA^Tyr with the D-stereoisomer of tyrosine. In a subsequent paper, these authors (5) identified and partially purified the E. coli D-Tyr-tRNA deacylase, which hydrolyzes the aminoacyl bond in D-Tyr-tRNA^Tyr but not in L-Tyr-tRNA^Tyr. Furthermore, Calendar and Berg (5) found D-tyrosyl-tRNA deacylase activity in E. coli, yeast, rabbit reticulocyte, and rat liver extracts, suggesting that it is widespread in nature. Blanquet and co-workers (6) have confirmed this finding by searching genome sequences for homologs of the E. coli and Saccharomyces cerevisiae D-tyrosyl-tRNA deacylases. D-Tyr-tRNA^Tyr deacylase is not specific for D-Tyr-tRNA^Tyr; it will also catalyze the hydrolysis of other D-aminoacyl-tRNAs, including D-Trp-tRNA^Trp and D-Asp-tRNA^Asp in E. coli and D-Leu-tRNA^Leu in S. cerevisiae.
In this paper, the mechanism by which B. stearothermophilus tyrosyl-tRNA synthetase recognizes D-tyrosine has been investigated using single turnover kinetics. In contrast to steady-state kinetics, single turnover kinetics allows one to determine the rate and dissociation constants for each intermediate step in the reaction. For catalysis of the tyrosine activation reaction by B. stearothermophilus tyrosyl-tRNA synthetase, replacing L-tyrosine by the D-stereoisomer increases the dissociation constant for tyrosine (Kd^D-Tyr) by 8.5-fold and decreases the forward rate constant (k3) by 3-fold (Table 1). This corresponds to a 24-fold decrease in the specificity constant (k3/Kd^D-Tyr). The observation that there is only an 8.5-fold increase in the Kd^D-Tyr value when L-tyrosine is replaced by the D-stereoisomer suggests that the D-tyrosine side chain binds to the enzyme in a manner that is similar to that observed for L-tyrosine. In addition, the observation that there is only a 3-fold decrease in k3 suggests that at least one of the carboxylate oxygen atoms is in the correct position for nucleophilic attack on the α-phosphate of ATP. The results of the docking studies are consistent with these hypotheses.
Surprisingly, pyrophosphate binds to tyrosyl-tRNA synthetase with a 14-fold higher affinity when D-tyrosine is activated than it does when L-tyrosine is activated (Table 1). This is particularly intriguing, as stabilization of the transition state for the L-tyrosine activation reaction is primarily due to interactions between the pyrophosphate moiety of ATP and the enzyme (1, 11-14). Specifically, seven residues in tyrosyl-tRNA synthetase have been shown to interact with pyrophosphate in the TyrRS·L-Tyr-AMP·PPi intermediate complex: Thr-40, His-45, Lys-82, Arg-86, Lys-230, Lys-233, and Thr-234 (1, 13). Four of these residues (His-45, Lys-230, Lys-233, and Thr-234) are in the HIGH and KMSKS signature sequences and are highly conserved among the class I aminoacyl-tRNA synthetases. The observation that pyrophosphate is bound more tightly in the TyrRS·D-Tyr-AMP·PPi complex when D-tyrosine is present than it is in the L-tyrosine complex suggests that it is oriented in a manner that increases the interaction between pyrophosphate and one or more of the residues discussed above. This suggests that the decreased stability of the transition state when D-tyrosine is present may be due to the altered interaction between tyrosyl-tRNA synthetase and the pyrophosphate moiety of ATP.

[Figure 10 legend (partial): panel A, docking of L-tyrosine (green carbons) and D-tyrosine (cyan carbons) to B. stearothermophilus tyrosyl-tRNA synthetase (magenta schematic); oxygen atoms red, nitrogen atoms blue; docking was done using Autodock 4.0 as described in the text. Panel B, docking of L-tryptophan (green carbons) and D-tryptophan (cyan carbons) to B. stearothermophilus tryptophanyl-tRNA synthetase (magenta schematic, with the Gln-147 side chain shown as a stick model); D-tryptophan was modeled by superposition onto L-tryptophan and selection of the rotamer most closely resembling the conformation adopted in the TrpRS·D-Trp complex, as described in the text. Molecular graphics were generated using PyMOL (44).]
Although it is encouraging that the docking results predict a binding affinity for D-tyrosine similar to the experimentally observed value (Kd^theor = 139 μM versus Kd^expt = 102 μM), this must be tempered by the observation that a similar value is predicted for the dissociation constant of L-tyrosine (Kd^theor = 133 μM versus Kd^expt = 12 μM). Recently published molecular dynamics simulations predict that the free energy difference between the binding of L- and D-tyrosine to E. coli tyrosyl-tRNA synthetase is 13 (±8) kJ/mol (41). Both our results and those of Thompson et al. (41) are close to the experimentally determined free energy difference for the binding of L- and D-tyrosine (ΔΔG0 = 5.0 kJ/mol). Despite the success of the docking studies, however, the precise mechanism by which tyrosyl-tRNA synthetase discriminates between L- and D-tyrosine remains to be elucidated.
Several lines of evidence suggest that the binding of D-tyrosine has effects on the active site that extend beyond the tyrosine binding pocket. First, in contrast to the free enzyme, the TyrRS·D-Tyr complex exhibits sigmoidal kinetics with respect to ATP binding (see accompanying paper (45)). Second, pyrophosphate binds 14-fold more tightly when D-tyrosine is present (Table 1). Third, there is a 2.3-fold decrease in the binding affinity of tyrosyl-tRNA synthetase for tRNA^Tyr when D-Tyr-AMP is bound (Table 1). These observations suggest that the tyrosine binding pocket is intimately connected to the ATP and tRNA^Tyr binding pockets, with apparently subtle changes to tyrosine binding affecting distal parts of the active site.
It is intriguing that Calendar and Berg (4) found that replacing L-tyrosine with its D-stereoisomer has a significantly larger effect on the value of Km^Tyr for the E. coli enzyme than it does for B. subtilis tyrosyl-tRNA synthetase (Km^D-Tyr/Km^L-Tyr = 23 for E. coli tyrosyl-tRNA synthetase versus 3 for B. subtilis tyrosyl-tRNA synthetase (4)). In contrast, replacing L-tyrosine with D-tyrosine results in a 10-fold decrease in the Vmax value for B. subtilis tyrosyl-tRNA synthetase, but only a 5-fold decrease in the Vmax value for the E. coli enzyme (4). The effect that replacing L-tyrosine with D-tyrosine has on the Kd^Tyr and k3 values in B. stearothermophilus tyrosyl-tRNA synthetase is consistent with the above results obtained by Calendar and Berg (4) using steady-state kinetic methods. The observation that the stereoselectivity for tyrosine differs between the E. coli, B. subtilis, and B. stearothermophilus tyrosyl-tRNA synthetases suggests that tyrosyl-tRNA synthetase has the potential to be more stereoselective. This raises the question as to why tyrosyl-tRNA synthetases have not evolved the level of stereoselectivity that is observed in other aminoacyl-tRNA synthetases (39, 42, 43). One possibility is that the levels of D-tyrosine are sufficiently low that there is little selective pressure to discriminate between the D- and L-stereoisomers. This is unlikely, however, as the widespread distribution of D-Tyr-tRNA deacylase in nature suggests that misacylation of tRNA^Tyr by D-tyrosine is a significant problem in organisms that are not auxotrophic for tyrosine (6). An alternative explanation is that increasing the stereospecificity of tyrosyl-tRNA synthetase may come at the expense of its catalytic activity. In this scenario, the selective advantage of increasing the catalytic activity of tyrosyl-tRNA synthetase outweighs the energetic costs associated with the editing of D-Tyr-tRNA^Tyr in trans by D-Tyr-tRNA deacylase. It remains to be determined whether mutations that alter the stereoselectivity of tyrosyl-tRNA synthetase also affect the forward rate constant for the activation of tyrosine.
"Biology",
"Chemistry"
] |
Biomechanics of Transcatheter Aortic Valve Implant
Transcatheter aortic valve implantation (TAVI) has grown exponentially within the cardiology and cardiac surgical spheres and has become a routine approach for treating aortic stenosis. Several concerns have been raised about TAVI in comparison to conventional surgical aortic valve replacement (SAVR), the primary one being the longevity of the valves. Several factors have been identified which may predict poor outcomes following TAVI. To this end, the lesser-used finite element analysis (FEA) has been applied to quantify the properties of the calcifications which affect TAVI valves; this method can also be used in conjunction with other integrated software to ascertain the functionality of these valves. Other imaging modalities such as multi-detector row computed tomography (MDCT) are now widely available and can accurately size the aortic valve annulus, which may help reduce the incidence of paravalvular leaks and regurgitation that may necessitate further intervention. Structural valve degeneration (SVD) remains a key concern, with varying results from current studies; the true incidence of SVD in TAVI compared to SAVR remains unclear due to the lack of long-term data. It is now widely accepted that both TAVI and SAVR are part of the armamentarium and are not mutually exclusive, and decisions about the appropriate intervention should be made through shared decision making involving heart teams.
Introduction, Search Strategy, and Selection Criteria
Transcatheter aortic valve implantation (TAVI) was first used by Cribier et al. 20 years ago [1]. Over the years, evidence has grown regarding the efficacy and safety of this novel modality, which has become a major cornerstone in the treatment of structural heart disease. These minimally invasive procedures restore valve functionality in patients with calcific aortic valve stenosis (AVS) and have become routine approaches [2-18]. TAVI is recommended for symptomatic patients with severe aortic stenosis who are 65 to 80 years of age and have no anatomic contraindications to transfemoral access. TAVI is considered an adequate treatment option as an alternative to standard surgical aortic valve replacement (SAVR) after shared decision making, weighing the balance between expected patient longevity and valve durability [19-25]. Evidence suggested that TAVI (compared to standard medical and surgical options) had lower associated rates of death from any cause, and mid- and long-term follow-ups provided no evidence of restenosis or prosthesis dysfunction [6,9-11,18,26-30]. Moreover, recent randomized clinical trials (RCTs), meta-analyses, and propensity score analyses, confirming registry reports, revealed satisfactory outcomes of TAVI in terms of feasibility, long-term hemodynamics, and functional improvement [12,14,27,31-34]. However, the first and second generations of implanted transcatheter heart valves (THVs) had high rates of moderate to severe perivalvular aortic regurgitation [35], a frequent complication of TAVI that confers an increased rate of mortality [36]. During repeated follow-ups, the emerging data raised concerns about incomplete apposition of the prosthesis related to calcification or annular eccentricity [37], undersizing of the device, and incorrect positioning of the valve, identifying these as the most common determinants of paravalvular aortic regurgitation [38].
Based on these observations, the criteria that are of utmost importance to avoid complications are the appropriate determination of the size of the annulus, the correct evaluation of the calcifications, and adequate sizing of the prosthetic valve. Pre-operative planning with biomechanical assessments should be completed for patients for whom TAVI is recommended, as suggested by international guidelines and by standardized endpoint definitions for transcatheter aortic valve implantation, dictated in the Valve Academic Research Consortium-2 (VARC-2) consensus document [19,20,38].
Finite element analysis using computational biomodeling is a crucial method for obtaining valuable measurements of complicated real-world systems that would otherwise be impossible to determine directly. Today, several studies have applied FEA to the design of medical devices or to the analysis of mechanical processes integrated into the biological system in order to calculate stresses and investigate potential failure modes and locations. Finite element (FE) models require accurate three-dimensional (3D) geometry in the zero-stress state, material properties, and physiological loading conditions.
To encourage a wider diffusion of TAVI, and to provide a guide for clinicians, we discuss the current evidence base for the use of transcatheter heart valve implantation and review related articles focused on computational biomodelling aimed at predicting the failure of transcatheter heart valve therapy for the treatment of structural heart disease.
We searched MEDLINE, Embase, and the Cochrane Library using the search terms "aortic valve stenosis" or "aortic valve operation" together with "transcatheter aortic valve implant", "transcatheter aortic valve replacement", "standard surgical aortic valve replacement", "computational modelling", "finite element analysis", "aortic valve surgery", "transcatheter heart valve" or "valve thrombosis", and "structural valve degeneration". We selected publications primarily within the past 20 years; however, we did not exclude widely referenced and highly regarded older publications. Recommended bioengineering articles were cited to provide readers with further details and background references.
We broadly address the use of computational biomodelling to further appreciate the complex mechanical processes regulating the workings of these new devices for aortic root implantation. Using advanced computational tools that integrate patient-specific information, it is possible to obtain accurate modeling of the self- and balloon-expandable devices used to treat severe aortic valve stenosis. We propose an evidence-based algorithm for the choice of TAVI (Figure 1).
Engineering to Study the Features of Implanted Transcatheter Heart Valve
Transcatheter aortic valve implantation has become a prime destination on the road map for translational research since its first ideation and use in pediatric cardiac surgery to circumvent the complications of reopening the sternum and reoperation [53]. Using the finite element analysis (FEA) methodology, we identified the crucial differences between the biomechanics of the aorta and the pulmonary artery (PA) [54,55]. We performed tensile tests on the native pulmonary artery and native aorta. The pulmonary valve leaflets responded to stressors with stiffer behavior than the aortic valve leaflets, and decreased deformation was recorded for applied loads as high as 80 kPa (600 mmHg). Importantly, the biomechanics of the valve annulus revealed less deformable structures of the root, suggesting that the weaker points of the PA lie in its free walls distal to the valve. The aortic root suitably accommodated increasing hemodynamic loads without meaningful deformation. Again, the differential analysis performed on samples cut longitudinally and circumferentially revealed different behavior for both the aorta and the pulmonary artery. The circumferential strength of the PA was greater than that of the aorta, while properties in the longitudinal direction were comparable. Our results suggested that the PA may exhibit a consensual increase in stress and strain in both directions, while the aorta revealed better adaptability in the longitudinal direction and a steeper curve in the circumferential response, potentially explaining the non-aneurysmatic tendency of the pulmonary artery root compared to the aorta [54].
The innovative use of FEA for research in cardiovascular science related to the mitral valve, pulmonary artery, and aorta [41-43,50-52,56-67] can provide an understanding of structural changes in biological systems, such as degenerative processes in leaflet and vessel wall stresses, thereby helping to prevent procedural failures. The measurement of biomechanical stress has found varied applications, such as studies investigating leaflet stresses related to the geometry of stented porcine and bovine pericardium xenografts [57] or examining stresses in the aortic root and calcified aortic valve aimed at preventing the risk of rupture [41,43,44,59,60,68]. Recently, the benefits associated with the use of FEA applied to TAVI were established in a landmark paper by Xuan et al. The investigators thoroughly evaluated TAVI with leaflets, stents, polyethylene terephthalate, and sutures to predict the mechanisms leading to the structural valve degeneration of THV devices [56].
Confluence of Engineering and Medical Sciences
Finite element analysis predicts stress and evaluates deformation coefficients in complex structures by dividing them into small, well-defined geometric elements and applying predictable mathematical calculation to each element [68]. From its first applications in the field of cardiac surgery, which date back about twenty years, the use of FEA has developed slowly despite the potential for substantial progress. Since its introductory applications, the FEA methodology has been noted for its limited applicability in clinical practice. This 'distrust' is pertinent in surgical disciplines, which are based on clinical evidence, as FEA offers their field of research speculative data without correlated clinical evidence [40-43,54,55,59-61,67].
Before the paradigm shift that radically changed the treatment of symptomatic calcific aortic stenosis, clinical and experimental studies produced scientific evidence without the use of FEA; easier, more understandable, and probably more reliable methodologies were used to test hypotheses and prove theses. With the revolutionary technology that makes up the most advanced platforms for the treatment of structural heart diseases, SAVR has given way to the advent of TAVI. Rapid technological advancements have made it possible to obtain three generations of balloon-expandable devices in a span of 6 years and have given new impetus to FEA [2-18].
In this context, the findings of Smuts et al. aided the development of new concepts for different percutaneous aortic leaflet geometries [69]. Meanwhile, Wang et al. [43] and Sun et al. [70] studied the post-operative behavior of TAVI from a mechanical and hemodynamic point of view. A crucial advancement in the application of FEA was offered by Capelli et al. [45], who effectively analyzed the feasibility of TAVI in morphological conditions considered borderline for the percutaneous approach, paving the way for the treatment of failed bioprosthetic aortic valves with the use of TAVI.
A patient-specific simulation based on FEA that takes into account the whole procedure and can produce post-operative prosthesis simulations, by including in the analysis the suturing of the biological valve into the metal stent frame, was reported by our group in a landmark paper almost 10 years ago [71]. We subsequently reported evidence comparing the post-operative medical data with the biomechanical investigation method. Recently, we developed a systematic TAVI simulation approach, tailored for clinical practice, for patients receiving either a self-expandable Medtronic CoreValve (Medtronic, Minneapolis, MN, USA) or a balloon-expandable SAPIEN (Edwards Lifesciences, Irvine, CA, USA). Studies based on the analysis of the pre-operative medical imaging of patients who have undergone TAVI are of particular interest [39-41,50-52]. The final goal of these studies is to predict the post-operative performance of the prosthesis with respect to the specific anatomical characteristics and potential complications such as structural/nonstructural valve degeneration and thrombosis [56].
Likewise, the new evidence emerging from these studies strengthened previous evidence on the potentially high levels of stress to which devices for THV implantation are subjected. Previous studies have revealed, both under static boundary conditions and during fatigue stress simulations, that in individuals who are managed with the THV procedure, the predicted durability of the TAVI device may be shorter than that of a surgically implanted aortic bioprosthesis. This evidence confirms that leaflet deformation and stresses are significantly higher in TAVI, especially near the commissures and along the stent attachments [57,72].
Medical Image Processing
Biomechanical simulations using FEA, starting from pre-clinical evaluations, have offered an original contribution as an advanced tool for clinical support for the following reasons. First, the aortic valve model is complete, including both the aortic sinuses and the native valve leaflets, and the material models considered are calibrated on human data. Second, the calcified plaque is included in the model, based on the image recordings. Finally, the geometry of the prosthetic stent is very precise, obtained from micro-computed tomography (micro-CT) reconstruction [39-41,50-52].
Another substantial advantage that makes this analysis reliable is the availability of post-operative data collected by physicians during patient follow-up. These data are compared with the numerical results obtained by the FEAs, with the ultimate goal of evaluating the ability of the proposed simulations to predict procedural outcomes [40,50].
Concerns related to validating TAVI simulations are crucial, as it can be difficult to obtain good-quality post-operative data and images from standard post-operative procedures. Another point of divergence concerns post-operative CT control, which is sometimes excluded from routine protocols for TAVI because these patients are often frail: it is not advisable to overload the kidneys with additional doses of contrast, and high doses of radiation should be avoided in patients who are often in critical condition. Instead, evaluations of the outcome of the procedure are offered by intraoperative CT scans as well as by follow-up echotomography [73-75].
The computational framework adopted to simulate the implantation of TAVI includes four main phases: processing of the medical images, creation of models suitable for analysis, performance of the required analysis integrating the clinical procedure, and finally, post-processing of the simulation results and comparison with the follow-up data [39-41,44,50-52] (Figure 2). Morganti et al. worked on a biomechanical simulation model for TAVI starting from a standardized approach to scan the main parameters with cardiac CT. Pre-operative examinations were obtained using a dual-source computed tomography scanner (Somatom Definition, Siemens Healthcare, Forchheim, Germany). The investigators acquired contrast-enhanced images using iodinated contrast medium, injected with the following protocol: scan direction, cranio-caudal; slice thickness, 0.6 mm; spiral pitch factor, 0.2; tube voltage, 120 kV [40,41].
Our group developed a reliable protocol to ensure the quality of the CT images, which must subsequently be processed using FEA [39,50-52] (Figure 3). With a complete cardiac cycle acquired in one beat (0-100%) and a dose-length product (DLP) of 459 mGy·cm, we obtained optimal image quality for biomechanical processing. This allowed the functional evaluation of the aortic valve, the morphological study of the aortic valve, and the anatomical determination of the AVS [39] (Figure 4).
Scientific reports that describe image analysis using established theoretical approaches have provided solid answers on the active contour segmentation process, which has seen robust implementation. Despite the existence of powerful segmentation methods, the needs of clinical research have continued to be met, to a large extent, using manual slice-by-slice tracing. The landmark study of Yushkevich et al., performed in the context of a neuroimaging study of childhood autism, bridged the gap between methodological advances and routine clinical practice. The investigators developed a revolutionary open-source application called ITK-SNAP. This application aims to make level-set segmentation easily accessible to a wide range of users, including those with little or no mathematical background. SNAP proved to be a reliable and efficient application compared to manual tracing [76]. Therefore, the most common method of obtaining a reliable model from CT data sets is to process them using ITK-SNAP v2.4, as described by Yushkevich et al. [76]. Specifically, a confined region of interest, such as the aortic root, which extends from the left ventricular outflow tract to the sinotubular junction, is extracted from the entire reconstructed body by exploiting the contrast enhancement, cropping, and segmentation capabilities of the software. Again, the effectiveness of ITK-SNAP v2.4 is highlighted by the use of different Hounsfield unit thresholds, through which it is possible to distinguish the calcium agglomerates from the surrounding healthy tissue and evaluate their position and size. Once the segmented regions have been extracted, it is possible to export the aortic lumen morphology, as well as the calcium deposits, as stereolithographic (STL) files [39-41,50-52] (Figure 5).
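As a rough illustration of this workflow (not the authors' pipeline), the sketch below thresholds calcium in Hounsfield units from a CT volume and exports the result as an STL surface. The file names and the HU cutoff are assumptions, and the SimpleITK/scikit-image/trimesh stack stands in for ITK-SNAP's interactive tools.

```python
# Hedged sketch: HU-threshold segmentation of calcium and STL export.
import SimpleITK as sitk
import numpy as np
from skimage import measure
import trimesh

img = sitk.ReadImage("aortic_root_ct.nii.gz")          # hypothetical cropped ROI
vol = sitk.GetArrayFromImage(img).astype(np.float32)   # HU values, axes (z, y, x)

# Calcium is far denser than contrast-enhanced blood; the threshold below is
# an assumption and must be tuned per scan and contrast protocol.
calcium_mask = (vol > 850.0).astype(np.uint8)

# Triangulate the mask surface, honoring the voxel spacing (reversed to z, y, x)
spacing = tuple(reversed(img.GetSpacing()))
verts, faces, _, _ = measure.marching_cubes(calcium_mask, level=0.5, spacing=spacing)

# Export the calcium deposits as an STL surface for downstream meshing
trimesh.Trimesh(vertices=verts, faces=faces).export("calcium.stl")
```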
Models Suitable for Analysis
A crucial step concerns the procedure to obtain suitable analysis models both for the native aortic valve, including calcifications affecting the leaflets along with the aortic wall, and for the prosthetic device.
Native Aortic Valve Model
In the native aortic valve model, different investigators reported that once the STL file containing the characteristics of the aortic root is obtained, it can be processed and implemented in Matlab (The MathWorks, Inc., Natick, MA, USA). The latter serves as an effective system for defining a set of splines approximating the cross-sectional profile of the aortic lumen. In this way, the curves obtained are used to automatically generate a volume model of the aortic root wall.
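A minimal sketch of this spline step, transposed to Python/SciPy rather than Matlab, is given below; the contour points are synthetic and stand in for one cross-sectional profile of the aortic lumen.

```python
# Fit a periodic smoothing spline to one closed lumen contour; stacking such
# splines along the root axis and lofting between them yields the wall volume.
import numpy as np
from scipy.interpolate import splprep, splev

# Synthetic closed contour standing in for one lumen cross-section (mm)
theta = np.linspace(0, 2 * np.pi, 24, endpoint=False)
r = 12.0 + 0.8 * np.sin(3 * theta)            # hypothetical lumen radius profile
x, y = r * np.cos(theta), r * np.sin(theta)

# per=True closes the curve; s controls the smoothing of segmentation noise
tck, _ = splprep([x, y], s=1.0, per=True)
xs, ys = splev(np.linspace(0, 1, 200), tck)   # smooth, resampled profile
```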
Several studies demonstrated that the geometric model of the aortic root obtained by processing the STL file represents the fundamental starting point for performing the finite element analysis of TAVI. Antiga et al. created the Vascular Modeling Toolkit (VMTK), a modeling framework designed for patient-specific computational hemodynamics in the context of large-scale studies. The Vascular Modeling Toolkit combines image processing, geometric analysis, and mesh generation techniques, stressing full automation and high-level interaction. Importantly, image segmentation is performed using implicit deformable models, exploiting a dedicated approach for the selective initialization of vascular branches as well as a strategy for the segmentation of small vessels. Again, an advantage of using the Vascular Modeling Toolkit is the solid definition of centerlines, which provides substantial geometric criteria for the automation of surface editing and mesh generation [77,78].
Several investigators reported good results by processing STL files of calcifications using the Vascular Modeling Toolkit to extract a regular tetrahedral mesh [39-41,50-53,56,71,77,78]. Likewise, efficient, robust procedures for mesh generation leading to high-quality computational meshes include the open-source Gmsh software [79] and the alternative framework described by Dillard et al., in which the entire image-based modeling process is performed on a Cartesian domain where the image is fixed within the domain as an implicit surface [80]. Gmsh can generate different types of meshes, including isotropic tetrahedral meshes, anisotropic tetrahedral meshes, and mixed hexahedral/tetrahedral meshes. In addition, Gmsh has the crucial advantage of generating multiple-layered arterial walls with variable thicknesses. Alternatively, the framework developed by Dillard et al. avoids the need to generate surface meshes that have to adapt to complex geometries and the subsequent need to generate body-fitted flow meshes. The three determining factors are Cartesian mesh pruning, local mesh refinement, and massive parallelization, which are crucial to providing computational efficiency. The efficacy of the framework described by Dillard et al. was demonstrated on two 3D image reconstructions of geometrically dissimilar intracranial aneurysms requiring computed flow calculations [80].
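For orientation, the sketch below shows one possible Gmsh workflow for turning an STL surface into a tetrahedral volume mesh via the Gmsh Python API, following the standard remeshing recipe from the Gmsh tutorials; the file name and classification angle are assumptions, and real aortic geometries typically need additional surface cleanup first.

```python
# Hedged sketch: STL surface -> classified geometry -> tetrahedral volume mesh.
import math
import gmsh

gmsh.initialize()
gmsh.merge("aortic_root.stl")                       # hypothetical surface file

# Classify the triangulation into parametrizable patches, then rebuild geometry
gmsh.model.mesh.classifySurfaces(40 * math.pi / 180, True, False)
gmsh.model.mesh.createGeometry()

# Close the classified surfaces into a volume and mesh it with tetrahedra
surfaces = [s[1] for s in gmsh.model.getEntities(2)]
loop = gmsh.model.geo.addSurfaceLoop(surfaces)
gmsh.model.geo.addVolume([loop])
gmsh.model.geo.synchronize()
gmsh.model.mesh.generate(3)
gmsh.write("aortic_root.msh")
gmsh.finalize()
```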
The finite element mesh generated with this procedure is effective for reproducing both the aortic wall and the native valve leaflets, providing a complete and realistic model on which to perform the simulations. Morganti et al. suggested that, to include the native geometry of the leaflets, the first step consists of identifying nine reference points: six of them correspond to the commissural extremes, while the other three correspond to the centers of the basal leaflet attachments. We recently adopted this method in a study comparing the biomechanical features of two different TAVI devices, the self-expanding Medtronic CoreValve and the balloon-expandable Edwards SAPIEN [40,41]. Of note, Xuan et al. also reported that stent and leaflet surfaces were combined using suture lines as a reference for leaflet orientation [56].
It is important to highlight that the use of the aforementioned reference points makes it possible to define individual planes that guide the subdivision of the entire aortic root model, which ultimately serves to reproduce both the leaflet commissures and the attachment lines [40,41,50,51]. Ultrasound is important for measuring the length of the free margins, which appear as circular arcs. Determining the perimeter of the leaflets then allows the leaflet surface to be constructed in the open configuration [40].
The aortic wall is meshed with a variable number of tetrahedral elements that account for both the healthy tissue and the portion occupied by calcium conglomerates. Morganti et al. reported between 235,558 and 265,976 tetrahedral elements for the healthy region of the aortic root, while the healthy part of the leaflets was discretized using between 3212 and 3258 shell elements with reduced integration. Where calcium agglomerates were present, the calcified plaques were discretized with 342 to 427 shell elements [40,41].
Xuan et al. worked to determine stent and leaflet stresses in a 26 mm first-generation balloon-expandable transcatheter aortic valve. The investigators imported the refined geometries of the leaflets, stent, and polyethylene terephthalate into HyperMesh (Altair Engineering, Troy, MI, USA) to generate a TAV mesh with 46,443 total elements. Their study did not require additional discretization for calcified plaques in the aortic wall and leaflets because the simulation did not include an aortic root or native leaflets burdened by calcifications [56].
Bianchi et al., in a comparison study between the SAPIEN 3 and the CoreValve, reconstructed the sinuses of Valsalva in Abaqus CAE, while the calcification deposits were processed in MATLAB and subsequently assembled onto the aortic root. In a previous report, Bianchi et al. [47] incorporated calcifications into the soft tissues to better mimic the morphology of the stenosis. The investigators finally re-meshed the aortic root with tetrahedral elements in Ansys Fluent Meshing to ensure mesh continuity at the interfaces between the sinuses and the leaflets and between the calcifications and the surrounding soft tissues. The mesh size was approximately 1.4 million elements for the SAPIEN cases and 2.5 million for the CoreValve cases, as more of the ascending aorta was required for deployment.
For biomechanical evaluations aimed at comparing prosthetic devices, post-operative configuration, and performance, simplified St. Venant-Kirchhoff properties can be used to model the native aortic tissue, leaflets, and calcifications. Several investigators assigned each of the aortic root, leaflets, and calcifications a Young's modulus E, Poisson's ratio ν, and density ρ [40,81]. Xiong et al. adopted a Young's modulus for the native leaflet and used the same value to model the bovine pericardium aortic leaflet [81]. Stradins et al. reported that the same value of 8 MPa approximates the stiffer (i.e., circumferential) non-linear behavior of the human aortic valve. It is important to underline that considering the stiffer curve is reasonable given the greater stiffness recorded in aortic valve stenosis, whose tissues are stiffer than those of the average patient [82].
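For orientation, the following is a minimal sketch of the St. Venant-Kirchhoff constitutive relation mentioned above; the parameter values and the deformation state are illustrative only and are not the calibrated constants of the cited studies.

```python
import numpy as np

def svk_second_piola_kirchhoff(F, E_mod, nu):
    """Second Piola-Kirchhoff stress for a St. Venant-Kirchhoff material.

    S = lambda * tr(E) * I + 2 * mu * E, with E = 0.5 * (F^T F - I), and Lame
    constants derived from Young's modulus E_mod and Poisson's ratio nu.
    """
    lam = E_mod * nu / ((1.0 + nu) * (1.0 - 2.0 * nu))
    mu = E_mod / (2.0 * (1.0 + nu))
    I = np.eye(3)
    E_green = 0.5 * (F.T @ F - I)                 # Green-Lagrange strain
    return lam * np.trace(E_green) * I + 2.0 * mu * E_green

# Illustrative values only (MPa); a small biaxial stretch state
F = np.diag([1.05, 0.98, 0.99])
S = svk_second_piola_kirchhoff(F, E_mod=2.0, nu=0.45)
print(np.round(S, 4))
```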
Prosthesis Model and Material Model
Although several devices for TAVI have been described over the past 20 years [39], the two devices used in the largest number of patients in clinical practice are the Medtronic CoreValve and the Edwards Lifesciences SAPIEN. While the CoreValve is self-expanding, the Edwards SAPIEN valve consists of three flexible biological leaflets sutured into a balloon-expandable stent.
For example, in two independent works, Morganti et al. [40] and Nappi et al. [50] obtained faithful geometrical models of the SAPIEN XT 26 mm and of the SAPIEN 3 using a high-resolution micro-CT scan (Skyscan 1172, with a resolution of 0.17 micron). These stent models were discretized with 84,435 solid elements. Xuan et al. [56] scanned a fully expanded first-generation SAPIEN valve (26 mm) under 0 mm Hg pressure with a desktop cone-beam micro-CT scanner (microCT-40; Scanco Medical AG, Bassersdorf, Switzerland) at different orientations and intensities to discriminate the stent and leaflet geometries. The refined geometries of the leaflets, stent, and polyethylene terephthalate were then imported into HyperMesh (Altair Engineering, Troy, MI, USA) to produce a TAV mesh with 46,443 total elements [56].
Generally, the material model for the native aortic tissue is assumed to be homogeneous and isotropic, as described by Capelli et al. [45] and Gnyaneshwar et al. [83]. Selvadurai [84] and Yeoh et al. [85] proposed an incompressible reduced polynomial form to reproduce the material behavior, denoted reduced polynomial strain energy, expressed in terms of the deviatoric strain invariants (or, equivalently, the deviatoric stretches) and a set of material parameters.
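A minimal sketch of this incompressible reduced polynomial strain-energy form (of which the Yeoh model is the third-order case) is given below, written in terms of the first deviatoric strain invariant; the coefficients are placeholders, not values taken from the cited studies.

```python
import numpy as np

def reduced_polynomial_energy(F, C=(0.5, 0.1, 0.01)):
    """Strain energy W = sum_i C_i0 * (I1_bar - 3)^i for an incompressible material.

    F : 3x3 deformation gradient. The deviatoric first invariant I1_bar is computed
    from the isochoric part of the right Cauchy-Green tensor. C holds C_10, C_20,
    C_30 (placeholder values, in MPa).
    """
    J = np.linalg.det(F)
    C_right = F.T @ F
    I1_bar = np.trace(C_right) * J ** (-2.0 / 3.0)    # deviatoric first invariant
    return sum(Ci * (I1_bar - 3.0) ** (i + 1) for i, Ci in enumerate(C))

# Uniaxial stretch of an incompressible specimen: lambda in the loading direction,
# 1/sqrt(lambda) transversally so that det(F) = 1
lam = 1.2
F = np.diag([lam, lam ** -0.5, lam ** -0.5])
print(f"W = {reduced_polynomial_energy(F):.4f} MPa")
```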
Morganti et al. [40], in the computational modeling of the SAPIEN XT, chose a sixth-order polynomial form for the material model and fitted the unknown material constants. As references for the aortic leaflets and the sinuses of Valsalva, the investigators took the data reported by Martins et al. [72] and Stradins et al. [82], integrated with those produced by Auricchio et al. to obtain the final material models. In particular, the aortic wall and the native valve leaflets were assumed to have uniform thicknesses of 2.5 and 0.5 mm, respectively. Following the evidence reported by Capelli et al. [45], an elastic modulus of 10 MPa, a Poisson ratio of 0.35, and a density of 2000 kg/m³ were assumed for the calcifications. For the Von Mises plasticity model with isotropic hardening, Morganti et al. assumed a Young's modulus of 233 GPa, a Poisson coefficient of 0.35, a yield stress of 414 MPa, an ultimate stress of 933 MPa, and 45% elongation at break [40,41].
The computational model of the prosthetic valve leaflets of the SAPIEN device must consider the constitutive characteristics of bovine pericardium after the fixation process. The leaflets were modeled as an isotropic material; in particular, an elastic modulus of 8 MPa, a Poisson coefficient of 0.45, and a density of 1100 kg/m³ were used, following the evidence reported by Xiong et al. The prosthetic valve was meshed with 6000 quadrilateral shell elements, and a uniform thickness of 0.4 mm was assumed [40,81,86-89].
Finite Element Analyses
Finite element analysis is a crucial step of computational biomodelling applied to the TAVI procedure for biomechanical evaluation. Since TAVI is a complex procedure divided into several phases, the simulation must follow strict steps to be reliable: stent crimping/deployment and valve mapping/closure.
In the first stage, the prosthetic model is crimped to the catheter diameter, which was usually 24 French (8 mm) in the transapical approach. Subsequently, the prosthesis is expanded inside the aortic root according to the two most widely used systems: the self-expandable and the balloon-expandable method [3,8,90,91]. A third system is represented by mechanical expansion [92,93]. The transapical approach has largely been replaced by the transfemoral one, which is currently the more commonly adopted route and benefits from small catheter sizes of 18-16 and 14 French [15][16][17] (Figure 6). All the numerical analyses are strongly non-linear, involving large deformations and contact. For this reason, many investigators used the Abaqus system (solver v6.10 or CAE) [40-42,46,50-52,56] to perform large-deformation analyses. Two points still need to be emphasized. First, quasi-static procedures were used, under the assumption that inertial forces do not change the solution. Second, kinetic energy monitoring is crucial: the kinetic energy is tracked to ensure that its ratio to the internal energy remains below 10%.
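A minimal sketch of this quasi-static check is shown below, assuming the kinetic and internal energy histories (e.g., the ALLKE and ALLIE outputs of an explicit solver) have already been exported as arrays; the energy histories used in the example are synthetic.

```python
import numpy as np

def quasi_static_ok(kinetic_energy, internal_energy, limit=0.10, eps=1e-12):
    """Check that kinetic energy stays below `limit` (10%) of internal energy.

    kinetic_energy, internal_energy : 1D arrays sampled at the same output times
    (e.g., the ALLKE and ALLIE history outputs of an explicit solver).
    """
    ke = np.asarray(kinetic_energy, dtype=float)
    ie = np.asarray(internal_energy, dtype=float)
    valid = ie > eps                          # skip the first frames, where both energies are ~0
    ratio = np.zeros_like(ke)
    ratio[valid] = ke[valid] / ie[valid]
    worst = float(ratio[valid].max()) if valid.any() else 0.0
    return bool(np.all(ratio[valid] < limit)), worst

# Illustrative histories (J): internal energy grows with loading, kinetic energy stays ~3% of it
t = np.linspace(0.0, 1.0, 50)
allie = 100.0 * t ** 2
allke = 0.03 * allie
ok, worst = quasi_static_ok(allke, allie)
print(f"quasi-static assumption satisfied: {ok}, worst KE/IE ratio = {worst:.3f}")
```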
For example, with regard to stent crimping and deployment for a 26 mm SAPIEN XT implanted with a transapical approach, a cylindrical surface is gradually crimped from an initial diameter of 28 mm to a final diameter of 8 mm [40]. The cylinder is meshed using 2250 four-node surface elements with reduced integration and is modeled as a rigid material with a density of 7000 kg/m³. In these cases, a frictionless contact must also be defined, generally between the crimping surface and the stent. After crimping, the deformed configuration of the stent is re-imported into Abaqus CAE, taking the stress state resulting from the crimping analysis as the initial state. Conversely, to reproduce the stent expansion, a pure and uniform radial displacement is gradually applied to the nodes of a rigid cylindrical surface; if a balloon-expandable device is used, this cylindrical surface is assumed to represent the wall of the expanding balloon. Finally, the rigid cylinder is expanded from an initial diameter of 6 mm to a final diameter of 26 mm. Another fundamental assumption of the simulation is that, during the expansion of the stent, the axis of the balloon remains fixed. This hypothesis can be considered valid because intraoperative angiographic control shows negligible axis rotation and translation [40-42,46,50-52,56].
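The uniform radial displacement boundary condition described above can be sketched as follows; the node coordinates, diameters, and load-step fraction are illustrative and do not reproduce the cited analyses.

```python
import numpy as np

def radial_displacement(nodes_xyz, d_start=28.0, d_end=8.0, fraction=1.0):
    """Displacements that shrink a cylindrical surface from d_start to d_end (mm).

    nodes_xyz : (N, 3) node coordinates of the rigid cylinder, axis along z.
    fraction  : load-step fraction in [0, 1]; the diameter is ramped linearly.
    Returns an (N, 3) array of purely radial displacements (z component is zero).
    """
    r_start = d_start / 2.0
    r_target = (d_start + fraction * (d_end - d_start)) / 2.0
    xy = nodes_xyz[:, :2]
    radii = np.linalg.norm(xy, axis=1)
    scale = (r_target / r_start) - 1.0                 # same relative shrink for every node
    disp = np.zeros_like(nodes_xyz)
    disp[:, :2] = xy * scale * (radii > 0)[:, None]    # guard against nodes on the axis
    return disp

# Example: ring of nodes on the initial 28 mm cylinder, crimped halfway (fraction = 0.5)
theta = np.linspace(0, 2 * np.pi, 16, endpoint=False)
nodes = np.column_stack([14.0 * np.cos(theta), 14.0 * np.sin(theta), np.zeros_like(theta)])
u = radial_displacement(nodes, fraction=0.5)
print(np.round(np.linalg.norm(u[:, :2], axis=1), 3))   # 5 mm inward displacement per node
```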
The second stage consists of valve mapping and closure, in which the prosthetic leaflets are mapped onto the deployed stent and the physiological pressure required to reproduce the diastolic behavior of implanted THVs is applied. The pivotal study of Auricchio et al. [71] offered a substantial contribution to reproducing the realistic features of the prosthetic device and thereby evaluating the post-operative performance of implanted THVs. The investigators assigned pre-computed displacements to the base of the valve and to the nodes of the leaflet commissures so as to obtain the complete configuration of the implanted prosthetic device [40-42,46,50-52,56].
By respecting these steps, it is possible to reproduce the post-operative diastolic behavior of both the balloon- and the self-expandable TAV within the patient-specific model of the aortic root. As reported by Wiggers et al., to simulate valve behavior at the end of the diastolic phase, a uniform physiologic pressure needs to be applied to the prosthetic leaflets of the THV. Furthermore, a frictionless self-contact must be defined for the prosthetic valve [94] (Figure 7).
Insight on the Use of Biomechanical Evaluation to Predict Paravalvular Aortic Regurgitation
We have learned that the choice of the size and type of the prosthetic device is very important to avoid, or at least reduce, aortic regurgitation and other TAVI complications [35,37,95]. Détaint et al. [35] and Delgado et al. [37] first independently reported that the occurrence of aortic regurgitation (AR) was related to incongruence between the prosthesis and the annulus. Since then, adequate annular sizing of the prosthesis has been considered essential to reduce paravalvular aortic regurgitation. Evidence from the pivotal RCTs of THV implantation shows that very few TAVI candidates underwent an anatomic and morphological study of the aortic valve annulus aimed at predicting aortic regurgitation after device implantation [2,3,8].
Détaint et al. studied 74 patients who underwent TAVI with a balloon-expandable device and comprehensive echocardiographic examinations. The strongest predictors of AR ≥ 2/4 were greater patient height, larger annulus, and smaller cover index (all p < 0.002), whereas ejection fraction, severity of stenosis, and prosthesis size were not predictive of AR-related events. Significantly, AR > 2/4 was never observed in patients with aortic annuli < 22 mm or with a cover index > 8%. The learning curve was also examined: the rate of AR > 2/4 was 40% in the first 20 cases and decreased to 15% in the last 54 cases (p = 0.02), and early versus late procedure was an independent predictor of AR occurrence (odds ratio: 2.24; 95% confidence interval: 1.07 to 5.22; p = 0.03) [37]. One study reported that, with three-dimensional transesophageal planimetry of the aortic annulus, the 'mismatch index' for the 3D planimetered annulus area was the only independent predictor of significant aortic regurgitation (odds ratio: 10.614; 95% CI: 1.044-17.21; p = 0.04). Three-dimensional transesophageal planimetry improved the assessment of prosthesis/annulus incongruence and predicted the appearance of significant AR after TAVI compared with the two-dimensional transesophageal approach [96].
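For readers unfamiliar with the cover index referred to above, it is commonly computed from the prosthesis and annulus diameters as a percentage of prosthesis oversizing; the sketch below uses this common definition with illustrative values (it is not taken verbatim from the cited study).

```python
def cover_index(prosthesis_diameter_mm, annulus_diameter_mm):
    """Cover index (%) as commonly defined for TAVI sizing:
    100 * (prosthesis diameter - annulus diameter) / prosthesis diameter."""
    return 100.0 * (prosthesis_diameter_mm - annulus_diameter_mm) / prosthesis_diameter_mm

# Illustrative example: a 26 mm prosthesis in a 23 mm annulus oversizes by ~11.5%
print(f"cover index = {cover_index(26.0, 23.0):.1f}%")
```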
MDCT is the imaging modality from which most of the evidence on the aortic root is derived; four studies compared the anatomy of the aortic root with the size of the TAVI device. Multi-detector row computed tomography has been demonstrated to be a very effective tool for the accurate sizing of the aortic valve annulus and constitutes a valuable imaging tool to evaluate prosthesis location [95] and deployment after TAVI. Compared with echocardiography, MDCT was also a better predictor of mismatch between the prosthesis area and the aortic annulus area [97], revealing on pre- and post-procedure examination paravalvular aortic regurgitation (PAVR) ≥ 2+ at a rate of 20% at 1-month follow-up [98]. In one of the largest TAVI series published to date, in which patients were checked pre- and post-operatively with MDCT, Katsanos et al. found that a ≥ 2 mm difference between the maximum aortic annulus and nominal prosthesis diameters and a frame depth of < 2 mm into the left ventricular outflow tract were independently associated with the occurrence of PAVR ≥ 2+.
Madukauwa-David et al. [99] performed retrospective anatomical measurements post-TAVI in 109 patients with aortic stenosis from the RESOLVE study using 4DCT scans. The investigators assessed the diameter of the aortic root at the level of the annulus, left ventricular outflow tract (LVOT), sinus of Valsalva, sinotubular junction (STJ), and ascending aorta; the heights of the STJ and coronary arteries were also determined. The major finding of the study was that all aortic root dimensions in the cohort underwent a statistically significant change between pre- and post-TAVR conditions (p < 0.01), and the post-TAVR dimensions also changed significantly from peak systole to end diastole (p < 0.01). Regression models related all aortic root measurements to the annular diameter, disclosing an excellent coefficient of determination (R² > 0.95, p < 0.001). The investigators concluded that there are significant differences between pre- and post-TAVR aortic root anatomy, both at the systolic peak and in the final diastolic part of the cardiac cycle.
These findings can help select optimal THV device sizes that are appropriate to anatomical dimensions, as geometry varies greatly during the cardiac cycle [99].
Concerns related to the occurrence of PVAR and its unfavorable evolution are due, at least in part, to the heterogeneity of the methods for assessing and quantifying PAVR. Moreover, the lack of consistency in the timing of such assessments is an obstacle to understanding its true prevalence, severity, and effect [35]. Choosing the correct prosthetic size is not the only way to avoid PVAR; the complex native morphology of the aortic root and the location and size of the calcifications are also crucial determinants to take into consideration. In addition, the occurrence of solid annular calcium deposits protruding more than 4 mm is a negative predictor of moderate to severe PVAR in patients undergoing TAVI. The morphology of the calcium conglomerates is involved in the genesis of PVAR in relation to the size of bulky annular calcification, which is another predictive factor, unlike adherent calcium, which has a "sealant" effect [100].
Currently, the clinical benefits of computational analysis to guide TAVI are not well established, although the approach has the potential to become a cornerstone of modern transcatheter heart valve therapy. The data that have emerged in favor of computational analyses account for the recipient of the transcatheter procedure, the specific structure of the native aortic valve, and an accurate evaluation of calcifications. These parameters can offer a substantial contribution and, in association with fluid dynamic assessments, can support and guide device selection.
Many investigators have confirmed the effectiveness of computational analyses by defining a reliable framework for reproducing the TAVI procedure and predicting any complications. As has been reported in several studies, the distribution of stress is characterized by concentrated spots of higher stress values that are recorded at the points of contact between the stent and the aortic wall [39][40][41][42][43][44][45][46][47][48][49][50][51][52]56]. We corroborated the evidence of Wang et al. [43], showing that the highest stress values were recorded in the aortic regions close to the calcifications both in self-expanded and balloon-expanded THV devices [50].
Similarly, Morganti et al. [40], in a computational analysis performed on a balloon-expandable device, found the highest stress levels in the region where the SAPIEN XT stent was most adherent to the aortic wall. It has therefore been suggested that higher stress values may be related to greater adhesion force between the aortic wall and the stent. Likewise, Eker et al. [101] first reported that high stress levels in the annular region carry an increased risk of aortic rupture, an early complication of TAVI that can lead to cardiac tamponade or other catastrophic events. Kodali et al. [102] reached similar conclusions by studying aortic rupture risk, coronary artery occlusion, and PVAR with the FEA method in both retrospective and prospective patients (n = 3). Of note, the computational simulation revealed that broad calcified aggregates located inside the left coronary sinus, between the coronary ostium and the aortic annulus, were displaced by the stent, leading to aortic rupture. The most important consideration emerging from this study is that the results expected from the simulations, once presented to the heart team clinicians, allowed a well-informed shared decision-making process. Therefore, engineering evaluation with FEA is recommended for rating patient-specific aortic rupture risk [102].
Robust evidence suggests that PVAR, rather than aortic rupture (of the aortic wall or annulus), is the complication of TAVI associated with further worsening of late outcomes. The benefits of applying the computational modeling of TAVI to high-risk patients, offering a quantitative evaluation of the area of paravalvular holes, become evident within the first five post-operative years, disclosing a survival advantage that tends to increase with time [9,10]. The location of incomplete adherence of the prosthetic stent to the aortic wall modifies the extent of the survival advantage of TAVI. Importantly, Morganti et al. suggested that the area of paravalvular holes was proportional to the volume of retrograde paravalvular blood flow and was in accordance with echocardiographic evidence [40,41].
Auricchio et al. examined measured eccentricity and stent configuration, revealing that the eccentricity of the deployed stent substantially affects valve closure and especially leaflet coaptation [103]. The evidence presented by Morganti et al. indicates that non-symmetric closure is attributable to an elliptical stent configuration, leading to the incongruity that one leaflet can close under the other two. Although only a small central gap may be generated, causing a regurgitant flow, the geometrical asymmetry of the stent is a crucial determinant of the central gap during diastole, and its effect depends on the choice of the leaflet material model. The latter has been shown to have a substantial impact on the coaptation values and can alter the early and long-term results [104,105].
Seven years after Auricchio et al., Bianchi et al. [42] evaluated post-procedural complications such as PVAR and related thromboembolic events, which have been hampering the spread of the TAVI procedure in lower-risk patients receiving the latest generation of devices. Finite element analysis and computational fluid dynamics analysis were performed in recipients of either the Edwards SAPIEN or the Medtronic CoreValve. The engineering-based simulation revealed that parametric analyses of positioning and balloon over-expansion directly affected post-deployment TAVI performance, achieving up to a 47% reduction in PVAR volume [42].
Dowling et al. [49] used patient-specific computer simulations for TAVI in patients with bicuspid aortic valve (BAV) morphology who were deemed suitable for the TAVI procedure, enrolling nine individuals in the study. The computational simulation proved useful in eight patients (89%), prompting a change in treatment approach with the self-expanding Evolut and Evolut PRO devices (Medtronic, Minneapolis, Minnesota). For three recipients, the simulations suggested the occurrence of moderate or greater PVAR after TAV implantation; these cases were re-discussed by the heart team and considered for SAVR. For the remaining six patients, the percutaneous treatment strategy was modified: five of them (83%) had the size and/or implantation depth of the self-expanding THV altered to minimize paravalvular regurgitation and/or conduction disturbance. In one patient, the computational analysis predicted a significant conduction disturbance after TAVI, and a permanent pacemaker was inserted before the intervention. No more than mild aortic valve regurgitation was observed in any of the nine individuals. Of note, the patient who required a pre-procedure permanent pacemaker implant, with definitive pacemaker-dependent pacing, developed a conduction disturbance classified as third-degree atrioventricular block. The investigators highlighted the remarkable value of FEA simulation applied to TAVI in BAV, which may predict important clinical outcomes such as PVAR and conduction disturbance [49].
Finally, modern platforms to treat structural heart valve disease should entail the use of computational biomodelling, at least in the presence of major clinical or anatomic contraindications, and substantial efforts should be made to integrate computational biomodelling with MDCT and 3D echocardiography during TAVI procedures, thereby avoiding concerns related to central mild intraprosthetic leaks [39,95-100]. At present, the scant evidence offering a comprehensive analysis of the effect of procedural parameters on patient-specific post-TAVR hemodynamics limits the correct assessment of the effect of TAV implant depth and balloon over-inflation on stent anchoring. Ultimately, the occurrence of post-deployment PVL and the risk of thrombus formation remain the true Achilles' heel. A deeper, direct analysis of the aforementioned objectives can help clarify the effect of the interventional cardiologist's choices on post-procedural complications and help reduce their impact on the basis of patient-specific data [40-43,50].
Evidence to Deploy Biomechanical Evaluation and to Definitively Accept the Use of Transcatheter Heart Valve Implantation as a New Paradigm Shift
Both cardiology and cardiovascular surgery have witnessed an era of constantly evolving change, a scenario driven mainly by the emergence of percutaneous coronary intervention and novel options for the treatment of coronary heart disease. The new endovascular platforms have evolved rapidly and established themselves as vital cogs in the armamentarium available to address structural heart disease [106]. In the past ten years, innovation was initially invested primarily in the management of aortic valve stenosis and subsequently in the pathological mitral valve, with the progressive affirmation of transcatheter valve therapy (TVT) [22,24,60]. From the first experimental study by Bonhoeffer, who pioneered the transcatheter pulmonary valve implant [53], the use of TVT to treat aortic valve stenosis progressed rapidly. In 2010, the first PARTNER (Placement of AoRTic TraNscathetER Valve) trial reported a series of high-risk patients who were treated using this novel technique as opposed to conventional aortic valve stenosis surgery [3]. In less than 10 years, PARTNER III affirmed the safety and efficacy of transcatheter aortic valve replacement in low-risk patients [16]. It is conceivable that future generations of transcatheter valves, with advances in device technology, will herald improvements in hemodynamic profile, longevity, and durability alongside reduced adverse events.
Thomas Kuhn, an American physicist and philosopher, introduced the term "paradigm shift" for the first time in The Structure of Scientific Revolutions in 1962 [107]. In this report, the author explained how a process can lead to a transition from the previously widely accepted worldview to a new model for demonstrating new emerging evidence. Cardiology and cardiovascular surgery have often faced paradigm shifts because these disciplines are constantly open to a transition that has, over time, progressively fostered the innovative spirit of those who practice them. We can note that historically, numerous paradigm shifts emerged: coronary bypass grafting, heart transplantation, percutaneous coronary intervention, mechanical and bioprosthetic valves, generations of life-saving drugs for heart failure, and mechanical circulatory support [108,109]. The current summit of these advancements is the emergence of devices used for the replacement of the aortic valve with TVT.
Calcific aortic valve stenosis (AVS) is a pathoanatomic process in which the aortic valve leaflets undergo structural changes sustained by an inflammatory and atherosclerotic process associated with calcium deposition. The morphological changes generated at the level of the cusps alter valve function, with a consequent reduction in the opening of the variably narrowed leaflets during systole. Aortic valve disease causes abnormal hemodynamics and increased mechanical stress on the left ventricle (LV) [110].
Prior to the advent of TAVI, surgical aortic valve replacement (SAVR) was considered the ideal treatment option for patients with severe valve obstruction. However, new platforms for the treatment of structural heart disease have shifted clinical attention towards the less invasive armamentarium represented by THV devices.
The PARTNER Ia study proved the superiority of the transcatheter balloon-expandable procedure in patients receiving TAVI over those managed with optimal medical therapy in terms of short- and medium-term mortality (43.3% in the TAVI group vs. 68.0% in the standard-therapy group at 2 years; p < 0.001) [5]. As for prohibitive/high-risk patients with severe AVS who were suitable for surgical treatment, TAVI showed the same mortality at 5 years as SAVR (67.8% in the TAVR cohort vs. 62.4% with SAVR). However, patients who received TAVI disclosed a rate of moderate to severe aortic regurgitation of 14%, compared to 1% in those receiving SAVR [9]. Not least, evidence from the use of the first-generation CoreValve Self-Expanding System revealed that the 1-year all-cause death rate was higher in patients after SAVR than in recipients of TAVI [8].
THVT has proven to be a revolutionary and decisive procedure in the last decade thanks to its demonstrated efficacy and safety. Evidence from THVT offered a clear answer as the only life-saving solution for high- and extreme-surgical-risk patients who cannot tolerate the open surgical option due to significant comorbidities [111]. Given the promising results associated with rapid technological advancement, the use of TAVI has been approved for the treatment of intermediate-risk patients. The results reported by the pioneering RCTs suggested increased rates of residual aortic valve regurgitation and more pacemaker implantations in the population intended for the TAVI procedure; nevertheless, THVT progressed toward the design of randomized trials involving the intermediate/low-surgical-risk population [9,10,13,15-17].
The SURTAVI trial enrolled 1660 patients who were eligible to receive either transcatheter aortic-valve bioprosthesis (n = 864) or SAVR with the standard procedure (n = 796). All patients were symptomatic of severe aortic stenosis at intermediate surgical risk. The primary objective was to demonstrate the non-inferiority, safety, and efficacy of the first and second generations of the CoreValve System [15].
In SURTAVI, 84% of patients were managed with the first-generation CoreValve System, while 16% of TAVI recipients had the second-generation Evolut R bioprosthesis. This cohort had a Society of Thoracic Surgeons Predicted Risk of Mortality (STS-PROM) score of 4.5 ± 1.6% [15].
At 2 years, the results revealed that the composite of death from any cause or disabling stroke was higher in the SAVR group than in the TAVI group (14% vs. 12.6%, respectively) [15]. New York Heart Association functional class improved significantly in both cohorts compared with pre-operative values and remained consistent throughout the 24-month follow-up. In addition, the Kansas City Cardiomyopathy Questionnaire (KCCQ) summary score revealed a substantial and stable improvement in both populations at 2 years of follow-up, although patients managed with the TAVI procedure had a greater percentage of improvement at 1 month than those who received a standard aortic valve replacement [15].
Evidence of the non-inferiority of TAVI versus SAVR in intermediate- and high-risk patients provided the rationale for the randomized PARTNER 3 trial [16] and the multi-national randomized Evolut Low Risk trial for patients presenting with severe AVS at low risk of death after a surgical procedure [17]. In the series of results reported from the two RCTs, the composite of death from any cause, stroke, or re-hospitalization at 1 year was lower in TAVI recipients after implantation of the device. The investigators also found shorter hospitalizations for individuals undergoing TAVI, while there were no significant differences between groups in terms of major vascular complications, new permanent pacemaker insertions, or moderate or severe paravalvular regurgitation [16,17].
Certainly, a decisive impetus for the large-scale success of the TVT procedure has been its refined technological progress, with introducers of reduced diameter and improved stents that have proved safer and more effective. However, it is important to consider that these results must be confirmed by longer-term follow-up.
Paravalvular Aortic Regurgitation
Although there has been substantial initial growth in the use of TAVI, confirmed by the success of its results, intra- and post-procedural clinical complications have called the paradigm shift into question, together with the potential expansion of TVT to low-risk patients.
The Achilles' heel of TAVI is the altered hemodynamics caused by PVAR, in which narrow gaps exposed to high systolic pressure gradients can alter platelet function by exposing platelets to high flow shear stress. This pathoanatomic condition triggers platelet activation, perturbing the aggregation/coagulation balance, with the formation of microemboli. The latter are then expelled at the next systole and can remain trapped and/or deposited in the region of the sinuses of Valsalva, which offer a suitable location for typical low-shear recirculation areas. Therefore, PVAR may be linked to the deposition of thrombi around the THV device as well as to the potential circulation of thromboembolic clots, which carries an increased risk of stroke. Several pieces of evidence have reported that thromboembolism is less common than hypo-attenuated leaflet thickening; however, it is still a fairly common and dangerous phenomenon that requires adequate clinical treatment [115]. Another point to consider is the close association between leaflet thrombosis and the development of structural degeneration of the valve incorporated in the device.
Several studies have suggested that the occurrence of PVAR in recipients of the TAVI procedure is directly correlated with higher late mortality, cardiac death, and repeated hospitalization, even when only traces of regurgitation are present [116]. Five-year results from the PARTNER Ib RCT disclosed a 14% rate of moderate or severe aortic regurgitation in patients who received TAVI compared with those managed with SAVR, and this was associated with an increased risk of mortality at 5 years for patients who developed moderate or severe aortic regurgitation after TAVI [9].
All the indicators suggest that the mortality rate was proportional to the severity of the regurgitation; in this regard, Généreux et al. [35] reported that even mild PVAR can double the mortality rate at 1 year. However, Webb et al. [2] pointed out that the progression of PVAR can be unpredictable: at 2 years, regurgitation had increased by ≥1 grade in 22.4% of patients, remained unchanged in 46.2%, and improved by ≥1 grade in 31.5%.
In this context, substantial differences emerged after implantation of a balloon-expandable THV device versus a self-expandable valve. Two independent studies revealed that recipients of the self-expanding Medtronic CoreValve experienced a higher PVL rate and greater worsening of severity than patients who received the balloon-expandable Edwards SAPIEN [50,117]. However, substantial improvements have been made in the newer devices, involving low-profile delivery systems and an external skirt, thereby improving the sealing of the THV device and promoting more precise valve positioning; a lower rate of PVAR at short-term follow-up has been reported [118].
Patients who exhibit PVAR post-TAVI require clinical and imaging modality evaluation. The quantification of regurgitation is generally determined with the use of echocardiography.
In detail, methods such as transesophageal echocardiography, cineangiography, and hemodynamic measurements are commonly used during the procedure, while transthoracic echocardiography offers substantial support for the evaluation and follow-up of PVAR after TAVI [119]. Above all, continuous wave Doppler echocardiography is the most commonly used method to evaluate the overall hemodynamic performance of the valve, but it has the disadvantage of not providing a spatial localization of the leaks; as a consequence, aortic regurgitation is quantified as the ratio of reverse flow to forward flow. As reported by Hatoum et al. [120], the most obvious limitation is that such measurement and determination are experimental. However, a semi-quantitative description of the jets by pulsed wave color Doppler can be used to obtain a precise localization and evaluation of the severity of PVAR jets.
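The reverse-to-forward flow ratio mentioned above can be sketched as follows, assuming a sampled trans-prosthetic flow waveform is available; the waveform used in the example is synthetic and purely illustrative.

```python
import numpy as np

def regurgitant_fraction(flow_ml_per_s, dt_s):
    """Ratio of reverse (regurgitant) to forward flow volume over one cardiac cycle.

    flow_ml_per_s : flow-rate waveform sampled at a uniform time step dt_s
                    (positive = forward flow, negative = reverse flow).
    """
    q = np.asarray(flow_ml_per_s, dtype=float)
    forward_volume = np.sum(np.clip(q, 0.0, None)) * dt_s   # forward stroke volume (ml)
    reverse_volume = -np.sum(np.clip(q, None, 0.0)) * dt_s  # regurgitant volume (ml)
    return reverse_volume / forward_volume

# Illustrative waveform: systolic ejection followed by a small diastolic leak
dt = 1.0 / 200.0
t = np.arange(0.0, 1.0, dt)
q = np.where(t < 0.35, 300.0 * np.sin(np.pi * t / 0.35), -15.0)
print(f"regurgitant fraction = {regurgitant_fraction(q, dt):.2f}")
```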
Concerns related to the quantification of PVAR persist after TAVI owing to a lack of standardization, which makes the diagnosis challenging. In fact, grading is often qualitative, and different classification schemes are adopted (trace, mild, moderate, and severe) [119,121]. Several interventional alternatives to reduce paravalvular regurgitation have been put in place, including post-implantation balloon dilation, repositioning, and entrapment maneuvers as well as the valve-in-valve (ViV) procedure [122]; none of these is free from an increased risk of vascular complications. A critical aspect of the procedure is the positioning of the THV device with respect to the patient's aortic annulus, which is directly associated with the degree of hemodynamic performance of TAVI as well as the rate of reintervention [123]. Early evidence from Nombela-Franco et al. [124] and Takagi et al. [125] showed that balloon over-inflation is often used to reduce the degree of PVAR; post-balloon dilation decreased regurgitation by at least one grade in the majority of patients [124,125]. However, how crucial the effect of post-dilation is on survival remains elusive, and an association with a higher incidence of cerebrovascular events was recorded [124]. The goal of a correctly performed transcatheter procedure necessarily involves minimizing the amount and incidence of PVAR in order to achieve improved long-term clinical outcomes.
The development of computational models was identified early on as an appropriate method for studying the interaction between TAVI stents and native aortic tissue and for predicting the performance of the post-procedural device from the point of view of structural dynamics [41,43,47,126,127]. Recently, several studies have quantified the degree of interaction between the device and the implantation site, as a surrogate measure of PVAR, by measuring the gap between the stent [40,48] or the skirt [128] and the native tissue, considering the specific anatomical characteristics of the patient's aortic root. Chang et al. reported ideal characteristics that offer better results in terms of PVAR occurrence [129]. We compared the two most commonly used devices, documenting a better performance of the third generation of the balloon-expandable device compared with the third generation of the self-expandable device in adapting to the dynamics of the aortic root and reducing the risk of PVAR [50].
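A minimal sketch of this stent-to-tissue gap measurement is shown below, assuming node coordinates of the deployed stent (or skirt) and of the aortic root wall are available as point arrays; the geometry, contact tolerance, and the nearest-neighbor approach are illustrative choices, not the exact method of the cited studies.

```python
import numpy as np
from scipy.spatial import cKDTree

def stent_tissue_gaps(stent_nodes, wall_nodes, contact_tol=0.25):
    """Nearest distance (mm) from each stent/skirt node to the aortic wall surface.

    Nodes farther than `contact_tol` from the wall are counted as non-apposed and
    can be used as a rough surrogate for potential paravalvular leak channels.
    """
    tree = cKDTree(wall_nodes)
    gaps, _ = tree.query(stent_nodes)
    non_apposed = gaps > contact_tol
    return gaps, float(non_apposed.mean())

# Illustrative point clouds: a circular 26 mm stent inside a slightly elliptical root
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
stent = np.column_stack([13.0 * np.cos(theta), 13.0 * np.sin(theta), np.zeros_like(theta)])
wall = np.column_stack([13.2 * np.cos(theta), 14.5 * np.sin(theta), np.zeros_like(theta)])
gaps, frac = stent_tissue_gaps(stent, wall)
print(f"max gap = {gaps.max():.2f} mm, non-apposed fraction = {frac:.2f}")
```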
Similarly, great interest has been aroused by a maximum-flow algorithm [46], which produces a one-dimensional connected graph representing the flow network based on the size of the gap between the stent and the aortic root. Although in the absence of PVAR the results showed a good correlation, reliability decreased and the models lacked precision for patients with PVAR recurrence. A significant report was described by De Jaegere et al. [44], who referred to a large series of computational models testing the predictability of 60 Medtronic CoreValve deployment cases, in which the results were validated against angiographic and echocardiographic measurements. The limitation of the work lay in the lack of an adequate description of the reconstruction of the patient's anatomy with respect to the modeling hypotheses. Finally, in a recent study, Mao et al. [130] evaluated the effect of CoreValve orientation and modeling assumptions, such as skirt shape and stent thickness, on post-deployment hemodynamics; however, the post-TAVI thrombus formation considered involved only clots generated on the valve leaflets following a ViV procedure. Vahidkhah et al. analyzed blood stasis by assessing and quantifying idealized ViV models with intra-annular and supra-annular TAVI positions [131].
Transcatheter Heart Valve Thrombosis
Evidence from several reports indicates that recipients of TAVI experience an unclear rate of bioprosthetic valve thrombosis (BPV-TH) and thromboembolic complications of the device. It is of note that both the results from the RCTs and the EU Partner Registry lack complete and satisfactory data. The PARTNER and CoreValve System randomized clinical trials did not report significant BPV-TH [9,10,25]. The EU Partner Registry [132] also revealed very sparse data on thromboembolic events in patients managed with THV devices, with a reported thromboembolic complication rate of only 1 case out of 130 patients undergoing TAVI. Latib et al. noted that, out of a large number of patients (n = 4266), only 27 cases of BPV-TH (0.61%) occurred, within a median of 181 days after the TAVI procedure [132].
Importantly, Stortecky et al. observed that the risk of BPV-TH was highest in the first 3 months after device implantation; the risk curves showed a marked reduction in events in the subsequent months, almost matching the curves of the general population [133]. A histopathological analysis of thrombotic complications of the CoreValve device suggested that clot formation was completed approximately 3 months after implantation of the THV device [134][135][136][137][138]. Makkar et al. [139] offered important data by systematically using 4D computed tomography to detect bioprosthetic valve thrombosis events. Fifty-five patients included in the PORTICO IDE study (Portico Re-sheathable Transcatheter Aortic Valve System US IDE Trial) revealed the occurrence of BPV-TH at a median of 32 days after valve implantation, with decreased leaflet movement in 40% of recipients. In total, 132 patients included in the SAVORY registry (subclinical aortic valve thrombosis assessed with 4D CT), who were eligible to receive either TAVI or SAVR, or in RESOLVE (assessment of transcatheter and surgical aortic bioprosthetic valve thrombosis and its treatment with anticoagulation) underwent 4D computed tomography within 3 months, and reduced leaflet motion was recorded in 13% of recipients: 14% of those treated with TAVI and 7% of those who underwent SAVR with a conventional bioprosthesis [139,140].
Pache et al. [141] corroborated the previous evidence [139,142] in 156 consecutive patients managed with TAVI using the SAPIEN 3 (Edwards Lifesciences, Irvine, CA, USA). At a median of 5 days after the procedure, the investigators observed by means of multi-detector computed tomography that 10.3% of TAVI recipients showed leaflet thickening with hypo-attenuation. Although the absence of symptoms was considered consistent with a normal clinical course, affected individuals had a higher mean transvalvular gradient, and anticoagulant therapy led to complete resolution of the leaflet thickening [141]. Likewise, leaflet thickening was noted less frequently in patients treated with dual antiplatelet therapy (DAPT) than in those managed with a single antiplatelet drug (37.5% vs. 50%, respectively) [141], and a correlation between an increased transvalvular gradient and uncontrolled neointimal proliferation with thickening of the device leaflets was noted [141,142].
Three recent studies have achieved significant relevance regarding BPV-TH and thromboembolic events [135,143,144]. Hansson et al. [135] monitored patients who underwent a TAVI procedure with balloon-expandable valves (Edwards SAPIEN XT or SAPIEN 3) by means of transthoracic or transesophageal echocardiography and multi-detector computed tomography to screen for the incidence and predictors of BPV-TH at 1-3 months. Thrombosis was observed with MDCT in 7% of patients; in addition, 18% of individuals experienced bioprosthetic valve thrombosis events with clinical complications. Cox multivariate regression analysis revealed that the two independent predictors of BPV-TH after TAVI were the lack of warfarin administration and a larger device size (29 mm) [135].
Nührenberg et al. [143] studied hypo-attenuated leaflet thickening (HALT) as a potential precursor of clot formation and thromboembolic events after TAVI. In all cohorts of patients, including those who underwent oral anticoagulation treatment, dual antiplatelet therapy with aspirin and clopidogrel was administered for at least 24 h before the procedure. In patients with a pre-existing indication for oral anticoagulation, aspirin was discontinued, while administration was continued after TAVI for the rest of the cohort. Overall, 18% of TAVI patients revealed hypo-attenuated leaflet thickening; however, lower complication rates were observed in patients receiving oral anticoagulation, suggesting that the administration of dual antiplatelet therapy (aspirin and clopidogrel) did not change the occurrence of early HALT [143]. The GALILEO 4D RCT [144] included 231 patients for antithrombotic strategy assessment, in which long-term anticoagulation was administered either as rivaroxaban (10 mg) plus aspirin (75 to 100 mg) once daily or as a dual antiplatelet strategy with clopidogrel (75 mg) plus aspirin (75 to 100 mg) once daily. Four-dimensional CT was used after randomization to examine all cohorts. Patients had been successfully treated with TAVI and had no indication for long-term anticoagulation therapy. The primary endpoint of the study was the percentage of patients with at least one prosthetic valve leaflet showing grade 3 or higher motion reduction, i.e., involving substantially more than 50% of the leaflet: 2.1% of patients receiving rivaroxaban revealed at least one prosthetic valve leaflet with grade 3 or higher motion reduction, compared with 10.9% in the dual antiplatelet protocol. Thickening of at least one leaflet was recorded in 12.4% of patients in the rivaroxaban cohort compared with 32.4% of those receiving dual antiplatelet therapy. Nevertheless, the risks of death or thromboembolic events and of life-threatening, disabling, or major bleeding were remarkably higher in patients who received rivaroxaban [144].
One of the concerns affecting clot formation after the TAVI procedure relates both to the extent of bulky native valve calcification and its position with respect to the aortic valve annulus and the aortic root, and to stent deformation and the size of the patient's annulus. In these specific morphological settings, physiological blood flow dynamics play a crucial role that has not been fully investigated [39].
Khalique et al. [145] noted that the amount and asymmetry of calcified deposits depend on the extent of aortic valve calcification, and that all regions of the aortic valve complex are involved in predicting PVAR of mild or greater degree and the post-deployment performance of the device, thereby potentially evolving towards bioprosthetic valve thrombosis of the THV. Pre-existing leaflet asymmetry was excluded in order to confirm the diagnosis of PAVR. The quantity of bulky calcification at the junction between the annulus and the LVOT, as well as the occurrence of leaflet calcification, independently predicted PVAR and the post-deployment result of TAVI when the multidetector row computed tomography area cover index was taken into account [145].
For this reason, the use of computational biomodelling can help predict both the extent of PVAR and the risk of clot formation [39][40][41][42][50][51][52]. Likewise, bulky calcification penetrating the aortic annulus may have a different texture, raising questions about the ideal choice of device to implant [40,41,50,145]. Thus, self- and balloon-expandable prostheses can lead to different geometric alterations of the aortic annulus after deployment, with a greater or lesser risk of disturbing the blood flow dynamics that generate clot formation [5,40-42].
In this regard, we showed that both balloon- and self-expandable devices were poorly effective in the presence of bulky native AV calcifications, and the different degrees of device deformation were studied. Two independent reports based on computational biomodelling suggested that both the SAPIEN XT and the SAPIEN 3 disclosed high values of maximal principal stress in the aortic regions close to bulky calcification, resulting in a deformation of the stent that assumed an elliptical shape [40,52]. Accentuated geometric modification with an incorrect post-deployment configuration can lead to paravalvular leakage, leaflet mal-coaptation, and hypo-attenuated leaflet thickening. The extreme form of elliptical deformation is likely to favor subclinical thrombosis owing to the presence of residual calcifications that favor leaflet hypomobility [40,52]. The SAPIEN device is shown in Figure 8.
Again, the CoreValve is based on a self-expansion mechanism that may succumb to mechanical distortion. In self-expanding TAVI, positioning plays a pivotal role in determining valve anchorage. Non-uniform expansion related to extensive calcifications can lead to prosthetic device deformation with increased eccentricity (>10%), resulting in incomplete expansion of the nitinol frame at almost all levels and potentially causing clot formation [41,42,50].
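One common way to quantify the eccentricity mentioned above is the index 1 − Dmin/Dmax of a deployed cross-section; the sketch below uses this definition with illustrative node data, and it is not necessarily the exact metric of the cited studies.

```python
import numpy as np

def eccentricity_index(section_xy):
    """Eccentricity index of a deployed stent cross-section: 1 - Dmin/Dmax.

    section_xy : (N, 2) coordinates of stent nodes in one cross-sectional plane.
    A value of 0 corresponds to a perfectly circular expansion.
    """
    centered = section_xy - section_xy.mean(axis=0)
    cov = np.cov(centered.T)                     # principal axes give max/min diameters
    _, vecs = np.linalg.eigh(cov)
    proj = centered @ vecs
    diameters = proj.max(axis=0) - proj.min(axis=0)
    return 1.0 - diameters.min() / diameters.max()

# Elliptically deformed 26 mm device: 26 mm by 22 mm cross-section
theta = np.linspace(0, 2 * np.pi, 180, endpoint=False)
section = np.column_stack([13.0 * np.cos(theta), 11.0 * np.sin(theta)])
print(f"eccentricity index = {eccentricity_index(section):.2f}")   # ~0.15, i.e., > 10%
```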
No evidence has demonstrated a statistically significant correlation between the occurrence of moderate PVAR and abnormal flow patterns on the implanted TAV leaflets and in the left main coronary artery that could favor thrombosis of the THV device and accelerated progression of the atherosclerotic process [146]. However, several observations suggest that clot formation is more directly related to PVAR, with the clinical occurrence of thrombotic embolism [52,135,[139][140][141][142][143][144].
An explanation may lie in the localized flow at the PVAR level, with the development of high pressure gradients across small, tight channels. Under these conditions the platelets are subjected to high flow shear stress [41,42,52], a phenomenon that, as we have reported, has attracted ever-increasing clinical interest [41,52]. Bianchi et al. [42] evaluated the relationship between PVAR and platelet activation with a computational model to study the thrombogenic potential of three procedural configurations of TAVI, two involving the SAPIEN 3 and one the CoreValve Evolut. The investigators calculated the stress accumulation of platelets along particle trajectories in the PVAR region, and the probability density functions of the three simulations showed comparable patterns. For example, in one SAPIEN 3 with a 26 mm valve, in which an over-inflated configuration was simulated, the greatest platelet stress accumulation was evident. This can be related to the higher velocities recorded in the PVAR jets, which lead to higher flow shear stress; in addition, the HS values were observed to agree with the largest overall regurgitation volumes. The information obtained from the probability density functions showed that variation in the PVAR channel size affects the activation potential of platelets. For example, in the CoreValve Evolut 29, a reduction in PVAR grade led to a slightly higher thrombogenic potential, as platelets were subjected to more shear stress related to their flow through smaller paravalvular spaces [42]. Finally, fluid dynamics analyses have also shown that, when the regurgitation volume is considerably higher, the cause-effect relationship between PVAR reduction and susceptibility to platelet activation is governed by a more complicated interaction [41,42,52].
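A minimal sketch of a linear stress-accumulation metric along particle trajectories, of the kind described above, is given below; the shear-stress histories are synthetic and the simple time-integral form is one common choice, not necessarily the exact formulation of the cited study.

```python
import numpy as np

def stress_accumulation(shear_stress_pa, dt_s):
    """Linear stress accumulation SA = sum(tau_i * dt_i) along one platelet trajectory."""
    return float(np.sum(np.asarray(shear_stress_pa) * np.asarray(dt_s)))

# Synthetic ensemble of trajectories standing in for seeded platelets in a PVAR jet
rng = np.random.default_rng(3)
n_traj, n_steps, dt = 500, 200, 1e-3                     # 0.2 s of residence per trajectory
tau = rng.lognormal(mean=1.0, sigma=0.6, size=(n_traj, n_steps))   # shear-stress histories (Pa)
sa = np.array([stress_accumulation(tau[i], np.full(n_steps, dt)) for i in range(n_traj)])

# Empirical probability density function of the accumulated stress (Pa*s)
pdf, edges = np.histogram(sa, bins=30, density=True)
print(f"median SA = {np.median(sa):.3f} Pa*s, 95th percentile = {np.percentile(sa, 95):.3f} Pa*s")
```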
Structural Valve Degeneration
The term structural valve degeneration (SVD) denotes an acquired anomaly of the bioprosthetic valve due to substantial deterioration of the leaflets and of the structural support that makes up the device. The correlated patho-anatomic consequence is the thickening, calcification, laceration, or rupture of the materials that constitute the valve prosthesis. This pathological picture implies the development of associated valvular hemodynamic dysfunction, such as stenosis or regurgitation. To date, a thorough understanding of the precise mechanisms underlying SVD has not been achieved. However, the mechanisms that support SVD are multiple, both mechanical and related to fluid dynamics, and are responsible for tissue rupture or thickening over time [27][28][29][30][31][32][33].
Several factors cause SVD. First of all, a crucial role is played by the mechanical stress levels associated with both flow anomalies and shear stresses on the surfaces of the valve leaflets. These two factors are potentially responsible for the progression of SVD, leading to breakdown of the collagen fiber framework and calcification of the tissues [159,171]. Second, other clinical conditions, in which the pathological features of intrinsic structural deterioration of the valve tissue are not detectable, cannot be classified as SVD, although they deserve consideration. Valve dysfunction may be related to mismatch between prosthesis size and patient size, device malposition, paravalvular regurgitation, and abnormal frame expansion. These abnormal situations attributable to the implanted bioprosthesis can lead to early SVD or be considered a cause of its development. Dysfunction of the implanted prosthesis due to mismatch is difficult to distinguish from structural degeneration of the valve; it is not considered SVD because it exhibits normal leaflet morphology but a relatively small valve area with a high gradient [27][28][29][30][31][32][33].
A crucial point distinguishing prosthesis-patient mismatch from SVD is the time at which the anomaly becomes established. Mismatch reveals hemodynamic anomalies of the valve at the moment of prosthesis implantation, with hemodynamic deterioration manifesting as increased gradients and a decreased valve area, and these findings are present from the outset and are confirmed on repeated echocardiographic checks. In patients who develop SVD, the associated stenosis develops progressively and appears as a gradually evolving lesion during follow-up. Although prosthetic valve thrombosis and infective endocarditis are not included in the definition of SVD, SVD may still be noted after successful treatment of these conditions. Intense debate currently surrounds SVD because of its potential to involve, and therefore influence, the TAVI procedure. Indeed, since a less invasive transcatheter approach became available for patients with comorbidities and at high risk with conventional surgical strategies, fewer cases of SVD have been detected, possibly because deceased patients were not included in the long-term follow-up. Cardiologists believe that SVD is not a reliable criterion for establishing true biological valve durability, suggesting that actuarial freedom from re-intervention does not accurately reflect freedom from SVD [147,148] (Figure 9).
Only the NOTION RCT [31], with 6 years of follow-up, disclosed SVD rates that were significantly greater after SAVR than after TAVI (24.0% vs. 4.8%; p < 0.001). On post-procedural echocardiographic controls, the investigators reported a mean gradient of >20 mm Hg in 22% of patients managed surgically compared with 2.9% of those managed with TAVI (p < 0.0001). This evidence was also corroborated when a modified definition of SVD, based on a mean gradient increase of >10 mm Hg from the 3-month post-procedure assessment, was applied (SAVR 12.4% vs. TAVR 1.4%; p < 0.001) [31].
Figure 9, panel A depicts an echocardiographic view of SVD in a stented/stentless xenograft.
On the other hand, patients checked at the 5-year follow-up of the PARTNER trial disclosed no structural valve deterioration, with preservation of low gradients and increased valve areas [9,10]. The results of the two randomized studies are encouraging, but longer follow-up is necessary to confirm the safety and effectiveness of the transcatheter procedure [9,10].
The bioprosthesis designed as part of the Sapien THV balloon-expandable device consists of bovine pericardium, as opposed to the calf pericardium that characterizes the surgically implanted Edwards bioprosthesis; the tissue treatment process, however, is identical [171]. The leaflets of the TAV fitted to the 22 Fr and 24 Fr delivery systems are thinner than those of surgical bioprostheses. Rapid technological advances led to delivery systems reduced first to 18 Fr and then further for the second-generation Sapien XT and the third-generation Sapien 3 (Edwards Lifesciences, Inc.), changes that accompanied the move to a cobalt-chromium stent and thinner leaflets to obtain a lower crimped TAV profile. The elements used to define SVD as valve-related dysfunction were a mean aortic gradient ≥20 mm Hg, an effective orifice area ≤0.9-1.1 cm², a dimensionless valve index <0.35, and moderate or severe prosthetic regurgitation. Phase 0 denotes the absence of morphological leaflet abnormality and of hemodynamic alteration. Phase 1 discloses early morphological changes without hemodynamic compromise; the morphological alterations typical of this stage are also seen in prostheses in which the degenerative process is controlled with antithrombotic drugs that reduce leaflet thickening. Phase 2 reveals morphological abnormalities of the valve leaflets associated with hemodynamic dysfunction; the bioprosthesis in this phase can manifest stenosis or regurgitation, and thrombosis is a factor favoring progression to stenosis or to paravalvular leakage and regurgitation. Phase 2 includes two subcategories, phase 2S and phase 2R. In the evolutive 2S form, an increase in the mean transvalvular gradient (≥10 mm Hg) and a decrease in the valve area occur without leaflet thickening. SVD may also occur in a mixed 2RS form, combining moderate stenosis and moderate regurgitation. Phase 3 of SVD denotes severe stenosis or severe regurgitation with severe hemodynamic change. Abbreviations: R, regurgitation; SVD, structural valve degeneration; S, stenosis; VARC, Valve Academic Research Consortium.
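The staging logic described above can be summarized in a small decision routine. The Python sketch below is illustrative only: the ≥20 mm Hg and ≥10 mm Hg thresholds are quoted from the text, while the function name, argument names, and the 40 mm Hg cutoff used to flag severe stenosis are assumptions introduced for the example.

def svd_stage(leaflet_abnormality, mean_gradient_mmHg, gradient_rise_mmHg, regurgitation):
    """Illustrative VARC-style staging of structural valve degeneration.

    regurgitation is one of "none", "mild", "moderate", "severe".
    Only the >=20 mm Hg gradient and >=10 mm Hg gradient-rise thresholds
    come from the text; the 40 mm Hg severe-stenosis cutoff and the overall
    simplification are assumptions for illustration.
    """
    severe = mean_gradient_mmHg >= 40 or regurgitation == "severe"
    stenotic = mean_gradient_mmHg >= 20 or gradient_rise_mmHg >= 10
    regurgitant = regurgitation == "moderate"
    if severe:
        return "Phase 3: severe stenosis or regurgitation"
    if not stenotic and not regurgitant:
        return ("Phase 1: early morphological change only"
                if leaflet_abnormality else "Phase 0: no abnormality")
    if stenotic and regurgitant:
        return "Phase 2RS: moderate stenosis and regurgitation"
    return "Phase 2S: moderate stenosis" if stenotic else "Phase 2R: moderate regurgitation"

print(svd_stage(True, 25, 12, "mild"))   # -> Phase 2S: moderate stenosis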
The study by Xuan et al. [56] revealed that the maximum and minimum principal stresses in the 26 mm Sapien valve are located proximally in the annulus, where the stent is deployed and constrained. The investigators also showed that high principal stresses occur where the TAV leaflets attach to the stent, in close proximity to the commissures. It is reasonable to suggest that these regions of locally elevated peak stress are the areas most prone to initiate degeneration. To date, we are not aware of studies directly comparing the durability of TAVI and surgical bioprostheses. Evidence from studies on the degeneration of surgical bioprostheses suggests that degeneration associated with calcification or tearing of the leaflets correlates with areas of high tensile and compressive stress [56].
Sun et al. [172] performed the first computational biomodelling using FEA on two bovine pericardial valves from Edwards Lifesciences Inc. The simulation was run under quasi-static loading conditions below 120 mm Hg, with leaflet material properties derived from those valves and the exact valve geometry respected. The investigators recorded a maximum in-plane stress ranging from 544.7 kilopascals (kPa) to 663.2 kPa, depending on which leaflet material properties were used. Of note, the stress distribution varied by location: stresses on the leaflets were greatest near the commissures and lowest near the free edge of the leaflet. In a subsequent study, the authors reported an FEA simulation of a 25 mm surgical bioprosthesis, the size closest to the commonly implanted Sapien balloon-expandable device. Again, Xuan et al. [56] reported maximum principal stresses for a 26 mm Sapien valve that were significantly higher than those recorded for a surgical bioprosthesis, attributing the difference to leaflet design or to a different interaction with the respective frame of each device [56]. Alavi et al. revealed that the crimping process physically damages TAV leaflets and may compromise their integrity, leading to increased leaflet stress [173].
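As a rough plausibility check on the stress magnitudes quoted above, leaflet stress under quasi-static pressure loading can be approximated with a thin-walled (Laplace) membrane estimate, sigma ≈ P·r/(2t). The short Python sketch below is not a substitute for FEA: the radius and thickness values are assumptions for illustration, and the estimate ignores bending and the commissural stress concentrations that FEA captures, which is one reason the FEA peaks reported by Sun et al. and Xuan et al. are higher.

MMHG_TO_PA = 133.322

pressure_pa = 120 * MMHG_TO_PA   # quasi-static load level quoted in the text
radius_m = 0.012                 # ~12 mm effective leaflet radius (assumed)
thickness_m = 0.4e-3             # ~0.4 mm pericardial leaflet thickness (assumed)

# Thin-walled sphere approximation: sigma = P * r / (2 * t)
sigma_kpa = pressure_pa * radius_m / (2 * thickness_m) / 1e3
print(f"membrane stress estimate = {sigma_kpa:.0f} kPa")   # roughly 240 kPa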
Conclusions
TAVI and SAVR are both options that should be seen as part of the treatment armamentarium offered to patients. Future research should focus on detecting and addressing bulky calcium deposits, which may increase the risk of paravalvular leaks, early valve degeneration, and permanent pacemaker insertion. The use of adjuncts such as FEA and MDCT can help steer the decision-making process of heart teams while considering the patients' wishes. Although outcomes are currently comparable, the long-term effects of TAVI remain uncertain, but advancements are being made at a rapid rate to ensure it remains a pivotal option for treating aortic valve stenosis. Further longitudinal studies are also needed to assess the long-term outcomes of TAVI valves vs. SAVR.
Limitations
There are several limitations to this review in that it is by no means a systematic review or meta-analysis. The heterogeneity of the studies, coupled with the rapid advancement of valve technology, makes direct comparisons unreliable. To ensure the material presented was up to date, only recently published papers were used, with the addition of well-cited older articles. The use of finite element analysis is also limited in the clinical setting, with few centers offering it. Studies assessing the impact of TAVI on the other valves during implantation are also scarce. Given the recent emergence of TAVI, direct comparisons to SAVR may be limited by intangibles such as increasingly diligent follow-up compared to routine standard of care. | 17,738 | 2022-07-01T00:00:00.000 | [
"Medicine",
"Engineering"
] |
Does globalization matter in the relationship between renewable energy consumption and economic growth? Evidence from Asian emerging economies
The study aims to investigate the impact of social, economic and political globalization on the renewable energy-economic growth nexus in a panel of six Asian emerging economies over the period 1975–2020. The results of the CS-ARDL approach show that renewable energy consumption contributes significantly to long run economic growth. Economic and political globalization firmly hold back economic growth, while social globalization directly promotes economic growth. The nonlinear effects of political, social, and economic globalization on economic growth clearly demonstrate the validity of the inverted U-shaped relationship between political globalization, economic globalization, and economic growth, and the U-shaped relationship between social globalization and economic growth. The study also found that economic, social and political globalization moderated the impact of renewable energy on boosting economic growth. Based on the renewable energy consumption model, it is revealed that economic growth significantly promotes long run renewable energy consumption. Economic, social, and political globalization have significantly boosted long run renewable energy consumption. However, the nonlinear effect model reflects a U-shaped relationship between globalization indicators and renewable energy consumption. The interaction of political, economic, and social globalization with economic growth has also witnessed an increase in renewable energy consumption, which supports the scale effect hypothesis. The causality test concludes that there is a two-way causal relationship between renewable energy consumption and economic growth, thus supporting the feedback hypothesis. The policy implications for Asian emerging economies are discussed based on the empirical analysis of this study.
Introduction
In recent years, policy circles and academia have paid great attention to issues related to economic growth and renewable energy consumption. Numerous theoretical and empirical studies have scrutinized the interplay between economic growth and various aspects of energy consumption, including the mechanisms by which renewable energy consumption can sustain economic growth in the long run [1][2][3]. The ongoing debate on the relationship between economic growth and renewable energy consumption has produced conflicting signals and remains inconclusive [4,5]. For example, a few studies show a weak relationship between economic growth and renewable energy consumption. Another stream of empirical research has revealed possible causal relationships between economic growth and renewable energy consumption, leading to the development of four hypotheses: the neutral [4,[6][7][8][9], feedback [10-12], growth [13,14], and conservation [15][16][17] hypotheses. Early empirical research focused on bivariate models to detect causal relationships between renewable energy consumption and economic growth. Recent empirical studies employ multivariate and advanced econometric methods to explore the direction of this causal relationship. Thus, other variables such as infrastructure development, financial development, institutional quality, capital, energy prices, urbanization, industrialization, and carbon emissions have been added to empirical models of the relationship between renewable energy consumption and economic growth to prevent omitted-variable bias [18][19][20][21][22][23][24][25][26][27][28]. However, there is limited literature on the important role of social, political and economic globalization in the link between renewable energy consumption and economic growth. This raises concerns about the impact of structural and energy efficiency policies in both advanced and developing economies, as the associated policy consequences may be temporary. Globalization is said to facilitate technology transfer, thereby affecting renewable and non-renewable energy use and economic growth [29][30][31]. Moreover, globalization may stimulate the demand for factors of production to facilitate the production of goods, thereby promoting the use of renewable and non-renewable energy sources and economic growth [32][33][34]. It has also been suggested that globalization may promote specialization in production, thereby driving economies of scale and higher economic output [35,36]. To recognize the future impact of globalization on economic growth and renewable energy consumption, it is crucial to examine the link between renewable energy consumption and economic growth through the social, political and economic dimensions of globalization. Studies also use foreign direct investment (FDI) and trade as proxies of globalization to explore their effects on economic growth or renewable energy consumption [37][38][39]. However, globalization also has political and social dimensions and is not limited to trade and foreign direct investment, yet few empirical studies have revealed the impact of social, political and economic globalization on the link between renewable energy consumption and economic growth, especially in Asian emerging economies.
Thus, this study employs panel data techniques to investigate the impact of social, economic, and political globalization on the link between renewable energy consumption and economic growth in China, India, Bangladesh, South Korea, Singapore, and Taiwan over the period 1975 to 2020. These countries were selected because they have high rates of energy utilization and fast economic growth, and they bear the brunt of the globalization process [40,41]. Thus, considering the role of globalization in renewable energy consumption and economic growth in emerging Asian economies can make a significant contribution to the current debate on the impact of globalization on the link between economic growth and renewable energy consumption.
This study contributes to the existing literature on globalization, renewable energy consumption, and economic growth in selected Asian emerging economies by addressing three key points. First, unlike the traditional methods used in previous panel studies, this study adopts the CS-ARDL method, which is robust to cross-sectional dependence (CD), heterogeneity and endogeneity, to explore the impact of globalization indicators (social, economic and political) on renewable energy consumption and economic growth. Second, in contrast to previous empirical studies on the subject, this study examines the moderating effect of globalization indicators on the growth impact of renewable energy consumption, and their moderating role in the effect of economic growth on renewable energy consumption. Finally, this study explores the non-linear effects of social, political, and economic globalization on renewable energy consumption and economic growth as an extension of the empirical literature.
Literature review
The literature review in this section focuses on the role of globalization in the link between renewable energy consumption and economic growth. The literature is organized into three core sections: the link between renewable energy consumption and economic growth; the association between globalization and economic growth; and the link between globalization and renewable energy consumption. As noted in each subcategory, findings on these topics are mixed.
The link between renewable energy consumption and economic growth
A large literature examines the link between renewable energy consumption and economic growth, reporting inconclusive empirical and theoretical evidence. Some research shows only a weak relationship between renewable energy consumption and economic growth [42-44]. Other researchers have identified possible causal relationships between renewable energy consumption and economic growth that fall into four central hypotheses [45,46]. First, the growth hypothesis proposes that the expansion of renewable energy consumption will lead to the extension of economic growth [47][48][49][50]. Second, the conservation hypothesis suggests that higher economic growth can stimulate renewable energy consumption [16,[51][52][53]. Third, studies report evidence for the feedback hypothesis based on two-way causality between renewable energy consumption and economic growth [13,28,[54][55][56]. Finally, the neutral hypothesis suggests an independent link between renewable energy consumption and economic growth [4,[6][7][8][9]. Results related to these hypotheses vary according to the methodology used, the energy dimension, the region and the country income grouping. [57] illustrated that renewable energy consumption made a significant contribution to economic growth and supported the feedback hypothesis based on a two-way causal relationship between renewable energy consumption and economic growth in the Turkish economy. [58] reveal that a shock to renewable energy consumption leads to a decline in real GDP per capita, while real GDP per capita increases with a shock to non-renewable energy consumption. [59] investigate the causal relationship between renewable energy consumption and economic growth by conducting a Granger causality test for 12 EU countries over the period 1990-2014. The findings support the feedback hypothesis by suggesting a long-term two-way causal relationship between renewable energy consumption and economic growth; in the short term, however, the results support the conservation hypothesis based on a one-way causal relationship from economic growth to renewable energy consumption. [60] establishes a dynamic causal relationship between renewable energy prices and economic growth using a Markov switching vector autoregressive (MS-VAR) model in the cases of Canada, New Zealand, and Norway. The results highlight a one-way link from economic growth to renewable energy consumption in Canada and New Zealand, thereby supporting the conservation hypothesis. [61] employ a bootstrap panel causality test to explore the causal relationship between renewable energy consumption and economic growth over the period 1990-2015 in 15 emerging countries. The analysis provides evidence for the neutral hypothesis for all selected countries.
[25] asserts the conservative hypothesis by revealing that there is a one-way causal link from non-renewable energy consumption to economic growth, while the neutral hypothesis is supported based on the absence of a causal relationship between renewable energy consumption and economic growth in PIMC countries.
Using a Fourier causality test, [52] concluded that the energy-led growth hypothesis is valid for both energy sources in the United States, India, the United Kingdom, and Spain, while the non-renewable-energy-led growth hypothesis is valid for Italy. The conservation hypothesis applies to energy in Germany and to renewable energy in China.
Evidence for the direction of causality between renewable energy consumption and economic growth is also mixed for individual countries. [62] use a VECM model to reveal short- and long-term causal relationships between renewable energy consumption and economic growth in Saudi Arabia over the period 1990-2020. The results support the feedback hypothesis, as there is a short- and long-term bidirectional causal relationship between renewable energy consumption and economic growth. [63] examines the direct and indirect effects of renewable energy on economic growth in Ghana over the period 1990-2015 using Granger causality tests and a mediation model. The results identify the feedback hypothesis between renewable energy consumption and economic growth. [64] adopted the causality test of Toda and Yamamoto (1995) and showed no significant causal relationship between renewable energy consumption and economic growth in Morocco, thus supporting the neutral hypothesis. [65] adopted a Granger causality test for the period 1990-2019, revealing a two-way causal relationship between renewable energy consumption and economic growth and supporting the feedback hypothesis in Argentina.
The relationship between globalization and economic growth
Globalization includes economic, political and social dimensions and is not limited to trade openness. How globalization affects a country's development remains a controversial topic because its impact varies according to the dimensions involved and the country's relative resources [66]. The role of globalization in development has generated conflicting perspectives in theoretical and empirical growth research. Globalization is said to drive economic growth [67,68] through technological diffusion, effective resource allocation, capital augmentation and improved factor productivity [69][70][71]. Globalization leads to the transfer of advanced technology from developed to developing countries, thereby promoting a division of labor that allows countries to benefit more from their comparative advantage in producing different specialized activities [72,73]. [74] pointed out that overall globalization has a positive impact on economic growth, while the disaggregated results show that the social and political dimensions promote economic growth, whereas the economic dimension harms economic growth in low-income countries. [75] use the pooled autoregressive distributed lag (ARDL) method to conclude that globalization has a positive effect on economic growth, an effect that may not be sustained under rising interest rates and inflationary pressures; however, economic globalization can be used as a tool to stimulate investment, curb corruption and subsequently sustain economic growth in South Asian economies.
Conversely, globalization can undermine growth in countries with weak institutions and political instability [30,76]. Some researchers argue that the impact of trade on economic growth is limited by a country's structural progress and find no strong positive effects [77,78]. Other research has shown that failing to account for important growth indicators reduces the measured positive impact of globalization on growth, linking such evidence to globalization indices [79,80]. According to the Stolper-Samuelson theorem within the Heckscher-Ohlin framework, countries with relatively scarce resources will lose from freer trade, while countries with relatively abundant resources will benefit [81]. Thus, concerns arise about the distributional effects of globalization on the economy, even though the classical and neoclassical literature affirms the benefits of globalization. [82] contended that when globalization affects labor markets with gender consequences, the distributional effects of traded inputs, non-traded goods and outsourcing may affect social justice. For instance, when low-income countries have a comparative advantage in producing low-skilled labour-intensive goods, low-skilled women enter the labor force.
A review of the historical evidence on the link between globalization and economic growth concludes that trade liberalization is good for economic growth [83]; however, successful capital liberalization requires high-quality institutions [84,85]. [86] demonstrated that globalization significantly influences institutions, and such institutional reforms in turn facilitate economic growth in East Asian countries. Using the KOF globalization index and institutional governance indicators, [87] adopts a two-step system GMM approach on a sample of 45 Asian economies and reveals their impact on GDP growth during 2003-2017. The results show that globalization makes a significant contribution to economic growth through sound regulatory controls and political stability.
The relationship between globalization and renewable energy consumption
[88] investigate the role of economic globalization on renewable energy consumption in panel data covering 30 OECD countries from 1970 to 2015. The results show that higher levels of economic globalization promote the development of renewable energy, and the evidence is robust across different measures of economic globalization. However, [89] uses panel quantile regression for OECD countries to determine that economic globalization reduces renewable energy consumption, while overall globalization (economic, social, and political) increases renewable energy consumption. Similarly, [90] conclude that the short- and long-term overall globalization process and its long-term economic and political globalization dimensions have a significant positive impact on Turkey's renewable energy consumption, whereas social globalization does not have any significant impact on Turkey's short- or long-term renewable energy consumption. [91] uses a nonlinear PSTR model to reveal the link between globalization, renewable energy consumption, and carbon emissions for 33 OECD countries during the period 2000-2018. The results show that as the level of globalization increases, the carbon-emission-reducing effect of renewable energy is stimulated; that is, globalization changes the relationship between renewable energy consumption and carbon emissions in OECD countries.
In conclusion, views vary on the link between renewable energy consumption and economic growth, the association between globalization and economic growth, and the relationship between globalization and renewable energy consumption. However, no studies have considered globalization as a channel influencing the link between renewable energy consumption and economic growth. For the most part, these relationships are explored independently in each model; thus, further research on the role of globalization in creating a win-win situation for specific emerging Asian economies is urgently needed.
Model specification, variable measurements, data sources and methods
This study reveals the impact of disaggregated globalization (social, political, and economic globalization) on renewable energy consumption and economic growth in emerging Asian economies. To detect this relationship, two main empirical models were developed.
Economic growth model
Following [92][93][94][95], the Cobb-Douglas production function is extended to explore the impact of renewable energy consumption and globalization on economic growth. [92] enhanced the Cobb-Douglas production function by including renewable energy consumption and non-renewable energy consumption, revealing the impact of renewable energy use, non-renewable energy use, capital and labor on economic growth. [93] included financial globalization, along with labor and capital as control factors, to reveal its impact on economic growth. Thus, the model can be written with economic growth (Y) as a function of capital (K), labor (L), renewable energy consumption (RE), disaggregated globalization (GLO), and other variables (Z) that may potentially affect economic growth:
Y_it = f(K_it, L_it, RE_it, GLO_it, Z_it)    (1)
The above function can be expressed in log-linear form as:
lnY_it = α_0 + α_1 lnK_it + α_2 lnL_it + α_3 lnRE_it + α_4 lnGLO_it + α_5 lnZ_it + μ_it    (2)
A few studies have emphasized that the impact of globalization on economic growth is not always linear [96][97][98]. Based on this argument, the above equation can be augmented with a quadratic term for globalization to capture its non-linear effects:
lnY_it = α_0 + α_1 lnK_it + α_2 lnL_it + α_3 lnRE_it + α_4 lnGLO_it + α_5 (lnGLO_it)² + α_6 lnZ_it + μ_it    (3)
To uncover the moderating role of disaggregated globalization in the link between renewable energy consumption and economic growth, Eq (3) is further extended to include an interaction term (GLO × RE). Inserting an interaction term into the model is crucial, as it may help to understand how globalization is intertwined with the impact of renewable energy consumption on economic growth:
lnY_it = α_0 + α_1 lnK_it + α_2 lnL_it + α_3 lnRE_it + α_4 lnGLO_it + α_5 (lnGLO_it)² + α_6 (lnGLO_it × lnRE_it) + α_7 lnZ_it + μ_it    (4)
where α_0, α_1, α_2, ..., α_j are the parameters to be estimated and μ_it is the random error term in the economic growth model.
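Before turning to the CS-ARDL estimator described below, the construction of the log-linear, quadratic and interaction terms can be illustrated with a simple pooled OLS benchmark in Python. The sketch is illustrative only: the file name and column names are assumptions, and pooled OLS ignores the cross-sectional dependence and slope heterogeneity that motivate the CS-ARDL approach.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format panel: one row per country-year with GDP (Y),
# capital (K), labour (L), renewable energy use (RE) and a globalization index (GLO).
df = pd.read_csv("panel.csv")

for col in ["Y", "K", "L", "RE", "GLO"]:
    df[f"ln{col}"] = np.log(df[col])
df["lnGLO_sq"] = df["lnGLO"] ** 2                 # quadratic term, Eq (3)
df["lnGLO_x_lnRE"] = df["lnGLO"] * df["lnRE"]     # moderation term, Eq (4)

# Pooled OLS benchmark of Eq (4); not a substitute for the CS-ARDL estimates.
model = smf.ols("lnY ~ lnK + lnL + lnRE + lnGLO + lnGLO_sq + lnGLO_x_lnRE", data=df).fit()
print(model.summary())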
Renewable energy consumption model
Following [99][100][101], the renewable energy consumption model can be specified with renewable energy consumption (RE) as a function of economic growth (Y), labor force (L), capital (K), disaggregated globalization (GLO) and other variables (Z) that affect renewable energy consumption:
RE_it = f(Y_it, K_it, L_it, GLO_it, Z_it)    (5)
The renewable energy consumption model in log-linear form can be specified as:
lnRE_it = β_0 + β_1 lnY_it + β_2 lnK_it + β_3 lnL_it + β_4 lnGLO_it + β_5 lnZ_it + ε_it    (6)
Eq (6) can be augmented with a quadratic term for globalization to capture its non-linear effects:
lnRE_it = β_0 + β_1 lnY_it + β_2 lnK_it + β_3 lnL_it + β_4 lnGLO_it + β_5 (lnGLO_it)² + β_6 lnZ_it + ε_it    (7)
To uncover the moderating role of disaggregated globalization in the link between economic growth and renewable energy consumption, Eq (7) is further extended to include an interaction term (GLO × Y):
lnRE_it = β_0 + β_1 lnY_it + β_2 lnK_it + β_3 lnL_it + β_4 lnGLO_it + β_5 (lnGLO_it)² + β_6 (lnGLO_it × lnY_it) + β_7 lnZ_it + ε_it    (8)
where β_0, β_1, β_2, ..., β_j are the parameters to be estimated and ε_it is the random error term in the renewable energy consumption model.
Variable data descriptions, measurements and sources
Table 1 below summarizes the variable measurements, descriptions and data sources. Renewable energy consumption is measured in million tonnes of oil equivalent (Mtoe) and obtained from [102]. Disaggregated globalization includes economic globalization, social globalization and political globalization. The economic globalization series is an index constructed from trade globalization and financial globalization indicators. Similarly, social globalization is measured as an index of personal, information and cultural globalization indicators. Political globalization data can be obtained directly from [103]; likewise, data for the social globalization index and the economic globalization index are also available from [103]. GDP is used as a proxy for economic growth, and data on GDP and capital in constant 2015 US$ are available from the WDI [104]. Employed labor force data, in millions, are also available from the WDI [104].
Cross-sectional dependence and slope heterogeneity tests
As interdependence across countries increases, panels may display significant cross-sectional dependence (CSD), so formal tests are needed to examine cross-sectional dependence across countries. Thus, the current study explicitly applies cross-sectional dependence tests to address issues with panel data estimation and to ensure that empirical estimates are unbiased, consistent, and valid. Primarily, the Pesaran CD, Pesaran scaled LM, bias-corrected scaled LM and Breusch-Pagan LM tests introduced by [105][106][107][108] are used, expressed by the following Eqs (9) and (10), respectively.
CD = √(2T / (N(N−1))) · Σ_{i=1}^{N−1} Σ_{j=i+1}^{N} ρ̂_ij    (9)
LM = T · Σ_{i=1}^{N−1} Σ_{j=i+1}^{N} ρ̂²_ij    (10)
where ρ̂_ij is the pairwise correlation of the residuals of cross-sections i and j, and T and N denote the sample size (time periods) and panel size (cross-sections), respectively. When cross-sectional dependence is present in the panel dataset, second-generation panel data techniques are more appropriate, and more reliable stationarity results can be established using second-generation panel unit root tests.
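The Pesaran CD statistic in Eq (9) is straightforward to compute from the residuals of unit-by-unit regressions. A minimal Python sketch, assuming a balanced panel of residuals with one column per country, is given below; the simulated data in the final line are purely illustrative.

import numpy as np

def pesaran_cd(resid):
    """Pesaran CD statistic for cross-sectional dependence.

    resid: (T, N) array of residuals, one column per cross-section unit.
    Under the null of cross-sectional independence, CD is asymptotically N(0, 1).
    """
    T, N = resid.shape
    corr = np.corrcoef(resid, rowvar=False)      # N x N pairwise correlation matrix
    iu = np.triu_indices(N, k=1)                 # upper-triangular pairs i < j
    return np.sqrt(2.0 * T / (N * (N - 1))) * corr[iu].sum()

rng = np.random.default_rng(0)
print(pesaran_cd(rng.standard_normal((46, 6))))  # 6 countries, 46 years of residuals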
Panel unit root tests
First-generation panel unit root tests (augmented Dickey-Fuller; Phillips-Perron; Levin, Lin and Chu; Hadri; Breitung; and Im, Pesaran and Shin) fail to account for cross-sectional dependence in longitudinal datasets. Thus, to address this concern, the current study uses second-generation panel unit root tests, namely the cross-sectionally augmented Im, Pesaran and Shin (CIPS) test developed by [109] and the cross-sectionally augmented Dickey-Fuller (CADF) test. These panel unit root tests are more robust and perform better because of their asymptotic properties, and they generate accurate information about the order of integration of the series. Eq (11) below represents the CADF regression:
Δz_it = α_i + b_i z_{i,t−1} + c_i z̄_{t−1} + d_i Δz̄_t + ε_it    (11)
Adding lagged terms to Eq (11) to control for serial correlation yields Eq (12):
Δz_it = α_i + b_i z_{i,t−1} + c_i z̄_{t−1} + Σ_{l=0}^{p} d_{il} Δz̄_{t−l} + Σ_{l=1}^{p} δ_{il} Δz_{i,t−l} + ε_it    (12)
where z̄_{t−1} and Δz̄_{t−l} denote the lagged cross-sectional average of the levels and the cross-sectional average of the first differences, respectively. Eq (13) below represents the second-generation CIPS unit root statistic:
CIPS = (1/N) Σ_{i=1}^{N} t_i(N, T)    (13)
Here t_i(N, T) is the CADF test statistic of cross-section i, so replacing t_i(N, T) with CADF_i, the following Eq (14) is obtained:
CIPS = (1/N) Σ_{i=1}^{N} CADF_i    (14)
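A simplified version of the CADF/CIPS computation (intercept only, no lag augmentation, i.e. Eq (11) averaged as in Eq (14)) can be sketched in Python as follows. It illustrates the mechanics only; in practice the CIPS statistic is compared against Pesaran's simulated critical values, which are not reproduced here.

import numpy as np
import statsmodels.api as sm

def cips(z):
    """CIPS statistic: average of per-country CADF t-statistics.

    z: (T, N) array of the series in levels, one column per country.
    """
    T, N = z.shape
    dz = np.diff(z, axis=0)            # first differences, shape (T-1, N)
    zbar = z.mean(axis=1)              # cross-sectional average of the levels
    dzbar = dz.mean(axis=1)            # cross-sectional average of the differences
    t_stats = []
    for i in range(N):
        y = dz[:, i]
        X = sm.add_constant(np.column_stack([z[:-1, i], zbar[:-1], dzbar]))
        res = sm.OLS(y, X).fit()
        t_stats.append(res.tvalues[1])  # t-statistic on the lagged level term b_i
    return float(np.mean(t_stats))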
Panel cointegration test
The results of the unit root tests indicate a mix of integration orders, such as I(0) and I(1), so we turn to cointegration analysis, using the [110] test to estimate possible cointegration relationships in the panel data. This approach remains consistent in the presence of cross-sectional dependence, heterogeneity and non-stationary regressors, and relies on the Durbin-Hausman principle to generate two statistics. Long-run relationships under heterogeneity can be revealed with the first statistic (DHg), while relationships under the assumption of panel homogeneity can be explored with the second statistic (DHp). The null hypothesis of the Durbin-Hausman panel statistic (DHp) can be expressed as H_0: φ_i = 1 for all i = 1, 2, ..., N (no cointegration), while the alternative hypothesis is H_1: φ_i = φ < 1 for all i; for the group statistic (DHg), the alternative allows φ_i < 1 for at least some cross-sections.
Long run estimates
The next step after cointegration analysis is to estimate the long-term relationships among the proposed variables in the model. This study uses the CS-ARDL approach proposed by [111], which outperforms traditional estimation techniques (i.e., OLS, FMOLS, DOLS) in estimating short- and long-run elasticities. The CS-ARDL estimation technique can account for cross-sectional dependence, serial correlation, endogeneity, and heterogeneity issues [112].
The CS-ARDL regression can be expressed in general form as:
y_it = c_i + Σ_{l=1}^{p} φ_{il} y_{i,t−l} + Σ_{l=0}^{q} β'_{il} X_{i,t−l} + Σ_{l=0}^{r} δ'_{il} z̄_{t−l} + e_it
where X_{i,t} represents the set of explanatory variables and the cross-sectional averages z̄_t = (ȳ_t, X̄'_t)' are included to filter out unobserved common factors. The Augmented Mean Group (AMG) estimator developed by [113] is also used in this study as a complement to the CS-ARDL approach. This method can overcome the problems of cross-sectional dependence, slope heterogeneity and endogeneity in panel data and provide long-run results, so the AMG strategy is used as a robustness test.
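A stripped-down Python illustration of the CS-ARDL idea, estimating a CS-ARDL(1,1) regression for a single country augmented with cross-sectional averages, is sketched below. Variable names and the simple OLS-per-country structure are assumptions for the example; averaging the country-level coefficients would give a mean-group estimate, but the sketch omits the lag-order selection and inference of the full Chudik-Pesaran estimator.

import numpy as np
import statsmodels.api as sm

def cs_ardl_unit(y, X, ybar, Xbar):
    """CS-ARDL(1,1) regression for one country (illustrative only).

    y    : (T,)   dependent variable for the country (e.g. ln GDP)
    X    : (T, k) country-specific regressors (e.g. ln K, ln L, ln RE, ln GLO)
    ybar : (T,)   cross-sectional average of the dependent variable
    Xbar : (T, k) cross-sectional averages of the regressors
    """
    lag, cur = slice(0, -1), slice(1, None)     # t-1 and t observations
    regressors = np.column_stack([
        y[lag],                                 # lagged dependent variable
        X[cur], X[lag],                         # current and lagged regressors
        ybar[cur], ybar[lag],                   # cross-sectional averages that proxy
        Xbar[cur], Xbar[lag],                   # unobserved common factors
    ])
    return sm.OLS(y[cur], sm.add_constant(regressors)).fit().params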
Granger causality test
The panel Granger causality test of [114] is applied after the long-run parameters have been estimated to reveal the direction of causality between the variables of interest. This method is not subject to the restriction T > N, is flexible, and is suitable for heterogeneous and unbalanced panels. It provides robust conclusions even for small samples and under cross-sectional dependence, as shown by Monte Carlo simulations [115]. The Dumitrescu-Hurlin test is based on the following regression:
y_it = α_i + Σ_{k=1}^{K} γ_i^(k) y_{i,t−k} + Σ_{k=1}^{K} β_i^(k) x_{i,t−k} + ε_it
where the null hypothesis of no causality from x to y is H_0: β_i^(1) = ... = β_i^(K) = 0 for all i.
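The test averages country-by-country Wald statistics for the joint nullity of the β coefficients. The Python sketch below implements only the asymptotic standardisation Z̄ = √(N/(2K))·(W̄ − K); the small-sample version with exact moments, which is preferable for short panels, is omitted, and all function and variable names are illustrative.

import numpy as np
import statsmodels.api as sm

def unit_wald(x_i, y_i, K):
    """Wald statistic for 'x does not Granger-cause y' in one country."""
    T = len(y_i)
    Y = y_i[K:]
    ylags = np.column_stack([y_i[K - k - 1:T - k - 1] for k in range(K)])
    xlags = np.column_stack([x_i[K - k - 1:T - k - 1] for k in range(K)])
    unrestricted = sm.add_constant(np.column_stack([ylags, xlags]))
    restricted = sm.add_constant(ylags)
    rss_u = sm.OLS(Y, unrestricted).fit().ssr
    rss_r = sm.OLS(Y, restricted).fit().ssr
    dof = len(Y) - unrestricted.shape[1]
    return (rss_r - rss_u) / (rss_u / dof)          # equals K times the F statistic

def dumitrescu_hurlin(x, y, K=1):
    """Average Wald statistic W_bar and asymptotic Z_bar; x, y are (T, N) panels."""
    T, N = y.shape
    W = np.array([unit_wald(x[:, i], y[:, i], K) for i in range(N)])
    W_bar = W.mean()
    Z_bar = np.sqrt(N / (2.0 * K)) * (W_bar - K)    # N(0, 1) under the null
    return W_bar, Z_bar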
Analysis results and interpretation
First, cross-sectional dependence in the panel dataset was examined in order to obtain robust estimation results. The Pesaran scaled LM, Pesaran CD, bias-corrected scaled LM, and Breusch-Pagan LM tests yield strong evidence of cross-sectional dependence in the panel dataset for the six emerging Asian countries, as highlighted in Table 2. The results thus illustrate a strong correlation among the countries in the panel. Given the presence of cross-sectional dependence, second-generation techniques will produce reliable, robust, efficient, and consistent results. The slope heterogeneity results in Table 3 clearly demonstrate heterogeneity in both the renewable energy and economic growth models, implying that traditional cointegration techniques and unit root tests would yield biased results. The test statistics of both models are statistically significant; thus, the most appropriate approach adopted in this study is the second-generation CADF and CIPS unit root tests, and the results are summarized in Table 4. The CADF and CIPS results identify all variables in the model as stationary at I(1), allowing cointegration analysis to be used.
The descriptive statistics highlighted in Table 5 reflect an average GDP of $2,096.87 billion for the six Asian emerging economies, with a large standard deviation over the period 1975-2020. The average levels of social globalization, economic globalization and political globalization are 56.47%, 51.43% and 48.19%, with standard deviations of 15.91%, 7.68% and 8.65%, respectively. These statistics reflect the strong economic, social and political integration of the selected emerging economies with the global economy. Average renewable energy consumption in the selected emerging Asian economies was 6.26 million tonnes of oil equivalent (Mtoe), with a variation of 0.96 Mtoe. The skewness statistics are positive, reflecting that all variables are positively skewed. In addition, average gross fixed capital formation (K) and employed labor force (L) amount to $4.89 trillion and 628 million, respectively. The large kurtosis statistics for capital and GDP indicate that these series are strongly peaked. The Jarque-Bera test supports normality of the data for all variables, since none of the test statistics are significant.
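Descriptive statistics of this kind are easy to reproduce once the panel is assembled. The Python sketch below assumes a hypothetical file and illustrative column names and simply reports the moments and the Jarque-Bera normality test used in Table 5.

import pandas as pd
from scipy import stats

df = pd.read_csv("panel.csv")                       # hypothetical long-format panel
cols = ["Y", "RE", "K", "L", "SG", "EG", "PG"]      # illustrative column names

summary = df[cols].describe().T
summary["skewness"] = df[cols].skew()
summary["kurtosis"] = df[cols].kurt()
jb = [stats.jarque_bera(df[c].dropna()) for c in cols]
summary["jarque_bera"] = [r[0] for r in jb]         # test statistic per variable
summary["jb_pvalue"] = [r[1] for r in jb]           # p-value per variable
print(summary)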
Correlation coefficients and variance inflation factors (VIF) for each variable are reported in Table 6 and used to check for multicollinearity in the series. The correlation matrix illustrates that social globalization, political globalization, capital and the labor force are positively correlated with GDP, while economic globalization and renewable energy consumption are negatively associated with GDP. The variance inflation factors reflect the absence of multicollinearity in the model, as all VIF values are below 5.
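The VIF screen reported in Table 6 can be reproduced with standard tooling, as in the Python sketch below; the file name and column names are assumptions, and the cutoff of 5 is the rule of thumb quoted in the text.

import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("panel.csv")                          # hypothetical panel, as above
X = df[["lnK", "lnL", "lnRE", "lnSG", "lnEG", "lnPG"]] # illustrative regressor names
Xc = sm.add_constant(X)

vif = pd.Series(
    [variance_inflation_factor(Xc.values, i) for i in range(1, Xc.shape[1])],
    index=X.columns,
)
print(vif[vif > 5])    # empty output means nothing is flagged at the cutoff of 5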
Next, the [110] cointegration test, which is widely used in the energy-growth, globalization-renewable energy consumption, and globalization-economic growth literatures, is applied. A distinguishing feature of this test is that it can accommodate partially integrated (mixed-order) regressors in the model, which makes it particularly appropriate for the current study. The results of the Westerlund test highlighted in Table 7 lead to acceptance of the alternative hypothesis, indicating cointegration in the panel data for the selected emerging Asian economies.
Table 8 highlights the estimated long-run coefficient elasticities of the economic growth model obtained with the CS-ARDL approach. Every 1% increase in renewable energy consumption significantly boosts economic growth by 0.609%. The analysis therefore shows that renewable energy consumption has a significant positive effect on economic growth. This result identifies renewable energy consumption as a major factor affecting economic growth in the selected Asian emerging economies, which is inconsistent with the arguments of classical economics. Thus, implementing policies to expand renewable energy efficiency will boost economic growth in Asian emerging economies, consistent with previous studies by [25, 28, 47, 49, 50, [116][117][118][119]]. Gross fixed capital formation has also significantly boosted economic growth in the selected emerging Asian economies: every 1% increase in gross fixed capital formation significantly promotes economic growth by 0.476%. This result is consistent with the neoclassical growth model, in which higher capital accumulation boosts economic growth. Likewise, the employed labor force has a significant positive effect on economic growth: every 1% increase in the employed labor force significantly boosts economic growth by 0.309%. Focusing on disaggregated globalization, the direct effect model shows that both political globalization and economic globalization have hindered the economic growth of emerging economies in Asia. For every 1 percentage point increase in political globalization and economic globalization, the economic growth rate is significantly reduced by 0.218 and 0.533 percentage points, respectively. Hence, economic globalization, including the flow of goods, services, and financial assets, has hindered economic growth in the selected emerging Asian economies. This may be because it encourages specialization in economic sectors in which the scope for technological innovation, learning by doing, and productivity growth has largely been exhausted in emerging Asian economies [120,121]. In addition, economic globalization may hinder economic growth through the failure of new industries, unemployment, and widening income inequality [122,123]. Moreover, the negative impact of economic globalization on economic growth is caused by weak institutions in emerging economies, which is consistent with [70,124,125]. This result is consistent with [70,74,126] and contradicts [75,127,128]. Weak institutions and governance likewise explain the deterrent effect of political globalization on economic growth in emerging Asian economies: institutional weaknesses impede growth by undermining the effectiveness of domestic institutions. Regarding the restrictive effect of economic and political globalization on economic growth, political factors mirror economic factors, and the two reinforce each other. The restrictive effect of political globalization on economic growth is consistent with the findings of [30,70,76,129,130]. Conversely, social globalization has accelerated economic growth in Asia's emerging countries.
Thus, given the free flow of communication and information enabled by social globalization through television, telephone, and the Internet, it may stimulate economic growth by lowering transaction costs [66,131]. This result is consistent with studies by [70,127,132].
Long-term coefficient estimation of economic growth model
The non-linear impact of political, economic and social globalization on economic growth shows that political and economic globalization significantly promotes economic growth, while social globalization significantly reduces economic growth. However, the square of political and economic globalization significantly reduces economic growth, while the square of social globalization significantly boosts economic growth in emerging Asian economies. This result clearly proves the validity of the inverted U-shaped relationship between political globalization, economic globalization and economic growth, and the legitimacy of the U-shaped relationship between social globalization and economic growth. This means that economic growth in the short run is boosted by political and economic globalization and held back by social globalization. However, in the long run, economic growth will only increase due to social globalization and decrease due to political and economic globalization. Furthermore, the moderating role of renewable energy in the link between globalization indicators and economic growth suggests that renewable energy consumption interacts with economic, social, and political globalization to promote economic growth in emerging Asian economies.
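For a quadratic specification such as Eq (3), the threshold at which the effect of globalization changes sign follows directly from the estimated coefficients: the turning point in logs is lnGLO* = −α_4/(2·α_5). The Python snippet below shows the arithmetic with purely illustrative placeholder coefficients, not the study's estimates.

import numpy as np

# Illustrative coefficients on lnGLO and (lnGLO)^2 from a quadratic growth equation.
a_lin, a_sq = 0.85, -0.12          # a_sq < 0 implies an inverted U shape
ln_glo_star = -a_lin / (2 * a_sq)  # turning point in log terms
print(f"lnGLO* = {ln_glo_star:.2f}, i.e. a globalization index of about {np.exp(ln_glo_star):.0f}")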
To check the robustness of the long-run results of the economic growth model obtained with the CS-ARDL approach, the AMG estimator is used. Table 9 highlights the AMG estimation results, which broadly agree with the CS-ARDL results in terms of coefficient signs. Table 10 highlights the estimates of the renewable energy consumption model, showing that economic growth has significantly boosted renewable energy consumption. This clearly shows that demand for renewable energy is stimulated in Asia's emerging economies as economic growth picks up, because higher economic growth requires economic agents to use more renewable energy to convert raw materials into finished products [133,134]. The finding that higher economic growth leads to higher renewable energy consumption is consistent with [15,135]. The findings also show that capital accumulation significantly increases renewable energy consumption, suggesting that capital and renewable energy are complements: a higher capital stock in Asian emerging economies leads to higher renewable energy consumption. In addition, employed labor has a significant adverse effect on renewable energy consumption, suggesting that labor and renewable energy are substitutes, so that an increase in the labor force reduces renewable energy consumption in emerging Asian economies. The results also indicate that disaggregated globalization (economic, social, and political globalization) has significantly boosted renewable energy consumption in emerging Asian economies; these dimensions of globalization thus play an important role in renewable energy consumption. For every 1 percentage point increase in economic, social and political globalization, renewable energy consumption increases significantly by 0.488, 0.383 and 0.373 percentage points, respectively. This result is consistent with [88,89]. Economic globalization increases renewable energy consumption by promoting the import of advanced and environmentally friendly technologies. The strong impact of social globalization rests on the fact that globalized social norms have increased the reliance of emerging Asian countries on renewable energy. In terms of political globalization, emerging economies in Asia have recently highlighted renewable energy plans, and politicians have shown interest in shifting towards reliance on renewable energy. However, the nonlinear effect model shows that the level term of economic, social and political globalization has a significant negative impact on renewable energy consumption, while its squared term has a significant positive impact, suggesting a U-shaped relationship between the globalization indicators and renewable energy consumption. Thus, renewable energy consumption declines in the early stages of globalization but increases after globalization passes a certain threshold. The interaction of political, economic, and social globalization with economic growth is also associated with an increase in renewable energy consumption, which supports the scale effect hypothesis.
The AMG estimator is likewise used to check the robustness of the long-run CS-ARDL results of the renewable energy model. Table 11 highlights the AMG estimation results, which broadly agree with the CS-ARDL results in terms of coefficient signs.
Next, the Dumitrescu-Hurlin causality approach was used to detect causality among our variables of interest for specific emerging Asian economies, and the results are highlighted in Table 12. Renewable energy and economic growth have bidirectional causality, thus supporting the feedback hypothesis. There are also two-way causal relationships between social globalization and economic growth, and between economic growth and economic globalization. Moreover, there is a one-way causal relationship from renewable energy to political globalization, from social globalization to capital formation, from economic globalization to renewable energy consumption and capital formation. There are also one-way causal relationships from political globalization to economic growth, from capital accumulation to economic growth, renewable energy consumption, and social globalization.
Conclusion and policy implications
Using the CS-ARDL approach, this study examines the impact of globalization indicators on the link between renewable energy consumption and economic growth in six emerging Asian economies over the period 1975 to 2020. The research analysis draws several findings. First, the results of the economic growth model show that renewable energy consumption has a significant contribution to economic growth. The conclusions further point out that economic and political globalization firmly hold back economic growth, while social globalization directly promotes economic growth in selected Asian emerging economies. The non-linear impact of political, economic and social globalization on economic growth shows that political and economic globalization significantly promotes economic growth, while social globalization significantly reduces economic growth. However, the square of political and economic globalization significantly reduces economic growth, while the square of social globalization significantly boosts economic growth in emerging Asian economies. This result clearly proves the validity of the inverted U-shaped relationship between political globalization, economic globalization and economic growth, and the legitimacy of the U-shaped relationship between social globalization and economic growth. The study also found that economic, social and political globalization moderated the impact of renewable energy on boosting economic growth.
Second, for the renewable energy consumption model, it is revealed that economic growth significantly promotes renewable energy consumption. The results also show that disaggregated globalization (economic, social, and political globalization) has significantly boosted renewable energy consumption in emerging Asian economies. However, the nonlinear effect model shows that the level term of economic, social and political globalization has a significant negative impact on renewable energy consumption, while its squared term has a significant positive impact, suggesting a U-shaped relationship between the globalization indicators and renewable energy consumption. The interaction of political, economic, and social globalization with economic growth is also associated with an increase in renewable energy consumption, which supports the scale effect hypothesis. The causality test concludes that there is a two-way causal relationship between renewable energy consumption and economic growth, thus supporting the feedback hypothesis.
The findings provide important policy implications for Asian emerging economies. The evidence in this study suggests that economic and political globalization promote short-term economic growth but hurt long-term economic growth, whereas social globalization inhibits economic growth in the short run and promotes it in the long run.
Although globalization brings short-term economic benefits, it is crucial for policymakers to implement favorable policies to limit the adverse effects of economic globalization on long-term economic growth. For economic globalization to drive long-term economic growth in emerging Asian economies, it is critical for policymakers to formulate industrial policies that encourage the development of economic sectors with dynamic comparative advantages in terms of output growth. Furthermore, given the technological impact of globalization, ensuring the transfer of technological innovation to less technologically advanced productive sectors is imperative to foster long-term economic growth. Finally, economic globalization may hinder long-term economic growth through income inequality and unemployment; policymakers have an obligation to use the necessary social interventions as a tool to provide a "safety net" for those harmed by economic globalization. Reforming existing politically and economically extractive institutions remains crucial for globalization to foster long-term economic growth in emerging Asia. Renewable energy consumption interacts with economic, social, and political globalization to boost economic growth in emerging Asian economies. This means that production and commodity flows under economic globalization, when based on advanced and environmentally friendly technology, will not only promote economic growth but also support environmental sustainability. The nonlinear effect model of renewable energy consumption reflects a U-shaped relationship between globalization indicators and renewable energy consumption: globalization indicators initially reduce renewable energy uptake in the short run but improve it in the long run. From a policy perspective, it is therefore recommended that policymakers in the Asian emerging economies not misjudge the effects of globalization in renewable energy demand models while formulating and implementing environmental conservation policies.
For future research directions, the role of renewable energy consumption in the link between disaggregated globalization and environmental sustainability should be explored for Asian emerging economies. In addition, an N-shaped EKC hypothesis between globalization indicators and renewable energy consumption should be tested for this panel to inform future environmental protection measures. | 8,731.8 | 2023-08-16T00:00:00.000 | [
"Economics"
] |
Economic integration and stock market linkages: evidence from South Africa and BRIC
Purpose – This study examines the impact of regional economic integration (REI) on stock market linkages in the BRICS (Brazil, Russia, India, China and South Africa) economic bloc. In this type of study, the BRICS framework is an appealing empirical case, given its uncommon characteristics. For example, BRICS member states come from remote geographic locations (Africa, Asia, Europe and South America) and have contrasting socioeconomic profiles. Design/methodology/approach – An empirical design is framed from the perspective of bilateral trade between South Africa and BRIC. The author accepts trade intensity as a proxy of regional economic integration and then examines the resulting effect on the stock market co-movement within BRIC. The study applies a two-step econometric procedure of the BEKK-MGARCH and panel data models. Findings – Overall, bilateral trade, as a proxy of economic integration, is associated with an increase in stock market integration. This positive relationship is particularly observed during episodes of surplus trade and, more interestingly, was initiated three years after BRICS' existence and continues to grow at an increasing rate. Practical implications – The study outcome should benefit international trade practitioners and global investors interested in portfolio diversification or concerned with risk spillovers. Originality/value – First, notwithstanding South Africa's significant economic presence in the African continent, to the best of the author's knowledge, this is the first study to empirically evaluate the BRICS economic integration on their stock market linkages from the perspective of South Africa. The value of this contribution is that further work may investigate the bidirectional spillover impact conveyed by South Africa's trade interactions within the juxtaposition of Africa and BRICS economies. Second, given that research on REI and stock market integration has historically concentrated on mature regional blocs of Europe, Asia, South and North America, the current study advances knowledge while correcting the prevailing literature imbalance.
Introduction
History and economic rationale suggest that there are general benefits derived from regional economic blocs like the European Union, the North American Free Trade Agreement, the Community of Sahel-Saharan States and the Southern African Development Community (SADC), to mention a few. Similarly, countries have long observed the socioeconomic advantages of global trade cooperation, such as the General Agreement on Tariffs and Trade, which was replaced by the World Trade Organization in 1995. An inclusive empirical assessment is imperative to understand the nature and extent of the benefits flowing from regional economic integration (REI) to financial markets. A textbook explanation says that the essence of economic integration is to create a conducive environment for cooperating countries to benefit from increased international trade, minimised tariffs, synergy in monetary policy and favourable market regulatory regimes (Carbaugh, 2018). A developing research stream that flows from this concerns trade-alliance-induced economic integration and its relation to financial markets, which is the focus of the current study. Extant literature shows that economic integration and financial market linkages vary across economies based on integration structure (Kim et al., 2018) and financial development (Lahrech and Sylwester, 2013). Also, the effect of REI on financial markets differs across emerging markets (Guesmi and Nguyen, 2011) and according to the level of economic aggregation, whether national, industry or firm level (Fazio, 2007; Garcia-Herrero et al., 2009; Karim and Majid, 2010). Further, some economic blocs show a tendency of member states to trade outside their regions or display misaligned trade behaviours (García-Herrero et al., 2009; Lombana et al., 2021). These studies reveal an insightful characterisation of REI and financial integration.
Unfortunately, research on the REI and financial integration nexus tends to concentrate on economies with long histories of regional economic blocs like Asia, Europe, South America and North America. A further imbalance in the literature is evident in recent reviews that show separate or parallel research on economic integration (Upalat, 2022) and financial integration (Patel et al., 2022), with few exceptions (like Paramati et al., 2016; Song et al., 2021). The current study addresses these research gaps by extending the empirical examination to the unique setting of economic integration in the BRICS regional bloc. In particular, the present study takes South Africa, located in the farthest south of the African continent, as a target market and investigates whether South Africa's bilateral trade with BRIC impacts the economic bloc's stock market co-movement. BRIC (Brazil, Russia, India and China) was formed in 2006 and was later joined by South Africa in 2010 to form BRICS.
Unlike the set-up of many economic blocs, the member states of BRICS are situated in distant geographic areas (Africa, Asia, Europe and South America). This dispersed REI framework introduces a critical empirical case with a bearing on the stylised literature finding that countries' geographical proximity impacts stock market linkages (Asgharian et al., 2013; Fazio, 2007; Karim and Majid, 2010; Paramati et al., 2016). The author is unaware of a study with the same empirical objective as the current research. The rest of the paper is organised as follows: Section 2 reviews related literature. Section 3 explains the research methodology. Section 4 presents and interprets the results. Section 5 discusses the results, and Section 6 concludes the study.
Literature review
2.1 The BRICS regional economic bloc
The need to understand the economic experience of South Africa in BRICS, coupled with the ongoing innovation of the domestic stock market, the Johannesburg Stock Exchange (JSE), inspires the current research topic. Many innovative advancements emerged in the JSE in recent decades (the late 1990s to the 2020s). The JSE launched a significant acquisition programme that resulted in the takeover of the South African Futures Exchange (SAFEX) in 2001, followed by the Bond Exchange of South Africa (BESA) in 2009. Before South Africa became a member of BRICS, the JSE had installed an electronic clearing and settlement technology known as Shares Transactions Totally Electronic (Strate) in 1997. Another innovation is the collaboration of the JSE with the London Stock Exchange (LSE) to introduce a series of joint indices labelled FTSE/JSE since 2002. In 2003, the JSE introduced a stock-listing board exclusively for small and medium-sized firms, called the Alternative Exchange (AltX), and a special board for currency and interest rate markets, the Yield X. Although some of these changes occurred before BRICS, this fact is appropriately controlled for in the empirical design and explained under the methodology section. Unlike many regional blocs, BRICS integrates with Africa's prominent economy of South Africa. In this context, South Africa has a distinct and influential position in relation to the economies of the African continent. South Africa is a dual member of the Southern African Customs Union (SACU) and of a sixteen-country economic bloc, SADC, among other influential African organisations. Furthermore, in Africa, South Africa's JSE (founded in 1887) is the largest stock exchange and the second oldest after Egypt's Stock Exchange (established in 1883). Given this background, it is probably not surprising that the literature (Agyei-Ampomah, 2011; Boamah, 2016) observes that African economies are segmented, with the exception of South Africa. Due to their geographic proximity and shared socioeconomic characteristics, studies on traditional blocs may be constrained in tracing the unexplained variations in co-membership trade behaviour. Therefore, unconventional case studies such as BRICS should provide a different perspective from the data.
International trade, economic integration and stock market linkages
Economists have long known the benefits of international trade. James Steuart's Mercantilism (Steuart, 1767) was a protectionist economic system that favoured a trade surplus, in contrast with the free-trade emphasis of Adam Smith's proposition of absolute advantage (Smith, 1776), in which a country was expected to maximise trade benefits by specialising in a product it is good at producing. Later, David Ricardo suggested a modification in the form of comparative advantage (Ricardo, 1817). He was followed by the Swedish economists Eli Heckscher and Bertil Ohlin, who recommended the Factor Proportions theory (Ohlin, 1933), which contends that a country should be better off producing and exporting products for which it has an abundance of production factors. These theoretical paths show that trade theory continues to evolve, but with a common question: how can a country maximise its benefits from international trade? The principle of regional economic integration adds much value in this regard (Balassa, 2012), well beyond merely decreasing tariff barriers. Different variations of regional economic integration include free trade areas, customs unions, common markets, economic unions, monetary unions and fiscal unions, inter alia. The benefits of REI also manifest at the industry level through favourable product prices and feed back into macroeconomic growth paths.
Hypothesis development
H1. There is an association between economic integration and stock market linkages within BRICS.
Economic theory shows that international trade has an impact on domestic economies via knock-on effects that cascade from the firm level to the stock market. One of the objectives of capital budgeting theory in corporate finance is to determine the value of a firm using a stream of future cash flows, as in Gordon's growth model (Gordon, 1959; Gordon and Shapiro, 1956). Specific case studies of stock market correlations with economic variables at the firm level (like Huy et al., 2020) do not address the same problem as the literature stream of financial market integration but are informative. In stock price valuation, the Gordon model measures the value of a firm by examining the expected dividends payable by a stock-exchange-listed company. The link between economic integration and the stock market is elaborated further by Soydemir (2000) and Asgharian et al. (2013), who show how bilateral trade encourages synchronisation in business and consequently affects stock markets.
H2. There is stock market integration between South Africa and BRICS.
Economic integration and stock market linkage
The empirical design of the current study is to evaluate stock market integration within BRICS from the perspective of the South African trade relationship. In line with the economic theory discussed above, it is intuitive to expect an economic knock-on effect among bilateral trade, the aggregate economy and stock market activity. Consistent with prior works (Forbes and Chinn, 2004; Paramati et al., 2016; Song et al., 2021), the current study accepts bilateral trade as a proxy for economic integration. While the economic rationale provides the foundation for the link between REI and the stock market, the actual nature or behaviour of this relationship is subject to empirical examination.
Method
Similar to Paramati et al. (2016) and Song et al. (2021), the study uses a two-stage econometric procedure. First, we use a multivariate BEKK-MGARCH model to estimate interlinked time-varying correlations between the South African stock market and each of the four BRIC countries, concurrently. From this, we save four sets of correlation series. In the second stage, the retrieved correlation time series are employed as the response variable in a panel data model, which is used to determine the effect of REI on stock market linkages. The benefit of using the BEKK-MGARCH system to compute dynamic correlations is that it captures potential spillovers across the BRICS-wide stock markets, which is valuable in measuring the extent of linkage. Mishra et al. (2022) employ the same model in a related application.
BEKK-MGARCH model
The first of the two econometric models to be estimated is the BEKK-MGARCH (Engle and Kroner, 1995). In this study, our preference is the BEKK over the DCC version of the MGARCH model, even though the two models are assumed to be equally competent at lower dimensions. Nevertheless, some researchers insist that there are unanswered questions regarding the asymptotic theory of the DCC model. For instance, after reviewing the relevant literature, Caporin and McAleer (2012, p. 746) concluded that ". . . the proofs [of consistency and asymptotic normality] for DCC have typically been based on unstated regularity conditions. When the regularity conditions have been stated, they are untestable or irrelevant for the stated purposes". We proceed with system (1), comprising a vector-autoregressive VAR(p) model and a BEKK-MGARCH(p, q) model in Equations (1a) and (1b). In the VAR(p) equation (1a), y_t is a k × 1 vector of stock returns from the stock market indices of BRICS, while Π is a k × k matrix of parameters to be estimated. In Equation (1b), C is a k × k lower triangular matrix, while A and B are k × k coefficient matrices to be computed. The disturbance term is assumed to be ε_t ∼ N(0, Σ_t), where Σ_t is the covariance matrix.
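Equations (1a) and (1b) are referenced above rather than displayed. For orientation, the textbook VAR(p) mean equation with a BEKK(1,1) covariance recursion consistent with that description would read as follows; this is a standard-form sketch, not necessarily the exact lag orders or restrictions estimated in the paper:

y_t = \sum_{j=1}^{p} \Pi_j \, y_{t-j} + \varepsilon_t, \qquad \varepsilon_t \mid \mathcal{F}_{t-1} \sim N(0, \Sigma_t)   (1a)

\Sigma_t = C C' + A' \, \varepsilon_{t-1}\varepsilon_{t-1}' \, A + B' \, \Sigma_{t-1} \, B   (1b)

The lower triangular matrix C and the coefficient matrices A and B correspond to the terms described above; the time-varying correlations used in the second stage are then obtained by standardising the off-diagonal elements of Σ_t.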
Panel data model
The second and main econometric procedure of the study employs the panel data model, as presented in Equation (2). The objective of this model is to examine the core empirical question of whether REI has an impact on stock market linkages.
The dimensions of the panel data model, N and T, are four (BRIC countries) and 300 (months), respectively. The variables are defined as follows: ρ is the response variable of dynamic correlations between South Africa and each of the BRIC countries, indexed by country i at time t.
The vector, z, contains variables that proxy for REI, of which trade intensity (trade) is key. The covariate trade is generally assumed to have a positive effect on stock market integration (Bracker et al., 1999; Paramati et al., 2016) due to possible national economic interaction and firm-level beneficiation. The variable trade is quantified in Equation (3), where g_it is South Africa's total trade (imports plus exports) with BRIC country i at time t, and the denominator represents South Africa's aggregate trade with the four BRIC countries. The other two variables are: a dummy variable, BRICSexist, which takes the value of zero before BRICS' existence and one otherwise, and BRICSexp, which captures the accumulation of BRICS experience measured as the weighted age (or duration) of BRICS in years. The latter variable is also applied as an interaction with itself (squared). We use the weighting to capture the idea of relative influence in the South African-BRIC bilateral relationships. For this we conjecture that the ratio of each country's distance (d_it) from South Africa to the average distance of the BRIC countries from South Africa should be appropriate, as quantified in Equation (4). Distance on its own is known to negatively impact market integration owing to cost implications (Hooy and Goh, 2008). Therefore, this weighting has a moderating effect on the proxy for BRIC integration experience (BRICSexp). Considered together, the two variables, BRICSexp and its square, should answer the question of how the continued existence (or experience accumulation) of BRICS affects their stock market integration. In Equation (2), x is a set of control variables, namely, the interest rate differential, the volatility index (VIX) and geopolitical risk, which are summarised in Table A1 (in Appendix). The variable, interest rate differential (rate), is a common inclusion in market integration studies. It measures interest rate parity between markets with a potential impact on capital flows and should influence firms' profitability, leading to positive effects on stock markets' co-movements (Bracker et al., 1999), assuming capital mobility and other things constant. Dedicated studies on risk integration in global stock markets (Marfatia, 2017) and BRICS-specific works (Mroua and Trabelsi, 2020; Yildirim et al., 2022) have shown that risk spillover prevails in both short and long frequencies. In financial markets, VIX is a well-known measure of market risk based on the S&P 500 option index, and it gauges financial uncertainty, fear and/or stress. History has shown that ". . . emerging stock markets have become less segmented from world stock markets" (De Jong and De Roon, 2005, p. 583). In this regard, we use VIX as an attribute of global financial market risk, and the literature (Carrieri et al., 2007) shows that the direction of the effect is not pre-defined.
The regressor geopolitics is included to control for uncertainties emanating from changes in geopolitical environments. This measure of political risk is a practical index that tracks a country's political climate over time, based on newspaper reports (Caldara and Iacoviello, 2019). Higher and more extreme sentiments of domestic political risk should contribute less to stock market co-movement; therefore, a negative association is expected. The coefficients α, β, δ and γ are model parameters to be estimated. The terms γ_t and μ_i are period and panel fixed effects, respectively.
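Equations (2)-(4) are referenced above without being displayed. A plausible reconstruction from the verbal definitions, offered as an assumption about their exact form rather than a verbatim reproduction, is:

\rho_{it} = \alpha + \beta' z_{it} + \delta' x_{it} + \gamma_t + \mu_i + \epsilon_{it}   (2)

trade_{it} = \frac{g_{it}}{\sum_{j=1}^{4} g_{jt}}   (3)

BRICSexp_{it} = w_{it} \times age_t, \qquad w_{it} = \frac{d_{it}}{\tfrac{1}{4}\sum_{j=1}^{4} d_{jt}}   (4)

Here g_{it} is South Africa's total bilateral trade with BRIC country i in month t, d_{it} is country i's distance from South Africa, age_t is the duration of BRICS in years, and z and x collect the integration proxies and controls described above.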
Data description
The BRICS stock market indices used in the current study are Brazil's Bolsa de Valores de São Paulo (BOVESPA), the Russian Trading System (RTS), the National Stock Exchange of India (NSE), China's Shanghai Stock Exchange (SSE Composite) and the JSE's All Share Index. The time horizon of the sample was restricted by the shortest time series available, namely Russia's stock market index, which is only obtainable from 1995. Therefore, the datasets used in the MGARCH and panel data models are monthly time series for the period October 1995 to September 2020 from several sources, which are summarised in Table A1 (in Appendix).
Prior to estimating the econometric models of the study (Equations 1 and 2), it is important to preview the summary descriptive statistics of the regression time series. Therefore, it is useful to observe whether there is a preliminary discernible co-movement between the South African stock market and those of the BRIC countries. Figure 1 shows a historical graph of stock market price indices for South Africa against each of its BRICS counterparts. Overall, there is prima facie evidence of stock market correlation within BRICS. Also, apart from the gradual upward trend, the graphed series reflect a common response to significant structural changes such as the global financial crisis (2008-2009), the European financial crisis (2012-2015), as well as COVID-19 (2019-2020).
Figure 2 shows the impulse response functions for the South African stock market (All Share Index) in relation to the aggregate stock market of the BRIC countries (MSCI BRIC Index). The functions show that there is a two-way shock response between South Africa and BRIC. Although the two graphs are not able to reveal the origin of the shocks, the information is suggestive enough that there is a bi-directional effect between the markets. However, it is interesting to observe from these graphs that shocks trigger market reactions in opposite directions.
The response of the South African market is positive, while that of the BRIC is negative.In all cases, the full effect is maximised on the fifth month after the shock.Figures A1 and A2 (in Appendix) detail South African and BRIC response functions at country levels.In all cases, there is a bi-directional shock effect.
Pre-modelling and data-validation tests
In this study, panel data unit root tests are used to confirm whether the time series is stationary, which helps avoid spurious and misleading regression results. There are several alternative test procedures that researchers may apply to assess stationarity. To select an appropriate panel unit root test, we consider what different test procedures say about four factors of the econometric theory: whether we have balanced panels, the relative magnitudes of N and T, the speed at which N and T approach infinity, and the extent to which N and T are fixed. The asymptotic conditions of the available test procedures include {T → ∞, N finite} for the test suggested by Choi (2001); {T, N → ∞ sequentially} for Breitung (2000), Breitung and Das (2005) and Hadri (2000); {√N/T → 0, or N/T → 0} for Levin et al. (2002); and {N → ∞, T fixed} for Harris and Tzavalis (1999) and Im et al. (2003). The nature of our dataset is closer to the first two tests, because the size of the panel in our study is a fixed N of four BRIC countries, while T, the study horizon, is readily extendable considerably faster than BRIC membership. The relevant unit root test equation is given in Equation (5), for all i = 1, 2, 3, ..., N and t = 1, 2, 3, ..., T, where S_it is the series to be tested, ΔS_{it-j} captures a set of augmented lags, and v_it is the regression error term, which is assumed to be stationary. Fixed vs random effects: The Hausman test (Hausman, 1978) is employed to choose the appropriate model between the fixed- and random-effects models. The null hypothesis (H0) of the test is that the random-effects model is preferred. The test output in Table A3 (row 2) confirms a rejection of H0 at less than the 1% level of significance in favour of the fixed-effects model.
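Equation (5) is described verbally above (and in the note on w_it near the end of the paper) but not displayed. A plausible reconstruction of this augmented Dickey-Fuller-type panel regression, stated as an assumption about its exact form, is:

\Delta S_{it} = f_i S_{i,t-1} + \sum_{j=1}^{p} \phi_{ij}\, \Delta S_{i,t-j} + w_{it} + v_{it}   (5)

with H_0: f_i = 0 for all i against H_1: f_i < 0, where w_{it} is the panel-means time trend (zero if none is included) and v_{it} is a stationary error term.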
Period effects: The purpose of this test is to verify whether time fixed effects should be included in the chosen fixed-effects model. This is an F-test. The null hypothesis is that all time effects are irrelevant. The test results in Table A3 (row 3) reject H0, leading to the conclusion that period effects are necessary in the panel data model.
Cross-sectional dependence: Countries within a formalised regional economic bloc are expected to have some form of interdependence in the real world. Two tests, by Breusch and Pagan (1979) and Pesaran (2004), are used to examine whether there is cross-sectional dependence among the panels (the BRIC countries). The null hypothesis for both tests is that there is no cross-sectional dependence. Based on the test results in Table A3 (rows 4 and 5), we reject H0 under both tests and conclude that cross-sectional dependence is prevalent in this panel data model.
Heteroscedasticity: A test of heteroscedasticity is well explained in mainstream econometrics textbooks (like Greene, 2000), and it is applied to inspect the assumption of homoscedasticity indicated in iid ∼ (0, σ²). The null hypothesis is that the assumption of homoscedasticity is not violated. According to the test results in Table A3 (row 6), we reject H0 and conclude that heteroscedasticity is present in the panel data model.
Autocorrelation test: To assess whether the econometric assumption of no serial correlation is satisfied, we use the test designed by Born and Breitung (2016). The null hypothesis of the test is that there is no serial correlation in the panel regression model. In the light of the test results in Table A3 (row 7), we reject H0 and conclude that the assumption of no autocorrelation is violated.
Residual normality: To investigate whether the model assumption of residual normality holds, we apply two tests (Shapiro and Wilk, 1965; Shapiro and Francia, 1972). The null hypothesis in both cases is that the residuals are normal. Based on the test results reported in Table A3 (rows 8 and 9), we fail to reject H0 in both tests at the 1 and 5% levels, respectively. This means that the model residuals are fairly normal, a fact confirmed by the graphical illustrations in Figures A3 and A4 (in Appendix).
The overall finding of the post-estimation validation is that no normality treatment is indicated, whereas the panel data model is afflicted with problems of heteroscedasticity, serial correlation and cross-sectional dependence. To address these issues collectively, we apply the Driscoll and Kraay (1998) robust standard errors using the xtscc program by Hoechle (2007, p. 282), who confirms that the Driscoll-Kraay "covariance matrix estimator . . . produces heteroskedasticity- and autocorrelation-consistent standard errors that are robust to general forms of spatial and temporal dependence". Therefore, Table 1 presents the original OLS results in Model 1, while Model 2 is estimated with the Driscoll-Kraay robust standard errors. The choice of the Driscoll-Kraay robust model over alternatives is based on two factors. First, the conventional solutions include Newey and West (1994) and clustered (Rogers, 1994) robust errors. While these traditional robust methods control successfully for both heteroscedasticity and autocorrelation simultaneously, they fall short in addressing the cross-sectional dependence problem, necessitating the use of the Driscoll-Kraay method to solve all the problems. Secondly, in the current study, the number of panels is very limited (only four BRIC countries), making the choice of Rogers' clustered robust errors less effective. After using the Driscoll-Kraay robust standard errors, the results remain robust, as the variables maintain their statistical significance.
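To make the estimation step concrete, a minimal Python sketch of Equation (2) with Driscoll-Kraay standard errors is given below. This is not the author's Stata/xtscc code; the file name and column names are hypothetical, and linearmodels' "kernel" covariance is used here on the assumption that it implements the Driscoll-Kraay estimator for panels.

import pandas as pd
from linearmodels.panel import PanelOLS

# Hypothetical panel: rows indexed by (country, month), columns named for the regressors
df = pd.read_csv("brics_panel.csv", parse_dates=["month"]).set_index(["country", "month"])

dep = df["rho"]                                   # dynamic correlation with South Africa
exog = df[["trade", "BRICSexist", "BRICSexp", "BRICSexp_sq",
           "rate", "vix", "geopolitics"]]

# Panel (entity) and period (time) fixed effects, as in the model description
mod = PanelOLS(dep, exog, entity_effects=True, time_effects=True)
res = mod.fit(cov_type="kernel", kernel="bartlett")   # Driscoll-Kraay-type robust errors
print(res.summary)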
Stock market correlation between South Africa and BRIC
The MGARCH model in Equation (1) was used to generate the dynamic correlation time series graphed in Figure 3. A visual inspection of the graphs shows that in the period prior to BRIC establishment (June 2006), the stock market correlation between South Africa and the BRIC countries had a steady downward trend. This period of steady correlation decline was concurrent with the Asian financial crisis (from the middle to the late 1990s) and the dot-com technology shocks (in the early 2000s). The correlation of all the country pairs of South Africa and individual BRIC countries experienced a significant upward spike during the global financial crisis of 2008-2009, which was later followed by a steep decline (2010-2015). Overall, the correlation time series appears to oscillate between zero and 40%, with a few occasions of pronounced negative deepening, such as in the early 2000s and around the period of the European crisis (middle of the 2010 decade). Therefore, a general observation suggests that the dynamic correlation between South Africa and BRIC is susceptible to significant financial crises, but the corresponding shock responses vary unpredictably across different episodes of crises.
Table 1. Main empirical results of the panel data model. Statistical significance: *** 1%, ** 5%, * 10%. Coefficients are reported with test statistics in square brackets beneath them; standard errors are reported with p-values in round brackets beneath them. Source(s): Authors' computations.
Panel data results interpretation
The panel data model was employed in this study to investigate the relationship between BRIC regional economic integration, proxied with bilateral trade, and stock market linkages, quantified as the dynamic correlation of South Africa against individual BRIC countries. The results are presented in Table 1 and Figure 4. The results of the panel data model, presented in Table 1, provide answers to and evidence on whether BRIC economic integration has an impact on their stock market integration. Stock market integration (the dependent variable) is measured as the dynamic correlation of stock market indices between South Africa and each of the BRIC countries. Economic integration is proxied primarily with trade intensity (trade). The regression output shows a strong negative coefficient for trade, which is statistically significant at less than the 1% level. At first sight, the result looks counterintuitive, since basic economic reasoning hints at a positive association between trade activity and the stock market. Also, the dummy variable (BRICSexist) of "before and after" BRIC establishment is negative, suggesting that economic integration encourages stock market segmentation. However, a comprehensive inspection of all proxy variables for economic integration reveals the opposite, in line with economic intuition. First, the interaction terms of trade with BRICSexist and with an indicator of trade surplus or deficit (surplus-deficit) indicate that a bilateral trade surplus (as opposed to a deficit) after BRIC establishment has a positive impact on stock market co-movement. Second, a positive coefficient on the proxy for BRIC experience and a negative coefficient on its square provide further clarity. Prior to BRICS' existence and in its early years of integration, trade with BRIC countries was related to equity market segmentation, but this changed to a positive association with market integration after three years of BRICS' existence.
The rest of the variables in Table 1 control for other factors regarding stock market fundamentals and show both statistical and economic significance. An increase in the interest rate differential may signal positive investment opportunities, which is captured by a positive effect that signals stock market integration. A positive coefficient on financial market risk (VIX) may indicate that the individual BRICS stock markets have a similar shock response to global risk. The negative coefficient for geopolitical risk may reflect incompatibilities in socio-political and cultural differences among the BRICS countries, which support stock market segmentation. The literature (Hooy and Goh, 2008) indicates that market segmentation may also arise because of institutional inefficiencies and differences.
Graphical results interpretation: Figure 4 provides further clarity on the relationship between bilateral trade intensity and stock market integration. The graph presents the predictive margins of South Africa-BRIC bilateral trade on stock market integration. On the x-axis we have an index of BRICS experience (BRICSexp) proxied by the log of distance-weighted BRIC age (see Equation 4), while the vertical axis captures the predicted values of BRIC integration as measured by the dynamic stock market correlation. The minimum turning point of BRICSexp corresponds to three years of BRICS' existence. Each of the four lines is a measure of the U-shaped BRICS experience. Comparing the bottom pair of lines (before BRICS' existence) with the top pair (after BRICS), the graph shows that the effect of surplus bilateral trade intensity on stock market integration always exceeds that of deficit trade, and that the overall impact is highest after BRICS' existence and slopes upwards after three years of entrenchment. Therefore, taken together, Table 1 and Figure 4 show that surplus bilateral trade between South Africa and BRIC manifests a positive association with BRICS' economic integration and that this relationship will continue to strengthen as the years of BRICS' existence increase, other things being constant. To summarise, the results provide overall support and confirmation for the study hypotheses.
Discussion
Other things being equal, ". . . capital should automatically flow from capital-abundant to capital-scarce countries", thereby encouraging market integration. On the contrary, "Asia's considerable net savings tend to flow to capital-abundant countries rather than to capital-scarce ones as the theory would predict" (Park, 2013, p. 1). This narrative confirms that, in practice, market integration is more an empirical question than solely a matter of economic theory, which calls for diverse testing before stylised facts may be deduced.
The current study is closest to those of Paramati et al. (2016) and Song et al. (2021). These two papers studied market integration in Asia, focusing on the same countries (Australia, China, India, Japan and Thailand). Paramati et al. (2016) investigated market integration in Australia-Asia bilateral trade, while Song et al. (2021) focused on China-Asia bilateral trade. In the current study, we adopt the same framework and investigate market integration in the BRICS economic bloc from the perspective of South Africa-BRIC bilateral trade. While there is significant variation in the methodological procedures and proxy variables of economic integration, overall, the results concur. Other supportive studies that confirm a positive relationship between REI and stock market integration include those of Fazio (2007) and Karim and Majid (2010). The current study provides a different perspective from the BRICS studies that found partial stock market integration (Chittedi, 2010) or insignificant stock market inter-linkages (Sharma et al., 2013). More importantly, the current study prompts an interesting further question: if South Africa manifests trade-linked stock market integration with BRICS, how much of this integration is conveyed (or spills over) to other African regional blocs with which South Africa is observed to be integrated (Ekpo and Chuku, 2017; Piesse and Hearn, 2002), such as the SACU or SADC economic regions?
Theoretical implications
The BRICS economic bloc is over 10 years old, and it is now due for performance scrutiny. Chatterjee and Naka (2022, p. 3) have examined BRICS's life from commencement to date and concluded that "Academic scholarship on the implications of BRICS as an entity, the power of its pooling together of economic resources or its political valence as a discursive formation, is relatively underdeveloped." The outcome of the current research contributes to the needed performance diagnosis of the BRICS configuration. Going forward, the positive results of trade-inspired financial integration are subject to the extent to which the BRICS member countries respond to economic crises and whether the group's cohesion is deepened and sustained.
Policy and business implications
The research outcomes of the current study have policy and practical implications for government economic administrators, regulation agencies, multinational firms, export financiers, global investors, as well as stock market institutional practitioners. Conceivably, economic policymakers involved in international trade promotion need to be aware of the feedback effect emanating from the stock market regulatory process, given the association between financial market integration and economic integration. Multinational firms and international export financing agencies should benefit from knowing the potential effect of trade liberalisation policy on stock market performance, which should influence the timing of their financing plans. Global investors need to have wide knowledge of stock market drivers (including economic integration). Therefore, this type of research cautions analysts that if their global risk modelling excludes economic integration issues, then such price determination models may be incomplete.
Limitations and future research agenda
Even though the results of the study are insightful, the empirical work was restricted by data availability. Therefore, a similar study could be extended to other regional economic blocs where data access is more generous. The emergence of unique economic blocs like BRICS, ongoing economic transformations and the economic disruption of technological progress continue to make the subject of economic integration and financial market integration topical and essential research. Further studies may probe the association from the perspective of each BRICS country and identify the source and destination of the dominant impacts.
Conclusion
This study has shown that prior to BRICS' existence, and including its early years of installation (the first three years), there were signs of market segmentation when observed from the perspective of South Africa's bilateral trade with the BRIC countries. However, this changed three years after the inception of the BRICS economic bloc (indicative of experience accumulation and/or entrenchment), when the results show unambiguous evidence of market integration, particularly during surplus bilateral trade episodes. The findings of the study allow us to conclude that there is a positive association between regional economic integration and dynamic stock market linkages. Overall, the results of a study of this nature are beneficial to policymakers and to global financial investors for whom diversification considerations are essential.
Figure 1. The historical trend lines of the BRICS stock market price indices.
Figure 3. Dynamic correlation between the South African and BRIC stock markets.
Figure 4. The predictive margins of South Africa's bilateral trade with BRIC.
Figure A2. Shock responses of individual BRIC countries to one-standard-deviation innovations from South Africa.
Panel data model validation
Prior to the interpretation of results, it is important to address the model validation necessities. In this regard, a battery of tests is applied to confirm the model selection (among pooled vs fixed effects, fixed vs random effects and time effects), as well as to validate post-estimation model assumptions, which entails tests on heteroscedasticity, autocorrelation and cross-sectional dependence.
The variable w_it represents the panel-means time trend, and it takes the value of zero if none is included in the regression. The null hypothesis of the unit root test is H0: f_i = 0 for all i, against the alternative H1: f_i < 0. The test results of stationarity are presented in Table A2 (in Appendix) and explained next.
"Economics"
] |
Terahertz spectral imaging based quantitative determination of spatial distribution of plant leaf constituents
Background: Plant leaves have heterogeneous structures composed of spatially variable distributions of liquid, solid, and gaseous matter. These contents and distribution characteristics correlate with leaf vigor and phylogenic traits. Recently, terahertz (THz) techniques have been shown to provide access to leaf water content and its spatial heterogeneity, but the solid matter content and gas network information have usually been ignored, even though they also affect the THz dielectric function of the leaf. Results: A particle swarm optimization algorithm is employed for a one-off quantitative assay of the spatially variable distribution of leaf components from THz data, based on an extended Landau–Lifshitz–Looyenga model, and experimentally verified using Bougainvillea spectabilis leaves. A good agreement is demonstrated for water and solid matter contents between the THz-based method and gravimetric analysis. In particular, the THz-based method shows good sensitivity to fine-grained differences between leaf growth and development stages. Furthermore, subtle features such as damage and wounds in the leaf can be discovered through THz detection and comparison of the spatial heterogeneity of component contents. Conclusions: This THz imaging method provides a quantitative assay of leaf constituent contents with their spatial distribution features, and has potential applications in crop disease diagnosis and farmland cultivation management.
Background
Plant leaves, composed of water, solid matter, and gas, play a key role in photosynthesis [1,2], respiration [3] and water transport [4,5]. The water abundance of leaves is closely connected with vigor and phylogenic traits such as the structure, shape and photosynthetic efficiency of plants [6]. In addition, the solid matter distribution and the gas transport network are also linked with dynamic metabolic activity, serving as either raw materials or products of photosynthesis and respiration [7,8].
Usually, abnormal concentrations and distribution characteristics of constituent substances indicate a lack of nutrients or attack by pests and diseases. Consequently, qualitative and quantitative methods for assessing the spatial variability of water, solid matter and gas contribute to our understanding of plant responses to environmental changes under normal and stress conditions [9,10], which is indispensable for managing agricultural production. In particular, it is worth emphasizing that quantitative evaluation would play a more important role in establishing unified measurement standards, facilitating cross-comparison between different samples and different species, and reflecting the state of plants more conveniently and accurately [11,12], including leaf transpiration kinetics, plant water stress, and dry matter accumulation.
A number of techniques have been used to quantify a leaf's components and provide technical support for precision agriculture. Gravimetry is the standard method for quantifying the water content of leaves by comparing the difference between the fresh and fully dried weights of the leaf. The gravimetric method is simple and reliable, but it is destructive and not real-time, making it unsuitable for noninvasive and continuous monitoring in greenhouses or fields. To avoid the disadvantages of destructive measures, more and more nondestructive testing (NDT) methods based on various parts of the electromagnetic spectrum have been used to develop agricultural sensing technologies [13]. The leaf water content of plants can be quickly and nondestructively detected through infrared [14], microwave [15], nuclear magnetic resonance [16], thermal imaging [17], and hyperspectral imaging [18] techniques, with the advantages of being label-free and in vivo, meeting real-time and in situ monitoring requirements such as water flow dynamics monitoring and the determination of temporal and spatial heterogeneity. Individually or jointly, these NDT techniques provide powerful support in the fields of plant physiology and agronomy.
Meanwhile, a developing NDT technique based on the terahertz (THz) spectral region has shown great potential for water detection in the research fields of botany and agronomy [19,20]. This is because the THz band, extending from approximately 0.1 THz to 10 THz, coincides with the low-frequency vibrations of hydrogen bonds, endowing it with high sensitivity to water molecules. Also, its low photon energy is gentle enough to avoid radiation damage to samples. With the increasing ability to analyze spectral information, water content testing based on THz spectroscopy has been maturing through three developmental stages, i.e., qualitative analysis, relative quantitative analysis and absolute quantitative analysis [21-23]. Unfortunately, almost all studies have overlooked the analysis of solid content, as opposed to water content, thereby discarding relevant information contained in the spectrum that arises from skeletal vibrations within and between nucleic acids, proteins, sugars and lipids, all of which fall within the THz range. In fact, the Landau-Lifshitz-Looyenga model (LLL model) [24] has the ability to calculate target contents by relating the dielectric permittivity of a heterogeneous mixture to the dielectric permittivities of its components. However, to date it has only been used to determine leaf water content [23,25], while the potential for a comprehensive analysis of leaf component contents still needs to be ascertained and realized; this would help differentiate plant growth and development stages [26] and reveal which parts of the leaf are the most sensitive at different metabolic states or under changing environmental conditions. Furthermore, nearly all of the quantitative analytical methods are based on a single-point spectral measurement, such as establishing a linear equation between water content and spectral parameters [27,28], or calculating water content using an effective medium model [25,29]. Such approaches neglect the spatial heterogeneity of leaves, which is precisely the foundation for dissecting the metabolic processes of different leaf areas and tissues. More quantitative details about the spectral, temporal, and spatial variability of leaf contents, presented graphically, could offer decision makers intuitive and useful information about plants and crops.
In this article, we present a THz spectral imaging method to quantify the volumetric fractions of water, solid matter, and gas in a leaf. In this method, the dielectric relationship between the leaf and each component is given by an effective medium model, and a particle swarm optimization algorithm is introduced to calculate the relevant parameters. Our quantitative imaging results for leaves in different water states are strongly correlated with those of the traditional method, demonstrating the feasibility of quantitatively monitoring the spatial variability of leaf water, solid matter and gas contents with this new method, contents that are linked to the physiological characteristics of crops under conditions of damage, pests and disease. This study could help expand the application scenarios of terahertz spectral imaging in the fields of plant physiology and agronomy.
Sample preparation
About 80 individual whole-leaf samples were picked from Bougainvillea spectabilis for this study, cultivated (Fig. 1a) on the campus of the Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, in Chongqing, China. Among them, more than 50 leaves randomly selected from different plants were used to establish the THz-based method. In addition, six sets of leaves, growing in different parts of a branch, were collected from different plants to investigate the sensitivity and detection limits of this method, given the subtle differences in water content of leaves at different maturity levels. Further, five wounded leaves were selected to validate the ability of the method to detect special structures in leaves, especially for suspected cases caused by pests and diseases. Figure 1b-i show photos of representative leaves. All leaf samples were first used for THz spectral imaging and then evaluated by the gravimetric method.
To establish the calculation method for distinguishing water, solid matter and gas components directly from the spectra of the leaf complex, the individual solid matter and water first need to be collected and tested. Solid matter was prepared from freeze-dried leaves using a freeze dryer (SCIENTZ-30ND, Ningbo Scientz Biotechnology Co., Ltd., CN) and then pressed into compact dry-matter wafers about 300 microns thick with a small manual tablet press (HY-12, Tianjin Tianguang Optical Instrument Co., Ltd., CN). Pure water meeting the grade III standard for Chinese laboratory water was obtained from a Reagent Water System (CLW-K10, Chongqing Qianlai Instrument Co., Ltd., CN).
THz-TDS system and optical parameter extraction
A terahertz time-domain spectroscopic (THz-TDS) system (T-Gauge 5000 model from Advanced Photonix, Inc., USA) was deployed in our research, with a spectral bandwidth ranging from 0.1 to 3.5 THz, and a signal to noise ratio better than 60 dB at the frequency of analysis. The schematic diagram of the whole system in the transmission mode is displayed in Fig. 2a, and the XY two-dimensional moving stage was assembled into the system for imaging, shown in Fig. 2b. The raster scanning step size is 0.25 mm and the scanning speed is 50 mm/s, i.e., the dwell time at each pixel is 5 ms, longer than the spectral scanning time of 1 ms of the THz system. The imaging area of the XY-stage is 4.5 cm × 5.5 cm. The spatial resolution of the THz imaging system is about 1 mm.
All the main specifications of the system meet the test requirements [30,31].
For a sample with a rough surface, like a leaf, the attenuation of electromagnetic waves results from both absorption and scattering, which means the contributions of absorption and scattering loss both need to be considered, i.e., the total absorption coefficient is α = α_abs + α_scat. A Rayleigh roughness factor was employed to describe the influence of scattering due to surface roughness [32]. In the resulting expression for the scattering-induced absorption, τ denotes the degree of surface roughness expressed by the standard deviation of the height profile, θ is the angle of incidence, λ is the free-space wavelength, and d is the thickness of the leaf. Considering the scattering effect leads to a more accurate absorption coefficient for the leaves. The solid matter was tested in an iron square holder with a round hole (Fig. 2c); its refractive index and absorption coefficient in the THz band can be obtained from the transmission measurements following [33].
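For context, the transmission-mode relations usually used for this extraction step are sketched below. These are the standard thick-slab formulas (neglecting multiple internal reflections), written in terms of the phase difference Δφ(ω) and amplitude ratio ρ(ω) between sample and reference pulses; they are an assumption about the general form of the relations cited from [33], not a verbatim reproduction of the paper's equations:

n(\omega) = 1 + \frac{c\,\Delta\varphi(\omega)}{\omega d},
\qquad
\alpha(\omega) = \frac{2}{d}\,\ln\!\left[\frac{4\,n(\omega)}{\rho(\omega)\,\bigl(n(\omega)+1\bigr)^{2}}\right]

Here c is the speed of light in vacuum and d the sample thickness; the Fresnel factor 4n/(n+1)² accounts for reflection losses at the two air-sample interfaces.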
The solid matter was tested in an iron square holder with a round hole, Fig. 2c, whose refractive index and absorption coefficient in THz band can be obtained by [33] (1) The optical parameters of water were obtained by attenuated total reflectance (ATR), using a setup that is mainly composed of a silicon prism with a refractive index of 3.42 in the THz band, shown in Fig. 2d. The THz pulse incident onto the ATR system is totally reflected at the interface between the prism and the sample, where an evanescent field is created on the sample side of the interface that interact with the sample. In THz-ATR spectroscopy, complex refractive index of liquid sample can be determined with higher accuracy than in commonly used reflection or transmission modes of THz-TDS [34,35]. Assuming incidence from the prism side with the complex refractive index of n pri to the sample side with the complex refractive index of n sam , with the incident angle θ , n sam is given by the solution of the simultaneous equations where r and r ′ are the Fresnel's reflection coefficient of the prism-sample interface and of prism-air. The complex refractive index is n = n + ik , and k = cα(ω)/(2ω) is the All the measurements were carried out in an environment with a constant temperature during the whole process (22 °C ± 0.1 °C). And the leaves were placed in ambient air during THz imaging, while the humidity was kept under 2% by dry nitrogen (N 2 ) purge when measuring liquid and solid matter.
Calculation of water, solid matter and gas contents
Effective medium approximation is an appropriate model to obtain the terahertz dielectric function of hydrated tissue. When the leaf is regarded as a combination of water, solid matter and gas, the leaf's effective permittivity is related to the permittivity and volume fraction of each component. Conversely, if the permittivities of the leaf, 'pure' water, solid matter and gas are individually known, an effective medium model can be used to calculate their respective volume fractions.
Within a third-order extension of the LLL model, proposed by Jördens et al. [29] to calculate leaf water content, the dielectric function of the leaf is given by Equation (10), where ε_i are the dielectric functions and a_i are the relative volumetric concentrations of the different components; the indices refer to leaf (L), water (W), solid matter (S) and gas (G), respectively.
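For reference, the basic Looyenga cube-root mixing rule for a three-component mixture, which Equation (10) extends, can be written as follows; treating this as the exact form used here is an assumption, since the higher-order extension terms are not spelled out in the text above:

\varepsilon_L^{1/3} = a_W\,\varepsilon_W^{1/3} + a_S\,\varepsilon_S^{1/3} + a_G\,\varepsilon_G^{1/3},
\qquad a_W + a_S + a_G = 1

With ε_W and ε_S known from the measurements described below, and ε_G ≈ 1 for air, the only unknowns are the volume fractions a_i.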
As stated in the previous section, the dielectric function of the solid matter, shown in Fig. 3 (black lines), was determined by measuring compact dried B. spectabilis leaf wafers using THz-TDS; similar numerical values were obtained for different kinds of plants owing to their similar material composition (Additional file 1: Fig. S1). The dielectric function of deionized water, determined by THz-ATR, is shown in Fig. 3 (blue lines) and is in complete agreement with previously reported data [36]. The dielectric functions of solid matter and water were used as inputs for the effective dielectric model.
A theoretical value of the dielectric function of the leaf can be calculated from Equation (10). Furthermore, Eqs. (3, 4, 8 and 9) are used to calculate a theoretical transmission coefficient of the leaf, T_theo. At the same time, the experimental transmission coefficient of the leaf, T_exp, can be determined by THz-TDS. The coefficients a_i that bring T_theo into agreement with T_exp can be considered the true volume fractions of each component of the leaf.
In order to match the theoretical transmission coefficient of the leaf with the experimentally obtained one, a particle swarm optimization (PSO) algorithm with a soft boundary condition has been adopted to adjust the parameters a_W, a_S and a_G. This algorithm is easy to implement and able to generate feasible solutions within a manageable amount of computation time.
Particle swarm optimization is a population-based iterative optimization technique that locates the solution to an optimization problem by allowing candidate particles to fly around the solution space. The candidate particles' trajectories are affected by the best-performing candidate solution (Gb) and the best location they have themselves visited (Pb) [37]. The movements of the particles in canonical PSO are described by Equations (11) and (12), where v_{i,k} and p_{i,k} (i = water and solid matter) are the velocity and position of the kth particle, ω is the inertial weight, the constants c_1 and c_2 are acceleration coefficients, and r is a uniform random number within [0, 1].
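Equations (11) and (12) follow the canonical PSO velocity and position updates. The sketch below illustrates this fitting step in Python; it is a minimal illustration rather than the authors' MATLAB implementation, and the misfit() objective is a hypothetical stand-in for the comparison between T_theo and T_exp:

import numpy as np

rng = np.random.default_rng(0)

def misfit(a):
    """Hypothetical objective standing in for |T_theo(a) - T_exp|."""
    a_w, a_s = a
    target = np.array([0.70, 0.20])   # assumed 'true' fractions, for illustration only
    return float(np.sum((np.array([a_w, a_s]) - target) ** 2))

def pso(objective, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5):
    pos = rng.uniform(0.0, 1.0, size=(n_particles, 2))    # positions: (a_W, a_S)
    vel = rng.uniform(-0.1, 0.1, size=(n_particles, 2))   # velocities
    pbest = pos.copy()                                    # personal best positions (Pb)
    pbest_val = np.array([objective(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()            # global best position (Gb)
    for _ in range(n_iter):
        r1 = rng.random((n_particles, 2))
        r2 = rng.random((n_particles, 2))
        # Canonical velocity and position updates (Eqs. 11 and 12)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        new_pos = pos + vel
        # Soft boundary: moves giving fractions outside [0, 1] or summing above 1 are rejected
        feasible = np.all((new_pos >= 0) & (new_pos <= 1), axis=1) & (new_pos.sum(axis=1) <= 1)
        pos = np.where(feasible[:, None], new_pos, pos)
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, 1.0 - gbest.sum()                       # (a_W, a_S) and a_G

a_water_solid, a_gas = pso(misfit)
print("a_W, a_S =", a_water_solid, "; a_G =", a_gas)

In practice each pixel's measured spectrum defines its own objective, and the returned fractions are reassembled into the water, solid matter and gas maps described below.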
The particle positions and velocities are randomly initialized. Afterwards, they move in the solution space guided by Eqs. (11, 12). Once a particle goes beyond the boundary, it is reset to its previous position and its flight path is reprogrammed. The fitness of all particles is evaluated, and the global and personal best positions are updated if needed. The global best at the end of the simulation is taken as the solution to the problem, which is the volumetric fraction of water, solid matter and gas in the present case. The convergence reliability of the PSO in solving for the percentage volumes is the fundamental element for quantifying the distribution map of each component. Therefore, 10 THz spectra were randomly selected from the THz imaging data sets of 50 leaves, and 30 calculations were performed independently on each spectrum to verify the performance of the PSO in the quantitative analysis of the leaf's three-component model. Table 1 shows a typical set of results; the relative standard deviation of the calculated content of each component is less than 5%, and in particular that of water is less than 1%. This indicates that the content of each component of the leaves obtained by the PSO calculation is statistically trustworthy.
Fig. 3. The real (ε′) and imaginary (ε″) parts of the dielectric functions of solid matter (ε_s, black lines) and water (ε_w, blue lines) measured by THz-TDS and THz-ATR, respectively.
Terahertz spectral imaging for quantitative distribution map
Point-by-point THz spectral imaging was performed on the tested leaf by THz-TDS and the two-dimensional moving stage for collecting spectral data of each point on the leaf. The imaging data were then processed by a software program written in MATLAB which integrates the LLL model and PSO algorithm to calculate the water, solid matter and gas content. The program could recombine single point data to draw distribution maps of water, solid matter and gas content. These results were stored as a data group and displayed visually through the images.
Gravimetric water content testing
To evaluate the accuracy of the THz measurements, each leaf was stored in a plastic bag directly after imaging, and the fresh weight (FW) was determined using an electronic balance (ME204E, Mettler-Toledo, CH) immediately afterward. After drying the leaves at 95 °C for 4 to 8 h until the weight was constant, the corresponding dry weights (DW) were determined. The gravimetric water content (GWC) and the gravimetric solid content (GSC) were then calculated for each leaf [38,39].
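The defining relations implied by this procedure, stated here as the standard gravimetric definitions rather than a verbatim reproduction of the paper's equations (in particular, expressing both contents relative to the fresh weight is an assumption), are:

\mathrm{GWC} = \frac{FW - DW}{FW} \times 100\%,
\qquad
\mathrm{GSC} = \frac{DW}{FW} \times 100\%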
The establishment and adequacy of the method for the quantitative assessment of spatial variability of water, solid tissue and gas in plant leaves
By collecting the THz transmission spectral data and processing them with the LLL model and PSO, 2-D images of the leaf (Fig. 1b) were constructed (Fig. 4); each pixel (0.25 mm by 0.25 mm in size) in the image corresponds to the percentage volume of water, solid matter or air at one point of the leaf. The color bar represents the exact values of percentage volume, and a lighter color corresponds to a higher material content. These three images respectively reflect the spatial distributions of water (Fig. 4a), solid matter (b), and gas (c) in the leaf, and show the spatial positions of veins and mesophyll. In general, there is more water, less solid matter and less air in the veins. Next, the leaf water and solid matter contents were measured by the traditional gravimetric method to further corroborate the reliability and applicability of the THz-based method. A group of leaf disks with different matter contents, made from blades stripped of main veins and cut into rectangles of about 2 cm² with different natural air-drying times, were used for both THz imaging and weight-based water content determination. For terahertz imaging, the average of the corresponding parameters is used to characterize the substance content state of the whole leaf. From the Pearson analysis, the correlation coefficients for water (Fig. 5a) and solid matter (Fig. 5b) are 0.936 and 0.937, and the coefficients of determination for water and solid matter are 0.87 and 0.88, respectively. The results indicate that there is a strong positive linear correlation between the THz-based measurements and the gravimetric ones, which demonstrates that the algorithm proposed in this paper converges to the correct proportions of the constituent substances of the leaf and that the quantitative analysis results based on the THz signals are credible.
Sensitivity of THz spectral discrimination
Water accounts for the largest proportion of leaf constituents and is vital to the healthy growth of plants. Under normal physiological conditions, the dynamic range of water content in leaves is limited [40], requiring a sufficiently sensitive test method for predicting material changes. Water content is the most representative index for testing the sensitivity of a given quantitative assessment. Three whole leaf blades located on the same branch, as shown in Fig. 1c, were comparatively analyzed with the THz method and the weighing method. The water contents of L-a (at the tip of the branch, shown in Fig. 1d), L-b (at the middle of the branch, shown in Fig. 1e), and L-c (near the trunk, shown in Fig. 1f) are 70.52%, 74.11%, and 75.29%, respectively, as measured by the gravimetric method. THz images were used to quantitatively describe the subtle distinctions or variations among leaves at different maturity levels. During the process, the contrast ratio of the THz images was enhanced by stretching the gray values of 0.6-0.9 in order to improve the visual differentiation of water content. From Fig. 6a-c, the brightness of the grayscale images gradually increases, indicating that the water content of the leaves increases gradually from the top to the base of the branch, consistent with the trend of the results obtained by the traditional method. This agreement is demonstrated more clearly by the exact numerical values extracted from the THz images, shown in Fig. 6d and e, where the average THz-based water contents of the mesophyll regions are 70.09%, 74.26% and 76.44%, while those of the lateral veins are 77.32%, 81.43% and 86.48%. These tests demonstrate that the THz-based water content measurement method has sufficient sensitivity to quantitatively detect the water content differences of leaves in similar states.
THz leaf wound marking based on changes in composition and structure
Water, solid matter and gas, as fundamental constituents in plants, together form the vegetative tissues; the abundance of each is closely tied to leaf vigor, and abnormal morphological characteristics often indicate a lack of nutrients or damage by pests and diseases. In this study, several types of leaf wounds, including a perforation and some yellow spots in one naturally damaged leaf (Fig. 1g, f), are clearly presented through THz spectral imaging (Fig. 7). These different kinds of foliar injuries are displayed clearly in the images based on the spatial variability of water (Fig. 7a), solid matter (b), and gas (c). The predictable and obvious lower water, lower solid matter and higher gas contents at the penetrating injury are detected in the THz images, with numerical values the same as the blank background. Further, the number, positions and shapes of the yellow spots at different stages are also confirmed through the THz images: at these locations the spots are darker in the water- and solid matter-based images and lighter in the gas-based image. Besides, the lighter areas around the wound in Fig. 7b, g-i show a higher solid matter content. Taken together, these observations reveal the ability of THz imaging to describe wounds and other special structures in leaves.
Fig. 6. Leaves at the tip (Fig. 1d), middle (Fig. 1e) and root (Fig. 1f) of the branch.
Fig. 7. THz images of the damaged leaf (Fig. 1g) based on water (a), solid matter (b), and gas (c) content; there is a perforation in the lower left and some yellow mottling in the upper right of the leaf (Fig. 1h-j); d-i magnify the damage details; the color represents the actual value of component content; scale bars: a-c = 1 cm, d-i = 3 mm.
Discussion
The water-based image (Fig. 4a) and the gas-based image (Fig. 4c) show a clear outline of the veins, while the differences between veins and mesophyll are somewhat cloudy in the solid matter-based image. Veins are penetrated by vascular tissue and have special channels for water transport, thus containing more water than mesophyll [16], resulting in the significant contrast between vein and mesophyll in the water-based image. On the other hand, the gas transport network in the leaf is mainly composed of the intercellular air spaces throughout the mesophyll. Gas molecules permeate the mesophyll due to its abundant tissue with large interstitial spaces. However, the wall cells that make up the conduits and sieve tubes in the veins are densely arranged, and the extracellular matrix increases the collagenic and fibrous contents of the wall, impeding gas exchange and circulation [41]. These morphological features are reflected in the gas-based image. Stained blade-section images of a series of different dicotyledonous plants, with dyed nuclei, chromosomes and plant proteins, can be used to help ascertain that the anatomical and morphological characteristics of vein and mesophyll are indeed different, whereas the solid matter content alone exhibits too little difference to act as a marker [42]. Moreover, the similarity of the THz transmission signals at the top of the main vein, the secondary veins and the mesophyll after natural drying also indicates a similar solid matter content between veins and mesophyll [31], consistent with the solid matter-based image (Fig. 4b). In addition, it should be pointed out that the bright and dark streaks at the leaf margin are due to diffraction, which has nothing to do with the structure of the leaf itself and does not affect the internal details of the images. Thus, the THz imaging results suggest that the proposed method can be used to image leaf morphology and has the potential to detect the spatial variability of the water, dry-matter and gaseous-matter contents of plant leaves.
Gravimetry reflects the average water distribution in the leaf, including mesophyll and veins. The mesophyll water content obtained from THz imaging is closer to the gravimetric value (Fig. 6d) because mesophyll makes up the larger proportion of the leaf. Conversely, because the water content in veins is higher than in mesophyll [16], the THz imaging value for veins is greater than the weighing result (Fig. 6e). The results from the two methods cannot be matched perfectly, but the variation trend from THz imaging is consistent with the gold-standard gravimetric water content measurements. Moreover, our analysis revealed another interesting phenomenon: the difference between the water contents measured by the two methods was larger in leaves farther from the tip of the branch. Leaf water actually consists of both free water and bound water (hydration water) [43], and the absorption of bound water in the THz band is larger than that of free water [44]. When the difference between free and bound water is ignored in the calculation, the retrieved water volume percentage is higher than the actual value, and the error grows as the bound-water content increases. The tender leaves near the tip contain more free water owing to the faster metabolic rate of their cells, which explains the smaller difference there. Nuances such as this suggest that the THz spectral imaging method has the further potential to separate intracellular water into free and bound fractions, whose proportions are strongly associated with metabolic levels [45].
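The direction and growth of this bias follow directly from the absorption argument; the short numerical sketch below illustrates it, with a purely illustrative bound-to-free absorption ratio and volume fractions, since the actual THz absorption values are not given in this excerpt.

```python
# Illustrative only: a_bound/a_free = 1.5 is an assumed ratio, not a measured value.
a_free, a_bound = 1.0, 1.5          # relative THz absorption of free vs bound water
f_free, f_bound = 0.50, 0.10        # assumed true volume fractions in a hypothetical leaf

measured = f_free * a_free + f_bound * a_bound   # what the THz measurement "sees"
estimated = measured / a_free                     # retrieval assuming all water is free
true_total = f_free + f_bound
print(f"true water fraction {true_total:.2f}, retrieved {estimated:.2f}")
# The overestimate (here 0.05) grows linearly with the bound-water fraction,
# matching the larger THz-vs-gravimetry gap seen in older leaves.
```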
In addition, a potential application to wound detection is proposed, because the grayscale characteristics of yellow spots in THz images differ from those of normal mesophyll and of penetrating injuries (Fig. 7). Compared with normal blade tissue, the solid matter content in the necrotic regions, produced under environmental stress and/or by insect pests and plant disease, is lower [46] but still present. The increase in dry matter around the wound seen in the THz images may come from the accumulation of lignin, callose and similar substances, which help slow water loss and improve the disease resistance of the plant [47][48][49]. Besides, the thickness of the leaf is overestimated at the wound, because the leaf is assumed to be uniform during the calculation; consequently the calculated gas content is higher, as gas near the wound surface is incorrectly included. More noteworthy is that this developing THz imaging method delineated the boundary cleanly and clearly, through several indicators, at an early stage when an optical instrument could not yet define the extent of the yellow leaf spot. The development of sensitive THz imaging platforms and precise image-processing schemes opens the possibility of disease-diagnostic studies based on the water, solid matter and gas variations of plant leaves. These results herald the capability of THz imaging for early warning of plant blade damage, and its potential to identify disease types, because the substance composition and morphological characteristics of plant lesion tissue vary greatly with disease type and stage of development [50].
Conclusions
A THz spectral imaging method combining particle swarm optimization with an extended Landau-Lifshitz-Looyenga model was proposed and demonstrated, which can visualize the quantitative distributions of water, solid matter and gas in leaves. The good agreement between the THz-based measurements and standard gravimetric data suggests that the THz method has great potential for measuring leaf material contents in a simple, fast, non-destructive and label-free manner. Details of component content and morphological structure in different parts of the blade can be resolved clearly in the leaf THz images, indicating that THz imaging could play an important role in crop disease diagnosis and farmland cultivation management. With the continuous development and improvement of the theory and technology of THz spectroscopy and imaging, it can be expected to become a standard tool in botany, agronomy and crop science in the near future.
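The retrieval step named above can be illustrated with a minimal sketch: the standard three-component Landau-Lifshitz-Looyenga mixing rule is inverted for the volume fractions, with a coarse grid search standing in for the particle swarm optimizer. The permittivity values are illustrative placeholders (the measured values appear only in Additional file 1), and the paper's extended model may differ in detail.

```python
# Minimal sketch of a per-pixel LLL inversion; not the authors' implementation.
import numpy as np

eps_water, eps_solid, eps_gas = 5.0 + 2.0j, 2.5 + 0.1j, 1.0 + 0.0j  # assumed THz values

def lll_effective(f_w, f_s):
    """Three-component Landau-Lifshitz-Looyenga effective permittivity."""
    f_g = 1.0 - f_w - f_s
    return (f_w * eps_water ** (1/3) + f_s * eps_solid ** (1/3) + f_g * eps_gas ** (1/3)) ** 3

def invert(eps_measured, step=0.01):
    """Recover (water, solid, gas) fractions that best reproduce eps_measured."""
    best = None
    for f_w in np.arange(0.0, 1.0 + step, step):
        for f_s in np.arange(0.0, 1.0 - f_w + step, step):
            err = abs(lll_effective(f_w, f_s) - eps_measured)
            if best is None or err < best[0]:
                best = (err, f_w, f_s, 1.0 - f_w - f_s)
    return best[1:]

eps_pixel = lll_effective(0.6, 0.3)      # synthetic "measurement" for one pixel
print(invert(eps_pixel))                  # ~ (0.6, 0.3, 0.1)
```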
Additional file 1: Fig. S1. Dielectric permittivity of water and three kinds of leaves' solid matter. The three kinds of solid matter from leaves have similar values of real part (black lines) and imaginary part (blue lines) of dielectric permittivity, which are much smaller than those of water. | 6,592.6 | 2019-09-13T00:00:00.000 | [
"Environmental Science",
"Materials Science",
"Physics"
] |
Design of Ultra Fast RCE Photodetectors for Optical Communications systems
Two key parameters for RCE photodetectors that govern their suitability for ultrafast optical communication systems are considered. These are the quantum efficiency and the bandwidth efficiency product. A closed analytical form has been derived for quantum efficiency, which incorporates the structural parameters of the photodetector. Based on the simulation results, an optimization and design procedure for these photodetectors has been developed.
Introduction
Optical photodetectors are the key components in optical communication systems. Their quantum efficiency and optical bandwidth characteristics are of major importance in the design of fiber-optic data communication systems. A number of structures and design optimization techniques have been proposed in the literature to achieve high values for these two parameters (Kishino, 1991), (Unlu, 1992, 1995), (Tan, 1995), (Tung, 1997), (Jervase, 1998, 2000). One of the promising types is the Resonant Cavity Enhanced (RCE) photodetector. In the RCE photodetector, the active region is placed inside a Fabry-Perot cavity between two mirrors made of quarter-wave stacks (QWS), so that the signal light makes more than one absorbing pass through the active region (Kishino, 1991), (Unlu, 1992, 1995), (Tan, 1995), (Tung, 1997), (Jervase, 1998, 2000), (Ozbay, 1997). The resulting structure needs only a very thin absorbing intrinsic region to achieve high quantum efficiency as well as a high speed of response. Hence, RCE photodiodes with quantum efficiencies close to unity can be designed.
In this work, the quantum efficiency is formulated in a closed analytical form. This form includes the structural parameters of the photodetector and takes into consideration the wavelength dependence of the end-mirror reflectivities and of the absorption coefficient of the intrinsic region. Using this formulation, an optimization and design procedure for RCE photodetectors has been developed, and design charts have been generated with the quantum efficiency, quality factor and frequency bandwidth as input design parameters. Figure 1 shows the integrated device under investigation, which is based on the In0.53Ga0.47As/InP heterojunction system. The speed of response of this type of PIN structure is mainly limited by the transit time and the diffusion time of photogenerated carriers in the space-charge region and the neutral regions. It is also limited by the charging and discharging times of traps at the heterojunction interfaces and by the inherent and parasitic capacitances of the structure. The last two effects can be minimized by incorporating non-absorbing lateral layers with graded heterojunctions to the active region, and by keeping the heterojunctions and the active region depleted of any trapped charge that might be stored. In addition, the introduction of superlattice bandgap-grading layers can eliminate the interface hole-trapping process and consequently decrease the capacitive effect on the generated carriers. To further improve the frequency bandwidth, the active region should be made short enough to minimize the transit time of the photogenerated carriers crossing it. To reduce the device dark current, several types of junction architecture have been proposed, based on enhancing the barrier heights at the interfaces between the InGaAs active layer and the adjacent layers so as to decrease tunneling currents of minority carriers. PIN photodetectors based on InGaAs/InP have been analyzed by many researchers, e.g. (Bottcher, 1992) and the references therein. In fact, the typical PIN photodiode configuration (Dentan, 1990) has been integrated in our proposed configuration for its simplicity and ease of fabrication as well as its realistic mathematical description. The expression for the quantum efficiency of the RCE photodetector shown in Figure 1 is derived following an approach similar to that of (Kishino, 1991) and (Jervase, 1998). The technique is based on finding the reflectivities of the top and bottom mirrors separately and then using these values to determine the fields inside the cavity, considering the whole structure of the photodetector, i.e. mirrors and cavity together. With reference to the device model shown in Figure 1, the quantum efficiency due to the power absorbed in the i-region may be expressed in the closed form of Eq. 1.
Analysis
In the derivation of Eq. 1, the p- and n-regions are assumed to be transparent at the design resonant wavelength λ0 = 1.3 µm. Thus αex = 0 for the material adopted in the model (InP) at this wavelength.
It is apparent from Eq. 1 that the external quantum efficiency depends on the design of the top and bottom QWSs (R1, ψ1, R2, ψ2), the choice of materials for the cavity (α, n, nex) and its physical dimensions (d, L1, L2), as well as the operating wavelength (λ). Within the range of interest in this analysis (1.2 µm < λ < 1.5 µm), there is not much variation in the refractive indices of the different layers of the photodetector. This does not apply, however, to the i-region attenuation constant α. It has been shown in the literature that the absorption coefficient α of In0.53Ga0.47As is a function of wavelength and doping (Humphreys, 1985). Using the experimental results in (Humphreys, 1985), a nonlinear curve-fitting technique was used to obtain analytical expressions for α. With reference to Eq. 1, the maximum quantum efficiency occurs when the cavity round-trip phase equals an integer multiple of 2π, i.e. at resonance. It then follows (Kishino, 1991), (Jervase, 1998) that the resonance condition fixes the optimum cavity dimensions; this expression is used in a constrained optimization procedure for the design of RCE photodetectors with maximum quantum efficiency. Equation 5 serves as a check on whether the optimized values (L1, L2, d) yield the maximum quantum efficiency. The design procedure developed is summarized below (an illustrative code sketch follows Step 7). Step 1: Specify the resonant wavelength λ0, the quantum efficiency η, the quality factor Q and the frequency bandwidth BW.
Step 2: Select the materials for the cavity and the quarter-wave-stacks (QWS's).
Step 3: Design the QWSs with the following guidelines: choose the number of layers N2 to achieve a reflectivity R2 at λ0 close to unity for the bottom QWS; choose the number of layers N1 to achieve a reflectivity 0.6 < R1 < 0.9 at λ0 for the top QWS. Step 4: With the values of R1 and R2 now known, invoke a search and optimization program to determine d, L1 and L2.
Step 5: Generate values for η for a range of wavelengths centered at λ 0 .
Step 6: For each set of d, L 1 and L 2 , deduce FWHM and compute the quality factor and the frequency bandwidth.
Step 7: Select the values of d, L 1 and L 2 that satisfy the design criteria on Q, η and BW.
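The following is a hedged sketch of Steps 5-7: since Eq. 1 is not reproduced in this excerpt, the standard RCE quantum-efficiency expression from (Kishino, 1991) is used as an assumption, the attenuation constant is held fixed although the paper fits its wavelength dependence, and all parameter values are illustrative.

```python
import numpy as np

def rce_eta(lam, d, L1, L2, alpha, n, R1, R2, psi1=0.0, psi2=0.0):
    """Standard RCE quantum efficiency (Kishino-style form), used here as a stand-in for Eq. 1."""
    beta = 2.0 * np.pi * n / lam                         # propagation constant in the cavity
    phase = 2.0 * beta * (L1 + d + L2) + psi1 + psi2     # round-trip phase
    top = (1.0 + R2 * np.exp(-alpha * d)) * (1.0 - R1) * (1.0 - np.exp(-alpha * d))
    bot = (1.0 - 2.0 * np.sqrt(R1 * R2) * np.exp(-alpha * d) * np.cos(phase)
           + R1 * R2 * np.exp(-2.0 * alpha * d))
    return top / bot

# Steps 5-7 in miniature: sweep the wavelength, find the peak, the FWHM and Q.
lam = np.linspace(1.2e-6, 1.5e-6, 20001)
eta = rce_eta(lam, d=0.1e-6, L1=0.5e-6, L2=0.5e-6, alpha=1.0e6, n=3.4, R1=0.8, R2=0.998)
i0 = int(eta.argmax())
lo, hi = i0, i0
while lo > 0 and eta[lo] >= 0.5 * eta[i0]:
    lo -= 1
while hi < len(eta) - 1 and eta[hi] >= 0.5 * eta[i0]:
    hi += 1
fwhm = lam[hi] - lam[lo]
print(f"eta_max = {eta[i0]:.3f} at {lam[i0] * 1e9:.0f} nm, Q = {lam[i0] / fwhm:.0f}")
```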
Following the design procedure outlined earlier, 42 pairs of layers were needed to achieve a reflectivity of 0.998 at λ0 for the bottom QWS. For the top QWS, only one pair of layers is needed, which yields a reflectivity of 0.8. The resonance condition for maximum quantum efficiency is satisfied by the optimized dimensions. The theoretical result is also in very good agreement with recently published experimental data on high-speed RCE Schottky photodiodes (Jervase, 2000) and high-speed Si-based RCE photodetectors (Ozbay, 1997). With reference to Figure 2, the ratio of the free spectral range (FSR) to the wavelength full width at half maximum (FWHM) as a measure of the wavelength selectivity of detection (Kishino, 1991) is no longer valid here. The conventional quality factor used in filter design, defined as the ratio of λ0 to the FWHM, is a more appropriate measure of selectivity in this case (Jervase, 2000).
It has been noticed that ignoring the variation of the attenuation constant α with λ results in a higher off-resonance peak for λ > λ0 and a lower off-resonance peak for λ < λ0. This is due to the fact that α monotonically decreases with λ.
Following the design and optimization procedure, all the sets of values obtained previously for d, L1 and L2 were used to obtain the quality factor Q corresponding to each quantum-efficiency spectrum η. The objective is to obtain design charts relating the quality factor Q to the maximum achievable quantum efficiency ηmax for given structure dimensions. As d is varied, the quantum efficiency increases, peaks at d = 0.1 µm and then decreases; thus there is an optimum value of d that achieves maximum quantum efficiency. The BW and BWE, on the other hand, decrease with increasing d. For d = 0.1 µm, which corresponds to a maximum quantum efficiency of 0.99, the BWE is 290 GHz. It is worth pointing out that the simulated BWE values merely serve as upper limits for the bandwidth-efficiency product; in practice, other parasitic factors, such as the leakage RC, will lower the achievable values.
Conclusion
The quantum efficiency and bandwidth-efficiency product of resonant-cavity enhanced (RCE) photodetectors have been formulated in closed analytical forms that incorporate the device structural parameters. A search-and-optimization-based design procedure has been developed with the quantum efficiency, quality factor and frequency bandwidth as input design parameters. Design charts relating the quality factor Q to the maximum achievable quantum efficiency ηmax for given structure dimensions have also been generated.
"Physics"
] |
Enhancement of graphene visibility on transparent substrates by refractive index optimization
Optical reflection microscopy is one of the main imaging tools used to visualize graphene microstructures. Here we report a novel method that employs refractive index optimization in an optical reflection microscope and greatly improves the visibility of graphene flakes. To this end, an immersion liquid with a refractive index close to that of the glass support is used between the microscope lens and the support, improving the contrast and resolution of the sample image. Results show that the contrast of single and few-layer graphene crystals and structures can be enhanced by a factor of 4 compared to values commonly achieved on transparent substrates using optical reflection microscopy without refractive index optimization.
Introduction
Graphene is a novel material that has been attracting widespread interest due to its unique electronic, optical, magnetic, and mechanical properties [1][2][3][4]. Graphene's outstanding characteristics make it extremely appealing for a wide range of applications. In electronics, graphene, which has a zero band-gap, has been used to create transistors [2], while its versatility has been increased using several different approaches to engineer a band gap in this material [5,6]. Graphene is also used in spintronics [7], in new hybrid materials for biomedical systems [8], to produce gas and bio sensors [9,10], electrodes [11,12], transparent electrodes for solar cells and LCD displays [13], supercapacitors [14], and as a nonlinear element in laser applications [15].
For many of these investigations, robust and easily applicable imaging methods with resolution in the micrometer range are obligatory. Optical reflection microscopy is a simple, high-throughput technique that can be used to determine whether single layers are present, to measure their sizes and positions, and to assess the quality of the samples. Due to the low reflectivity of single-layer graphene, interference techniques utilizing dielectric-coated wafers as a substrate to obtain high-contrast images in optical reflection microscopy were first introduced by Blake et al. [16] and later modified [17][18][19][20][21]. Ellipsometry [22,23], phase-shifting interferometric imaging [24], surface plasmon resonance reflectance [25], and Rayleigh [26] and Raman imaging microscopy [27][28][29] have also been applied to identify graphene layers deposited on different types of substrates. Further methods exploit surface hydrophobicity [30] or the quenching of dye molecules by graphene crystals [31].
The optical observation of graphene layers on transparent substrates would be an asset, given the versatility and variability of these materials. However, the contrast is typically quite small due to the low optical conductivity of a single graphene sheet [32]. For example, even though only 4% of the incident light is typically reflected from a glass substrate in the visible, the optical conductivity of a single layer of graphene is such that a contrast of only 7% is obtained [33], which makes the observation of graphene on transparent substrates notoriously difficult. To overcome this obstacle, a novel technique is presented here to enhance the visibility of graphene monolayers using optical reflection microscopy: by introducing a medium with a refractive index tuned sufficiently close to that of the substrate, the optical contrast of graphene flakes can be greatly enhanced. Using this method we have obtained graphene image contrasts approaching 30%, roughly 4 times higher than values typically reported for a graphene monolayer on a glass support, and 2 times higher than the contrast observed with interference techniques. Even higher contrast values are achievable in principle by further optimization of the refractive index, accompanied by a lowering of the intrinsic noise of the detection system.
Theoretical basis
The experiments are based on optical reflection microscopy in combination with an immersion medium of the same refractive index as the substrate between the sample and the front lens of the objective. In the ideal case there is no reflection from the substrate surface, and only the reflection from single- and multilayer graphene is visible. The contrast C, which measures the relative difference between the reflected light intensity with m graphene layers, I(m), and with no graphene layer, I(m=0), is defined by
C = [I(m) - I(0)] / I(0).   (1)
The reflectivity with and without graphene is given by the Fresnel coefficients for linearly polarized light, which at normal incidence read [32]
r(m) = (n1 - n2 - m·π·α) / (n1 + n2 + m·π·α),   (2)
where n1 and n2 denote the refractive indices of the media above and below the graphene sample, m is the number of graphene layers and α = 1/137 is the fine-structure constant. The intensity is then given by I = |r|². Equation (1) thus gives the following expression:
C = [(n1 - n2 - m·π·α)² (n1 + n2)²] / [(n1 + n2 + m·π·α)² (n1 - n2)²] - 1.   (3)
This expression formally diverges for n1 = n2, which is an artifact of our simplistic theory, since there will always be some light background from the substrate, for example due to light scattering in the liquid, noise of the camera system, or averaging over a small range of incident angles owing to the finite numerical aperture of the objective lens. Nevertheless, the above formula exemplifies the effect that we intend to exploit: choosing an immersion liquid with a refractive index n1 approximately equal to that of the substrate n2 can significantly increase the sample contrast. The formula also shows that for n1 ≠ n2 the contrast approximately scales with the number of layers, since the term m·π·α ≈ 0.023·m cannot be neglected in comparison with n1 and n2. This will also be confirmed experimentally.
To account for the various sources of residual reflection mentioned above, we introduce a phenomenological reflection constant R, which essentially indicates the level of the background signal that would be observed from the substrate under conditions of perfect index matching. This constant term should be independent of the presence of graphene and thus cancels in the numerator of Eq. (1), but in the denominator it leads to finite results. Our amended contrast formula thus reads
C = [I(m) - I(0)] / [I(0) + R].   (4)
The theoretical predictions for the contrast for glycerol, oil and quinoline (n1 = 1.47, n1 = 1.49 and n1 = 1.63, respectively) on glass (n2 = 1.52) compare well with the experimental values if we choose R = 0.0007. For single-layer graphene, we have C(n1=1.47) ≈ 31%, C(n1=1.49) ≈ 26% and C(n1=1.63) ≈ -26%; for double-layer graphene, we find C(n1=1.47) ≈ 73%, C(n1=1.49) ≈ 65% and C(n1=1.63) ≈ -43%. The value of R will change slightly with experimental conditions, but it will always be of this magnitude and can be neglected if I(0) >> R, i.e., if the refractive indices of the two media differ considerably from one another, which is, for example, the case for an air-glass interface. Figure 1 presents the results of applying Eq. (4), which includes R. Figure 1(a) shows the change of contrast as a function of the refractive index of the medium between the microscope objective lens and the graphene layers. In particular, the contrast reaches its maximum at a refractive index value smaller than n2, which would not have been predicted by Eq. (3) without assuming residual reflection. Furthermore, a negative contrast, which is visible as a darker graphene layer in front of a brighter substrate background, is predicted for refractive indices above n = 1.53 and shows a negative extremum around n = 1.63, which will be verified experimentally. This analysis is also consistent with similar observations using mica as a dielectric on top of a graphene sheet [34]. (Fig. 1 caption: (a) The contrast as a function of the immersion medium refractive index for a monolayer (red), bilayer (blue) and trilayer (black) deposited on a glass substrate. (b) The contrast of graphene layers as a function of the substrate index in air.) Figure 1(b) shows the contrast as a function of the refractive index of the support according to Eq. (4) and, for values sufficiently different from n = 1, is in line with data published before [33]. Specifically, the contrast rises significantly upon lowering the refractive index; however, there are no solid transparent materials available with a refractive index close to one. To conclude, the theory can predict qualitatively as well as quantitatively the optimum conditions for the optical contrast of a single layer or a few layers of graphene in a given system with a fixed index of refraction of the support.
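A minimal sketch of the contrast model as reconstructed above (Eqs. 2-4); the refractive indices and R = 0.0007 are the values quoted in the text, and the printed contrasts come out within a couple of percentage points of the quoted ones.

```python
import math

ALPHA = 1.0 / 137.0        # fine-structure constant
R_BG = 0.0007              # phenomenological residual reflection

def reflectance(n1, n2, m):
    """Normal-incidence reflectance with m graphene layers between media n1 and n2 (Eq. 2)."""
    r = (n1 - n2 - m * math.pi * ALPHA) / (n1 + n2 + m * math.pi * ALPHA)
    return r * r

def contrast(n1, n2, m, r_bg=R_BG):
    """Amended contrast of Eq. (4)."""
    return (reflectance(n1, n2, m) - reflectance(n1, n2, 0)) / (reflectance(n1, n2, 0) + r_bg)

for n1, name in [(1.47, "glycerol"), (1.49, "oil"), (1.63, "quinoline")]:
    print(f"{name:9s}: C(1) = {contrast(n1, 1.52, 1):+.0%}, C(2) = {contrast(n1, 1.52, 2):+.0%}")
# Closely reproduces the values quoted above, e.g. roughly +31 %/+73 % for glycerol,
# +26 %/+65 % for oil and about -24 %/-43 % for quinoline (one and two layers).
```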
Experimental details
Graphene samples were prepared by micromechanical cleavage (also known as the scotch-tape technique) [35] of 5-10 mm graphite flakes from NGS Naturgraphit and subsequently transferred to standard glass slides (n=1.52). The samples were observed under a Nikon Optiphot metallurgical reflection microscope at 20× magnification. The objective is specified for use with an immersion liquid with a refractive index around n=1.5. Figure 2 illustrates the experimental set-up and shows how refractive index optimization is used to enhance the contrast of graphene flakes. Additionally, a prism was firmly attached to the bottom surface of the glass slide using immersion oil to minimize any reflections from the second refractive-index boundary.
The refractive index solutions were filtered through a Whatman Puradisc membrane syringe filter with 450 nm pore size to remove any suspended impurity particles, whose presence would lead to larger scattering from the liquid medium and thereby increase the background. For quantification, Figure 4 shows the contrast profile as a function of position along the blue line shown in the image of Fig. 3(c) for all the immersion materials. The contrast values (in percent) are plotted for each pixel against the averaged light intensity from the glass support, according to Eq. (1). The images obtained using glycerol and oil show a contrast that is approximately 4 times larger than that of the image acquired with the sample exposed to air, while the contrast using quinoline is increased by a factor of ~3, but with a reversed sign. This confirms the theoretical predictions presented above. To the best of our knowledge, the typical contrast value previously reported for graphene monolayers on glass substrates [17,33,36] is 7%, compared to the 30% reported here for a monolayer of graphene. In order to verify the correspondence between the contrast and the number of graphene layers in the different regions of the flake, Raman microscopy was employed [29,37,38]. The spectra were taken at three different locations in the transparent regions of the flakes inside the square marked in each image shown in Fig. 3. From the analysis of the G and D peaks in the Raman spectra (not shown) it can be concluded that the regions with the lowest reflectance in the images are graphene monolayers, and that the area with twice the contrast in between these two regions is a graphene bilayer. (Fig. 4 caption: Comparison of the contrast profiles in the flake within the squares of the images presented in Fig. 3 for air, microscopy immersion oil, glycerol and quinoline.)
In order to perform a statistical analysis of the images in Fig. 3, several pictures were taken under the same conditions and the contrast was averaged for each pixel to create the three-dimensional charts presented in Fig. 5. The contour map of the contrast using oil (b) presents a significantly better contrast and signal-to-noise ratio compared to air (a). The contour map from the analogous experiment using glycerol (not shown) is comparable to the one using oil. The contour map for quinoline (c) is qualitatively different, as the contrast is also enhanced relative to air, but is negative. (Fig. 5 caption: Contour maps showing averaged contrast as a function of pixel position for the same region using air (a), oil (b) and quinoline (c) as the medium. The sample area depicted in these maps is indicated by squares in the images in Fig. 3. The bluish-coloured areas represent the substrate baseline, while the yellow areas show a monolayer and the red areas a bilayer. The perspective of (a) and (b) is identical; that of (c) is slightly altered to give an improved view of the negative-contrast valley of the double-layer graphene.)
The contrast enhancement achievable with this method is limited by light scattering in the immersion liquid, the noise of the camera system, and the reflection of light at the interface between the substrate and the immersion liquid due to incomplete refractive index matching across the optical spectrum, all of which contribute to the phenomenological factor R in Eq. (4). In principle, any given set-up could be standardized, as R could be determined independently of the presence of graphene by measuring the image illumination intensity for a set of immersion liquids of different, accurately determined refractive index values. In practice, it may be easier to simply measure the contrast of a graphene flake that is known to be a single layer and use Eq. (4) to estimate the value of R for a given experimental set-up. Of course, the smaller the value of R, the greater the improvement in contrast that can be achieved. Several measures can be taken towards this purpose. The immersion liquids should contain low levels of impurities; in particular, they should be free of larger scattering particles and fluorophores. A smaller numerical aperture will help to reduce the background signal by limiting the range of incident illumination angles, since the precise index-matching condition varies with incident angle; however, this will inevitably lead to lower spatial resolution. Similarly, reducing the spectral width of the illumination will lead to a smaller spread of refractive indices, but the price for this is a weaker image illumination intensity. We have experimented with different CCD cameras and different objectives, and have found that the effective value of R is relatively robust to changes of the set-up.
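A minimal sketch of the calibration just described, assuming the reflectance model reconstructed above: Eq. (4) is inverted for R using a flake known to be a monolayer. The measured contrast value of 0.31 is a placeholder.

```python
import math

ALPHA = 1.0 / 137.0

def reflectance(n1, n2, m):
    r = (n1 - n2 - m * math.pi * ALPHA) / (n1 + n2 + m * math.pi * ALPHA)
    return r * r

def estimate_r(n1, n2, measured_contrast, m=1):
    """Invert Eq. (4) for R given the measured contrast of a known m-layer flake."""
    i_m, i_0 = reflectance(n1, n2, m), reflectance(n1, n2, 0)
    return (i_m - i_0) / measured_contrast - i_0

# e.g. a known monolayer under glycerol on glass measured at 31 % contrast (placeholder)
print(estimate_r(1.47, 1.52, 0.31))   # ~7e-4 for this set-up
```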
Conclusions
Graphene monolayers deposited on glass substrates are notoriously difficult to see in optical reflection microscopes, as both the areas covered with graphene and the regions of exposed substrate reflect light, and consequently the contrast is low. Using a liquid whose refractive index is close to that of the substrate eliminates most of the reflection from this surface, and as a consequence the visibility of graphene structures is greatly enhanced. Fresnel theory was used to compute the contrast as a function of the substrate and immersion-medium indices of refraction, taking into account the optical conductivity of graphene. Experimental results show that by placing an immersion liquid in the space between the microscope lens and the glass substrate, the optical contrast can be improved by up to a factor of 4 relative to that obtained with the substrate exposed to air. In principle, even higher contrast could be achieved by reducing R, the level of residual light background from the substrate.
The contrast of 30% for a monolayer of graphene on transparent substrates is twice as high compared to the standard microscopy technique exploiting interference enhancement [16][17][18][19][20][21] for which contrasts of 15% are observed. Furthermore, interference techniques require specific substrates, such as Si wafers coated with a dielectric of well defined thickness, and the contrast enhancement is strongly dependent on the wavelength of illumination. On the other hand, refractive index tuning can be employed with a large variety of transparent substrates, and there is hardly any variation of contrast with wavelength.
Although we limited our investigation to graphene flakes from exfoliated graphite, this new method can be also used to visualize other graphene structures and patterns on transparent surfaces and therefore has potential to be applied on graphene-based devices. | 3,598 | 2013-05-20T00:00:00.000 | [
"Physics"
] |
ON OPTIMAL CONTROL AND COST-EFFECTIVENESS ANALYSIS FOR TYPHOID FEVER MODEL
Typhoid fever is a disease of major concern in the developing world because it adversely affects the health and finances of a large proportion of people in this part of the world. This paper aims to develop an extended and improved optimal control model of typhoid transmission dynamics that can select the most cost-effective strategy among several interventions. Thus, an optimal control model for typhoid, incorporating control functions representing personal hygiene and sanitation, diagnosis and treatment, and vaccination, was formulated. The corresponding optimality system was characterized via Pontryagin's maximum principle. The optimality system was numerically simulated for all possible strategies using the fourth-order Runge-Kutta method. For cost-effectiveness analysis, the method of incremental cost-effectiveness ratio (ICER) was employed. The results show that the model is able to select the most cost-effective strategy for any given set of parameter values and initial conditions.
Mathematical models are veritable tools for studying the dynamics of infectious diseases; see, for example, Anderson and May (1991). Optimal control techniques have been used to determine the best control strategies for infectious diseases such as malaria, Ebola, influenza, tuberculosis, hepatitis B and tungiasis, to mention a few; see [Khamis et al. (2018); Athithan and Gosh (2016); Tchuenche et al. (2011)]. Mathematical models for typhoid transmission dynamics are scanty (Tilahun et al., 2017). Tilahun et al. (2017) presented a deterministic mathematical model to investigate the dynamics of typhoid fever with optimal control strategies. However, we noticed a flaw in the associated system of differential equations emanating from their model descriptions. Thus the current study improves and extends the model of Tilahun et al. (2017) by incorporating the dynamics of vaccinated individuals. Further, this paper presents an extended and improved optimal control model for typhoid transmission dynamics that can select the best strategy among several interventions, and analytically characterizes and numerically explores the corresponding optimality system.
The paper is organized as follows. A brief introduction to typhoid fever was presented in section 1; the basic typhoid fever model is presented and an optimal control model is designed in section 2; the analysis of the optimal control model is carried out in section 3; numerical simulations are performed and the results are presented in section 4; cost-effectiveness analysis is carried out in section 5; and the discussion of results and concluding remarks are given in section 6. (Table 1, partially recovered parameter descriptions: σ1, the rate of shedding of Salmonella into food and water by carriers; σ2, the corresponding rate for symptomatically infected individuals; and the death rate of the Salmonella bacteria.) The flow of all epidemiological and demographic processes involved is described as follows. Recruitment into the susceptible class, either by birth or by immigration, occurs at the rate Λ. Recovered individuals lose their partial immunity to typhoid fever and become susceptible again at a constant rate. The force of infection in the model is the product of the ingestion rate and a saturating function of the concentration of Salmonella bacteria in food or water; this saturating function represents the probability of consuming food or drink contaminated with typhoid-causing bacteria. Natural death occurs at a constant per-capita rate. A fixed proportion of newly infected persons become carriers. Carriers become symptomatic at a constant rate and acquire natural immunity at another rate; symptomatically infected persons acquire natural immunity at their own rate. Typhoid-related mortality occurs at a disease-induced death rate. Carriers and symptomatically infected individuals discharge Salmonella at the rates σ1 and σ2, respectively, and the pathogen has a net death rate. From these descriptions and the flow diagram, Tilahun et al. (2017) presented the following system of ordinary differential equations:
Modified Model Equation
The shedding parameters σ1 and σ2, as defined in the Tilahun et al. (2017) model and as they appear in their model equations, are flawed, since carriers and symptomatically infected individuals cannot themselves become bacteria as their formulation implies (see Table 1, Equation (2) and Equation (3)). Thus, guided by the descriptions and the flow diagram (Table 1 and Figure 1), we modify the model equations of Tilahun et al. (2017) and present the following system of ordinary differential equations:
Basic Properties
We obtain the invariant region in which the model solution is bounded. All the associated parameters and state variables are non-negative for t ≥ 0. Consider the biologically feasible region Ζ of non-negative state vectors in ℝ⁴ whose total population N does not exceed the ratio of the recruitment rate Λ to the natural death rate. Lemma 1: The closed set Ζ is positively invariant and attracting with respect to the system of equations (6)-(9). Proof: Adding equations (6)-(9) gives the rate of change of the total population (equation (11)): the recruitment Λ minus the natural mortality term minus the typhoid-induced mortality term. It is clear from equation (11) that the growth of N is bounded above by Λ minus the natural mortality term, so a standard comparison theorem (Lakshmikantham et al., 1989) shows that N(t) remains below the bound whenever it starts below it. Thus the region Ζ is positively invariant. If instead N(0) exceeds the bound, then either the solution enters Ζ in finite time or N(t) approaches the bound asymptotically. Hence the region Ζ attracts all solutions in the non-negative orthant of ℝ⁴.
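Written out with placeholder symbols (the natural and typhoid-induced death rates are denoted μh and δ here purely for illustration, since the original symbols are not legible in this excerpt), the comparison step reads:

```latex
\frac{dN}{dt} = \Lambda - \mu_h N - \delta I \;\le\; \Lambda - \mu_h N
\quad\Longrightarrow\quad
N(t) \;\le\; \frac{\Lambda}{\mu_h} + \Big(N(0) - \frac{\Lambda}{\mu_h}\Big)e^{-\mu_h t}
\;\longrightarrow\; \frac{\Lambda}{\mu_h}\ \text{as}\ t \to \infty .
```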
Therefore, it is sufficient to consider the dynamics of the flow generated by equations (6)-(9) in Ζ, where the usual existence, uniqueness and continuation results hold; that is, the system (6)-(9) is mathematically and epidemiologically well-posed in Ζ.
Optimal Control Model
In this section, we modify and extend the existing optimal control model of Tilahun et al. (2017) by incorporating a compartment of vaccinated individuals, so that the total population is the sum of the susceptible, vaccinated, carrier, symptomatically infected and recovered classes. The efficacy of the sanitation measure at killing the pathogen is a model parameter, and we define a further parameter as the rate at which symptomatically infected persons acquire immunity under treatment. Weight constants are attached to the controls, and u1, u2 and u3 are the control variables. All other parameters retain their descriptions from the existing model, as given in Table 1 above. Therefore, from our modified model (6)-(10), the extended optimal control equations for typhoid dynamics are presented as a system of ordinary differential equations, with the force of infection taking the same saturating form as before. The objective functional J(u1, u2, u3) in (18) integrates, over the control horizon, linear terms weighted by the per-person costs of hygiene and sanitation, vaccination and drugs, quadratic terms in the controls weighted by the costs of implementing each control, and terms representing the average wage losses due to typhoid-related death and illness. Here u = (u1, u2, u3) is a set of Lebesgue-measurable functions.
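The optimality system described above is typically solved numerically by a forward-backward sweep with a fourth-order Runge-Kutta stepper. The sketch below is schematic and not the authors' implementation: the state and adjoint right-hand sides and the Pontryagin control characterization, which come from the model equations not reproduced in this excerpt, are left as user-supplied placeholder functions.

```python
import numpy as np

def rk4_step(f, y, t, h, *args):
    """One classical fourth-order Runge-Kutta step for dy/dt = f(t, y, *args)."""
    k1 = f(t, y, *args)
    k2 = f(t + h / 2, y + h / 2 * k1, *args)
    k3 = f(t + h / 2, y + h / 2 * k2, *args)
    k4 = f(t + h, y + h * k3, *args)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def forward_backward_sweep(f_state, f_adjoint, u_from_pontryagin, y0, T, n_steps,
                           n_controls, tol=1e-4, relax=0.5, max_iter=200):
    """Schematic sweep: states forward, adjoints backward, damped control update."""
    h = T / n_steps
    t = np.linspace(0.0, T, n_steps + 1)
    u = np.zeros((n_steps + 1, n_controls))              # initial control guess
    for _ in range(max_iter):
        y = np.zeros((n_steps + 1, len(y0))); y[0] = y0
        for i in range(n_steps):                          # state equations forward in time
            y[i + 1] = rk4_step(f_state, y[i], t[i], h, u[i])
        lam = np.zeros_like(y)                            # adjoints, transversality lam(T) = 0
        for i in range(n_steps, 0, -1):                   # adjoint equations backward in time
            lam[i - 1] = rk4_step(f_adjoint, lam[i], t[i], -h, y[i], u[i])
        u_new = np.clip(u_from_pontryagin(y, lam), 0.0, 1.0)   # bounded controls
        if np.max(np.abs(u_new - u)) < tol:
            break
        u = relax * u_new + (1.0 - relax) * u             # damped update for stability
    return t, y, u
```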
Cost-Effectiveness Analysis
In this section, the method of incremental cost-effectiveness ratio (ICER) is used to compare the cost-effectiveness of two strategies at a time. The cost objective functional is used to evaluate the total cost associated with each possible strategy over the period. The numbers of infections averted and the total costs of the corresponding strategies are shown in Table 3. Table 3 shows that vaccination as a single intervention imposes the highest cost, followed by hygiene and sanitation, and then treatment. It is also observed that treatment alone produces cyclical effects on the dynamics of typhoid fever. Figure 6 shows that the double intervention of hygiene and sanitation combined with vaccination is not able to eradicate the disease from the population.
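A minimal sketch of the ICER bookkeeping follows; the costs and infections-averted figures below are placeholders, not the values from Table 3.

```python
def icer_ranking(strategies):
    """strategies: list of (name, infections_averted, total_cost).
    Rank by infections averted and compute each strategy's incremental
    cost-effectiveness ratio relative to the next-less-effective one
    (baseline: no control, zero cost, zero infections averted)."""
    ranked = sorted(strategies, key=lambda s: s[1])
    results, prev_averted, prev_cost = [], 0.0, 0.0
    for name, averted, cost in ranked:
        icer = (cost - prev_cost) / (averted - prev_averted)
        results.append((name, averted, cost, icer))
        prev_averted, prev_cost = averted, cost
    return results

example = [("hygiene+treatment", 9.1e4, 4.0e4),      # placeholder figures
           ("treatment+vaccination", 9.0e4, 7.5e4),
           ("all three controls", 9.2e4, 4.1e4)]
for row in icer_ranking(example):
    print(row)
# In the full analysis, a strategy with a higher ICER and lower effectiveness than a
# competitor would be excluded and the ICERs recomputed for the remaining strategies.
```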
However, Figure 5 shows that the double intervention of hygiene and sanitation combined with treatment has the capability of eradicating typhoid. Similarly, Figure 7 shows that the double intervention of treatment and vaccination can also eradicate the disease, but at a higher cost than hygiene and sanitation with treatment. In the same vein, the triple intervention of hygiene and sanitation, treatment and vaccination produces the same impact and imposes the same cost as the double intervention of hygiene and sanitation with treatment, as shown in Figure 8. Based on the data employed, the findings show that the double intervention of hygiene and sanitation with treatment, and the combination of all three controls, are the most cost-effective strategies. (Residual figure legend: number of symptomatic cases of typhoid fever; no control; u1, u2, u3.)
"Medicine",
"Mathematics"
] |
Combining Text and Images for Film Age Appropriateness Classification
We combine textual information from a corpus of film scripts and the images of important scenes from IMDB that correspond to these films to create a bimodal dataset (the dataset and scripts can be obtained from https://tinyurl.com/se9tlmr ) for film age appropriateness classification, with the objective of improving the prediction of age appropriateness for parents and children. We use state-of-the-art deep learning image feature extraction, including DenseNet, ResNet, Inception and NASNet variants.
Introduction
The question "Is this film appropriate for my children of X years of age?" frequently arises in parents' minds. Up till now, age-appropriateness of films has been recommended by censorship bodies, in the form of age rating certificates. In the United States and the United Kingdom, these age rating certificates are issued mainly by two organizations: the Motion Picture Association of America (MPAA) in the United States of America and the British Board of Film Classification (BBFC) in the United Kingdom. The two "censorship" bodies base their ratings on the film content and provide descriptions for each certificate. Different ratings for the US and UK and their interpretations can be found in Table 1. The BBFC define their classification as "the process of giving age ratings and content advice to films and other audiovisual content to help children and families choose what's right for them and avoid what's not.". The
Introduction
The question "Is this film appropriate for my children of X years of age?" frequently arises in parents' minds. Up till now, age-appropriateness of films has been recommended by censorship bodies, in the form of age rating certificates. In the United States and the United Kingdom, these age rating certificates are issued mainly by two organizations: the Motion Picture Association of America (MPAA) in the United States of America and the British Board of Film Classification (BBFC) in the United Kingdom. The two "censorship" bodies base their ratings on the film content and provide descriptions for each certificate. Different ratings for the US and UK and their interpretations can be found in Table 1. The BBFC define their classification as "the process of giving age ratings and content advice to films and other audiovisual content to help children and families choose what's right for them and avoid what's not.". The classification is, in principle 1 , based on the content of the films. As a result, we hypothesise that it is possible to use automatic methods to perform the classification. This, in turn, would, among other things, improve the consistency and productivity of the classification process. An automatic classifier would also provide insights into the differences in the perception of appropriateness in different countries or decades (e.g. if a machine classifier trained on data from one decade performs differently on data from different decades, we could infer that there are some differences in human perceptions across different decades, as similar texts and images, as determined by machine classifiers, are now looked at differently by human classifiers). The contribution of factors such as the country of the censor board, the time the film was produced, and the quantified content of violence or explicit material could also form the basis of various studies in Digital Humanities and Computational Social Science. While not the main focus of this research, such aspects could be very important to the understanding of the making, reception, and perception of films in different times and cultures.
Previous research indicates that using the textual content of the films alone, it is possible to build classifiers that can perform the classification fairly accurately for various aspects of the film [13,9,8]. Mohamed and Ha [10] compiled a dataset of film scripts and their age-appropriateness ratings, developed various classification models and reported fairly good accuracies (79.1% accuracy for the American MPAA and 65.3% accuracy for the British BBFC) using TFIDF values of character-based ngrams as features. In this paper, we investigate whether using image features extracted with state-of-the-art image feature extraction can improve the classification performance further. From a human perspective, we know that vision adds more information and should thus improve classification accuracy, a fact also supported by machine vision research [11,15,12]. Our research focuses on whether the use of current state-of-the-art image feature extraction can improve the automatic classification models. If it can, we then have further evidence that these image feature extraction methods capture abstract concepts such as age-appropriateness. We add images to Mohamed and Ha's dataset by using the Internet Movie Database (IMDB) to extract images associated with each film. We then use state-of-the-art image feature extractors to extract vectors representing the images, combine these vectors with textual vectors, and investigate the impact of these image feature vectors on the accuracy of the classifiers. The contributions of this paper include (1) a bi-modal dataset combining images and texts for 17000 films. The rest of this paper is organised as follows: section two introduces the data and methods used in the research; section three outlines the results and provides analysis, examples, and the confusion matrix; this is followed by the conclusion and plans and suggestions for future work.
Data and Methods
Mohamed and Ha's dataset was created using an INNER JOIN of two resources: film scripts and film certificates. Film scripts were obtained from the website www.springfieldspringfield.com, which unfortunately does not exist any more. The files, available in html, were converted into text and run through a basic cleaning pipeline that involved transforming the utterances into proper sentences using the Spacy package [6]. Mohamed and Ha also removed non-dialogue elements from the scripts, like scene descriptions and actor actions, a practice that we follow for two reasons: (1) these are not consistent across the film scripts, as many films do not have them, and (2) they are external to the film content proper. These scripts were combined with IMDB Certificates, which indicate, for each film, the age for which the film is appropriate. These certificates may vary by country and cut. For example, the film "The Hobbit: The Battle of the Five Armies" has been rated both PG-13 and R in the United States, depending on which cut is intended. The main certificate used on IMDB for both the UK and the USA is taken, which in the case of this Hobbit film is 12A for the UK and PG-13 for the USA. We then collected IMDB Images, accompanying and characterizing main scenes of the film, by downloading the images in the photo gallery of each film. The number of images accompanying each film description on IMDB is limited, and we understand that this limitation will affect the accuracy of prediction. We nonetheless hypothesise that, even with the limited number of images, the combination of text and images will lead to better classification performance, since text alone can be ambiguous. The combination of these two modalities should contribute to the disambiguation of otherwise difficult-to-interpret textual content, and thus lead to better classification accuracy.
The IMDB certificates are used as labels, in what is known as distant annotation. The BBFC website explains that they use two raters for each film and when there is a dispute, a third, more experienced, rater steps in. This is very similar to human linguistic annotation. We do not know the inter-rater agreement, and thus are unable to determine the ceiling of human performance. Similar to [10], we use the following upper bounds and baselines: The Upper Bound. The IMDB hosts certificates from 70 countries around the world. The upper bound takes as predictor variables all these certificates and as a target the country in question. If we want to predict the UK certificate, we use all the other certificates as features. This method achieves accuracies of 84.7% and 80% for the US and the UK (OtherCts in Table 5). Both experiments were performed using XGBoost, our best classifier for this task. The baseline for this paper is 55.0% and 41.8% for the USA and UK respectively, representing the majority classes ("R" for the US and "15" for the UK).
Dataset Description and Statistics
The dataset comprises 17018 titles. Transcripts of these titles contain a total of 181 million words. USA certificates are available for 8923 titles and British certificates are available for 10920 titles. 7068 titles have both countries' certificates. The mapping between the UK and the USA ratings is not one to one. A classifier that uses the UK ratings to predict the USA ratings would only have an accuracy of 80.6% (SingCt in Table 5). For each title in the dataset, we download images that belong to the title gallery excluding those images that are not part of the film itself, for example, those whose captions include descriptions such as "X at an event to promote the film Y" from IMDB. A total of 429050 photos have been collected, with an average of 46.94 photos per title. The average numbers of photos per title for each certificate rating can be found in Table 1. We use the same train (70%), test (20%), and dev (10%) subsets.
a) Texts:
Mohamed and Ha tried a variety of classification methods from both traditional machine learning and Artificial Neural Networks. They concluded that the best setting is to use character ngrams tf-idf as features, and XGBoost as classifier, achieving an accuracy of 79.1% when predicting USA certificates, and 65.3% when predicting British certificates. We have replicated their experiments and we have reached the same results using textual features. The next section will combine these textual features with image features and will also explore the use of images alone in film age appropriateness classification.
b) Images only: Recent advances in machine vision have produced models that almost surpass human performance in image object recognition tasks, specifically the ImageNet challenges. Information needed to distinguish between all the 1,000 classes in ImageNet is also often useful to distinguish between new kinds of objects. Such information can be harvested from the outputs of the penultimate layers of models originally trained to distinguish between all the classes in ImageNet. We use these outputs as our image feature extractors. Specifically, we use NASNetMobile [18], Dense169 [7], InceptionV3 [14], ResNet152V2 [5], and NASNetLarge [18]. These models represent the state of the art in image object recognition (Table 2). Keras implementations of these models are used. For each of the images, we produce a feature vector; we then pool the feature vectors of all of a film's images, and use the resulting vectors as input for certificate classification. We try mean, median, and max pooling, and find mean pooling to be the best. We also try dimension-reduction methods such as PCA as pooling methods and find that they too are inferior to mean pooling. We also produce an ImageConcat vector, which is the concatenation (stacking the vectors horizontally) of the pooled vectors produced by the individual feature extraction models.
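A minimal sketch of the per-film feature extraction and mean pooling described above, using InceptionV3 as the example extractor; the image paths are placeholders, and the exact preprocessing used in the paper is not specified in this excerpt.

```python
import numpy as np
from tensorflow.keras.applications.inception_v3 import InceptionV3, preprocess_input
from tensorflow.keras.preprocessing import image

# Penultimate-layer features via global average pooling over the last conv block.
extractor = InceptionV3(weights="imagenet", include_top=False, pooling="avg")

def film_vector(image_paths):
    """Mean-pool the feature vectors of all images belonging to one film."""
    feats = []
    for path in image_paths:
        img = image.load_img(path, target_size=(299, 299))
        x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
        feats.append(extractor.predict(x, verbose=0)[0])
    return np.mean(feats, axis=0)          # mean pooling (best-performing option above)

# film_vec = film_vector(["img_001.jpg", "img_002.jpg"])   # placeholder paths
```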
Film age classification using images may not be as easy as it sounds. The reason is that in an R-rated movie, most of the images may be innocent, the equivalent of PG-rated, and only some may contain violence or explicit references. This poses an even bigger challenge for our experiments, since we use only the set of images provided by IMDB, which, for various reasons, may not contain the most violent or explicit images in the film. It is thus useful to check the per-category accuracy of using only the images on a balanced dataset. To build our balanced dataset, we first choose titles for which we have at least 40 images. We then build a balanced training set of 450 titles for each rating, and choose 40 random images for each title to form the training set. Similarly, from the titles that have at least 40 images in the test set, we choose a set of 150 random titles for each rating; for each of these titles, we pick 40 random images, and they form our test set. We perform this experiment only for the USA certificates, and use only three ratings: PG, PG-13, and R. The two other categories were not used due to the small number of films in those categories, which makes it impossible to balance them. In experiment ImagePool, we pool the feature vectors of all of a film's images first and then classify the pooled vectors, while in ImageIndividual, we classify individual images, count how many times the images belonging to a film have been classified as each rating, and take the rating with the highest count as the predicted rating for the film.
c) Text and Images combined: For each title, the character-based ngram TFIDF vector and the image vector are concatenated into a single vector and fed into XGBoost. Other classification algorithms such as Random Forests and Logistic Regression have also been tried, but the results are inferior to those of XGBoost. While TFIDF is not usually thought of as comparable to word embeddings, Mohamed and Ha's experiments show that in this specific case, word embeddings (from BERT and ELMo) were not as good as this traditional method. In the experiments we ran, word embeddings did not produce good results. The use of XGBoost was also beneficial in other ways: since the TFIDF vector is very large, corresponding to the vocabulary size of X words, neural network implementations in Keras and PyTorch did not scale well, unlike XGBoost and similar algorithms that can deal with a large number of textual features. d) Evaluation metrics: For the balanced image experiment, we use the standard precision, recall, f-measure, and overall accuracy. For the other experiments, we use the standard metrics of accuracy and the Area Under the Curve of the Receiver Operating Characteristic (AUC), which incorporates the trade-off between precision and recall. Two settings for the evaluation of accuracy are used: strict accuracy (Acc in Table 5) is the normal accuracy, and relaxed accuracy (RelaxAcc), in which a prediction of a certificate that is either the same as, or only one age rating higher or lower than, the true certificate is considered correct. While relaxed accuracy is in common use in Machine Learning, it is especially important in the context of film ratings due to the differences among countries; this relaxed evaluation thus mirrors the state of the dataset.
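A minimal sketch of the relaxed-accuracy metric just described: a prediction counts as correct if it is at most one age-rating step away from the true certificate. The full ordering of US certificates is an assumption here (only PG, PG-13 and R are named explicitly in the balanced experiment).

```python
US_ORDER = ["G", "PG", "PG-13", "R", "NC-17"]     # assumed ordering of US certificates

def relaxed_accuracy(y_true, y_pred, order=US_ORDER):
    """Fraction of predictions within one rating step of the true certificate."""
    idx = {label: i for i, label in enumerate(order)}
    hits = sum(abs(idx[t] - idx[p]) <= 1 for t, p in zip(y_true, y_pred))
    return hits / len(y_true)

print(relaxed_accuracy(["R", "PG", "PG-13"], ["PG-13", "PG", "R"]))   # -> 1.0
```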
There has been previous work in combining texts and images for downstream tasks. Chen and Zhuge [2] combine text and image information to generate a multimodal summary comprising images and their captions. Rafkind et al. [11] combine text and image features to classify images in bioscience literature. Taniguchi et al. [15] and Sakaki et al. [12] combine text and image classifiers to identify the gender of Twitter users. They classify the images first, then pool the image classifications to classify the users, whereas we pool the image feature vectors first. We tested the former methods (classifications and then pooling), and found them to be inferior to pooling first (the accuracy for US certificates, image only, classification of individual images first: 59% compared to classification of pooled vectors: 62%). Generating captions from images has also gathered attention recently [17,4,3]. Ailem et al. [1] learn textual and visual representations jointly; this leads to competitive performance on tasks of assessing pairwise word similarity and image/caption retrieval.
Results & Analysis
Tables 3 and 4 present the results of the two experiments using a balanced dataset of the categories PG, PG-13, and R. We can see that when the data is balanced, it is easier to classify PG, then R, then PG-13. This may be due to the fact that PG-13 is a confusable category that has elements of both PG and R. In a PG film, one does not expect to see images of violence, gore, or sex, making the category more consistent, whereas in R films innocent images may also be found; hence the easier classification of PG vs. R films. To give some examples of the classifications assigned by our image classifier versus the true category, Figure 1 shows a number of images predicted as PG-13 together with the true category of the film they come from. For example, the third image in the first row, which comes from the R-rated film "Courage Under Fire" (1996), has been classified as PG-13. From a human perspective, the image does not show any violence or explicit material.
Table 5 shows the results of our experiments, and Figure 2 shows the confusion matrices. When image feature vectors are combined with text vectors, the performance of the classifiers approaches or surpasses that obtained when using ratings from one country to predict those of another country (SingCt in the table). Around 95% or more of the predictions are within one rating of the correct ones. Despite its incomplete nature, visual data, in the form of extracted feature vectors, does help improve the accuracy of the prediction of age rating certificates when combined with TFIDF. Only InceptionV3 shows statistically significant improvements in prediction accuracy for both the USA and the UK. Other image feature extraction models provide statistically significant improvements for either the USA (Dense169) or the UK (NASNetMobile, ResNet152V2, NASNetLarge, and ImageConcat). Using visual data alone, ImageConcat provides the best results for both countries. Given that the certificate categories follow a certain order with respect to age appropriateness, we have experimented with regression models such as Random Forest regression and XGBoost regression, and found them not to be as good as the classification models (73.1% vs 81.1% for the USA and 58.1% vs 68.1% for the UK). We have also tried an ordinal regression method [16], which does not assume that the distances between two consecutive classes are constant (as normal regression methods do), to take advantage of the fact that age-appropriateness is progressive, i.e. films suitable for a 12-year-old should also be suitable for a 15-year-old. The results are slightly worse than those reported here with regard to accuracy (79.2% vs 81.1% for the USA, 67.8% vs 68.1% for the UK), but slightly higher with regard to relaxed accuracy (97.4% vs 97.0% for the USA and 97.0% vs 95.2% for the UK).
Conclusion and future work
We have conducted experiments aimed at predicting the age rating of films from images alone and from combinations of text and images. Our experiments included ones on a general corpus as well as limited experiments on a balanced subset geared towards examining the errors produced by the classifier. Our results indicate that the combination of images and text is better than either images or text alone, reaching an accuracy comparable to that of using ratings from one country to predict the ratings in another country, despite the fact that we use only a very limited subset of the images that could potentially be used for such a task.
Our future work will focus on two aspects: (1) investigating the use of the whole video and audio of the film in age rating classification. We believe that with such an amount of data, we can produce results that are on par with, if not more accurate than, those produced by censorship bodies, and, when we reach the point where we can quantify the distribution of these materials in the film, we will (2) conduct computational social science analysis of the distribution of sex and violence in films and its relationship to cultural and country-based differences, for which we will use not only the textual and audiovisual data, but also the reports provided by parents on film contents. The two future concerns are both related to our desire to conduct responsible Computational/Digital Humanities research. | 4,599 | 2021-01-01T00:00:00.000 | [
"Computer Science"
] |
Change in serum KL-6 level from baseline is useful for predicting life-threatening EGFR-TKIs induced interstitial lung disease
Background A high incidence of interstitial lung disease (ILD) has been reported in patients with advanced non-small cell lung cancer (NSCLC) treated with epidermal growth factor receptor-tyrosine kinase inhibitors (EGFR-TKIs), particularly in Japanese populations. A previous report from our laboratory demonstrated that KL-6 was a useful serum biomarker for assessing the severity of drug-induced pneumonitis. Based on these observations, this study was conducted to evaluate the risk factors of EGFR-TKIs induced ILD and the usefulness of monitoring serum KL-6 levels in patients who developed EGFR-TKIs induced ILD in a large multi-institutional setting. Methods We retrospectively reviewed the clinical records and radiographs of 341 patients with advanced NSCLC who were treated with EGFR-TKIs, and analyzed risk factors for the development of EGFR-TKIs induced ILD. Changes in circulating levels of KL-6 were also evaluated in the patients who developed EGFR-TKIs induced ILD. Results Among the 341 patients included in this study, 20 (5.9%) developed EGFR-TKIs induced ILD, and 9 (2.6%) died from ILD. Univariate analyses revealed that only preexisting pulmonary fibrosis was a significant risk factor for the development of EGFR-TKIs induced ILD (p = 0.003). Absolute levels of circulating KL-6 at neither baseline nor the onset of ILD could discriminate between life-threatening and non-life-threatening EGFR-TKIs induced ILDs. However, we found that the ratios of serum KL-6 levels just after the onset of EGFR-TKIs induced ILD to those at baseline could quite precisely distinguish survivors from non-survivors (p = 0.006) as well as the acute interstitial pneumonia (AIP) pattern from non-AIP patterns (p = 0.005). Conclusions The results of this study strongly support the potential of KL-6 as a diagnostic biomarker for life-threatening EGFR-TKIs induced ILD. Monitoring of KL-6 is also useful for evaluating the progression and severity of EGFR-TKIs induced ILD.
Background
Gefitinib (ZD1839, Iressa; AstraZeneca) and erlotinib (Tarceva, OSI-774; OSI Pharmaceuticals) are orally active epidermal growth factor receptor tyrosine kinase inhibitors (EGFR-TKIs) used for the treatment of non-small cell lung cancer (NSCLC) patients [1]. EGFR-TKIs sometimes cause drastic tumor regression in specific subgroups of patients with advanced NSCLC, including women, non-smokers, patients with lung adenocarcinoma (ADC) histology, patients of Asian origin and patients with EGFR mutations [2][3][4][5][6]. On the other hand, treatment with EGFR-TKIs is associated with serious side effects, such as life-threatening drug-induced interstitial lung disease (ILD), particularly in Japanese populations [7][8][9][10][11][12][13]. These previous studies have reported that male gender, smoking history, poor performance status (PS), and preexisting ILD are risk factors for developing EGFR-TKIs induced ILD; however, we questioned whether each of these should be equally considered in the risk-benefit assessment of using EGFR-TKIs for the treatment of NSCLC in a practical clinical setting. In addition, we also wondered whether we can assess the severity of EGFR-TKIs induced ILD when it develops during EGFR-TKIs treatment.
KL-6 is a mucin-like glycoprotein with a molecular weight of 200 kDa and has been classified as a human MUC1 mucin [14][15][16][17]. Previous studies have demonstrated that serum levels of KL-6 are elevated in a variety of ILDs, such as idiopathic pulmonary fibrosis (IPF), collagen vascular disease-associated interstitial pneumonitis, radiation pneumonitis, and pulmonary sarcoidosis [18][19][20][21][22][23][24][25][26]. Furthermore, our laboratory has also demonstrated that absolute levels of KL-6 at the onset of drug-induced ILD can predict the clinical outcomes [27]. Although our previous studies have suggested the usefulness of KL-6 as a tumor marker [28,29] and as a predictor of survival in NSCLC patients treated with EGFR-TKIs [30], the significance of the circulating KL-6 level as a detector of EGFR-TKIs induced ILD or as a predictor of clinical outcome in patients with EGFR-TKIs induced ILD had not been determined.
In the cohort of the present study, to obtain more information on risk factors for developing EGFR-TKIs induced ILD, the characteristics of NSCLC patients who developed ILD during EGFR-TKIs treatment were analyzed. In addition, to evaluate whether monitoring serum KL-6 levels in NSCLC patients during the treatment is useful to detect the development of EGFR-TKIs induced ILD or predict the clinical outcome of EGFR-TKIs induced ILD, circulating KL-6 levels were measured in NSCLC patients included in the cohort before and during EGFR-TKIs treatment.
Study subjects
Between August 2002 and August 2010, 341 advanced NSCLC patients treated with gefitinib (250 mg/day) or erlotinib (150 mg/day) at Hiroshima University Hospital (Hiroshima, Japan), Ehime University Hospital (Ehime, Japan), Shimane University Hospital (Shimane, Japan), Kochi University Hospital (Kochi, Japan) and Onomichi General Hospital (Hiroshima, Japan) were consecutively enrolled in the study. The disease staging was carried out using computed tomography (CT) scan of the chest and abdomen, bone scintigraphy or F-18 fluorodeoxyglucose positron emission tomography (FDG-PET/CT), and magnetic resonance imaging (MRI) of the head. To obtain information on both the response of tumor to EGFR-TKIs treatment and the occurrence of EGFR-TKIs induced ILD, chest radiography and/or CT scans were performed at least once a month at each institution, and the patients were followed-up until 12 weeks after the administration of EGFR-TKIs. Informed consent was obtained from all patients. This study complied with the Declaration of Helsinki, and was approved by the individual institutional Ethical Committees.
Diagnosis of preexisting pulmonary disorder and EGFR-TKIs induced ILD
The presence of preexisting pulmonary fibrosis was determined according to the diagnostic criteria set by the ATS/ERS on the basis of clinical characteristics and/or chest CT findings, and the types of preexisting pulmonary fibrosis were classified into an idiopathic pulmonary fibrosis (IPF) pattern and a non-IPF pattern [31][32][33]. In addition, the presence of preexisting pulmonary emphysema was determined by chest CT findings showing low attenuation areas occupying more than 25% of the entire lung field in at least one slice [34]. The diagnosis of EGFR-TKIs induced ILD was made using the diagnostic algorithm described elsewhere [11,35]. We defined EGFR-TKIs induced ILD as diffuse pulmonary infiltrates newly developed during EGFR-TKIs treatment with lack of evidence for alternative diseases such as infection, tumor progression, heart failure and pulmonary embolism. When the occurrence of EGFR-TKIs induced ILD was suspected, chest CT scans were performed; levels of brain natriuretic peptide (BNP) and D-dimer in blood were measured; and sputum culture, blood culture, urine antigen tests for Legionella pneumophila and Streptococcus pneumoniae, a cytomegalovirus antigen test, and a polymerase chain reaction test for Pneumocystis jirovecii were conducted. When possible, bronchoalveolar lavage or lung biopsy was carried out. Tumor progression was carefully excluded on the basis of the clinical information including chest CT findings, physical examinations, and tumor markers. The final diagnosis of EGFR-TKIs induced ILD was made by the consensus of at least two independent pulmonologists. We collected the clinical information of all 341 patients, such as patient age, sex, histologic type, disease stage, performance status, prior chemotherapy and thoracic radiation therapy, preexisting pulmonary fibrosis, preexisting pulmonary emphysema, EGFR mutation status, types of EGFR-TKIs, duration of EGFR-TKIs treatment and laboratory data.
Subclassification of EGFR-TKIs induced ILD
The chest radiography and CT of the patients who developed EGFR-TKIs induced ILD were reviewed separately by two independent observers who were not aware of the patients' profiles, and were categorized into four patterns as previously described [27,36]: (1) acute interstitial pneumonia (AIP) pattern characterized by extensive bilateral ground glass attenuation or airspace consolidations with traction bronchiectasis, (2) chronic interstitial pneumonia (CIP) pattern characterized by fibrosis and/or consolidation, (3) cryptogenic organizing pneumonia/eosinophilic pneumonia (COP/EP) pattern showing peribronchial or subpleural consolidation without fibrosis, and (4) hypersensitivity pneumonitis (HP) pattern with diffuse ground glass opacities without fibrosis.
EGFR mutation status
In 148 out of 341 NSCLC patients included in the study, EGFR mutation statuses were assessed using paraffin-embedded biopsy samples or surgically resected tumor tissues. To evaluate EGFR mutations, the peptide nucleic acid-locked nucleic acid polymerase chain reaction (PNA-LNA PCR) clamp test, which can detect G719C, G719S, G719A, L858R, L861Q, T790M and 7 different exon 19 deletions [37], was used.
Electrochemiluminescence immunoassay (ECLIA) to determine circulating levels of KL-6
At least one serum sample was obtained before the EGFR-TKIs treatment from each patient included in the study. From 15 out of 20 patients who developed EGFR-TKIs induced ILD, a total of 2-5 serum samples per patient were also collected weekly after the occurrence of EGFR-TKIs induced ILD, and stored at -80°C. Serum KL-6 levels were measured by sandwich-type electrochemiluminescence immunoassay (ECLIA) using a Picolumi 8220 Analyzer (Eidia, Tokyo, Japan), as previously described [29,30].
Statistical analysis
The data were analyzed with a statistical software package (JMP, version 7.0.1; SAS Institute Inc.; Cary, North Carolina) and p < 0.05 indicated a significant difference. Data are shown as the mean ± SEM. Differences between patients with and without preexisting pulmonary fibrosis, survivors and non-survivors, and patients with the AIP pattern and the other patterns of EGFR-TKIs induced ILD were analyzed using the Mann-Whitney U-test. We analyzed differences between patients with preexisting pulmonary fibrosis who did or did not develop EGFR-TKIs induced ILD using Fisher's exact test. In order to test differences among the variables evaluated prior to and at the diagnosis of EGFR-TKIs induced ILD, the Wilcoxon test was used. The risk factors associated with EGFR-TKIs induced ILD were evaluated using multiple logistic regression analysis. The criterion for removing a variable was the likelihood ratio statistic, which was based on the maximum partial likelihood estimate (default p-value of 0.05 for removal from the model).

Table 1 shows the characteristics of the 341 patients enrolled in this study. All patients were Japanese. The ages of the patients ranged from 30 to 87 years (mean age 65.2 ± 0.6 SEM). Of the patients, 167 (49.0%) were female, 296 (86.8%) had adenocarcinomas (ADCs), 171 (50.1%) were never smokers, and 200 (58.7%) were in good performance status (PS = 0, 1). Forty-seven (13.8%) patients had received thoracic radiation prior to EGFR-TKIs treatment. Figure 1 shows the absolute serum KL-6 levels at baseline according to the presence of preexisting pulmonary fibrosis. The absolute serum KL-6 levels at baseline showed no significant difference between patients with and without preexisting pulmonary fibrosis (Mann-Whitney U-test; p = 0.207). Table 2 shows the characteristics of the 48 patients who had preexisting pulmonary fibrosis. Eight (16.7%) out of the 48 patients with preexisting pulmonary fibrosis developed EGFR-TKIs induced ILD. Statistical analyses were performed to assess the association between the patients' characteristics and the development of EGFR-TKIs induced ILD among these patients (Table 2). In the patients who had preexisting pulmonary fibrosis, thoracic radiation prior to EGFR-TKIs treatment was not associated with the development of EGFR-TKIs induced ILD; however, there was a weak but statistically significant association between the development of EGFR-TKIs induced ILD and EGFR mutation status (p = 0.0498).
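For readers who want to reproduce this kind of analysis, a minimal Python sketch of the two most-used tests is shown below. The KL-6 values are invented placeholders; the 2x2 counts are inferred from the totals reported in this paper (8 of 48 patients with fibrosis and 12 of 293 without fibrosis developing ILD), which reproduce the reported odds ratio of about 4.68. The snippet is illustrative only and is not the authors' actual analysis, which was run in JMP.

```python
import numpy as np
from scipy import stats

# Placeholder baseline KL-6 values (U/mL); individual-level study data are not reproduced here.
kl6_with_fibrosis = np.array([450.0, 620.0, 810.0, 540.0, 990.0])
kl6_without_fibrosis = np.array([380.0, 510.0, 470.0, 700.0, 560.0])

# Group comparison of continuous values (as used for KL-6 levels and ratios).
u_stat, p_mw = stats.mannwhitneyu(kl6_with_fibrosis, kl6_without_fibrosis, alternative="two-sided")

# 2x2 association test: rows = preexisting fibrosis yes/no, columns = developed ILD yes/no.
# Counts inferred from the reported totals (8/48 vs 12/293).
table = np.array([[8, 40],
                  [12, 281]])
odds_ratio, p_fisher = stats.fisher_exact(table)

print(f"Mann-Whitney p = {p_mw:.3f}; Fisher exact OR = {odds_ratio:.2f}, p = {p_fisher:.4f}")
```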
Incidence and characteristics of patients with EGFR-TKIs induced ILD
Among the 341 patients included in this study, 20 (5.9%) developed EGFR-TKIs induced ILD, and 9 (2.6%) died from ILD. Table 3 shows the characteristics and clinical course of these 20 patients. All the patients had acute onset or exacerbation of respiratory symptoms. The median interval from the administration of EGFR-TKI to the occurrence of EGFR-TKIs induced ILD was 19 days (range 5-51 days). The subclassifications of EGFR-TKIs induced ILD categorized by the findings of chest CT scans in these 20 patients were as follows: AIP pattern in 5 patients, COP/EP pattern in 9 patients, and HP pattern in 6 patients. The CT images of 5 patients who demonstrated AIP pattern are shown in Figure 2.
When the occurrence of EGFR-TKIs induced ILD was suspected, the administration of EGFR-TKI was immediately stopped and high dose methylprednisolone (1,000 mg daily for 3 days) therapy was started. All of the 5 patients with AIP patterns were refractory to the treatment and eventually died, whereas 7 of 9 patients with COP/EP pattern and 4 of 6 patients with HP pattern showed immediate response to the treatment. Postmortem examinations were performed in 3 patients (patient No. 5, 8 and 11) and diffuse alveolar damage (DAD) was detected histologically in all of them. In addition, the presence of preexisting pulmonary fibrosis was suspected in 2 of the 3 patients. Neither infection nor lymphangitic spread of cancer cells was pointed out in any of them.
Risk factors for developing EGFR-TKIs induced ILD
The results of univariate analyses on risk factors for EGFR-TKIs induced ILD are shown in Table 4. Univariate analyses revealed that only preexisting pulmonary fibrosis (odds ratio, 4.683; 95% CI, 1.741-12.042; p = 0.003) was a significant risk factor for the development of EGFR-TKIs induced ILD.
Serum levels of KL-6 in patients who developed EGFR-TKIs induced ILD
After the administration of the EGFR-TKIs, serum KL-6 levels were measured at least once during and/or around the first 4 weeks in 15 out of the 20 patients who developed EGFR-TKIs induced ILD and in 198 out of the 321 patients who did not. The ratios of serum KL-6 levels during or around 4 weeks after the start of EGFR-TKIs to those at baseline were 1.315 ± 0.120 for the former and 1.000 ± 0.036 for the latter (mean ± SEM). There was a statistically significant difference between these ratios (p = 0.004, Mann-Whitney U-test). Figure 3 shows the serum levels of KL-6 at multiple time points before and after the onset of ILD in 8 survivors (Figure 3A) and 7 non-survivors (Figure 3B). The serum levels of KL-6 in the 7 non-survivors, but not in the 8 survivors, showed a consistent trend to increase after the onset of EGFR-TKIs induced ILD. The absolute serum KL-6 levels at the onset as well as at baseline showed no difference between the 7 non-survivors and 8 survivors (Mann-Whitney U-test; p = 0.072 at onset, and p = 0.072 at baseline, respectively). To assess the changes in serum KL-6 level before and after the onset of ILD, the ratio of the serum KL-6 level just after the onset of ILD to that at baseline was calculated in 15 of the 20 patients who developed ILD. The differences in these ratios between survivors and non-survivors were found to be statistically significant (Mann-Whitney U-test; p = 0.006; Figure 4). We then compared the circulating levels of KL-6 according to the patterns of EGFR-TKIs induced ILD subclassified by the manifestation on chest CT in 15 of the 20 patients who developed EGFR-TKIs induced ILD. The absolute levels of circulating KL-6 at baseline and at the onset of ILD did not differ significantly between the 4 patients with the life-threatening pattern (AIP pattern) and the 11 patients with other patterns (Mann-Whitney U-test; p = 0.648 at onset, and p = 0.845 at baseline, respectively). When the ratio of the serum KL-6 level at the onset of ILD to that at baseline was compared, this value was significantly higher in the patients with the life-threatening pattern (AIP pattern) than in those with the other patterns (Mann-Whitney U-test; p = 0.005; Figure 5). In addition, patients whose serum KL-6 levels rose to more than 1.5 times their baseline levels had a high chance of developing the AIP pattern.
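The ratio-based rule described above is easy to operationalize; the sketch below simply computes onset-to-baseline KL-6 ratios and flags those exceeding the 1.5-fold threshold mentioned in the text. The numeric values are hypothetical.

```python
import numpy as np

def kl6_ratio_flag(baseline, onset, threshold=1.5):
    """Return onset/baseline KL-6 ratios and whether each exceeds the 1.5x cut-off
    that this study associates with the life-threatening (AIP) pattern."""
    ratio = np.asarray(onset, dtype=float) / np.asarray(baseline, dtype=float)
    return ratio, ratio > threshold

# Hypothetical patient values (U/mL):
ratios, flags = kl6_ratio_flag(baseline=[520, 800], onset=[610, 1450])
print(ratios, flags)  # ratios of roughly 1.17 and 1.81; only the second is flagged
```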
Discussion
In this large multi-institutional study, we investigated the incidence of and risk factors for developing ILD in patients treated with EGFR-TKIs until 12 weeks after the start of EGFR-TKIs therapy. Univariate analyses revealed that preexisting pulmonary fibrosis at baseline was the only risk factor for EGFR-TKIs induced ILD. Although absolute serum KL-6 levels neither at baseline nor at the onset of ILD could discriminate between life-threatening and non-life-threatening EGFR-TKIs induced ILDs, the ratio of the serum KL-6 level at the occurrence of EGFR-TKIs induced ILD to that at baseline was found to do so quite precisely. These findings suggest the value of the serum KL-6 level for the detection of life-threatening EGFR-TKIs induced ILD. The development of molecular targeted agents has been a key factor in recent advances in cancer therapy, and some of these agents have been applied in clinical practice. EGFR-TKIs are among the representative molecular targeted agents and, at first, were considered to be safe agents with mild side effects in comparison to cytotoxic agents. However, following the increase in the usage of EGFR-TKIs in lung cancer therapy, a significantly higher incidence of life-threatening drug-induced ILD was reported in Japanese patients than in patients in the rest of the world [38,39]. In the present study, out of 341 NSCLC patients treated with EGFR-TKIs, 20 patients (5.9%) developed ILD and 9 patients (2.6%) died from ILD. The incidence and mortality of EGFR-TKIs induced ILD were higher than those reported in previous studies from Japan [7][8][9][10][11][12][13]39]. This result might be due to the high incidence of preexisting pulmonary fibrosis in this study. In this study, the manifestations on chest CT scans in the 20 patients who developed EGFR-TKIs induced ILD were classified as the AIP pattern for 5 patients, the COP/EP pattern for 9 patients and the HP pattern for 6 patients. Interestingly, the CIP pattern was not observed, as was also the case in a previous study [36]. All the patients who demonstrated the AIP pattern died, whereas the majority of patients with other patterns recovered from EGFR-TKIs induced ILD. In this study, the postmortem examination of three patients with the AIP pattern revealed that DAD was the main cause of death, and observations similar to ours have been reported previously [7,8]. In this study, univariate analysis revealed that preexisting pulmonary fibrosis was the only risk factor for developing EGFR-TKIs induced ILD. Although previous studies reported that male gender, smoking history and poor PS were also independent risk factors for developing EGFR-TKIs induced ILD [7][8][9][10][11][12][13]39], none of them correlated with the incidence or mortality of EGFR-TKIs induced ILD in the present study. This may be due to the small sample size and the high incidence of preexisting pulmonary fibrosis in our studied patients.
Figure 5 The ratios of the serum levels of KL-6 at the onset of EGFR-TKI related ILD to those at baseline on the basis of the sub-classifications of EGFR-TKIs induced ILD. Open, shaded, and solid bars represent the hypersensitivity pneumonitis (HP) pattern, the cryptogenic organizing pneumonia/eosinophilic pneumonia (COP/EP) pattern, and the acute interstitial pneumonia (AIP) pattern, respectively. There is a significant difference in these ratios between the AIP pattern and the other patterns (p = 0.005).
Although a previous study from our laboratory reported that serum KL-6 levels at diagnosis increased only in the life-threatening types, such as the DAD and CIP patterns, of drug-induced ILDs [27], absolute serum KL-6 levels at the onset of EGFR-TKIs induced ILD did not correlate with clinical outcomes in the present study. The immunohistochemical analysis of KL-6 using three postmortem autopsy specimens showed that KL-6 was expressed in tumor cells in the primary lesions as well as in alveolar epithelial cells in the EGFR-TKIs induced ILDs (data not shown). Therefore, we speculate that the origin of serum KL-6 at the onset of EGFR-TKIs induced ILD might be associated with both the NSCLCs and the EGFR-TKIs induced ILDs. On the other hand, we found that the ratios of serum KL-6 levels just after the onset of ILD to those at baseline could quite precisely discriminate life-threatening ILD from non-life-threatening ILD, and correlated well with disease progression. We speculate that a drastic increase in serum KL-6 levels after the administration of EGFR-TKIs might be due to severe lung injury accompanied by both alveolar-capillary destruction and enhancement of alveolar-capillary permeability, which allow KL-6 to leak into the circulation from the alveolar space [40]. Based on these observations, KL-6 can be regarded as a good serum biomarker to assess the severity of alveolar epithelium injury and the clinical outcome of EGFR-TKI-related ILD. Regarding the association between KL-6 and other serum biomarkers for ILD, such as surfactant protein (SP)-A and SP-D, in EGFR-TKIs induced ILD, we do not have data to discuss. Previous studies, which measured serum SP-A, SP-D, and KL-6 levels in 4 patients with EGFR-TKIs induced ILD, demonstrated that serum SP-A and SP-D levels increased in all studied patients whereas KL-6 levels were elevated only in patients with life-threatening EGFR-TKIs induced ILD [8,41]. This observation is compatible with the findings of the present study.
In addition to its ability to detect patients who develop life-threatening ILD, monitoring of serum KL-6 levels is also useful to predict survival and progressive disease in NSCLC patients treated with EGFR-TKIs [30]. As measurement of the serum KL-6 level is more rapid, inexpensive, reproducible, and easier to perform than CT scans, its monitoring could be quite useful to assess the condition of NSCLC patients receiving EGFR-TKIs. The development of EGFR-TKIs induced ILD is reported to occur mostly within the first 4 weeks after the start of EGFR-TKIs [11]. In the present study, 5 cases developed ILD within the first 2 weeks (range, 5 to 14 days) after the start of EGFR-TKIs. Therefore, based on the results of the present study, once-weekly monitoring of serum KL-6 levels in addition to chest radiography could be recommended for NSCLC patients receiving EGFR-TKIs, particularly for the first 4 weeks after the start of treatment.
Although these promising results were obtained, we are aware that this study has a number of limitations. First, the number of EGFR-TKIs induced ILD patients included in the study was not sufficient for a valid statistical analysis. Second, this study was conducted in a retrospective manner. Therefore, information on EGFR mutation status in cancer tissue was not obtained from all the studied patients. Furthermore, multiple measurements of serum KL-6 levels were not achieved in all patients who developed EGFR-TKIs induced ILD. Third, the enrolled NSCLC patients might be biased compared with the general advanced NSCLC population. We believe that this was caused by our tendency to use EGFR-TKIs for specific subgroups of NSCLC patients such as women, non-smokers, and patients with EGFR mutations. Finally, the studied patients were only Japanese. Considering ethnic differences in the efficacy of EGFR-TKIs treatment and/or the occurrence of adverse side effects related to EGFR-TKIs, the results should be interpreted carefully when this monitoring system is applied to non-Japanese patients. A large, prospective study measuring serum KL-6 levels serially before and after EGFR-TKIs treatment, also including non-Japanese patients, will be required to evaluate the utility of monitoring KL-6 in EGFR-TKIs induced ILDs.
Conclusions
Our results indicate that the change in serum KL-6 level from baseline should be a useful biomarker for the diagnosis of life-threatening EGFR-TKIs induced ILD and for estimating its progression and severity. A risk-benefit analysis and careful patient selection, as well as close monitoring of serum levels of KL-6, should be considered, particularly when using EGFR-TKIs in patients with preexisting pulmonary fibrosis. | 5,206.6 | 2011-07-26T00:00:00.000 | [
"Medicine",
"Biology",
"Psychology"
] |
A Predator-Prey model in the chemostat with Holling Type II response function
A model of predator-prey interaction in a chemostat with Holling Type II functional and numerical response functions of the Monod or Michaelis-Menten form is considered. It is proved that local asymptotic stability of the coexistence equilibrium implies that it is globally asymptotically stable. It is also shown that when the coexistence equilibrium exists but is unstable, solutions converge to a unique, orbitally asymptotically stable periodic orbit. Thus the range of the dynamics of the chemostat predator-prey model is the same as for the analogous classical Rosenzweig-MacArthur predator-prey model with Holling Type II functional response. An extension that applies to other functional responses is also given.
Introduction
In this paper, we analyze a predator-prey model in the chemostat with a Holling Type II predator response function of Monod form and a prey response function of mass action form. The chemostat is an apparatus widely used in the study of microbial biology. It is helpful for the study of microbial growth and interactions under nutrient limitation in a controlled environment. Chemostats can be used as a guide for identifying the dynamical nature of population interactions that may be present in a more complex system such as a lake. There are many articles related to the study of the chemostat, from both the experimental and the modelling points of view (see for example [6], [15], [29] and [37]). Here, we look at a model of a basic chemostat setup in which a single, essential, non-reproducing nutrient is supplied to a growth chamber from a nutrient reservoir at a constant rate. A population of microorganisms, designated the prey, lives in the growth chamber and feeds on the nutrient, and a predator population feeds on the prey population. The growth chamber is assumed to be well-stirred. It is assumed that the inflow rate of nutrient is the same as the outflow rate from the growth chamber to the waste reservoir, so that the volume of the growth chamber remains constant and all of the contents of the growth chamber are removed in proportion to their amount in the growth chamber. How the amounts of the nutrient, the prey population, and the predator population change over time are all modelled.
The more familiar predator-prey models, introduced in Rosenzweig and MacArthur [34], describe predator-prey interactions in which the prey population reproduces and involve only two equations, one for the prey and one for the predator. In [34], models for which the prey nullcline can have at most a single local extremum, a local maximum, were considered. Such a model is often referred to as the Rosenzweig-MacArthur model, and it has been well studied (see, for example, [26] and [33]). The case where the prey nullcline has both a local maximum and a local minimum is possible and has also been considered as a generalized version of the Rosenzweig-MacArthur model (see, for example, [1], [13], [14], [36] and [38]). Collectively, the Rosenzweig-MacArthur model and its generalizations are known as the classical predator-prey models.
The mathematical expressions needed for the response functions in these models are not usually known in practice. Biologists typically collect sets of data that can be fitted by many functions. Following the work of Holling [20], modellers have used three main forms to describe response functions in predator-prey models. Holling Type I refers to a mass-action response function, which is linearly increasing. Holling Type II responses are increasing and concave down. Holling Type III responses are sigmoidal. The dynamics of predator-prey models are directly influenced by the types of response functions chosen. Recently, in Fussmann and Blasius [14], it was shown that the range of possible dynamics can be different for classical predator-prey models modelled by different forms of Holling Type II response functions.
Here, we determine the global dynamics of a predator-prey model in the chemostat with predator response function of Holling Type II (Monod form) and prey response function of mass action form. We then compare the dynamics of this predator-prey model in the chemostat with that of the analogous classical predator-prey model.
Harrison [16] considered a wide class of classical predator-prey models and obtained sufficient (but not necessary) conditions for the global stability of the coexistence equilibrium. He also proved that when the coexistence equilibrium is unstable, at least one periodic orbit exists. In the special case that the prey grow logistically in the absence of predators and the predator response function is of Holling Type I form, Hsu [21] proved that the coexistence equilibrium is globally asymptotically stable whenever it exists. He also proved that when the predator response function is of Monod form, then the coexistence equilibrium is globally asymptotically stable whenever it is locally asymptotically stable, and Liou and Cheng [28] and Kuang and Freedman [27] proved that when the coexistence equilibrium exists and is unstable, it is surrounded by a unique periodic orbit.
In this paper, we focus on the analogous model of predator-prey interaction in a chemostat. In Wolkowicz [39], a food web in a chemostat was considered. In the special case that the model studied in [39] is a food chain that includes one resource, and only a single prey, and a single predator population, the model is of the basic form studied here. A Lyapunov function was used to prove that the coexistence equilibrium is globally asymptotically stable whenever it exists, under the assumption that the predator response function is Holling Type I and the prey response function is either Holling Type I or Holling Type II (of Monod form).
In the model considered in this paper, we assume instead that the prey response function is Holling Type I and the predator response function is Holling Type II (of Monod form). In this case, we prove that the dynamics are more complicated. In particular, we prove that whenever the coexistence equilibrium is locally asymptotically stable, it is globally asymptotically stable, and that whenever the coexistence equilibrium is unstable, there is a unique orbitally asymptotically stable periodic orbit. We also prove that the change in stability occurs by means of a Hopf bifurcation that is always supercritical. We thus show the similarity between the dynamics of the classical predator-prey model in which the prey is assumed to grow logistically in the absence of the predator population and its analogous chemostat predator-prey counterpart model. The chemostat model is of one dimension higher, since the resource that the prey population consumes in order to grow is also modelled and the growth of the prey in the absence of the predator population depends instead on the abundance of the resource. This paper is organized in the following manner. The predator-prey model in a chemostat is described in Section 2, where three equivalent lower dimensional limiting systems are also derived. The limiting system that most resembles the classical predator-prey model is then chosen to be the focal system. Properties of the prey nullcline of this system are derived in Section 3. Preliminary analytic results appear in Section 4, where it is also determined that the system undergoes a supercritical Hopf bifurcation. In Section 5, we prove that the coexistence equilibrium is globally asymptotically stable whenever it is locally asymptotically stable, and also that when the coexistence equilibrium is unstable, there is a unique, orbitally asymptotically stable periodic orbit. Finally, a discussion of the similarities between chemostat models and their analogous, classical predator-prey models is given in Section 7. Appendices provide a modification of a theorem due to Huang [22], an extension of a theorem due to Hsu [21], and the Hopf bifurcation analysis. The bifurcation diagrams were done using XPPAUT [12] and the simulations were done using Matlab [32].
The Model
Let S(t) denote the concentration of the nutrient in the growth chamber at time t, and let x(t) and y(t) denote the density of the prey (which feeds on this nutrient) and predator populations, respectively. We consider the following system of autonomous ordinary differential equations, (2.1), as a model of predator-prey interaction in a well-stirred chemostat, where S^0 denotes the concentration of the nutrient in the nutrient reservoir, and c and γ are yield constants (for the prey's consumption of the nutrient and the predator's consumption of prey, respectively). To ensure that the volume of the vessel remains constant, D denotes both the rate of inflow from the nutrient reservoir to the growth chamber and the rate of outflow from the growth chamber. We assume the species-specific death rates are insignificant with respect to the flow rates, and they are ignored. It is also assumed that the functions p and q are continuously differentiable (additional smoothness assumptions are given below) and that S^0, D, c, and γ are all positive constants.
The rate of conversion of nutrient to biomass is given by the function p(S), which is assumed to satisfy p(0) = 0, p(S) > 0 for S > 0 and p'(S) > 0 for S ≥ 0. For the remainder of this paper, we take p to be of mass action form, p(S) = mS. The function q(x) denotes the predator response function and is assumed to have properties similar to p(S). In particular, the Monod functional response q(x) = ax/(b + x) satisfies these properties and will be the focus in the remainder of this paper. Note that a further inequality holds for that form. To simplify (2.1), the yield constants c and γ can be scaled out by performing a change of variables. Let Ŝ = σS, x̂ = ξx, ŷ = ηy and t̂ = τt. Then, taking c = σ/ξ, γ = ξ/η, Ŝ^0 = σS^0, D̂ = D/τ, m̂ = m/(ξτ), â = aξ/(τη), and b̂ = ξb, and removing the hats to simplify notation, reduces system (2.1) to (2.5). Remark 2.1. Since the system has four variables (S, x, y and t), we are able to scale out two more parameters, for a total of four. However, this is not needed to complete our analysis.
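The displayed equations for (2.1) and its scaled version (2.5) were not preserved in this extraction. A plausible reconstruction of the scaled system (2.5), assuming the standard chemostat predator-prey form with p(S) = mS and q(x) = ax/(b + x), and consistent with the limiting system and nullclines used later, is:

```latex
% Hedged reconstruction of the scaled system (2.5); the exact arrangement in the
% original paper may differ.
\begin{aligned}
S' &= \bigl(S^{0} - S\bigr)D - m S x,\\[2pt]
x' &= x\bigl(m S - D\bigr) - \frac{a x}{b + x}\, y,\\[2pt]
y' &= y\left(\frac{a x}{b + x} - D\right).
\end{aligned}
```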
Lemma 2.1. The solutions S(t), x(t) and y(t) of (2.5) are non-negative and bounded.
Proof. It is important to first note that, since the vector field is C^1, existence and uniqueness of solutions hold. S(t) is positive for all t > 0 since S(τ) = 0 for any τ ≥ 0 implies that S'(τ) > 0. Furthermore, x(t) > 0 and y(t) > 0 for all t > 0 since the (S, 0, y)-plane and (S, x, 0)-plane are invariant with respect to solutions of (2.5). Hence, by the uniqueness of solutions, they cannot be reached in finite time by trajectories for which x(0) > 0 or y(0) > 0, respectively.
Next consider the sum of the differential equations: it satisfies a first-order linear ODE whose solution shows that S(t) + x(t) + y(t) ≤ max{S(0) + x(0) + y(0), S^0}. Since all three components are non-negative, this implies that each component is bounded above.
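The intermediate displays in this proof were lost in extraction. Under the reconstruction of (2.5) given above, the computation would read as follows (an assumption consistent with the exponential convergence statement in the next paragraph):

```latex
% Hedged reconstruction of the conservation argument:
\frac{d}{dt}\bigl(S + x + y\bigr) \;=\; D\bigl(S^{0} - (S + x + y)\bigr),
\qquad\text{so}\qquad
S(t) + x(t) + y(t) \;=\; S^{0} + \bigl(S(0) + x(0) + y(0) - S^{0}\bigr)\,e^{-Dt}.
```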
Nonetheless, (2.5) is still a 3D system of equations, and so is harder to analyze than the classical predator-prey model. From the above proof, we can see that the sum of solutions S(t) + x(t) + y(t) converges to S^0 exponentially as t tends to infinity. This tells us that for any point (S̄, x̄, ȳ) in the omega limit set ω of a solution, there is a sequence t_n → ∞ such that (S(t_n), x(t_n), y(t_n)) → (S̄, x̄, ȳ) as n → ∞, and it follows that S(t_n) + x(t_n) + y(t_n) → S̄ + x̄ + ȳ = S^0. Hence, ω is restricted to the simplex {(S, x, y) : S + x + y = S^0}, a two-dimensional set. We can then obtain three equivalent limiting systems by eliminating one variable in our system using the fact that S(t) + x(t) + y(t) = S^0. In this case, it is most useful to eliminate S, as this yields a 2D predator-prey system with nullclines that resemble those of the classical predator-prey model. In doing so, substituting S = S^0 − x − y, we obtain the limiting system (2.8). Since (2.8) closely resembles the classical predator-prey model, it will be the main system we analyze throughout this paper. That it has the same dynamics as the 3D system (2.5), and therefore (2.1), is justified in Section 6.
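The display for the limiting system (2.8) is also missing. A plausible reconstruction, obtained by substituting S = S^0 − x − y into the reconstructed (2.5) above and consistent with the predator nullcline x = q^{-1}(D) and the prey nullcline F discussed next, is:

```latex
% Hedged reconstruction of the limiting system (2.8):
\begin{aligned}
x' &= x\bigl(m(S^{0} - x - y) - D\bigr) - \frac{a x}{b + x}\, y,\\[2pt]
y' &= y\left(\frac{a x}{b + x} - D\right),
\end{aligned}
\qquad \text{on } \{(x, y) : x \ge 0,\ y \ge 0,\ x + y \le S^{0}\}.
```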
The predator nullcline is the vertical line x = q^{-1}(D) = bD/(a − D) and is in the first quadrant if and only if a > D. We will assume, unless otherwise stated, that a > D so that the coexistence equilibrium point exists in the interior of the first quadrant. The prey nullcline, F(x), is studied in detail in the next section.
The Prey Nullcline
The prey nullcline is given by a continuously differentiable function F(x). The properties of this function play a key role in our analysis. In this section, we determine the properties of F(x) when the predator response function is of Monod form, q(x) = ax/(b + x). We will assume that mS^0 > D, so that F(0) > 0. The slope of the prey nullcline, F'(x), has its sign determined by the sign of its numerator. For x ≥ 0, the second derivative (3.5) and the third derivative of the prey nullcline can also be computed; though not discussed any further, it is clear that the higher order derivatives alternate in sign. Proof. If we assume that M = 0, the result follows, since then (mS^0 − D)/(2m) > 0 under our assumptions. Next assume that M > 0. Then Lemma 3.1 tells us that M is always less than half the distance to K. Furthermore, with a Monod response function, one can actually find an explicit expression for M. Fig 1 illustrates how the shape of the graph of F depends on the sign of F'(0). In Section 4, we select S^0 as the bifurcation parameter and investigate how changes in S^0 affect the shape of the graph of F. F is an increasing function of S^0 for each fixed x. Remark 3.2. By equations (3.8) and (3.9), as S^0 increases, the local maximum of F moves up and to the right.
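The displayed formulas of this section did not survive extraction. Under the reconstruction of (2.8) above, the prey nullcline F and the constant K (the positive zero of F, i.e., the prey-only equilibrium abundance) would take the following form; this is an assumption consistent with, but not checked against, the original displays. M then denotes the location of the local maximum of F on [0, K].

```latex
% Hedged reconstruction of the prey nullcline and the constant K for (2.8),
% with q(x) = ax/(b+x):
F(x) \;=\; \frac{x\bigl(mS^{0} - m x - D\bigr)}{m x + q(x)},
\qquad
K \;=\; S^{0} - \frac{D}{m},
\qquad
F(0) \;=\; \frac{mS^{0} - D}{m + a/b} \;>\; 0 \iff mS^{0} > D .
```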
It is also important to note that, since F'(x) increases with S^0, for any fixed x > 0 one can find a value S^0(x) so that F'(x) = 0. If we take x = x* in S^0(x), we obtain the critical value S^0_crit given in (3.11). Then, taking S^0 = S^0_crit forces F'(x*) = 0. Taking S^0 = S^0_crit will prove especially useful in our analysis.
Local Analysis
In this section, we consider some of the properties of the different invariant sets associated with (2.8). It is first important to note that non-negativity and boundedness of solutions for system (2.8) follows immediately from the non-negativity and boundedness of solutions of the 3D system (2.5). Nonnegativity and boundedness of solutions is a prerequisite of any reasonable model of the chemostat.
The Jacobian matrix of (2.8) can be computed directly. Evaluating it at the zero equilibrium, we find that, since q(x) is an increasing function and F(0) > 0, the diagonal entries have opposite signs. Hence, (0, 0) is always a saddle. Similarly, for (K, 0), one can see that if x* > K, then q(K) − D = q(K) − q(x*) < 0 and hence (K, 0) would be a global attractor, since F'(K) < 0 as well. However, there is no coexistence equilibrium in this case. Therefore we assume that x* < K, which means the diagonal entries of (4.3) have opposite signs, and hence (K, 0) is also a saddle. Finally, we consider the coexistence equilibrium point. Note once again that the coexistence equilibrium exists if and only if x* < K. In this case the Jacobian has characteristic equation (4.5). Since D + mx* > 0, the constant term is positive, and the roots of (4.5) have negative real part if and only if F'(x*) < 0. Thus, (x*, y*) is locally asymptotically stable when F'(x*) < 0 and unstable when F'(x*) > 0. Note that unless x* = K or F'(x*) = 0, all critical points are hyperbolic. When x* = K, we determine that the equilibrium point is asymptotically stable using standard phase plane analysis.
Proof. Since system (2.8) is planar, any periodic orbit must surround an equilibrium point by the Poincaré-Bendixson Theorem [3]. By the non-negativity of this system, the only equilibrium a periodic orbit can surround is (x*, y*). From phase plane analysis, any periodic orbit would lie in the set {(x, y) : 0 < x < K and y > 0}. When the coexistence equilibrium exists, the eigenvalues of the variational matrix about (x*, y*) are purely imaginary if and only if F'(x*) = 0. The condition F'(x*) = 0 can be achieved by fixing S^0 = S^0_crit from (3.11). Also, the imaginary part of the eigenvalues is non-zero.
Fig. 2. Solid lines correspond to stable equilibria, dashed lines correspond to unstable equilibria, filled circles correspond to stable periodic orbits and empty circles correspond to unstable periodic orbits. As S^0 increases there is a transfer of stability from (0, 0) to (K, 0) to (x*, y*) by means of transcritical bifurcations, and finally a transfer of stability from (x*, y*) to a stable periodic orbit by means of a Hopf bifurcation.
Furthermore, the transversality condition holds, since the derivative with respect to S^0 of the real part of the eigenvalues at the Hopf bifurcation is positive by (3.9). Thus, the eigenvalues are complex in a neighbourhood of S^0_crit and cross the imaginary axis at S^0 = S^0_crit, implying that a Hopf bifurcation occurs there. The direction and stability of the bifurcating periodic orbit are determined by the sign of a quantity w given in (4.7), called the vague attractor condition, which was determined using the algorithm in Marsden and McCracken [31] as outlined in Appendix C. Under our assumptions on the parameters, w < 0; hence, the Hopf bifurcation is always supercritical.
The bifurcation diagram in Fig 2 summarizes each of these local stability results.
Global Analysis
To establish the global dynamics of system (2.8), we first examine periodic orbits. The following lemma, together with the Poincaré criterion, will be used to show that when periodic orbits exist, they must surround the local maximum (M, F(M)).
Lemma 5.1. Let Γ be any periodic orbit of (2.8). Then the sign of the quantity C associated with Γ in the Poincaré criterion is determined by the sign of F' along the portion of the prey nullcline enclosed by Γ.
Proposition 5.2. Any periodic orbit of (2.8) must surround the local maximum of F(x).
Proof. Assume F'(x) > 0 for the entire portion of F inside of Γ. Then C > 0 by Lemma 5.1, implying that Γ is an unstable periodic orbit by the Poincaré criterion [9]. Since any periodic orbit must surround (x*, y*), it follows that F'(x*) > 0 and so (x*, y*) would also be unstable, which is impossible. Using a similar argument, it also follows that F'(x) < 0 for the entire portion of F inside of Γ is impossible. Thus, the slope of the portion of the prey nullcline inside any periodic orbit cannot be entirely of the same sign, i.e., it must change sign, and therefore any periodic orbit must surround the local maximum of F(x), (M, F(M)).
Global stability of the coexistence equilibrium point for the classical predator-prey model was studied by Harrison [16]. If one denotes the coexistence equilibrium point as (x*, y*), he proved that this equilibrium is globally asymptotically stable if it is locally asymptotically stable, i.e., in phase space it lies on the prey nullcline, F(x), where the slope is negative (F'(x*) < 0), and, as well, it lies below the horizontal line y = F(0), i.e., y* < F(0). Hence, he only obtained a sufficient condition for global stability.
Hsu [21] conjectured more generally that if the prey nullcline is concave down, then whenever the interior equilibrium is locally stable, it is also globally stable. Although this is not true for all systems (some counterexamples were found in [19]), we prove for (2.8) that local asymptotic stability implies global stability, using the Dulac criterion and the Poincaré-Bendixson theorem; that the equilibrium destabilizes via a supercritical Hopf bifurcation; and that if a periodic orbit exists, it is unique.
Let (x*, y*) be locally asymptotically stable, i.e. x* ∈ [M, K]. Since the Hopf bifurcation at x* = M is supercritical, (x*, y*) is also locally asymptotically stable there. We start with an argument similar to the proof given for Theorem 3.3 in Hsu [21]. However, it was pointed out in [8] that the proof in Hsu [21] is "not rigorously correct", and so we make modifications. Choose h(x, y) = (mx + q(x))^{-1} y^{β−1} as the auxiliary function to use with the Dulac criterion, where the value β > 0 will be determined.
Here, the function h(x, y) is defined in the interior of the first quadrant. Let f = x y. Then, computing the divergence ∆ of the rescaled vector field, we note that in the interior of the first quadrant y^{β−1} > 0 and mx + q(x) > 0; hence, ∆ changes sign if and only if H(x) changes sign. As opposed to the argument in [21], it is important to note that F'(x) ≥ 0 on [0, M]. Therefore, the choice of β is critical in order to ensure that H(x) does not change sign. As a consequence, we have the following proposition.
Proposition 5.3. If there exists β > 0 satisfying the required bound in terms of the function β(x) defined in (5.3), then H(x) ≤ 0 on [0, K]. Note that β(x) is parameterized by S^0. We next look to see how changes in S^0 affect β(x; S^0).
Proof. Consider the derivative with respect to x of β(x; S^0_crit): it is positive for x ≥ 0; hence, β(x; S^0_crit) is an increasing function on [0, K]. This, together with (5.4) and Remark 3.2, tells us that β_crit > β(x; S^0_crit) > β(x; S^0) for x < M, and β_crit < β(x; S^0_crit) < β(x; S^0) for x > x*. Finally, at the boundaries, β(M; S^0) = 0 < β_crit, and lim_{x→x*+} β(x; S^0) = ∞ > β_crit.
These results can be used to determine when the Dulac criterion can be used to obtain global stability of the coexistence equilibrium using the function H(x) defined in (5.2). In particular, a value β_crit can be chosen so that the Dulac criterion can be used to prove global stability of the coexistence equilibrium if S^0 < S^0_crit, i.e., x* > M, which is precisely when the coexistence equilibrium is locally asymptotically stable; but when S^0 > S^0_crit, i.e., x* < M, and the coexistence equilibrium is unstable, no such value β_crit can be chosen.
Theorem 5.5. Consider system (2.8). If (x * , y * ) is locally asymptotically stable, then it is globally asymptotically stable. If (x * , y * ) is unstable, then there exists a unique, stable periodic orbit that surrounds the point (M, F (M )).
Proof. Since β crit satisfies Proposition 5.3, H(x) ≤ 0 and consequently ∆ ≤ 0 for 0 ≤ x ≤ K. Therefore, since ∆ does not change sign, by the Dulac criterion it follows that system (2.8) has no nontrivial closed orbits lying entirely in the first quadrant. Thus, since we have proved that all orbits are bounded (see Lemma 2.1), by the Poincaré-Bendixson theorem, (x * , y * ) is globally asymptotically stable.
To determine uniqueness of the periodic orbit when x* < M, apply a modified version of Huang's theorem from [22], stated as Theorem A.1 in Appendix A, by taking φ(x) = mx + q(x), ψ(x) = q(x) − D, π(y) = y and ρ(y) = y. Then conditions (i)-(iii) of Theorem A.1 are satisfied. For condition (iv) of Theorem A.1, notice that the function H(x) is identical to β(x) from (5.3). We had β'(x) > 0 from (5.7), (5.4), and Remark 3.2, so we also have H'(x) > 0. Thus, condition (iv) of Theorem A.1 is satisfied, and so when x* < M, there is a unique periodic orbit and it is orbitally asymptotically stable.
Fig. 4. Sample trajectories of (2.8) with parameters a = 2, b = 0.5, m = 2, and D = 1.3788. Plot (a) depicts globally asymptotically stable convergence to (K, 0), when x* > K. Plot (b) depicts convergence to the globally asymptotically stable interior equilibrium, when it is to the right of the local maximum. Plot (c) depicts an orbitally asymptotically stable periodic orbit surrounding the unstable interior equilibrium, when it is to the left of the local maximum.
Fig 4 illustrates convergence to the globally asymptotically stable equilibrium (K, 0) when the coexistence equilibrium does not exist, and the existence of the two types of dynamics that are possible when a coexistence equilibrium point does exist: convergence to a globally asymptotically stable coexistence equilibrium point, or convergence to an orbitally asymptotically stable periodic orbit that attracts all solutions with positive initial conditions except the unstable coexistence equilibrium point.
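A short simulation sketch of the reconstructed limiting system, using the parameter values quoted in the Fig. 4 caption, is given below. The value of S^0 is not stated in that caption and is assumed here, and the right-hand side is the hedged reconstruction of (2.8) from above rather than the paper's exact equations (the original figures were produced in Matlab and XPPAUT).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Parameters from the Fig. 4 caption; S0 is NOT given there and is an assumed value.
a, b, m, D = 2.0, 0.5, 2.0, 1.3788
S0 = 5.0

def q(x):
    return a * x / (b + x)  # Monod predator response

def rhs(t, z):
    # Reconstructed limiting system (2.8): S eliminated via S = S0 - x - y.
    x, y = z
    dx = x * (m * (S0 - x - y) - D) - q(x) * y
    dy = y * (q(x) - D)
    return [dx, dy]

sol = solve_ivp(rhs, (0.0, 200.0), [1.0, 0.5], rtol=1e-8, atol=1e-10)

x_star = b * D / (a - D)   # predator nullcline: q(x*) = D
K = S0 - D / m             # prey-only equilibrium abundance (zero of F)
print(f"x* = {x_star:.3f}, K = {K:.3f}, final state = {sol.y[:, -1]}")
```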
In Appendix B, we extend Theorem 5.5 to a more general class of predator-prey models, and give examples of systems that satisfy the hypotheses so that a β can be found in order to prove global stability of the coexistence equilibrium.
Dynamics of the 3D System
The dynamics of the 2D system (2.8) have been analyzed in great detail. We next justify why studying this system is equivalent to studying the 3D system (2.5), and hence, the original system (2.1).
Any point (x, y) in the 2D system (2.8) corresponds to a point (S 0 − x − y, x, y) in the 3D system (2.5), and so solutions of the 2D system correspond to solutions of the 3D system that lie on the 2D simplex S = {(S, x, y) ∈ R 3 : S, x, y > 0, S + x + y = S 0 }. Thus, the two systems share some of the same properties. The equilibrium points (0, 0), (K, 0) and (x * , y * ) of the 2D system correspond to (S 0 , 0, 0), (S 0 − K, K, 0) and (S 0 − x * − y * , x * , y * ), respectively, for the 3D system. In terms of local stability, the additional eigenvalue of the Jacobian matrix is negative, since solutions of the 3D system converge exponentially to S. Thus, the local stability of the 3D equilibrium points is the same as for the corresponding 2D equilibrium point. Since all periodic orbits of the 3D system must lie on S, the existence and number of periodic orbits of the two systems is the same.
From Theorem 5.5 and standard methods for asymptotically autonomous systems (see Smith and Waltman [37], or using the Butler-McGehee Lemma [5] directly), we obtain the following theorem.
Theorem 6.1. Consider system (2.5). If (x*, y*) is locally stable for the 2D system (2.8), then it is globally stable, and (S^0 − x* − y*, x*, y*) is globally asymptotically stable for the 3D system (2.5). If (x*, y*) is unstable in the 2D system (2.8), then there is a unique periodic orbit that lies on the simplex S + x + y = S^0 and it is orbitally asymptotically stable.
Corollary 6.2. Consider the original system (2.1). If the coexistence equilibrium exists and is stable, then it is globally asymptotically stable, and if it is unstable, then there is a unique periodic orbit that lies on the simplex {(S, x, y) ∈ R^3 : σS, ξx, ηy > 0, σS + ξx + ηy = σS^0}, and it is orbitally asymptotically stable.
Discussion
A system of ODEs modeling predator-prey interactions in a chemostat was analyzed assuming a predator response function of Monod form. It was shown that whenever the coexistence equilibrium is locally asymptotically stable, it is also globally asymptotically stable, and whenever the coexistence equilibrium is unstable, there is a unique, orbitally asymptotically stable periodic orbit.
These results are consistent with the dynamics of the analogous classical predator-prey model with a Monod predator response function, in which the resource, and how the prey grows based on the amount of resource available, is not modelled, but instead the prey is assumed to grow logistically in the absence of the predator population. This classical model has been studied extensively; see, for example, [8,27,28,36]. These authors found, just as for the predator-prey model in the chemostat studied in this paper, that whenever the coexistence equilibrium is locally asymptotically stable, it is globally asymptotically stable. It was also shown in Cheng [7] that when a periodic orbit exists around an unstable coexistence equilibrium in the classical predator-prey model with a Monod functional response, it is unique, and hence any nontrivial periodic orbit is asymptotically stable. However, there was a gap in the proof that was later corrected by Liou and Cheng [28]. Another proof of the uniqueness of the limit cycle for the classical model in the case of the Monod functional response was also given in Kuang and Freedman [27].
That the range of dynamics of the model studied here and of the analogous classical predator-prey model is basically the same is not entirely surprising. For example, after Hastings and Powell [17] showed that a three-species food chain with Monod response functions, in which the population at the lowest trophic level grows logistically in the absence of a predator population, could have chaotic dynamics, Daoussis [10] showed this was also the case for the analogous chemostat food-chain model.
Classical predator-prey models have been shown by Fussmann and Blasius [14] to be sensitive to the mathematical form used to model the predator response function, even when the forms have the same qualitative shape. They considered three mathematical forms: the Monod form, the Ivlev form [23], and the hyperbolic tangent form [24], all of which are monotone increasing and concave down and are nearly indistinguishable when appropriate parameters are chosen (see Fig 5). Functions with such a shape are said to be of Holling Type II form [20]. Fussmann and Blasius provided a similar figure and demonstrated that the qualitative and quantitative dynamics predicted by models with these response functions can be quite different. The model with the three different response function forms was studied in more detail using a bifurcation theory approach in Seo and Wolkowicz [36].
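The specific parameterizations compared in [14] are not reproduced in this extraction; the standard forms of the three response functions named here are as follows (an assumption based on common usage; the cited papers may scale them differently):

```latex
% Standard Holling Type II forms referenced above:
q_{\mathrm{Monod}}(x) = \frac{a x}{b + x},
\qquad
q_{\mathrm{Ivlev}}(x) = a\bigl(1 - e^{-b x}\bigr),
\qquad
q_{\tanh}(x) = a \tanh(b x).
```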
In the case of the classical model with the hyperbolic tangent response function, Seo and Wolkowicz [36] proved that the Hopf bifurcation is always supercritical when it occurs at the local maximum of the prey nullcline, but can be either super- or subcritical when it occurs at the local minimum. They also studied the dynamics in more detail for all three forms of the response functions using a one- and two-parameter bifurcation approach, and found that in the hyperbolic tangent case, two limit cycles surrounding a stable coexistence equilibrium can arise through a saddle-node bifurcation of limit cycles when the Hopf bifurcation at the local minimum is subcritical. Seo and Wolkowicz [35] also considered the classical predator-prey model with a functional response of arctan form and proved that when the coexistence equilibrium is locally asymptotically stable, more than one limit cycle is possible, providing a counterexample to a result in Attili and Mallak [2]. The classical predator-prey model with Ivlev response functions was analyzed by Kooij and Zegeling [25]. They proved that this model has a similar range of dynamics as the model with the Monod response function. The analogous chemostat models with the hyperbolic tangent and the arctan response functions were studied in Eastman [11], and the analogous model with the Ivlev response function was studied in Bolger [4]. It was also shown that in these cases the analogous chemostat models have a similar range of dynamics when compared with their classical predator-prey counterparts.
Assume: (i) there is a positive equilibrium point (x*, y*); (ii) all functions in (A.1) are C^1 in the interior of R^3_+ and F(x) is continuous there; (iii) φ(x) > 0 and ψ(x) > 0 for x > 0; (iv) ρ(y) > 0 and π(y) > 0 for y > 0; (v) F(x) is non-decreasing for 0 < x < x* and for x* < x < K.
Then, system (A.1) has at most one limit cycle in the first quadrant, and, if it exists it is stable.
Appendix B. Extension of Hsu's theorem
Consider the following predator-prey system (B.1), a generalized version of the system that Hsu studied in [21]. Here, the prey nullcline is given by the function γ^(-1)(F(x)), and the interior equilibrium is the unique point (x*, y*) satisfying q(x*) = D and γ(y*) = F(x*). Hsu [21, Theorem 3.3] conjectured that if (x*, y*) is stable and the prey nullcline is concave down, then (x*, y*) is globally stable. Since this conjecture is not true, as demonstrated by the counterexample given by Hofbauer and So [19], we state the following theorem, which was shown to be satisfied in Seo and Wolkowicz [36] for the classical predator-prey model with a hyperbolic tangent response function. The following theorem can also be shown to be satisfied if, for example, ξ(x) = q(x), γ(y) = η(y) = y, F(x) = q(x), and q(x) = a tanh(bx).
Then (x * , y * ) is globally stable provided there exists a β > 0 such that: Remark B.1. If M = 0 then the Lyapunov function in Harrison [16] can be used to show global stability.
Proof. To show that (x*, y*) is globally stable, we use a similar argument to Hsu's. That is, we show that there are no closed orbits in the first quadrant using the Dulac criterion. Define the auxiliary function to be h(x, y) = ξ(x)^(-1) η(y)^(β-1), where β > 0. The resulting divergence Δ can then be written in terms of a function H(x). In the first quadrant, η(y)^(β-1) > 0 and ξ(x) > 0 by assumptions (iii) and (v); hence, Δ will only change sign if H(x) changes sign. In Hsu's proof, he tried to show that H(x) ≤ 0 by choosing an appropriate β.
Since the denominator is positive by assumption (iv), F'(x) has the same sign as d/dx [γ^(-1)(F(x))]. The problem with the β Hsu chose is that it neglects the fact that F'(x) ≥ 0 for x ∈ [0, M] by assumption (vii). By assumptions (ii) and (vii), we can conclude that x* ≥ M. Even though -D + q(x) ≤ 0 on [0, M] by assumption (vi), if β is too small (as the one he chose), then H(x) > 0 there. To guarantee that H(x) ≤ 0, β must be chosen sufficiently large; however, β cannot be too large either. For β chosen in the appropriate range, H(x) ≤ 0. Since H does not change sign, Δ does not change sign. Hence, by the Dulac criterion, it follows that system (B.1) has no nontrivial periodic orbits lying entirely in the first quadrant. Thus, by the Poincaré-Bendixson theorem, (x*, y*) is globally stable. We next include another example of a system where such a β is obtainable under the above assumptions. Consider the classical predator-prey model with a Monod response function. In this notation, this is equivalent to letting ξ(x) = q(x), γ(y) = η(y) = y, F(x) = q(x), and q(x) be of Monod form.
Note that since γ(y) = y, we have γ^(-1)(F(x)) = F(x), and hence the interior equilibrium is given by (x*, F(x*)). We can then equivalently transform the parameters of the Monod response function by a → Dm and b → x*(m - 1), where m > 1. Then q(x) = Dmx / (x*(m - 1) + x). We first show that these functions satisfy the assumptions of Theorem B.1; a quick consistency check is given below.
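As a sanity check (ours, not from the paper), the following computation verifies that the reparametrized Monod function satisfies the equilibrium condition q(x*) = D stated earlier:

```latex
\[
q(x^{*}) \;=\; \frac{D m x^{*}}{x^{*}(m-1) + x^{*}}
\;=\; \frac{D m x^{*}}{m x^{*}}
\;=\; D, \qquad m > 1 .
\]
```

Moreover, with a = Dm and b = x*(m - 1) the function q(x) = ax/(b + x) satisfies q(0) = 0 and is increasing, so it indeed retains the Monod form.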
It is important to note that under assumption (i), the interior equilibrium (x*, y*) lies in the positive quadrant. Moreover, since η(0) = 0 and ξ(0) = 0, the point (0, 0) is also an equilibrium point. Thus, the x and y axes are both nullclines.
To satisfy assumption (vii), we require x* ∈ [M, K]. First assume x* ∈ (M, K]. Since all hypotheses of Theorem B.1 are met, we proceed to find the β described by (B.2). Let β(x) denote the corresponding expression. We can then determine where the maximum and minimum of β(x) occur by finding its critical values.
Taking the first derivative, we see that the sign of β'(x) depends on its numerator. The roots of x² - 2xx* + Mx* = 0 are x± = x* ± √(x*(x* - M)) (see the short derivation below), and both are positive. The following two lemmas will be useful in determining which of these values is the local maximum of β (the other being the local minimum), and whether these values lie in our desired regions [0, M] and [x*, K].
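The roots quoted above follow from the quadratic formula; the short derivation below (ours) also records why both roots are real and positive, which the subsequent lemmas rely on:

```latex
\[
x^{2} - 2x^{*}x + Mx^{*} = 0
\;\Longrightarrow\;
x_{\pm} = x^{*} \pm \sqrt{(x^{*})^{2} - Mx^{*}}
        = x^{*} \pm \sqrt{x^{*}\,(x^{*}-M)} .
\]
\[
\text{The discriminant } x^{*}(x^{*}-M)\ge 0 \text{ since } x^{*}\ge M,
\qquad
x_{+}x_{-} = Mx^{*} > 0,\quad x_{+}+x_{-} = 2x^{*} > 0,
\]
```

so by Vieta's relations both roots are positive.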
Proof. First consider x-. We proceed by contradiction to show that x- ∈ [0, M]. Suppose that x- > M. Then x* - M > √(x*(x* - M)), and squaring both sides gives (x* - M)² > x*(x* - M), i.e. x* - M > x*, which is impossible. Since β'(x) changes sign at x- and x- ∈ [0, M], we can conclude that β- is a local maximum, and hence that it is the maximum of β(x) on [0, M]. Now consider x+. Clearly x+ > x*. Moreover, β(x) has a removable singularity at x = x*, and by checking the sign of β'(x), β(x) is decreasing between x* and x+.
Proof. First assume that x* ≤ K²/(2K - M). Since β(x) is decreasing between x* and x+, if we can show that β(K) ≥ 0, then a local minimum occurs in [x*, K]. Furthermore, since x- is not in this region, this local minimum must be β(x+), and hence x+ ≤ K. Indeed, a direct computation shows β(K) ≥ 0 in this case, so the minimum of β on [x*, K] is β(x+). Now assume that x* > K²/(2K - M). Then β(K) < 0, and hence x+ > K. Furthermore, since β(x) is decreasing between x* and x+, it is also decreasing on [x*, K]. Thus the minimum value it attains on that region is at K, namely β(K). Lemma B.4. For the positive quantities β-, β+ and βK, in any case there exists a positive β between the two relevant quantities when x* ∈ (M, K].
Appendix C. Analysis of the Hopf bifurcation
Computation of (4.7) was done using the computer algebra system Maple [30], as provided in the supplementary material [18]. The following is a summary of the algorithm used, highlighting the main results.
The formula in Marsden and McCracken [31] is localized to where the Hopf bifurcation occurs, and thus we assume that we are near the critical value of our bifurcating parameter, S^0_crit. To use this formula, we first need matrix (4.4) in real Jordan canonical form. That is, we need to find an invertible matrix P so that P^(-1)AP = [α β; -β α], where α ± iβ are the eigenvalues of A.
Lemma C.1. Let the eigenvector for α + iβ be P_Re + iP_Im. Then P = [P_Re P_Im], the matrix whose columns are P_Re and P_Im.
Proof. We have P^(-1)P = P^(-1)[P_Re P_Im] = [P^(-1)P_Re P^(-1)P_Im] = I, so that P^(-1)P_Re = (1, 0)^T and P^(-1)P_Im = (0, 1)^T. Combining this with AP_Re = αP_Re - βP_Im and AP_Im = βP_Re + αP_Im puts P^(-1)AP in the required form. | 9,587.2 | 2020-12-22T00:00:00.000 | [
"Mathematics",
"Environmental Science"
] |
A comprehensive deep learning method for empirical spectral prediction and its quantitative validation of nano-structured dimers
Nanophotonics exploits the best of photonics and nanotechnology, which has transformed optics in recent years by allowing subwavelength structures to enhance light-matter interactions. Despite these breakthroughs, the design, fabrication, and characterization of such exotic devices have relied on iterative processes which are often computationally costly, memory-intensive, and time-consuming. In contrast, deep learning approaches have recently shown excellent performance as practical computational tools, providing an alternate avenue for speeding up such nanophotonics simulations. This study presents a DNN framework for transmission, reflection, and absorption spectra predictions by grasping the hidden correlation between the independent nanostructure properties and their corresponding optical responses. The proposed DNN framework is shown to require a sufficient amount of training data to achieve an accurate approximation of the optical performance derived from computational models. The fully trained framework can outperform a traditional EM solution based on the COMSOL Multiphysics approach in terms of computational cost by three orders of magnitude. Furthermore, employing deep learning methodologies, the proposed DNN framework makes an effort to optimise design elements that influence the geometrical dimensions of the nanostructure, offering insight into universal transmission, reflection, and absorption spectra predictions at the nanoscale. This paradigm improves the viability of complicated nanostructure design and analysis, and it has many potential applications involving exotic light-matter interactions between nanostructures and electromagnetic fields. In terms of computational times, the designed algorithm is more than 700 times faster compared to the conventional FEM method (when manual meshing is used). Hence, this approach paves the way for fast yet universal methods for the characterization and analysis of the optical response of nanophotonic systems.
Method
Deep learning neural network (DNN) paradigm and its synchronization with nanotechnology. This work is organised in two phases. In the first, we developed an FEM-based frequency domain approach [50][51][52][53][54][55], which was utilized to obtain the surface plasmon resonance confinement around the gold nanostructures. Figure 1 shows an overview of the model description, where gold elliptical and circular dimers have been designed. The dielectric constant of gold has been adopted from Johnson and Christy 56. Due to the presence of free electrons in the metal, the dielectric constant of the metallic surface was estimated using the Drude free-electron model, with relaxation time τ = 9.3 ± 0.9 × 10⁻¹⁵ s; for metallic structures at near-infrared frequencies, ω >> 1/τ 56.
Figure 1. Schematic of the extended unit cell elliptical nano antennas and its optical response in terms of transmission and reflection spectra.
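As an illustration of the Drude free-electron estimate described above, here is a minimal Python sketch. The relaxation time is the paper's quoted value; the plasma frequency is a typical literature figure for gold and is our assumption, not a parameter taken from this paper:

```python
import numpy as np

tau = 9.3e-15        # relaxation time of gold, s (value quoted in the text)
omega_p = 1.37e16    # assumed plasma frequency of gold, rad/s (literature value)

def drude_epsilon(omega):
    """Drude dielectric function eps(w) = 1 - wp^2 / (w^2 + i*w/tau)."""
    return 1.0 - omega_p**2 / (omega**2 + 1j * omega / tau)

wavelength = 800e-9                    # near-infrared wavelength, m
omega = 2 * np.pi * 3e8 / wavelength   # angular frequency (c = 3e8 m/s)

print(drude_epsilon(omega))            # complex permittivity
print(1 - (omega_p / omega)**2)        # high-frequency limit for omega >> 1/tau
```

At 800 nm, ω ≈ 2.4 × 10¹⁵ rad/s while 1/τ ≈ 1.1 × 10¹⁴ s⁻¹, so the ω >> 1/τ condition mentioned in the text holds and the two printed values are close.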
Results
The analyte molecules are typically attached to the exterior face of the nanostructures, either with or without tethering particles. This generates a small perturbation of the refractive index (RI) of the surrounding dielectric, resulting in a measurable shift in the resonance frequencies or amplitude, which may be evaluated instantaneously using the transmission, reflectance and absorption spectra that can be predicted with the help of the designed DNN configuration, as shown in Fig. 2.
Here Fig. 2a shows the given geometrical inputs (paired elliptical) to the DNN. Surface lattice resonances (SLRs) are made up of gold nanostructures organised in a regular pattern. They can sustain resonances that are formed via LSPR coupling and have much finer spectral characteristics 60. A gold nanostructure on a Si substrate supports plasmonic resonances in each unit cell of the structure. The geometric properties of the nanoparticles 61, which may be mapped to the major (a) and minor (b) axes of the elliptical dimer, the separation distance (g) and the height (h) of the nanostructures, influence the wavelengths at which SLRs are triggered. Variation in these parameters can change the optical spectral characteristics. Thus, the major (a) and minor (b) axes and the separation gap (g) are adopted as input parameters, and the corresponding outputs are discrete spectral datapoints in the visible-infrared region. Figure 2b shows the architecture of the developed neural network when the input parameters are used for predicting the spectral response of the corresponding nanostructures. At the start of training of the developed DNN, the learning algorithm develops an estimated function that predicts output values. After adequate training, this model is expected to produce output spectral responses for any new input geometrical dimensions. This learning process uses the mean squared error (MSE) to demonstrate the efficacy of the proposed DNN by comparing its anticipated spectral output with the actual spectral values. Several widely used machine learning packages were evaluated to develop and train this DNN, including pandas 62 for data preprocessing, Scikit-learn 63, and PyTorch, released 67 in 2016 and rooted in the scripting language Lua 68, which is similar to NumPy with GPU integration. This is a crucial choice since it assists in the acceleration of numerical computations, which may strengthen the performance of the DNN up to 60 times. It has a concise and easy-to-read application programming interface (API), making it simple to integrate with Python. FEM solvers were used in the back-end for dataset collection, which is needed to train the DNN, and PyTorch and Scikit-learn in the front-end due to their compelling architectural style, which facilitates rapid and lean development, even though PyTorch employs several backends instead of a single backend for GPUs and CPUs as well as other operational aspects. While designing this algorithm, the Adam optimizer has been used because it is widely assumed that Adam converges faster than vanilla stochastic gradient descent (SGD) and SGD with momentum 69. For this reason, we selected the Adam optimizer, as it works best for nonlinear datasets and has the capability to update the learning rate for each parameter, because it adapts first-order gradients with minimal memory requirements 70. The weights and bias values of the designed DNN are optimized and updated iteratively by minimizing the MSE with the help of Adam 71. Hence, the designed algorithm is suitable to analyse, predict and discern the optical response of the paired nanostructures.
Architectural framework of DNN with empirical attestation. DNNs have indeed been established as a powerful tool for deciphering the correlation between the architecture and composition of a re-configurable nanophotonic structure and its functionality. This involves the construction of computer algorithms that aid in the extraction of motifs and the optimization of complicated information with a large number of variables. Forward ANNs are remarkable in that they may leverage numerous layers and neurons to operate efficiently. This neural network was built on a computer with 8 GB RAM and a 500 GB hard drive, running the Windows operating system (version 20H2, Semi-Annual Channel). Throughout the calculation, the virtual environment Spyder Python (version 5.1.5) was used, installed in Anaconda (version 1.7.2). This DNN was arranged in three levels, as shown in Fig. 2b, comprising input, hidden and output layers. The input parameters that must be interpreted are delivered to the fully connected input layers. Prediction and categorization are among the tasks that the output layer performs. A layer-by-layer assembly of neurons makes up a neural network. Every neuron in a single layer is interconnected to the neurons in the following layer via a weighted connection. The strength of the connection between the j-th neuron in one layer and the i-th neuron in the next is represented by the weight w_ij. Each neuron's weighted inputs are linearly aggregated (summed) and transmitted through an activation function to produce the output of the neuron. Finally, the anticipated output data may be compared to the random test data points. The designed DNN can be visualised as a closed box that accepts x inputs and generates y outputs 72 (see Fig. 2b). As shown in Fig. 2, an optimal DNN with optimized hidden layers = 5 and neurons = 50 in each layer was implemented throughout this investigation. Every neuron inside each layer was interconnected to the neurons in the subsequent layer, implying that these hidden layers were fully connected. 20% of the datapoints were randomly drawn from the training datapoints and supplied as evaluation datapoints to provide an impartial evaluation while tweaking the DNN hyperparameters (weights and biases).
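To make the described setup concrete, below is a minimal PyTorch sketch of such a network: 5 hidden layers of 50 ReLU neurons, Adam minimizing the MSE, and 20% of the training points held out for evaluation. The number of discrete spectral output points and the random tensors are our assumptions, standing in for the FEM-generated dataset of 10,500 parameter combinations:

```python
import torch
import torch.nn as nn

N_SPECTRAL_POINTS = 200          # assumed number of discrete spectral outputs

# 3 geometric inputs (a, b, g) -> 5 hidden layers x 50 ReLU neurons -> spectrum
layers = [nn.Linear(3, 50), nn.ReLU()]
for _ in range(4):               # four more hidden layers (five in total)
    layers += [nn.Linear(50, 50), nn.ReLU()]
layers += [nn.Linear(50, N_SPECTRAL_POINTS)]
model = nn.Sequential(*layers)

loss_fn = nn.MSELoss()           # MSE, the paper's validation criterion
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Dummy tensors stand in for the 10,500-row FEM dataset (normalized values).
X = torch.rand(10500, 3)
Y = torch.rand(10500, N_SPECTRAL_POINTS)

# 80/20 split: 20% of the training points serve as evaluation data.
n_train = int(0.8 * len(X))
X_train, X_val = X[:n_train], X[n_train:]
Y_train, Y_val = Y[:n_train], Y[n_train:]

for epoch in range(5000):        # epoch = 5000, as used in the study
    optimizer.zero_grad()
    loss = loss_fn(model(X_train), Y_train)
    loss.backward()              # backpropagate the MSE
    optimizer.step()             # Adam update of weights and biases

with torch.no_grad():
    val_mse = loss_fn(model(X_val), Y_val).item()
```

This is a sketch under the stated assumptions, not the authors' released code; in practice the full-batch loop above would usually be replaced by mini-batch training.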
Discussion
In this work, the geometrical parameters (a, b, d and g) of the nanostructure were varied from 10 nm to 130 nm; for simplicity, h was fixed at 40 nm. The granularity of the gathered dataset was chosen to minimise computing costs while still allowing the DNN to be trained properly. The complete datasets throughout this investigation comprise 10,500 parameter combinations and their accompanying spectra. We exclusively selected structural factors that have a considerable influence on the spectral properties and cover all conceivable spectral variants. Indeed, with this selected quantity of training data, the DNN can be trained to accurately model and forecast millions of spectral properties of the plasmonic structures in the parametric range. The error is quantified as MSE = (1/n) Σᵢ (Zᵃᵢ - Zᵖᵢ)², where n is the total number of datasets utilised throughout the training process, Zᵃᵢ is the original data point calculated using COMSOL Multiphysics, and Zᵖᵢ is the prediction over the actual dataset. The MSE of the predicted datapoints from the developed network compared to the targeted datapoints is regarded as the most essential effectiveness assessment criterion; it is also used as the validation criterion of the DNN. The comparison of the MSEs for each number of hidden layers is shown in Fig. 3. For selecting the best hyper-parameters in terms of performance of the DNN, the hidden layers were optimized in the first stage while the number of epochs and neurons were fixed at 5000 and 50. The initial predictions were made for the given input geometrical dimensions a = 70 nm, b = 10 nm, and g = 10 nm, and the corresponding predicted transmission, reflection and absorption spectra are shown in Fig. 4 for hidden layers = 1.
In Fig. 4a the black curve shows the original transmission spectrum (calculated by COMSOL Multiphysics) along with the predicted transmission spectrum shown by the red curve, for a = 70 nm, b = 10 nm, g = 10 nm and h = 40 nm. Similarly, the predicted reflection and absorption spectra are shown in Fig. 4b,c, respectively, where the original spectral values are shown by the black curves, while the predicted values are represented by the red curves. Here, it can be observed (shown in the supplementary materials) that when hidden layers = 1 and neurons = 50, the MSE was 0.4 at epoch = 1 and rapidly reduced until epoch = 900; it stabilised after epoch = 1000. Hence, epoch = 5000 was used to make the initial predictions. Indeed, at a lower MSE, the predicted spectral values are closer to their actual values. For this reason, the remaining hyper-parameters were tweaked to produce more accurate predictions of the actual spectral responses. More information on hyper-parameter tweaking can be found in Sect. II of the supporting material.
In Fig. 3 it was shown that, as the number of hidden layers is increased, the predicted results become better. Finally, the appropriate DNN framework was designed using suitable hyper-parameter selection based on the MSE calculated at every dataset training. In the final algorithm, hidden layers = 5, epoch = 5000 and neurons = 50 were adopted. The MSE had the value 0.20 at epoch = 1, reduced to 0.05 at epoch = 200, and then stabilised, reaching nearly 0 at epoch = 5000. Figure 5 illustrates that as the number of hidden layers is increased to 5, the outcomes from the improved DNN clearly show that, as the MSE is reduced, the predicted transmission, reflection and absorption responses approach the original spectral values (shown by the red and black curves, respectively) for the specified geometrical dimensions a = 70 nm, b = 10 nm, g = 10 nm and h = 40 nm. Altogether, the findings suggest that the DNN can accurately predict spectra for billions of distinct nanostructures in the a, b, g and h ranges using an adequate amount of simulation data. They all predict the same resonance properties as the FEM simulations (using COMSOL Multiphysics), demonstrating that the DNN can be well trained for electromagnetic modelling. As a result, it is reasonable to conclude that expanding the training dataset will improve the performance and accuracy of the DNN. The performance of the designed neural network has also been evaluated in terms of computational cost. Generating large training datasets for a DNN demands a significant investment of computational effort. This emphasises the critical difficulty of automatically generating extra data points, particularly for regions that are not included in the present data collection. Aside from reducing numerical efforts, this would also help to cut physical labour by reducing the involvement of the researchers in the data curation chain. However, the high computational cost of producing such datasets typically hinders database expansion; as a result, the resulting DNN can be unreliable owing to over-fitting and other difficulties. Hence, the computational cost has been compared at different epoch counts in Fig. 6a. At every epoch, weights and parameters were stored in the computing machine after the DNN training was finished, and predictions were made for unseen inputs with the aid of the weights previously saved at epoch = 5000, as also represented in Fig. 6. With an increasing number of epochs the total computational cost increases, whereas the cost per epoch reduces. As a consequence, it can be inferred that at epoch = 5000, although the computational cost is 236 seconds, which is rather expensive compared to smaller epoch counts, the performance of the DNN is improved. This performance is also far superior to typical FEM solvers, which may take up to 8100 s, 86,400 s and 172,800 s to compute the optical spectral responses of a single dimer using coarse through extremely fine and manual meshes. We cannot avoid the effort and computational cost utilised to collect the vast dataset using EM solvers; however, it is a one-time process. Once the model is fully trained, it can quickly predict the solutions for any unseen values compared to traditional EM solvers. Next, Fig. 6b also shows the computational cost of the DNN when the number of hidden layers is increased from 1 to 5.
Here, it can be seen that at hidden layers = 1 the computational load was comparatively small, approximately 75 s, but as shown in Fig. 4 the spectral performance was not acceptable; hence the DNN training was continued for a larger number of hidden layers. At hidden layers = 2, 3, 4 and 5 the computational cost increases to 100 s, 170 s, 220 s and 236 s, respectively, when a fixed 5000 epochs were used. However, it should be noted, as shown in Fig. 3, that for a higher number of hidden layers a smaller epoch count can be satisfactory. Additionally, the corresponding improvement in MSE values was also presented in Fig. 3, from which it is clear that as the number of hidden layers is increased the MSE values decrease, which suggests the predictions are getting closer to the actual spectral values. Hence, epochs = 5000 was selected by the user once the MSE had converged to a suitable threshold. After modifying the model to obtain a stable MSE value, the necessary output datapoints were produced for additional input datapoints that were not supplied during the training operation. Next, the effect of the number of neurons for fixed hidden layers = 5 and fixed epoch = 5000 was studied, as shown in Fig. 7. A neuron assesses a set of weighted inputs, applies an activation function, and produces an output. An input to a neuron might be either a feature from the training set or an output from a neuron in the previous layer. Weights are assigned to inputs as they travel through synapses on their route to the neuron. The neuron then applies an activation function (ReLU in this case) to the aggregate of synaptic weights from each arriving synapse and sends the result to the neurons of the following layer. Hence, the ReLU implementation is the most suitable choice for this network.
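The per-neuron computation just described amounts to a weighted sum plus a bias passed through ReLU. A tiny NumPy sketch with illustrative, hypothetical weights (not values from the paper):

```python
import numpy as np

def relu(z):
    """Rectified linear unit: max(0, z)."""
    return np.maximum(0.0, z)

x = np.array([70.0, 10.0, 10.0])   # hypothetical inputs (a, b, g in nm)
w = np.array([0.02, -0.05, 0.01])  # illustrative synaptic weights
b = 0.1                            # illustrative bias

output = relu(w @ x + b)           # value passed on to the next layer
print(output)
```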
Substantiation of in-house developed DNN for concealed nanostructures
Finally, after stabilizing the developed DNN with the help of all possible hyper-parameters, we have demonstrated in this paper how deep learning and dynamic challenges are interconnected, providing the groundwork for future research at the intersection of these problems and data science. In particular, we suggest novel topologies for the DNN that increase forward propagation stability. Using derivative-based learning regularisation, the well-posedness of the learning task was increased. Moreover, we presented a multi-level technique for establishing hyper-parameters, which makes DNN training easier, and introduced new regularisation techniques that rely on our continuous conceptualization of the challenge to increase generalisation accuracy and consistency and to streamline DNN training. After designing a stable DNN, we used this algorithm for predicting the spectral response of the paired circular nanostructure with d = 80 nm, g = 20 nm and h = 40 nm. Figure 9a shows the spectral response of a paired circular nanodisk, where the red curve shows the predicted spectral values and the actual spectral values calculated by FEM are shown by the black curve. Similarly, Fig. 9b,c show the predicted reflection and absorption spectra (red curves) and the actual reflection and absorption values (black curves). These results show that when hidden layers = 5, neurons = 50 and epoch = 5000 are used to predict the transmission, reflection and absorption spectra, the predictions are close to the actual spectra.
Evaluation of in-house developed DNN for imperceptible geometric dimensions (beyond the training dataset).
In this section, we discuss the performance of the designed DNN when it predicts spectral values outside the range of the training dataset. The geometric parameters were selected at random from the test sets, but outside of the training dataset, and verified using the commercial software for the plasmonic nanostructures in order to examine the performance of the transmission and reflection predictions at arbitrary wavelengths and to visualize the outcomes. During the entire training period, we used a dataset with major axis (a) from 10 to 130 nm at 10 nm intervals. Hence, in this section the spectra were predicted for major axis (a) = 155 nm, minor axis (b) = 55 nm and separation gap (g) = 35 nm; it should be noted that these values were not present in the training set. It is worth noting that Fig. 10 shows the spectral response of the optimised DNN, with prediction accuracy and reliability of more than 90% when approximately 50,000 datasets were used for training, demonstrating the impact of a test set outside the range of the training data. Here, the black curve shows the original spectral values computed using COMSOL Multiphysics and the red curves show the spectral values predicted by the in-house developed neural network. A significant facilitator of cutting-edge nanotechnology research would be the capability to swiftly extract a required optical response from the geometrical parameters of a plasmonic nanostructure using an artificial neural network. One can envision a variety of scenarios in which such data are essential to the design investigation of any nanostructure. The highlight of this DNN is its capacity to address multiple targeted resonance spectra for various paired geometrical dimensions, and it emphasises that this technique may be applied to other sensing applications in biology, chemistry, and materials science. Hence, it can be said that spectrum prediction from nanostructural recognition has a high degree of employability, indicating that this technique might indeed be useful in a wide range of spectral and non-spectral aspects. This deep learning protocol has the potential to revolutionize real-time field applications in a variety of spectroscopic disciplines.
Conclusion
In conclusion, this work demonstrates the use of deep learning to correlate spectroscopic knowledge of a paired nanostructure in local environments. The presented DNN algorithm can estimate spectral values of the designed paired nanostructures at more than 700 times lower computing cost than the traditional FEM solver (when manual meshing is used) while providing a similar degree of precision. The DNN has been tested rigorously and has shown excellent predictions using a one-time training process. Hidden layers = 5, neurons = 50 and epoch = 5000 were employed across the neural network to provide swift convergence and good precision in estimating spectral values for randomized input geometrical dimensions of the paired nanostructures. These values can depend on the type of problem; however, as the results may not be known beforehand, for a real application a conservative choice of these DNN parameters can be used. In this work, we have also shown the performance of the associated hyper-parameters of the designed DNN, explained in terms of MSEs plotted with respect to hidden layers, epochs and neurons. This research also offers a comparison between the traditional FEM solver and the in-house developed DNN in terms of computing time, which is more than 700 times faster than direct FEM simulations (when manual mesh size is used). Finally, the performance of the proposed DNN model was proven for random input parameters inside and outside the training dataset, such as the paired circular dimer with d = 70 nm and g = 20 nm and the paired elliptical dimer with a = 155 nm, b = 55 nm and g = 35 nm, respectively, and the corresponding spectral values were correctly predicted. The detection of structural variations/fluctuations in chemical reactions, automatic identification of interstellar molecules, and real-time recognition of particles in biomedical diagnosis are just a few applications where deep learning can be exploited. Thus, we conclude that the consolidation of nanotechnology and artificial intelligence will open the direction for many new technological advancements across comprehensive scientific disciplines.
Data availability
All data generated or analysed during this study are included in the supplementary information in graphical form. The raw datasets and computational models used and/or analysed during the current study are available from the corresponding author on reasonable request. | 5,299 | 2023-01-20T00:00:00.000 | [
"Physics",
"Engineering",
"Computer Science",
"Materials Science"
] |
Finite element analysis of free vibration of beams with composite coats
In modern civil engineering structures, such as buildings, steel framed structures and bridges, the use of coated laminated composite beams has increased rapidly in recent years. Many studies exist on the dynamic behavior of isotropic beams using analytical, experimental and numerical methods (Biggs [1]; Clough [2]; Khadri et al. [3, 4]). However, the number of studies related to the free vibration of beams with composite coats is relatively small. Hamada et al. [5] studied the variations in the natural frequencies and damping properties of laminated composite coated beams, utilizing a numerical technique to compute the eigenparameters of coated laminated composite beams (sandwich structures). Kiral et al. [6] studied the dynamic behavior of a composite beam subjected to a vertical moving force using a commercial finite element package. Using an analytical technique, Zibdeh et al. [7] studied the vibration of a simply supported laminated composite coated beam traversed by a random moving load. Tekili et al. [8] presented an analytical analysis of the free vibration of simply supported laminated composite coated beams. Kadivar et al. [9] developed one-dimensional finite elements based on classical lamination theory, first-order shear deformation theory, and higher-order shear deformation theory to study the dynamic response of an unsymmetric composite laminated orthotropic beam. Mohebpour et al. [10] presented free vibration and moving oscillator analyses of isotropic and composite laminated beams using the finite element method. There are numerous publications on composite structures which employed the experimental method. In this study, the free vibration of beams strengthened by composite coats has been investigated by use of the finite element method (FEM). For this purpose, a computer code was developed in MATLAB to perform the finite element vibration analysis. A parametric analysis is conducted to study the effects of the variation of different parameters, such as the thickness of the faces, the core thickness, the fiber orientation, and the type of isotropic core material (steel or foam), on the natural frequencies of the beam, examined with different boundary conditions imposed on the beam. The beam frequencies extracted in this regard are compared with those obtained analytically.
Introduction
In modern civil engineering structures, such as buildings, steel framed structures and bridges, the use of coated laminated composite beams has increased rapidly in recent years. Many studies exist on the dynamic behavior of isotropic beams using analytical, experimental and numerical methods (Biggs [1]; Clough [2]; Khadri et al. [3,4]). However, the number of studies related to the free vibration of beams with composite coats is relatively small. Hamada et al. [5] studied the variations in the natural frequencies and damping properties of laminated composite coated beams, utilizing a numerical technique to compute the eigenparameters of coated laminated composite beams (sandwich structures). Kiral et al. [6] studied the dynamic behavior of a composite beam subjected to a vertical moving force using a commercial finite element package. Using an analytical technique, Zibdeh et al. [7] studied the vibration of a simply supported laminated composite coated beam traversed by a random moving load. Tekili et al. [8] presented an analytical analysis of the free vibration of simply supported laminated composite coated beams. Kadivar et al. [9] developed one-dimensional finite elements based on classical lamination theory, first-order shear deformation theory, and higher-order shear deformation theory to study the dynamic response of an unsymmetric composite laminated orthotropic beam. Mohebpour et al. [10] presented free vibration and moving oscillator analyses of isotropic and composite laminated beams using the finite element method. There are numerous publications on composite structures which employed the experimental method. In this study, the free vibration of beams strengthened by composite coats has been investigated by use of the finite element method (FEM). For this purpose, a computer code was developed in MATLAB to perform the finite element vibration analysis. A parametric analysis is conducted to study the effects of the variation of different parameters, such as the thickness of the faces, the core thickness, the fiber orientation, and the type of isotropic core material (steel or foam), on the natural frequencies of the beam, examined with different boundary conditions imposed on the beam. The beam frequencies extracted in this regard are compared with those obtained analytically.
Theoretical formulation
A laminated composite coated beam and its physical dimensions are shown in Fig. 1. The core is made from an isotropic material (steel or foam), where L, b, and 2H are the length, the width and the thickness of the beam, respectively. The top and bottom laminae are made from composite material (glass/epoxy) with thickness (H - h), as shown in Fig. 1.
Fig. 1 Geometry of a laminated composite coated beam
In the case of pure bending of a symmetric laminate beam, the constitutive equation relates the moments to the curvatures [11], where M_x, M_y, and M_xy are the bending and twisting moments, and κ_x, κ_y, and κ_xy are the curvatures of the plate, defined in terms of the rotations, and the stiffness parameters D_ij are built from Q'_ij, the reduced stiffness constants of a unidirectional layer. The beam theory makes the assumption that, in the case of bending along the x-direction, the bending and twisting moments M_y and M_xy are zero. Eq. (1) thus leads to a single moment-curvature relation in terms of the displacement w and an effective stiffness parameter. The reduced stiffness constant of a unidirectional layer off its material directions is obtained in terms of θ, the angle between the principal laminate direction and the axis of the beam. The elastic constants Q_ij in the principal material coordinate system are expressed in terms of the engineering parameters E_11, G_12 and υ_12 of the k-th lamina. The equivalent mass per unit length of the laminated composite beam is expressed in terms of ρ_c and ρ_f, the densities of the core and faces of the beam, respectively. Lastly, the beam theory makes the additional assumption that the deflection is a function of x only: w = w(x, t). So, the mode shape of the beam depends only on the coordinate x. In the framework of the beam theory, the fundamental equations of the laminate are then simplified, with q the pressure load applied to the beam. For free vibration analysis (q = 0), the relevant equation becomes homogeneous.
It may be noted here that for a uniform composite beam, and for each uniform segment of an externally tapered composite beam, bD_11 is constant.
Finite element formulation
The approach of separation of variables is applied to w(x, t), which can be expressed as the product of two functions, one in the spatial coordinate x and the other in time t, where w(x, t) represents the solution of the governing differential equation at hand. The displacement components of a beam element shown in Fig. 2 can be expressed in the form w = Σᵢ Nᵢuᵢ, where {u} = {v_i φ_i v_j φ_j}ᵀ is the nodal displacement vector for the element, with v and φ the transverse displacement and slope at the nodes.
Fig. 2 Beam element
The Nᵢ (i = 1, ..., 4) are the shape functions of the beam element, which can be obtained as follows [2]: with ξ = x/l, N₁ = 1 - 3ξ² + 2ξ³, N₂ = l(ξ - 2ξ² + ξ³), N₃ = 3ξ² - 2ξ³, N₄ = l(ξ³ - ξ²), where l is the beam element length and x is the local coordinate of the beam element. In the finite element formulation an integral statement is established to develop algebraic relations. The Galerkin method leads to a weighted integral statement. Substituting Eq. (11) into Eq. (14) and integrating the weighted integral statement by parts twice, one obtains the element stiffness matrix k⁽ᵉ⁾ and the element mass matrix m⁽ᵉ⁾. The overall mass and stiffness matrices are obtained by assembling the element matrices, where n is the total number of discretized elements. The equation of undamped free vibration of beams with composite coats may be expressed as [M]{ẅ} + [K]{w} = 0. If the system is vibrating in a normal mode, we obtain the eigenvalue problem ([K] - ωⱼ²[M]){φⱼ} = 0, where ωⱼ is the j-th natural frequency and {φⱼ} is the corresponding vector of modal displacements. Eq. (22) has a nontrivial solution if and only if its determinant is zero, i.e. det([K] - ω²[M]) = 0. The roots of Eq. (23) are the characteristic values, which are equal to the squares of the natural frequencies.
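The assembly and eigenvalue steps described above can be sketched as follows. The authors implemented this in MATLAB; here is a minimal Python/NumPy version using the standard Euler-Bernoulli element stiffness and consistent mass matrices, with the equivalent bending stiffness bD_11 and the equivalent mass per unit length ρ_s taken as placeholder values rather than the paper's actual material data:

```python
import numpy as np
from scipy.linalg import eigh

L, ne = 0.5, 100            # beam length (m), number of elements (paper uses ne = 100)
EI, rho_s = 50.0, 0.5       # assumed b*D11 (N m^2) and mass per unit length (kg/m)
l = L / ne                  # element length

# Standard Euler-Bernoulli element stiffness and consistent mass matrices,
# for nodal DOFs (v_i, phi_i, v_j, phi_j).
k_e = (EI / l**3) * np.array([[ 12,    6*l,   -12,    6*l],
                              [6*l, 4*l**2,  -6*l, 2*l**2],
                              [-12,   -6*l,    12,   -6*l],
                              [6*l, 2*l**2,  -6*l, 4*l**2]])
m_e = (rho_s * l / 420) * np.array([[ 156,   22*l,    54,  -13*l],
                                    [22*l, 4*l**2,  13*l, -3*l**2],
                                    [  54,   13*l,   156,  -22*l],
                                    [-13*l, -3*l**2, -22*l, 4*l**2]])

ndof = 2 * (ne + 1)
K = np.zeros((ndof, ndof))
M = np.zeros((ndof, ndof))
for e in range(ne):                         # assemble the overall matrices
    dofs = slice(2 * e, 2 * e + 4)
    K[dofs, dofs] += k_e
    M[dofs, dofs] += m_e

# Simply supported: suppress the transverse displacement at both ends.
free = [i for i in range(ndof) if i not in (0, ndof - 2)]
K, M = K[np.ix_(free, free)], M[np.ix_(free, free)]

evals = eigh(K, M, eigvals_only=True)       # generalized eigenproblem (Eq. 22 form)
omega = np.sqrt(evals[:5])                  # first five natural frequencies (rad/s)

# Analytical simply supported values, omega_n = (n*pi/L)^2 * sqrt(EI/rho_s).
analytical = (np.arange(1, 6) * np.pi / L)**2 * np.sqrt(EI / rho_s)
print(omega)
print(analytical)
```

With ne = 100 the lowest numerical frequencies agree with the analytical values to several digits, mirroring the convergence behaviour reported in the validation tests below.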
Numerical results
A computer code was developed in MATLAB in order to calculate the natural frequencies and the modes of natural vibration of an undamped beam with composite coats. The material and geometrical properties and physical dimensions of the beam are the same as in [7]. The beam has length L = 500 mm, width b = 25 mm and thickness H = 4 mm. Table 1 shows the material properties for the face and core of the beam models used in the study. The core material is steel for Model I and foam (Divinycell H200) for Model II. The faces are made from glass/epoxy composite material in both models. For validation, the natural frequencies were calculated by the analytical and numerical methods for a simply supported sandwich beam. The natural frequencies of the simply supported sandwich beam are expressed analytically as ω_n = (nπ/L)²√(bD₁₁/ρ_s) [8,11]. In the first validation test, the beam is considered with h/H = 0 and θ = 0° for the first ten modes. From Fig. 3 one observes that the natural frequencies computed by the finite element method agree generally well with the analytical ones (Eq. (24)), with considerable deviation for a number of finite elements ne = 10 and only slight deviation for ne = 50.
Fig. 3 Comparison between the FE results and the analytical solution for the first ten modes
For the second verification (Fig. 4), Model I is considered with h/H varying from 0 to 1, θ = 0° and mode 1. Good agreement was obtained for ne = 100. It can also be verified that as the number of elements increases, the numerical results converge to the exact solutions. Thus, the number of finite elements used in the vibration analysis is ne = 100.
Fig. 4 Comparison between the FE results and the analytical solution for the first mode with different thickness ratios
The natural frequencies corresponding to modes 1, 5 and 10 are plotted in Figs. 5, a and 5, b for the case of the simply supported laminated composite coated beam, for the two models with θ = 0°. As can be seen from these figures, the natural frequencies are affected by the thickness ratio for the high modes. For mode 1, however, the frequency curve is almost horizontal; it remains nearly independent of the thickness ratio compared to the higher modes. According to Fig. 5, a, we note that the natural frequencies of the full steel beam are almost identical to those of the full composite beam for the lower frequencies, while a small difference is found for the high frequencies. In contrast, a large difference is found between the natural frequencies of the full steel beam and the full foam beam (Fig. 5, b), due to the difference in stiffness of the two materials. For the following analysis, we assume that the fiber angle of the composite layer varies from 0° to 90° with an increment of 30°. Figs. 6, a and 6, b show the variation of the first natural frequency of the sandwich beam versus the thickness ratio with various fiber orientations. A linear relation is observed between frequency and thickness ratio for Model I (from h/H = 0.5 to 1.0) and for Model II (from h/H = 0.5 to 0.7). The frequency increases with the thickness ratio in these linear domains. The maximum frequency is reached for a Model I beam with h/H = 0.2 and θ = 0° (Fig. 6, a), while for the Model II sandwich beam the maximum value of the frequency is obtained with h/H = 0.7 and θ = 0° (Fig. 6, b). The first natural frequency of the sandwich beam for two thickness ratios (h/H = 0.2 and 0.7) is shown versus fiber orientation angle for Models I and II in Figs. 7, a and 7, b, respectively. For Model I (Fig. 7, a) with a strengthening of the order of 30% (h/H = 0.7), the effect of fiber orientation on the natural frequency is low; this is due to the domination of the heavy core (steel), which is not the case in Model II (Fig. 7, b), where the composite layer dominates over the central layer (foam). For the sandwich beam with thickness ratio h/H = 0.2, the effect of fiber orientation on the natural frequency is larger, for both Models I and II. To investigate the influence of the boundary conditions on the first natural frequencies of the sandwich beam, four different boundary conditions were imposed on the beam (Fig. 8): simply-supported-simply-supported (S-S), clamped-clamped (C-C), clamped-free (C-F), and clamped-simply-supported (C-S). The first natural frequencies of the beam with composite coating (h/H = 0.25 and 0.75) are calculated and presented in Table 2 for both Models I and II with different fiber orientations. For both models, the frequency of the sandwich beam with fiber orientation θ = 0° is relatively high compared with the other cases. This is explained by the fact that fiber orientation at 0° (the x-direction) provides the maximum rigidity of the sandwich beam. Moreover, the maximum natural frequency value is reached for the clamped-clamped (C-C) boundary conditions, as expected. By comparing the natural frequencies of the beam with composite coating at h/H = 0.25 and 0.75, one can conclude that the maximum frequency, 201.4 Hz, is that of the sandwich beam with thickness ratio h/H = 0.75. The first five mode shapes of the beam without coating are presented in Figs. 9 and 10. The first mode of Models I and II of the simply supported beam with and without composite coating (fiber angle θ = 0°), with different thicknesses of the glass/epoxy composite layer, is shown in Fig. 11.
For Model I (Fig. 11, a), when the thickness ratio increases (the thickness of the composite layer decreases), the stiffness of the sandwich increases and thus the amplitude decreases. That is due to the fact that in Model I the stiffness of the core layer is higher than that of the face layer. However, for Model II (Fig. 11, b), the opposite effect is found, because the rigidity of the sandwich decreases with increasing thickness ratio. As seen from the results, it is clear that the natural frequency and mode shapes of the beam with a composite coating layer can be controlled by choosing the proper fiber orientation, the laminate thickness and the boundary conditions imposed on the sandwich beam.
Conclusions
In the present study, the free vibration of beams with composite coats has been investigated numerically by the finite element method and verified analytically. The following conclusions can be drawn: (1) the natural frequencies obtained by the FEM are in good agreement with those of the analytical method; (2) the natural frequency generally increases with the thickness ratio; (3) when the stiffness of the face layers is higher than that of the core layer, a linear relation is observed between frequency and thickness ratio, and the amplitude of the mode increases when the thickness ratio increases; (4) the frequencies are larger with a fiber orientation of 0°, for any thickness of the reinforcing layer, which is explained by the fact that the fibers are aligned with the axis of the sandwich beam; (5) the clamped-clamped (C-C) boundary conditions give the maximum natural frequency value. This paper presents a finite element model to investigate the free vibration of beams with composite coats. We used two sandwich beam models: the core is made from an isotropic material, steel as a heavy material for the first model and foam as a light material for the second model. The faces are made from glass/epoxy composite material in both models. The natural frequency and mode shapes of the sandwich beam are controlled by choosing the proper fiber orientation, the laminate thickness and the boundary conditions. The effects of these parameters were examined for the two models with different boundary conditions imposed on the beam. Good agreement was achieved between the finite element method and the analytical solutions.
Fig. 5 Natural frequency of the sandwich beam for different modes: a - Model I; b - Model II
Fig. 6 First frequency of the sandwich beam versus thickness ratio: a - Model I; b - Model II
Fig. 7 Natural frequency of the sandwich beam versus fiber orientation angle: a - Model I; b - Model II
Fig. 8 Different boundary conditions imposed on the beam with composite coats
Fig. 9 First five mode shapes of the beam for BC S-S: a - Model I; b - Model II
Fig. 10 First five mode shapes of the beam for BC C-C: a - Model I; b - Model II
Fig. 11 Mode shape plots of the beam with composite coating for BC S-S: a - Model I; b - Model II
Keywords: free vibration, composite coats, finite element method, dynamic beams. Received February 15, 2015; accepted April 14, 2015.
Table 1
Material properties for the face and core of the beam models
Table 2
The first natural frequencies (Hz) of the beam with and without composite coating | 4,117.4 | 2015-08-27T00:00:00.000 | [
"Engineering",
"Materials Science",
"Physics"
] |
Generalized Negative Reciprocity in the Dictator Game – How to Interrupt the Chain of Unfairness
Humans are tremendously sensitive to unfairness. Unfairness provokes strong negative emotional reactions and influences our subsequent decision making. These decisions may have consequences not only for ourselves and the person who treated us unfairly, but can even transmit to innocent third persons, a phenomenon that has been referred to as generalized negative reciprocity. In this study we aimed to investigate whether regulation of emotions can interrupt this chain of unfairness. Real allocations in a dictator game were used to create unfair situations. Three different regulation strategies (writing a message to the dictator who made an unfair offer, either forwarded or not forwarded, and describing a neutral picture) were then tested, against a control condition in which subjects simply had to wait for three minutes, on their ability to influence the elicited emotions. Subsequently, participants were asked to allocate money between themselves and a third person. We show that writing a message which is forwarded to the unfair actor is an effective emotion regulation strategy and that those participants who regulated their emotions successfully by writing a message made higher allocations to a third person. Thus, using message writing as an emotion regulation strategy can interrupt the chain of unfairness.
Across cultures humans have a strong preference for fairness 1,2 . Violations of fairness provoke negative emotional responses [3][4][5] . Furthermore, perceived unfairness can influence related decision making. People who were treated unfairly tend to pay this back by treating the same person unfairly as well, a phenomenon called negative reciprocity 6,7 . However, the consequences of unfair behavior have an even larger reach. People not only behave unfairly towards the person who treated them unfairly, but also forward this behavior to uninvolved third persons (generalized negative reciprocity 8,9 ). Unfairness can thus spread easily, and once the chain of unfairness is activated, it is difficult to interrupt.
If negative emotions are the underlying factor driving general negative reciprocity, effective emotion regulation should lead to a decrease in general negative reciprocity. For example, the chance to express emotions in another way than by punishing the offender was shown to influence rejection rates in an ultimatum game. Here, participants who had the opportunity to express their emotions via a message to the proposer showed lower rejection rates 10 . A common method of emotion regulation is reappraisal, in which subjects are asked to mentally reframe and reanalyze the context 11,12 . Reappraisal has been shown to decrease rejection rates in the ultimatum game 13,14 . However, in these studies emotions in the ultimatum game were either not measured explicitly or did not change after regulation. Measuring both emotions and rejection rates bears the risk of an interaction between the two variables. Moreover, assuming that message writing alters rejection rates via emotion expression is far-fetched based on this evidence alone. Messages consist of several components besides emotion expression, for example writing in general, additional content elements and forwarding of the message. Which of these components drives the effect on rejection rates remains unclear.
Thus, participants show generalized negative reciprocity: they transmit unfairness to innocent third persons 8 . Further, emotion regulation strategies have been shown to decrease direct negative reciprocity in the ultimatum game 13,14 . The following questions remain open: 1) Does message writing as an emotion regulation strategy effectively alter negative emotions due to unfair social situations? And if so, which component of the writing process drives this effect? 2) Does an effective emotion regulation strategy decrease general negative reciprocity and thereby interrupt the chain of unfairness? Answering the first question will extend our knowledge of social interactions. Since unfairness is a hazard for social interactions, answering the second question will contribute to improving their quality.
We conducted two studies investigating whether emotion regulation can decrease general negative reciprocity. In study 1, unequal allocations in a dictator game, creating unfair situations, were tested for their effect on affective responses. Three emotion regulation strategies were then tested on their ability to regulate the provoked emotional reactions. Affective responses were measured using the pleasure subscale of the Self-Assessment Manikin 15 . This subscale measures affective responses on a scale ranging from happy to unhappy. In study 2, participants were additionally asked to allocate money between themselves and a third person in order to measure generalized negative reciprocity (Fig. 1A). We hypothesized that 1) writing a message to the dictator who made the unfair allocation will successfully regulate emotions and 2) as a result, participants in the message writing condition will make higher allocations to an unrelated third person.
Study 1 Methods
In study 1, 237 participants (M age = 22.49 years, SD = 4.15) took part. The study was conducted at the BonnEconLab at the University of Bonn using z-Tree 20. The study meets all standards for ethical treatment of human subjects in experiments at the BonnEconLab and was approved by the review board of the department of economics. Participants were recruited with the software hroot 16 . Only female participants were invited because, based on previous studies, we expected higher emotional reactivity to negative stimuli in women 17,18 . Since participants needed to show negative affective responses in order to regulate them, we concentrated on women as participants. Participants were randomly allocated to the different conditions, and written consent was given according to the Declaration of Helsinki. Participants were receivers in a dictator game. First, all participants were asked to indicate how happy they felt (i.e. baseline Happiness). They then received an allocation from a dictator who had made this decision in a previous session. In all conditions 17% of the participants received a fair allocation (12.50 €/12.50 €) and 83% an unfair allocation (20 €/5 €). This distribution reflected the allocations made by a group of dictators (N = 24) who participated in the experiment in a previous session. After receiving the allocation, participants were again asked to indicate their affective state (Happiness 1). In two of the emotion regulation conditions participants were asked to write a message to the dictator who made the allocation. In the first emotion regulation condition (the forwarded condition), participants were told that the dictator would come to the lab to receive the message they wrote. In the second emotion regulation condition (the non-forwarded condition), they were told that the message would not be forwarded to the dictator. In both conditions participants received this information before they wrote the messages. No further instructions about the content of the messages were given. In the third emotion regulation condition participants were asked to write a description of an emotionally neutral picture (IAPS picture No. 7185 19 ). The control condition consisted of a waiting period of identical length. Each condition took three minutes. Subsequently, participants indicated again how happy they felt (Happiness 2). The dictators from the previous session came to the lab a second time in order to receive the messages written by the participants in the message-forwarded condition.
Finally, 382 additional participants evaluated the content of the messages written in both message writing conditions via an online questionnaire (see S1 for instructions). Each message was evaluated by at least 62 participants. They rated whether the messages contained content elements such as expression of emotions, understanding, unfairness criticism, questioning of motive or suggestion for usage. Of all participants, five were randomly selected and won a 10 € Amazon voucher. We further evaluated whether the messages contained welcome words, exclamation marks, emoticons, abusive language and/or a form of address, and determined the average number of characters of each message.
Results and Discussion
Since we were interested in the effect of emotion regulation in unfair situations, we focused only on the unfair allocations in the analysis (an analysis of the fair allocations can be found in S2). In order to test whether unfair allocations decreased happiness ratings, we compared ratings before and after participants received the unfair allocations (baseline Happiness versus Happiness 1). Twelve participants had to be excluded from the analysis due to a failure to understand the procedure. Since the happiness ratings were not normally distributed, a Wilcoxon signed-rank test for dependent samples was computed to test for differences between baseline Happiness ratings and Happiness 1 ratings. Happiness ratings after receiving the unfair allocation were significantly decreased (baseline Happiness Mdn = 6, Happiness 1 Mdn = 4, z = -10.17, p < 0.001, r = -0.55; the 95% C.I. was estimated using the Hodges-Lehmann estimator [-2.50, -2.00]).
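For readers who want to reproduce this type of analysis, a minimal Python/SciPy sketch with simulated ratings (the real data are not reproduced here; the rating distribution below is our assumption) might look as follows. The effect size r = z/sqrt(n) is recovered from the two-sided p-value:

```python
import numpy as np
from scipy.stats import wilcoxon, norm

rng = np.random.default_rng(0)
n = 163                                        # assumed sample size after exclusions
baseline = rng.integers(5, 9, size=n).astype(float)       # baseline Happiness (SAM scale)
after = np.clip(baseline - rng.integers(1, 4, size=n), 1, 9)  # Happiness 1 after unfair offer

res = wilcoxon(baseline, after)                # paired signed-rank test
z = norm.isf(res.pvalue / 2)                   # |z| from the two-sided p-value
r = z / np.sqrt(n)                             # effect size, as reported in the text
print(res.statistic, res.pvalue, r)
```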
To compare the effects of the three different conditions on happiness ratings an ANCOVA was conducted. The difference between Happiness 1 and Happiness 2 was used as the dependent variable and baseline Happiness was included as a covariate to control for baseline differences. We observed a significant main effect of condition on the difference between happiness ratings (F(3, 157) = 2.84, p = 0.04, 95% C.I. [0.284, 1.23], partial η2 = 0.05). The interaction between condition and baseline rating was not significant (F(3, 157) = 2.55, p = 0.06). We therefore focused on the main effect of condition. Planned contrasts revealed that message writing with forwarding significantly increased happiness ratings compared to the control condition (t = 4.95, p = 0.01, 95% C.I. [1.21, 8.68], Cohen's d = 2.62). We did not observe any significant difference between happiness ratings in the writing without forwarding condition compared to the control condition, or in the picture condition compared to the control condition (writing without forwarding vs. control: t = 2.50, p = 0.21; picture vs. control: t = 1.03, p = 0.55; Fig. 1B left). A direct comparison between happiness ratings in the writing with forwarding and writing without forwarding conditions yielded no significant difference (z = − 0.13, p = 0.89).
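The ANCOVA described above could be reproduced along the following lines with statsmodels; this is a hedged sketch, not the authors' analysis script, and the data frame with columns happiness_change, baseline and condition is an assumption. The interaction term tested in the paper could be checked by adding C(condition):baseline to the formula.

```python
# Hedged sketch of an ANCOVA: happiness change as DV, condition as factor,
# baseline happiness as covariate. Column names and values are invented.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def run_ancova(df: pd.DataFrame):
    model = smf.ols("happiness_change ~ C(condition) + baseline", data=df).fit()
    anova_table = sm.stats.anova_lm(model, typ=2)    # Type II sums of squares
    return model, anova_table

df = pd.DataFrame({
    "happiness_change": [1.0, 0.5, 2.0, 0.0, -0.5, 1.5, 0.5, 1.0],
    "baseline":         [6, 7, 5, 6, 8, 6, 7, 5],
    "condition":        ["forwarded", "forwarded", "not_forwarded", "not_forwarded",
                         "picture", "picture", "control", "control"],
})
model, table = run_ancova(df)
print(table)
```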
In order to test whether the content of the messages was related to changes in happiness ratings, a correlation was computed between the frequencies with which messages (with forwarding and without forwarding) were rated to contain specific content elements (expression of emotions, understanding, unfairness criticism, questioning of motive, or suggestion for usage) and the difference between Happiness 1 and Happiness 2. We observed a significant correlation between emotion expression and change in happiness ratings (r s (88) = 0.24, p = 0.01). There was no significant correlation between the change in happiness ratings and any other content element (understanding: r s (88) = 0.03, p = 0.38; unfairness criticism: r s (88) = 0.08, p = 0.22; questioning of motive: r s (88) = 0.02, p = 0.41; suggestion for usage: r s (88) = 0.15, p = 0.07).
These results affirm that unfair offers in the dictator game decrease subjective happiness. Further, our first hypothesis was confirmed: writing a message that was forwarded to the dictator who made the unfair allocation successfully regulated emotions. Additionally, we could demonstrate that successful emotion regulation is related to the extent of emotion expression in the messages.
Study 2 Methods
In study 2, 92 female participants (M age = 22.77 years, SD = 5.47) took part. As in study 1, participants were recruited with the software hroot 16 , and the study met all standards for the ethical treatment of human subjects in experiments at the BonnEconLab and was approved by the review board of the department of economics. Written consent was given by all participants according to the Declaration of Helsinki. Study 2 was identical to study 1, but, based on the results of study 1, only the most effective emotion regulation condition (message forwarded) and a control condition were studied. After the three-minute waiting or emotion regulation period, participants were asked to play the role of the dictator in an additional dictator game with a third person as receiver. Participants were provided with an additional endowment of 10 € and were able to allocate any amount to the receiver. The third persons/receivers were invited to the lab at a later time point and received the money the dictators allocated to them.
Results and Discussion
In order to test whether emotion regulation increases allocations to a third person the allocations between the emotion regulation and control condition were compared. Since the dictator allocations were not normally distributed a Mann-Whitney-U-test for independent samples was used. The results indicate that allocations in the emotion regulation condition (Mdn = 4.5) were significantly higher than those in the control condition (Mdn = 3; U = 987; p = 0.036, r = 0.24, 95% C.I. was estimated by using the Hodges Lehmann estimator [− 2.00, 0.00]; Fig. 1B right). Furthermore, we observed a positive correlation between happiness ratings after emotion regulation (Happiness 2) and the amount participants allocated to the third person (r s (77) = 0.26, p = 0.01). There was no correlation between baseline happiness ratings and the allocations in the dictator game (r s (77) = 0.11, p = 0.16). Thus, baseline differences cannot account for our findings. The results confirm our second hypothesis; participants in the message forwarded condition make higher allocations to a third person compared to the control group.
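A minimal sketch of the between-group test and the correlation reported above, assuming simple arrays of allocations and happiness ratings (all toy values); this is not the original analysis code.

```python
# Hedged sketch: Mann-Whitney U test on dictator allocations between conditions
# and Spearman correlation with post-regulation happiness. Data are invented.
import numpy as np
from scipy import stats

alloc_regulation = np.array([5.0, 4.5, 6.0, 3.0, 5.0])   # toy allocations in EUR
alloc_control    = np.array([3.0, 2.5, 4.0, 3.0, 1.0])

u_stat, p_val = stats.mannwhitneyu(alloc_regulation, alloc_control,
                                   alternative="two-sided")
print(f"U = {u_stat}, p = {p_val:.3f}")

happiness_2 = np.array([6, 5, 7, 4, 6])                   # ratings after regulation
rho, p_rho = stats.spearmanr(happiness_2, alloc_regulation)
print(f"Spearman rho = {rho:.2f}, p = {p_rho:.3f}")
```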
Conclusion
In study 1 we could demonstrate that writing a message to the person who made the unfair offer is successful in regulating negative emotions. In comparison to standard emotion regulation strategies, like reappraisal, no training or instruction is needed for writing a message. It is therefore easier to implement in experimental settings. Since we could not find any effect of the neutral picture condition, writing in general can be ruled out as a regulating factor. Although happiness ratings in the message not forwarded condition were not significantly different from the control condition, a direct comparison between the two message conditions did not reveal any differences between them either. Forwarding is therefore very likely not a factor driving emotion regulation. However, the correlation between the emotional content of the messages and the change in happiness ratings suggests that emotion expression might be a factor driving emotion regulation. Additionally, there are other possible factors which might drive emotion regulation: engagement in norm enforcement, perspective taking, punishment via message writing, or reflection might, for example, increase happiness. Further research is needed to determine which process precisely underlies the emotion regulation effect.
Using the dictator game instead of the ultimatum game offered us the possibility to measure emotions without any interaction with the decisions. In line with Xiao and Houser 10 we demonstrated that writing a message is an effective strategy in regulating negative emotions. Furthermore, we could show that writing in general has no effect.
In study 2, using the emotion regulation strategy, namely writing a message that is forwarded, we could show that people who regulate their emotions make higher offers to a third person, indicating a decrease in generalized negative reciprocity. These results extend the former knowledge about general reciprocity 8 by providing evidence for negative emotions as a driving factor for generalized negative reciprocity.
We deliberately chose to measure only female participants, since women have been shown to display higher emotional reactivity in response to negative stimuli 17,18 . Although narrowing the participant pool was potentially helpful for finding the hypothesized negative emotional response to unfair behavior, it also limits our interpretation. Singer et al. have, for example, shown that male participants showed a greater desire for revenge compared to female participants in response to unfair social partners 21 . Since revenge might also be triggered in our paradigm, future studies should first check whether revenge influences emotion regulation and, second, whether there is a gender difference in emotion regulation and subsequent dictator game offers.
To conclude, the results of the two studies show how emotion regulation can influence the affect elicited by unfair allocations of others and thereby interrupt the chain of negative generalized reciprocity. These insights help to further our understanding of social interactions and may help to control the spread of negative reciprocity. | 3,690.4 | 2016-02-29T00:00:00.000 | [
"Economics",
"Psychology"
] |
Isolation and molecular characteristics of extended spectrum beta-lactamase-producing uropathogenic Escherichia coli isolated from hospital attendees in Ebonyi State, Abakaliki
1 Department of Applied Microbiology, Faculty of Sciences, Ebonyi State University, P. M. B. 053, Abakaliki, Ebonyi State, Nigeria. 2 Department of Applied Microbiology and Brewing, Nnamdi Azikiwe University, Awka, Anambra State, Nigeria. 3 Institute of Food Security, Environmental Resources and Agricultural Research, Federal University of Agriculture, Abeokuta, Ogun State, Nigeria. 4 Department of Pharmaceutical Microbiology and Biotechnology, Faculty of Pharmaceutical Sciences, Nnamdi Azikiwe University, Awka, Nigeria. 5 Department of Science Laboratory Technology, Akanu-Ibiam Federal Polytechnic, Unwana, P.M.B. 1007, Ebonyi State, Nigeria. 6 Department of Biology/Microbiology/Biotechnology, Federal University Ndufu-Alike, Ikwo Ebonyi State, Nigeria. 7 Department of Applied Biochemistry, Faculty of Natural and Natural Sciences, Enugu State University of Science and Technology, Agbani, Nigeria. 8 National Veterinary Research Institute, P. M. B. 01, Vom, Plateau State, Nigeria.
INTRODUCTION
Microbial resistance by pathogenic Escherichia coli is a major worldwide concern. Antibacterial agents, especially beta-lactams, are becoming less useful against Enterobacteriaceae (Dia et al., 2015). Urinary tract infections (UTIs), a common nosocomial and community-acquired bacterial infection, occur in all genders and age groups (Abdulaziz et al., 2018). Resistance of E. coli to numerous antibiotics is now developing and evolving (Raju et al., 2019). E. coli implicated in UTIs are becoming multidrug-resistant as a result of their extended-spectrum beta-lactamase (ESBL)-producing ability. Beta-lactam resistance is mediated by ESBL genes that are mostly plasmid-encoded (Topaloglu et al., 2010). ESBLs are a group of beta-lactamases able to hydrolyze the β-lactam ring of penicillins, aztreonam, and cephalosporins; however, ESBL producers often remain susceptible to cephamycins and carbapenems (imipenem and ertapenem) (Shehani and Sui, 2013). E. coli can acquire resistance factors from environmental bacteria in its surroundings and can, in turn, transmit its resistance genes to other bacterial pathogens across a range of habitats. Accurate diagnosis of UTIs and proper use of antimicrobials for treatment and prevention are paramount in reducing drug resistance (Roshan et al., 2020). CTX-M, TEM, SHV, and AmpC beta-lactamase genes have been identified in E. coli isolates from UTI patients over the years (Koshesh et al., 2017). TEM (Temoneira), SHV (sulfhydryl variable), and CTX-M (cefotaximase) belong to class A ESBLs. Shahid et al. (2011) and Iroha et al. (2010) observed frequencies of ESBLs among Gram-negative bacteria ranging between 6 and 88% in different health institution settings. Sima et al. (2016) also reported CTX-M (74%), SHV (45%), and TEM (67%) genes in E. coli isolates. CTX-M enzymes have been increasingly recorded in various clinical specimens, and E. coli remains the major organism implicated (Mohammed et al., 2011). This study was therefore designed to molecularly characterize ESBL-producing uropathogenic E. coli from hospital attendees in Ebonyi State, Nigeria.
Collection of samples
Seventy three (73) E. coli were obtained from mid-stream urine samples of 133 patients attending Federal Teaching Hospital, Abakaliki metropolis, Nigeria between February, 2018 and November, 2018. Sterile universal container was used to collect urine samples from patients suspected of UTIs. Every patient was properly instructed on self-collection of urine samples. The samples were immediately transported to the laboratory of Applied Microbiology Department, Faculty of Sciences, Ebonyi State University, Abakaliki for bacteriological analysis.
Culturing of samples, isolation, and biochemical characterization of bacterial isolates
Mid-stream urine samples were streaked on MacConkey agar aseptically and then incubated for 24 h at exactly 37°C. Plates were then observed for E. coli growth (red or pink colonies) on MacConkey agar. These suspected bacterial isolates were thereafter characterized by standard microbiology techniques such as motility test, Gram-stain, and other biochemical tests such as methyl red, indole, urease test, Voges-Proskauer, and citrate (Cheesbrough, 2010; Moses et al., 2018). Pure colonies of isolates were then inoculated on nutrient agar slants, incubated for 24 h at 37°C and kept in a refrigerator for future use at 4°C (Moses et al., 2018).
Ethical clearance
Ethical approval was given by the Federal Teaching Hospital, Abakaliki (FETHA) research and ethical committee
Antimicrobial susceptibility test
Isolates were tested to evaluate their antimicrobial sensitivity patterns by the Kirby-Bauer disk diffusion method. Test organism suspensions were adjusted to the 0.5 McFarland turbidity standard and inoculated onto Mueller-Hinton (MH) agar plates using sterile swab sticks. Cefpodoxime (10 µg), amoxicillin (20 µg), ceftriaxone (30 µg), ceftazidime (30 µg), cefepime (30 µg), cefotaxime (30 µg), and aztreonam (30 µg) antibiotic discs were carefully placed on the MH agar using sterile forceps. Antibiotics were allowed to diffuse for 10 min and plates were then incubated for 18 h at 37°C. The inhibition zone diameters (IZDs) were measured after incubation and results were interpreted as resistant or susceptible as per the CLSI guidelines (CLSI, 2018; Moses et al., 2020). The confirmed uropathogenic E. coli were stored on agar slants at -70°C and were subjected to further ESBL phenotypic and molecular identification.
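For illustration, the interpretation step could be automated roughly as follows; the zone-diameter breakpoints in this sketch are placeholders only and must be replaced with the current CLSI values for each antibiotic.

```python
# Illustrative sketch only: classify inhibition zone diameters as susceptible,
# intermediate, or resistant against per-antibiotic breakpoints. The numbers
# below are placeholders, not CLSI values.
BREAKPOINTS_MM = {            # hypothetical {antibiotic: (resistant_max, susceptible_min)}
    "cefotaxime":  (22, 26),
    "ceftazidime": (17, 21),
    "aztreonam":   (17, 21),
}

def interpret_zone(antibiotic: str, zone_mm: float) -> str:
    r_max, s_min = BREAKPOINTS_MM[antibiotic]
    if zone_mm <= r_max:
        return "R"            # resistant
    if zone_mm >= s_min:
        return "S"            # susceptible
    return "I"                # intermediate

print(interpret_zone("cefotaxime", 18))   # -> "R" with these placeholder breakpoints
```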
Phenotypic test for ESBL detection
The turbidity of suspected ESBL producers was adjusted to the 0.5 McFarland standard (CLSI, 2018). Sterile swab sticks were then used to make lawn cultures of the standardized isolates on the surface of MH agar plates. Cefotaxime and ceftazidime antibiotic discs were placed at a distance of 15 mm centre-to-centre from the central disc containing amoxicillin/clavulanic acid (20 µg/10 µg). Plates were incubated overnight at 37°C. An increase of ≥5 mm in the IZD for either of the cephalosporins (cefotaxime and ceftazidime) tested in combination with amoxicillin/clavulanic acid versus its zone when tested alone confirms ESBL production (Sima et al., 2016). E. coli ATCC 25922 was used as quality control.
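The combination-disk decision rule described above can be expressed compactly; the sketch below assumes paired zone measurements per isolate and is only an illustration of the ≥5 mm criterion, not a validated laboratory tool.

```python
# Sketch of the phenotypic decision rule: an isolate is flagged as an ESBL producer
# if the zone around a cephalosporin tested with clavulanate is at least 5 mm larger
# than the zone around the same cephalosporin alone. Zone values are invented.
def is_esbl_producer(zones_alone_mm: dict, zones_with_clav_mm: dict,
                     threshold_mm: float = 5.0) -> bool:
    for drug, zone_alone in zones_alone_mm.items():
        zone_combo = zones_with_clav_mm.get(drug)
        if zone_combo is not None and (zone_combo - zone_alone) >= threshold_mm:
            return True
    return False

alone = {"cefotaxime": 14.0, "ceftazidime": 16.0}
combo = {"cefotaxime": 21.0, "ceftazidime": 18.0}
print(is_esbl_producer(alone, combo))   # True: the cefotaxime zone grows by 7 mm
```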
DNA extraction and polymerase chain reaction (PCR) detection of blaCTX-M and blaTEM genes
Genomic DNA was extracted from pure colonies of an overnight growth of E. coli on Luria-Bertani agar using QIAamp DNA isolation columns (Qiagen, Hilden, Germany) according to the manufacturer's instructions. The purity and concentration of the extracted DNA were determined using a NanoDrop spectrophotometer (A260/A280 absorbance ratio). Quality control of the extracted DNA was done by testing all extracted genomic DNA for the 16S rRNA gene. Table 1 shows the oligonucleotide primer sequences and the sizes of the targeted genes (blaCTX-M and blaTEM). The PCR reactions for detection of blaCTX-M and blaTEM genes were done in a total reaction volume of 25 µl containing 12.5 µl of Master Mix Red, 10 µl of Sigma water, 0.25 µl of forward primer, 0.25 µl of reverse primer, and 2 µl of the isolated genomic DNA. PCR was done with a C1000 Touch™ Thermal Cycler (Bio-Rad) (Azam et al., 2016; Gudjónsdóttir, 2015).
Gel Electrophoresis
Exactly 1.0 g of agarose and 100 ml of 1X Tris-acetate-EDTA (TAE; pH 8.0) buffer (Bio-Rad) were used to prepare a 1% (w/v) agarose gel. The mixture was then heated for about 3 min in a microwave for total dissolution of the agarose. It was then cooled to about 50°C and ethidium bromide (1 μl/ml) was added to stain the prepared agarose gel. The molten agarose gel was then cast into a gel casting tray containing combs and allowed to solidify. After about 30 min of agarose gel solidification, the gel combs were carefully removed and the gel casting tray containing the gel was placed into a gel electrophoresis chamber filled with TAE buffer (40 mM Tris, 20 mM acetic acid, and 100 mM EDTA, pH 8.0). For each run, 5 μl of Quick-Load 1 kb Extend DNA Ladder (New England Biolabs) was added to one of the wells to estimate band sizes and 5 μl of negative control, comprising Sigma water, was added to another well. Exactly 5 μl of each PCR product was carefully loaded in the remaining wells. Electrophoresis was run at 80 V and 400 mA for exactly 1 h. Gels were then visualized and photographed with a gel documentation system (Bio-Rad) (Gudjónsdóttir, 2015).
RESULTS AND DISCUSSION
This work provided insights into the antibiotic resistance profiles and molecular characteristics of uropathogenic E. coli with the ability to produce ESBLs. Out of the 130 samples analyzed between February and November, 2018, 73 uropathogenic E. coli isolates were obtained and 52 isolates were ESBL producers as investigated phenotypically using the double disk diffusion technique (Tables 2 and 3). The isolates that demonstrated resistance or reduced susceptibility to ceftazidime, cefotaxime, and cefpodoxime were subjected to ESBL phenotypic detection. The keyhole pattern exhibited by uropathogenic E. coli isolates expressing ESBL production is shown in Plate 1. This pattern is an important characteristic of ESBL-producing bacteria and results from the synergistic effect between amoxicillin-clavulanic acid (a beta-lactamase inhibitor) and third generation cephalosporins (ceftazidime and cefotaxime).
Three different sets of primers were used to amplify the 16S rRNA, blaCTX-M, and blaTEM genes (Table 1). Agarose gel electrophoresis showed the PCR product of the amplified 16S rRNA gene among the isolates to be 797 bp (Figure 1). Out of the 52 ESBL-positive uropathogenic E. coli isolates, 17 (32.7%) harboured the blaTEM gene, 35 (67.3%) harboured the blaCTX-M gene, while 8 (15.3%) harboured both blaTEM and blaCTX-M genes (Table 4). The PCR product band sizes of the blaCTX-M and blaTEM genes were estimated to be 861 and 585 bp, respectively (Figure 2).
Uropathogenic E. coli evaluated in this study exhibited varying frequencies of resistance to the antibiotics tested. Isolates exhibited resistance to cefotaxime (83.6%), ceftazidime (79.5%), amoxicillin (72.6%), cefpodoxime (68.5%), aztreonam (61.6%), ceftriaxone (57.5%), and cefepime (37%) (Table 2). Indiscriminate use and abuse of beta-lactam antibiotics have caused problems in the treatment of microbial infections and diseases caused by these antibiotic-resistant organisms as a result of ESBL production. Some ESBL-producing bacteria fail to be detected using the disk diffusion technique, which can result in serious treatment failures among infected patients (Umadevi et al., 2011). The lack of routine screening and detection of ESBL-producing bacteria among clinical isolates in hospitals was evident in this present study. Such discrepancies between susceptibility data and disc diffusion results call for improved ESBL detection and its incorporation into routine susceptibility testing in hospitals. Studies by other researchers have found that E. coli is usually implicated in urinary tract infections (UTIs) and is frequently identified as an ESBL producer (Abhilash et al., 2010; Shanthi and Sekar, 2010; Umadevi et al., 2011). Phenotypic screening tests for ESBL detection only confirm whether ESBL is produced by an isolate but do not identify the ESBL subtype. Many researchers have noted that, although molecular methods appear more sensitive, they require specialized equipment and expertise and are time-consuming and expensive (Sima et al., 2016; Varun and Parijath, 2014; Abdulaziz et al., 2018). The observations by Roshan et al. (2020) and Abdulaziz et al. (2018) are in agreement with our present study, where the blaCTX-M genotype was the most prevalent. A study done by Zongo et al. (2015) in Burkina Faso showed a high frequency (75.5%) of ESBL producers among E. coli isolates.
They also reported that among the isolates with ESBL-producing ability, CTX-M (65.49%) was the most prevalent, followed by TEM (25.73%) and SHV (18.71%).
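The kind of gene-carriage percentages quoted in this section can be tallied from per-isolate PCR calls as in the short sketch below; the isolate records shown are invented examples.

```python
# Hedged sketch: counting ESBL gene carriage from per-isolate PCR results,
# the sort of tallying behind the percentages reported above. Records are invented.
from collections import Counter

isolates = [
    {"id": "UPEC-01", "blaTEM": True,  "blaCTX-M": True},
    {"id": "UPEC-02", "blaTEM": False, "blaCTX-M": True},
    {"id": "UPEC-03", "blaTEM": True,  "blaCTX-M": False},
]

counts = Counter()
for rec in isolates:
    if rec["blaTEM"]:
        counts["blaTEM"] += 1
    if rec["blaCTX-M"]:
        counts["blaCTX-M"] += 1
    if rec["blaTEM"] and rec["blaCTX-M"]:
        counts["both"] += 1

n = len(isolates)
for gene, k in counts.items():
    print(f"{gene}: {k}/{n} ({100.0 * k / n:.1f}%)")
```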
Conclusion
This study has shown the presence of uropathogenic E. coli with ESBL-producing ability in the urine samples of hospital attendees in Abakaliki, Nigeria. Our study has also demonstrated the presence of the ESBL genes CTX-M and TEM in the identified ESBL-producing uropathogenic E. coli isolates. Interestingly, the blaCTX-M gene was the most predominant ESBL gene among the isolates in our study area. This research work showed that genotypic methods via PCR are more reliable for ESBL detection among bacterial isolates, as the PCR technique detected the presence of blaCTX-M and blaTEM in the uropathogenic E. coli isolates. Nonetheless, since molecular methods are expensive, routine detection of ESBLs using phenotypic methods should be introduced in clinical settings to help check drug resistance due to ESBL production by bacteria. It is imperative that future studies incorporate sequencing of isolates and resistance gene amplicons, together with bioinformatics analyses, so as to decipher the clonal relatedness/diversity and epidemiological identities of bacterial pathogens. This will greatly help in tracking disease occurrence, the origins and sources of bacterial pathogens, and curtailing the spread of multidrug-resistant bacterial pathogens.
"Medicine",
"Biology"
] |
Role of Cytokine-Inducible SH2 Domain-Containing (CISH) Protein in the Regulation of Erythropoiesis
The cytokine-inducible SH2 domain-containing (CISH) protein was the first member of the suppressor of cytokine signaling (SOCS) family of negative feedback regulators discovered, being identified in vitro as an inducible inhibitor of erythropoietin (EPO) signaling. However, understanding of the physiological role played by CISH in erythropoiesis has remained limited. To directly assess the function of CISH in this context, mice deficient in CISH were characterized with respect to developmental, steady-state, and EPO-induced erythropoiesis. CISH was strongly expressed in the fetal liver, but CISH knockout (KO) mice showed only minor disruption of primitive erythropoiesis. However, adults exhibited mild macrocytic anemia coincident with subtle perturbation particularly of bone marrow erythropoiesis, with EPO-induced erythropoiesis blunted in the bone marrow of KO mice but enhanced in the spleen. Cish was expressed basally in the bone marrow with induction following EPO stimulation in bone marrow and spleen. Overall, this study indicates that CISH participates in the control of both basal and EPO-induced erythropoiesis in vivo.
Introduction
Erythropoietin (EPO) is a cytokine that mediates embryonic erythroid cell development and maintains appropriate adult red blood cell numbers in response to environmental signals, such as hypoxia [1].EPO exerts its effects through its cognate homodimeric erythropoietin receptor (EPOR), with EPO binding leading to the activation of a number of intracellular pathways.These notably include the latent transcription factor Signal transducer and activator of transcription 5 (STAT5), which becomes tyrosine phosphorylated, facilitating its dimerization and nuclear translocation, where it can modulate the expression of key target genes that mediate the requisite biological effects [2].Like other cytokine receptors, EPOR signaling is extinguished in a controlled manner by negative regulators, including suppressor of cytokine signaling (SOCS) family members [3].
The cytokine-inducible SH2 domain-containing (CISH) protein was the first member of the SOCS family discovered.It was identified in vitro as an immediate-early gene induced in hematopoietic cells by cytokines that activate STAT5, including EPO [4].CISH binds via its SH2 domain to phosphorylated tyrosine residues of activated cytokine receptors, where it suppresses signaling via at least two mechanisms [5].In the case of EPOR, CISH was shown to bind to the same receptor phospho-tyrosine residues as STAT5 and thereby physically block STAT5 docking [6].It can also facilitate proteasomal degradation of activated receptor complexes through the recruitment of an E3 ubiquitin ligase complex via interactions with its SOCS box [7].Enforced CISH expression was able to inhibit STAT5 activation and partially suppress cell proliferation mediated by EPO in vitro [4,8].CISH was additionally able to enhance apoptosis of erythroid progenitor cells ex vivo, which correlated with the antagonism of STAT5 that protects against apoptosis [9].
Mouse studies have identified a number of functions for CISH across immunity, development, and homeostasis.Transgenic mice constitutively expressing CISH exhibited altered T and natural killer (NK) cell responses, growth retardation, and defective mammary gland development [10].Analyses of various CISH knockout (KO) mice have revealed roles in the regulation of T cell receptor (TCR) and interleukin (IL)-4R mediated development and homeostasis of T cells [11,12], IL-15-mediated development and function of NK cells [13] and granulocyte/macrophage colony-stimulating factor receptor (GM-CSFR)-mediated myelopoiesis [14,15], as well as leptin receptor (LEPR)-mediated appetite control [16].
No erythroid phenotypes have been noted in any of the mouse studies.However, ablation of a zebrafish CISH homolog resulted in increased early erythropoiesis, including the EPOR-positive cell population [17], suggesting CISH may represent a physiological regulator of EPOR.To investigate this further, developmental, steady-state, and EPOinduced erythropoiesis was characterized in a recently described CISH KO mouse line [16].This revealed a role for CISH in regulating in vivo erythropoiesis.
Mouse Studies
Previously described C57/BL6 mice harboring a mutant Cish allele in which the lacZ gene had been inserted [16] were back-crossed for 8 generations onto a Balb/c background.These Cish +/− heterozygote (CISH HET) mice were in-crossed to yield Cish +/+ wild-type (CISH WT), CISH HET, and Cish −/− knockout (CISH KO) progeny, with their genotype ascertained as detailed previously [16].These mice were subsequently maintained as separate lines and fed a standard rodent chow diet ad libitum within a 12 h light/dark cycle environment.For developmental studies, timed pregnancies were used to obtain embryos at embryonic day 12.5 (E12.5) or E14.5.In other experiments, 11-week-old adult female mice were used.In some experiments, these adult mice were subjected to daily intraperitoneal injection with 100 U recombinant human EPO (Epoetin-alpha, Janssen) in saline or saline only up to 6 times.Mice were analyzed 24 h after the 4th injection (4 days), 24 h after the 6th injection (6 days), or 48 h after the 6th injection (6 + 2 days).Work was performed as approved by the Deakin University Animal Ethics Committee under the aegis of the Australian Code for the Responsible Conduct of Research.ARRIVE 2.0 Guidelines were followed with no blinding or randomization, and any animals showing signs of illness were excluded from the study.
Blood Analysis
Blood was routinely collected with minivettes (Sarstedt Australia, Mawson Lakes, Australia) and analyzed with a hematology analyzer (SCIL). Alternatively, blood was collected by cardiac puncture following euthanasia and cervical dislocation, with plasma analyzed for EPO using a Mouse EPO ELISA kit (Aviva Systems Biology, Sapphire Bioscience, Redfern, Australia).
Bone Marrow and Spleen Harvest
Bone marrow was extracted from the femurs and tibias by flushing with 1 mL RPMI 1640 media (Life Technologies Australia, Pty. Ltd., Mulgrave, Australia) using a 26 G needle, while whole spleens were placed in 5 mL media and passed through a 40 µm nylon mesh cell strainer (Interpath Services Pty. Ltd., Somerton, Australia). The cells were centrifuged at 1000× g and resuspended in an appropriate buffer to create single-cell suspensions. An aliquot was stained with trypan blue and analyzed with a Countess Automated Cell Counter (Invitrogen Australia Pty. Ltd., Mount Waverley, Australia) in order to calculate the total cell number for bone marrow and spleen.
FACS
To assess erythroid populations, approximately 2 × 10 6 cells from the bone marrow or spleen were resuspended in 2% (v/v) fetal bovine serum and 0.1% (w/v) EDTA in PBS, incubated with Fc block solution (anti-CD16/CD32) before analysis with double staining using PE-conjugated anti-mouse TER-119 and APC-conjugated anti-mouse CD44 (BD Bioscience, North Ryde, Australia) as described [19], with fluorescence minus one (FMO) and unstained samples used as controls (Supplementary Figure S1).Cells were subsequently incubated with 7-AAD for 10 min prior to the acquisition of a minimum of 100,000 viable cells using a BD FACS-Canto II flow cytometer and analyzed using BD FACSDiva software (v6.0), with compensation and further characterization performed using FlowJo v.10.0.6 (BD Bioscience) software.Cell debris and background noise were removed through gating based on forward scatter (FSC)/side scatter (SSC).Single cells were gated based on the FSC area-to-height ratio, and live cells were identified by 7-AAD staining.Analysis of the TER-119+ population via CD44/FSC plots allowed quantitation of individual erythroid populations.To evaluate STAT5 phosphorylation, fixed and permeabilized cells were subjected to additional staining with AF488-conjugated anti-phospho-STAT5 (pY694) or AF488-conjugated isotype control (mouse IgG1κ).To assess proliferation, 1 × 10 5 splenocytes were stained with CFSE and incubated in 96 well-round bottom plates containing RPMI complete cell culture medium and harvested after 24 h with CFSE staining in TER19+ cells visualized using the FlowJo V10 Proliferation Tool.
Colony-Forming Assays
A total of 1 × 10^5 bone marrow or spleen cells in 1 mL methylcellulose media (R&D Systems, In Vitro Technologies, Noble Park North, Australia) were added to 35 mm tissue culture dishes (Thermo Fisher Scientific Australia Pty. Ltd., Scoresby, Australia) and incubated at 37 °C with 5% CO2 in a humidified incubator. Manual enumeration was performed for burst-forming unit-erythroid (BFU-E) on day 8 and colony-forming unit-erythroid (CFU-E) on day 14.
Gene Expression Analysis
For analysis of individual genes, total RNA was extracted from approximately 2 × 10 6 bone marrow and spleen cells using TRIsure reagent (Bioline Australia, Eveleigh, Australia) and converted to cDNA using a QuantiTect Reverse Transcription kit (Qiagen Pty. Ltd., Clayton, Australia).This was subjected to reverse-transcription PCR (RT-PCR) with primers for Actb (5 -TGGCATCACACCTTCTAC, 5 -AGACCATCACCAGAGTCC) and Cish (5 -GGACATGGTCCTTTGCGTACAG, 5 -GGAGAACGTCTTGGCTATGCAC) and samples analyzed by agarose gel electrophoresis.Quantitation was performed via area under the curve methodology using ImageJ, with expression in WT bone marrow after 4 days of EPO being set at 100%.Alternatively, for transcriptome analysis, a TruSeq Stranded Total RNA Sample Prep Kit (Illumina Australia, Melbourne, Australia) was used on bone marrow samples, and paired-end RNASeq was performed with an Illumina NovaSeq 6000 with 150 bp read lengths, which were mapped to the Ensemble GRCm38 reference genome with TopHat [20].
Statistical Analysis
Data were analyzed with either a Student's t-test with Welch's correction as required or a two-way analysis of variance (ANOVA)/Tukey's multiple comparison test utilizing GraphPad Prism 8.0, with p < 0.05 considered significant. RNAseq gene-level count data were analyzed with DESeq2 [21], which employs a generalized linear model to identify differentially expressed genes.
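As a hedged illustration of the two comparisons named above (the authors used GraphPad Prism and DESeq2 in R), a Welch t-test and a genotype × treatment two-way ANOVA could be set up in Python as follows; all values and column names are assumptions.

```python
# Minimal sketch (not the authors' scripts): Welch's t-test for two genotypes and
# a two-way ANOVA for genotype x treatment. Example numbers are invented.
import pandas as pd
from scipy import stats
import statsmodels.api as sm
import statsmodels.formula.api as smf

wt = [45.1, 44.8, 46.0, 45.5]        # e.g. hematocrit values, toy numbers
ko = [43.2, 42.9, 44.1, 43.5]
t_stat, p_val = stats.ttest_ind(wt, ko, equal_var=False)   # Welch's correction
print(f"Welch t = {t_stat:.2f}, p = {p_val:.3f}")

df = pd.DataFrame({
    "value":     wt + ko + [50.2, 49.8, 51.0, 50.5] + [47.9, 48.3, 48.8, 47.5],
    "genotype":  ["WT"] * 4 + ["KO"] * 4 + ["WT"] * 4 + ["KO"] * 4,
    "treatment": ["saline"] * 8 + ["EPO"] * 8,
})
model = smf.ols("value ~ C(genotype) * C(treatment)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```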
Role of CISH in Developmental Erythropoiesis
Cish +/+ wild-type (CISH WT), Cish +/− heterozygote (CISH HET), and Cish −/− knockout (CISH KO) embryos were subjected to β-galactosidase staining to detect LacZ expression from the endogenous Cish promoter. This revealed strong staining in the fetal liver in CISH HET and CISH KO mice that peaked at E12.5 (Figure 1A), suggesting a potential role for CISH in developmental erythropoiesis. Despite this, direct visualization revealed comparable hemoglobin pigmentation amongst all genotypes (Figure 1B), with analysis of CISH WT and CISH KO fetal livers revealing no change in overall cellularity (Figure 1C), although there was a small but significant increase in total TER119+ erythroid cells in CISH KO fetal livers (Figure 1D).
Role of CISH in Steady-State Erythropoiesis
Full blood examination of adult CISH KO mice revealed no significant difference in red blood cell (RBC) number or mean cell hemoglobin (MCH) compared to CISH WT mice, but subtle alterations in other red blood cell parameters were observed, including a decreased hemoglobin (Hb), hematocrit (HCT) and mean cell hemoglobin content (MCHC), but increased mean cell volume (MCV) (Table 1), consistent with mild macrocytic anemia.Analysis of bone marrow in these mice revealed no significant difference in overall cellularity (Figure 2A).Erythroid populations were analyzed by TER119/CD44 staining, as described [19] (Figure 2B).This indicated no change in total TER119+ erythroid cells (Figure 2C), but there was an increase in the relative proportion of pro-, basophilic, polychromatic, and orthochromatic erythroblasts and a decrease in the reticulocyte population in CISH KO mice (Figure 2D).Analysis of earlier precursors using a colony-forming assay identified decreased frequencies of both CFU-E and BFU-E (Figure 2E).The spleens of CISH KO mice were also unchanged compared to CISH WT mice with respect to cellularity (Figure 2F), but analysis with TER119/CD44 staining (Figure 2G) revealed a small but significant increase in TER119+ cells (Figure 2H).However, the only change seen in individual erythroid cell populations in the CISH KO mice was a significant increase in the pro-erythroblast population (Figure 2I), while the frequencies of both CFU-E and BFU-E precursors were also significantly increased (Figure 2J).
Role of CISH in EPO-Induced Erythropoiesis
EPO is a critical regulator of erythropoiesis, including stress erythropoiesis in the adult [22]. To directly investigate the potential role of CISH in regulating the in vivo functions of EPO, CISH WT and CISH KO mice were injected with EPO and erythroid parameters analyzed. In CISH WT mice, a significant increase in Hb, HCT, MCV, and MCH was observed following 4 days of EPO injection (Table 2). These parameters further increased after 6 days, by which time significant elevation of RBC counts and reduction in MCHC were also observed, with all parameters largely sustained after EPO injection was discontinued. In contrast, EPO injection of CISH KO mice led to a delayed impact on red blood cell parameters, with only MCV and MCH significantly increased by 4 days, with Hb and HCT remaining significantly decreased compared to the WT. However, after 6 days and even after EPO was discontinued, all parameters but RBC were significantly altered, with values no longer significantly different from the WT.
Analysis of the bone marrow from CISH WT mice revealed low basal Cish expression that was robustly increased following EPO injection, with expression absent in CISH KO mice (Figure 3A). There was a large increase in overall bone marrow cellularity in CISH WT mice following EPO stimulation, which then waned after EPO injection was stopped (Figure 3B). FACS analysis of specific erythroid populations revealed a significantly increased proportion of basophilic, polychromatic, and orthochromatic erythroblasts at 4 days, while reticulocytes and RBCs were decreased, with a gradual return toward basal proportions across the time course (Figure 3C). The EPO response was blunted in the bone marrow of CISH KO mice, which showed significantly lower total cellularity at both 4 days and 6 days (Figure 3B). The changes observed in each of the specific erythroid populations were similar to CISH WT mice, although the proportion of RBC remained reduced across the time course compared to CISH WT mice (Figure 3C).
Analysis of the spleen showed undetectable basal Cish expression but induction by EPO (Figure 3D). CISH WT mice demonstrated a large increase in spleen cellularity following EPO injection that declined following cessation of injection (Figure 3E). Basophilic, polychromatic, and orthochromatic erythroblasts were all significantly elevated at 4 days and slowly declined to basal levels over the time course (Figure 3F). Reticulocytes were elevated at 6 days and remained increased, while there was a significant decrease in the proportion of RBCs at 4 and 6 days that began to rebound after EPO injection was stopped. CISH KO mice showed a significantly greater initial increase in spleen cellularity at 4 days that then declined (Figure 3E). However, the changes in the relative proportions of specific erythroid cell populations in CISH KO mice largely mirrored those in CISH WT mice, with only the reticulocyte percentage at 6 days showing a significant difference between genotypes (Figure 3F).
Discussion
CISH was first identified as a negative feedback regulator of EPOR signaling in vitro [4]. Subsequent studies using transgenic and knockout mice have implicated CISH in the development and function of T cells, NK cells, and myeloid cells, as well as the control of growth, mammary gland development, and appetite [10][11][12][13][14][15][16]. However, none of these studies assessed the impacts on erythropoiesis. To address this knowledge gap, this study investigated the potential physiological role of CISH in erythropoiesis through the analysis of CISH-deficient mice during primitive, steady-state and EPO-induced erythropoiesis. Collectively, these studies have identified a regulatory function for CISH in erythropoiesis.
CISH was highly expressed in the fetal liver. However, its ablation resulted in only minor perturbation of fetal erythropoiesis, reflected in an overall increase in total erythroblasts. EPOR signaling has been identified as a critical regulator of fetal erythropoiesis [1], suggesting enhanced EPOR signaling likely contributes to the phenotype observed. This is consistent with our previous study that demonstrated ablation of a CISH homolog in zebrafish resulted in increased embryonic erythropoiesis, including the EPOR-positive cell population [17].
Ablation of CISH resulted in mild macrocytic anemia in adult mice, which was confirmed in CISH KO mice on the C57/BL6 background but with no significant perturbation observed in CISH HET mice (Supplementary Table S1). Bone marrow erythropoiesis was disrupted, with a relative increase in pro-, basophilic, polychromatic, and orthochromatic erythroblasts at the expense of more mature cells, but with CFU-E and BFU-E decreased. RNAseq analysis identified a small number of genes differentially expressed in the bone marrow of CISH KO mice, the majority previously implicated in erythropoiesis or erythroid cells (Supplementary Table S2). This included decreased expression of Mst1r, which encodes a c-Met-related tyrosine kinase shown to mediate expansion of erythroblasts downstream of EPOR [23], Ryk, which encodes another tyrosine kinase that facilitates Wnt5a-induced HSC quiescence [24], and Aff4, whose product is part of a complex that mediates the transcriptional responses of hypoxia-inducible factor (HIF)1A [25]. Alternatively, increased expression was observed for Alox15, encoding arachidonate 15-lipoxygenase, the ablation of which leads to decreased numbers of hypochromic erythrocytes in mice [26], as well as Rpl29, encoding the 60S ribosomal protein L29 that is highly expressed in erythroid cells [27]. There was some elevation of serum EPO, although this failed to reach statistical significance (Supplementary Figure S2). EPO-induced erythropoiesis was also blunted in the bone marrow of CISH KO mice, with the more mature populations most affected. This tissue exhibited basal Cish expression in CISH WT mice, which was significantly increased by EPO. Collectively, these results are consistent with in vitro studies identifying CISH as an inducible feedback regulator of EPOR signaling [4,8], but with its absence leading somewhat surprisingly to suppression of erythropoiesis within the bone marrow that correlated with perturbation of several genes implicated in this process. In contrast, CISH KO mice exhibited relatively normal splenic erythropoiesis, with EPO-induced erythropoiesis in this tissue increased, and Cish expression also induced in this tissue by EPO in CISH WT mice. However, EPO-stimulated splenic erythropoiesis followed a normal pattern of differentiation, with evidence of increased proliferation (Supplementary Figure S3). This is consistent with compensation and/or heightened responses in the spleen during EPO-induced erythropoiesis.
Analysis in vitro has revealed that STAT5 mediates the induction of CISH expression by EPO [8], facilitated via tetrameric STAT5 binding sites present in the CISH promoter [28], while CISH participates in direct negative regulation of EPO-induced STAT5 by competing for binding sites on the EPOR [8]. In vitro erythroid differentiation has also been shown to be influenced by the level of STAT5 activation, with differentiation maximal at higher levels, whereas intermediate levels resulted in enhanced proliferation and increased progenitors [29]. Conversely, Stat5a −/− Stat5b −/− mice exhibited severe microcytic anemia [30]. Together, this suggests that the EPOR/STAT5/CISH pathway is functional in vivo. Generation and characterization of lineage-specific conditional CISH KO mouse lines would provide additional insights into which cells contribute to the phenotypes observed.
"Biology",
"Medicine"
] |
Identification of interturn faults in power transformers by means of generalized symmetrical components analysis
The paper deals with experimental identification of transformer internal faults, an important factor in the reliability and sustainability of power supply systems. The identification of transformer internal faults requires increasing the sensitivity of relay protection by calculating, from the transformer currents, the components most sensitive to interturn faults. In order to study internal faults in the transformer, a model was developed in Simulink MATLAB on the basis of the transformer constitutive equations. A transformer with short-circuited turns was simulated as a multiwinding transformer, and the calculation of the transformer parameters is provided. The model was applied to the analysis of transients in power transformers, such as interturn faults, transformer inrush, and faults in transformer connections. An analysis of power transformer internal faults by means of time-dependent symmetrical components of the currents is provided. These symmetrical components were calculated for the first harmonic of the current by discriminating the first harmonic with a low-pass filter and compensating elements implementing a phase shift. The described method allows calculation of the symmetrical components during transients and under nonsinusoidal conditions. Simulation results showed the advantage of instantaneous symmetrical components over other derived values. These components were implemented in relay protection algorithms for the identification of internal faults in transformers.
Introduction
Power transformers are an important part of any electric power system and significantly affect the sustainability and reliability of power supply. Transformer reliability depends on effective relay protection operation. The main problems in relay protection are failures to operate in the case of turn-to-turn faults in the transformer and unjustified tripping in the case of transients in the power system outside the protected zone [1], [2].
As a result of the growing energy capacity of modern industry [3], [4], [5], [6], the failure of relay protection in the case of an interwire fault will lead to significant damage to the transformer caused by fault currents and will result in high repair costs.
The major method for increasing the sensitivity of relay protection is the calculation, from the transformer currents, of components most sensitive to interwire faults. Examples of such components are symmetrical components, the dq0 transform, the wavelet transform, and others [7], [8], [9], [10], [11], [12]. In order to evaluate and improve relay protection algorithms, simulation of transformer transients is required.
This study is dedicated to the development of a transformer model in Simulink MATLAB that provides calculation of transformer transients in cases of internal faults and other transients, and provides the means to distinguish internal faults from other transients.
Methods
Internal short circuits in a transformer occur if the insulation of the transformer windings is damaged. In the case of main insulation damage, ground faults take place; in the case of longitudinal insulation damage, faults between turns of the winding occur. Interwire faults can occur between coils of the same or different phases. Short circuits between coils and wires of transformer windings are most common in the early stages of transformer faults.
In the case of an interwire fault, the transformer can be considered as a multiwinding transformer [13]. The structural model of an ideal single-phase transformer in the case of an interwire fault is presented in Fig. 1.
Interwire fault simulation
In the case of a short circuit between turns of the second winding, the faulted winding is represented as additional coupled windings, as in the multiwinding model described above.
Simulink MATLAB implementation
In accordance with the previously established set of transformer constitutive equations, a structural diagram of the calculation (Fig. 2) was developed.
As simulation of transformer behavior during transients in the electric grid is required, these transients were implemented by means of the SimPowerSystems BlockSet. In order to couple the calculation blocks in Simulink with models made with the SimPowerSystems BlockSet library, the transformer model was connected through Controlled Current Source and Voltage Measurement blocks.
The structural diagram of the connection of a three-phase delta-wye transformer is represented in Fig. 3. Each phase of the three-phase transformer is represented by a single-phase transformer.
The subsystems Phase_A, Phase_B, and Phase_C are structural diagrams of the calculation of the single-phase transformer constitutive equations.
Fault identification
Generalized symmetrical components analysis [14] covers the case of nonsinusoidal and asymmetric dynamic systems of three-phase variables forming the vector i(t) = [i_a(t), i_b(t), i_c(t)]^T, from which instantaneous components of the positive and negative sequences are obtained. These symmetrical components were calculated for the first harmonic of the current by discriminating the first harmonic with a low-pass filter and compensating elements implementing phase shifts of ±120°. The transfer factor of the phase-shifting link and the transfer factor of the low-pass filter are calculated for the circuit voltage frequency f_1 = 50 Hz.
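A rough numerical sketch of this decomposition is given below: the 50 Hz phasor of each phase current is estimated with a one-cycle sliding Fourier window (standing in for the low-pass filter and ±120° phase-shifting links of the paper), and the standard Fortescue combination then yields the positive- and negative-sequence amplitudes. The sampling rate and the test signals are assumptions, and this is not the Simulink implementation used by the authors.

```python
# Hedged sketch: first-harmonic phasor estimation plus Fortescue decomposition.
import numpy as np

F1 = 50.0                      # fundamental frequency, Hz
FS = 5000.0                    # sampling rate, Hz (assumed)
N = int(FS / F1)               # samples per fundamental cycle
A = np.exp(2j * np.pi / 3)     # 120-degree rotation operator

def fundamental_phasor(x: np.ndarray) -> np.ndarray:
    """50 Hz phasor of x at every sample, via a one-cycle sliding Fourier window."""
    n = np.arange(len(x))
    prod = x * np.exp(-2j * np.pi * F1 * n / FS)
    kernel = np.ones(N) / N
    return 2.0 * np.convolve(prod, kernel, mode="same")

def sequence_components(ia, ib, ic):
    pa, pb, pc = (fundamental_phasor(i) for i in (ia, ib, ic))
    i_pos = (pa + A * pb + A**2 * pc) / 3.0
    i_neg = (pa + A**2 * pb + A * pc) / 3.0
    return np.abs(i_pos), np.abs(i_neg)      # amplitudes of the sequence components

# Toy usage: a balanced set with a small negative-sequence contribution added
t = np.arange(0, 0.2, 1 / FS)
ia = np.sin(2 * np.pi * F1 * t) + 0.1 * np.sin(2 * np.pi * F1 * t)
ib = np.sin(2 * np.pi * F1 * t - 2 * np.pi / 3) + 0.1 * np.sin(2 * np.pi * F1 * t + 2 * np.pi / 3)
ic = np.sin(2 * np.pi * F1 * t + 2 * np.pi / 3) + 0.1 * np.sin(2 * np.pi * F1 * t - 2 * np.pi / 3)
i1, i2 = sequence_components(ia, ib, ic)
print(i1[N:2 * N].mean(), i2[N:2 * N].mean())   # roughly 1.0 and 0.1
```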
Results
Comparative analysis of the positive and negative sequences of the differential current was carried out for several operating modes, including turn-to-turn short circuiting of part of the secondary winding, external faults, and inrush currents. Results of the simulation (Fig. 5) demonstrate that an internal fault such as short-circuiting 5% of the turns of the secondary winding leads to asymmetrical three-phase differential currents, in contrast to the form of a healthy transformer excitation current, which is a part of the unbalanced differential current. Fig. 5. Three-phase differential currents due to turn-to-turn short-circuiting 5% of the secondary winding in the background of healthy transformer excitation currents. Figs. 6 and 7 show the instantaneous curves of the negative and positive components of the differential current (Fig. 6) and the change of the amplitudes of the corresponding sequences resulting from internal short circuiting (Fig. 7). As seen from Fig. 6, the negative sequence component (curve 2) of the unbalanced differential current is negligibly small compared to the positive one (curve 1). This is due to the filtering effect of decomposing the current into symmetrical components, which changes the harmonic content of the compared currents. To use the obtained results for internal fault identification, it is helpful to refer to a complex informative parameter derived from the positive- and negative-sequence components of the differential current. The dynamics of the parameter behavior during different kinds of faults is shown in Fig. 8. One range of the parameter corresponds to false differential currents caused by external faults, inrush currents, transformer overexcitation, etc.; in this case relay operation is to be prevented.
Discussion
The paper deals with the analysis of power transformer fault currents under different operating conditions using Simulink MATLAB. The analysis was carried out on the basis of decomposition of the analyzed currents into generalized symmetrical components by means of a sequence analyzer, which outputs signals in the form of dynamic phasors of each harmonic; these were used to reconstruct instantaneous values of the positive-, negative-, and zero-sequence currents. A low-pass filter with a correction link was proposed for calculation of the symmetrical components of the first harmonic of the differential current. Comparative analysis of the dynamic phasors shows the sensitivity of the amplitudes of the negative and positive sequences of the differential currents to the fault location and allows the formulation of an algorithm for detecting internal faults in the transformer windings.
"Engineering",
"Physics"
] |
Uncertainty of CT dimensional measurements performed on metal additively manufactured lattice structures
Metal parts with controlled lattice structures can effectively be produced via additive manufacturing (AM) technologies. However, one of the critical aspects of metal AM products is the dimensional and geometrical accuracy. X-ray computed tomography (CT) can be applied to enable advanced control methods that are fundamental for improving the geometrical characteristics and the quality of complex metal AM parts. In this work, Ti6Al4V lattice structures produced by laser powder bed fusion were analysed using a metrological X-ray CT system. Two different approaches for determining the uncertainty of dimensional measurements based on the CT reconstructed volumes were implemented and compared: the “ substitution method” and the “ multiple measurements ” approach. Advantages and limitations of both approaches are identified and discussed.
Introduction
Metal additive manufacturing (AM) technologies are increasingly used in several industrial sectors (e.g.aerospace, biomedical), especially thanks to the capability of fabricating components having complex geometry and high structural complexity [1].Among the AM processes, laser powder bed fusion (LPBF)which produces metal parts directly from computer aided design (CAD) data by the selective melting of successive layers of metal powdershas proven to be particularly suited to produce strong, lightweight and complex metallic lattice structures, whose fabrication is often not possible through conventional manufacturing techniques (e.g.machining and casting) [2].Lattice structures are defined as "three-dimensional geometrical arrangement composed of connective links between vertices (points) creating a functional structure" [3] and have a great potential, for example, in the field of biomedical implants, because implants with a porous structure show reduced stress shielding and improved osseo-integration in comparison to traditional fully dense structures [4].However, LPBF products are typically characterised by geometrical errors, internal defects and complex surface topographies, which may lead to mechanical properties degradation and product failure [5].In order to effectively improve the AM process, adequate measuring techniques and procedures are needed to provide accurate dimensional characterization of the AM products [6].In this context, X-ray computed tomography (CT) can be used as an advanced measuring technique that enables non-destructive dimensional analyses of both external and internal geometries and features, which in most cases are not viable with traditional measuring techniques [7].Moreover, CT is also capable of reconstructing the three-dimensional model of the scanned object with high surface digitization in a relatively short time and to perform simultaneously different kind of analyses, including coordinate metrology, porosity analysis and surface topography evaluation [8,9].This work, in particular, addresses the application of CT for dimensional quality evaluation of AM lattice structures.Such an application has already been studied in literature with several aims: to improve geometrical and mechanical control of LPBF lattice structures [10], to evaluate structural deviations of the as-built structures with respect to the as-designed geometry [11], and to improve finite element analyses of the stress distribution at the strut junctions with the benefit to base the simulations not on an ideal geometry but on the actual one [12].However, although CT was proven to be an effective tool capable of providing an information-rich geometrical description of AM lattice structures, the uncertainty of CT dimensional measurements is often critical and not easy to determine [13].The objective of this work is to investigate the possible application of two approaches for the uncertainty determination and the correction of systematic errors for CT measurements of metal AM lattice structures.The first approach is the so-called substitution method, which is well known for CT dimensional measurements, but limited for complex objects by the fact that it is based on the availability of calibrated workpieces similar to the measured workpieces [14].The second approach is based on the multiple measurements strategies, and is newly proposed for CT metrology in this work, adapting it from a method that is proposed and still under investigation for coordinate measuring machines (CMMs) [15].
Investigated samples
In this work, specimens produced by LPBF of Ti6Al4V alloy characterized by specific lattice designs (i.e.periodic structures determined by cubic cells) were used as case study.They were designed to have density and mechanical properties comparable to those of trabecular bone.Figure 1 (a) shows one of these specimens as an example.
CT scanning
The lattice structures introduced in Section 2.1 were scanned by a metrological CT system (Nikon Metrology MCT225), characterized by a micro-focus X-ray source with a minimum achievable focal spot size of 3 µm, a 16-bit X-ray detector with a 2000×2000 pixel grid, a temperature-controlled cabinet (20 ± 0.5 °C) and a maximum permissible error (MPE) for length measurements equal to (9 + L/50) µm, where L is the length in mm. The scanning parameters are reported in Table 1. Figure 1 (b) shows an example of a CT reconstructed volume and Figure 1 (c) reports two examples of cylindrical features extracted from the CT volume. The analysis and visualization software VGStudio MAX 3.2.3 (Volume Graphics GmbH) was used to perform dimensional measurements by associating geometrical elements (e.g. cylinders) with the features of interest, according to the measurands defined in Section 2.3.
Definition of measurands
The measurands investigated in this work were chosen considering dimensional characteristics that can be critical for mechanical and fatigue properties, i.e. the dimensions of the cylindrical features composing the lattice structure and the structural deviations with respect to the CAD geometry. Concerning the cylindrical features, horizontal and vertical elements were distinguished, since they can be characterized by different dimensions and surface roughness depending on the sample orientation with respect to the AM building direction. A total of 15 vertical and 15 horizontal features were measured, selected within three different regions of the sample: middle, top and bottom. Three circles were fit on each vertical and horizontal feature, as illustrated in Figure 1 (c), to compute their diameters. The same surface points used to fit such circles were then used to fit a cylinder for each feature. Three-dimensional distances were measured between the points obtained from the intersection of each cylinder axis with planes: the axes of vertical cylinders were intersected with parallel horizontal planes at specific locations, and the axes of horizontal cylinders were intersected with vertical planes at specific locations.
CT measurement uncertainty
Two different methods for assessing the uncertainty of CT dimensional measurements performed on lattice structures are investigated in this work: the substitution approach (described in Section 3.1) and the multiple measurements approach (described in Section 3.2).
Substitution approach
The first approach is based on the experimental procedure described in the guideline VDI/VDE 2630-2.1 [14], which is a well-known method for uncertainty assessment in CT dimensional metrology [16]. This approach requires the availability of calibrated samples (with sufficiently low calibration uncertainty) similar to the objects to be measured. In the case of AM lattice structures, an important limitation of the substitution approach is the unavailability of appropriate reference samples, especially due to the difficulty of performing accurate calibration measurements on highly rough structures which in most cases are inaccessible from the outside. For this reason, a reference sample was designed and produced to meet the similarity conditions with respect to the Ti6Al4V lattice structures produced by LPBF. The reference sample is an assembly of two bodies: a main body (see Figure 2 (a)) and a counterpart (see Figure 2 (b), where the counterpart and the main body are assembled together). Both bodies were machined starting from a bulk Ti6Al4V bar, via turning and ultra-precision milling operations. The main body is characterized by six pins with the same nominal diameter of 0.4 mm (comparable to the nominal diameter of the cylindrical features of the investigated lattice structures) and different heights ranging between 0.8 mm and 2 mm. The pins are disposed along a spiral path in order to randomize the relative distances between each pair of pins. The counterpart has the double function of (i) increasing the maximum thickness to be penetrated by the X-ray beam up to the maximum thickness of the lattice structures (to allow the use of the same CT scanning parameters) and (ii) allowing a double possibility for measuring the pins: as non-accessible internal features measured by CT when the counterpart is assembled, and as accessible external features measured by CMM when the counterpart is removed. The measurands were defined similarly to those defined in Section 2.3 for the lattice structures. Equally spaced circles were measured for each pin, and the surface points used to fit such circles were then used to fit a cylinder for each pin. Three-dimensional distances were measured between points obtained from the intersection of each cylinder axis with planes aligned on the base plane (i.e. the plane where the pins lie). In addition, the heights of the pins were measured as the distances between the base plane and the pins' top planes at the pins' axes. The calibration was performed according to the same definition of measurands, using a tactile CMM Zeiss Prismo Vast 7 (MPE = (2.2 + L/300) µm, where L is the length in mm). The same measurements were conducted with CT, on the CT reconstructed volumes obtained from repeated CT scans of the object (using the same scanning parameters reported in Table 1). For each measurement, the uncertainty was determined as recommended in the guideline VDI/VDE 2630-2.1 [14]. The similarity conditions requested by the guideline between the lattice structures and the reference object are well respected in terms of material, size and geometry, but not in terms of surface roughness and form errors, which are considerably larger for the lattice structure's cylindrical features than for the reference object's pins. For this reason, the surface roughness and form error contributions were taken into account among the uncertainty contributions, based on a previous work [17].
Figure 2: Representation of the reference object produced to apply the substitution approach: main body (a) and assembly with the counterpart (b).
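The bookkeeping behind the substitution approach can be summarized in a few lines of code. The sketch below is a simplified illustration, not the exact formulation of VDI/VDE 2630-2.1: it assumes the common form in which the expanded uncertainty combines the calibration uncertainty, the standard deviation of the repeated CT measurements and a workpiece (roughness/form) contribution, with the bias either corrected or added as an additional term. All function names, variable names and example numbers are illustrative only.

```python
import math

def substitution_uncertainty(ct_values, y_cal, u_cal, u_w, k=2.0, correct_bias=True):
    """Simplified substitution-method bookkeeping (illustrative, not the guideline's exact terms).

    ct_values : repeated CT results of the same measurand on the reference object [mm]
    y_cal     : calibrated (CMM) reference value [mm]
    u_cal     : standard calibration uncertainty [mm]
    u_w       : extra workpiece contribution, e.g. roughness/form errors [mm]
    """
    n = len(ct_values)
    mean_ct = sum(ct_values) / n
    # Type A contribution from the repeated CT scans
    u_p = math.sqrt(sum((v - mean_ct) ** 2 for v in ct_values) / (n - 1))
    bias = mean_ct - y_cal                      # systematic error w.r.t. calibration
    u = math.sqrt(u_cal**2 + u_p**2 + u_w**2)   # combined standard uncertainty
    if correct_bias:
        return {"corrected_value": mean_ct - bias, "U": k * u, "bias": bias}
    # if the bias is not corrected, treat it as an additional contribution
    return {"value": mean_ct, "U": k * math.sqrt(u**2 + bias**2), "bias": bias}

# Example with made-up numbers for a 0.4 mm pin diameter
print(substitution_uncertainty([0.4052, 0.4048, 0.4046], y_cal=0.4021,
                               u_cal=0.0011, u_w=0.0015))
```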
Multiple measurements approach
The second approach for the uncertainty determination (which is newly applied to CT metrology in this work) is an adaptation of the "multiple measurements" strategy that was previously proposed for CMMs [15] and is currently being refined within the European project EUCoM (Evaluating Uncertainty in Coordinate Measurement [18]). The main advantage of this approach is that it is not limited by the unavailability of reference samples, which is a common issue for very complex workpieces such as the lattice structures investigated in this work. The basic principle of the "multiple measurements" approach is to perform repeated measurements while re-orienting the object multiple times within the measurement volume, in order to stimulate the variation of geometrical errors and other errors (such as those originating from the image artefacts that typically influence CT scans [7]). The investigated object has to be representative of the objects that are typically inspected (for example, other lattice structures with the same material and comparable dimensions). Figure 3 shows schematically the five orientations chosen to scan the lattice structure in order to estimate the effect of CT geometrical errors and of image artefacts. Attention was given to the choice of "natural" alternative positions with good measuring conditions.
To establish traceability to the unit of length (the metre), the "multiple measurements" approach requires additional tests to be performed using calibrated length and form standards (which, in this case, are not required to be similar to the actual workpieces). To this end, a calibrated artefact characterized by six 1 mm spheres arranged on a carbon support, with different calibrated center-to-center distances, was used. The artefact was scanned at three different positions in the measuring volume, using the same parameters reported in Table 1.
The "multiple measurement" approach was applied also to the reference sample presented in Section 3.1, as if it was the object under investigation, to allow the evaluation of metrological compatibility [19] between CT measurements and reference measurements.
Figure 3: Representation of the five orientations of the lattice structure within the X-ray detector field of view: 5°, 15°, 90°, 185°, 195°.
Results and discussion
The measurement uncertainty was determined using the two approaches described in Sections 3.1 and 3.2 for all the investigated measurands and for all the samples (diameters and three-dimensional point-to-point distances for the lattice structure; diameters, three-dimensional point-to-point distances and pin heights for the reference object). Moreover, the uncertainty was determined in two different scenarios: in the first one, the bias was not corrected but was considered as an uncertainty contribution; in the second one, the bias was corrected. In the substitution approach, the bias is calculated as the difference between the average measured value and the reference value, while in the "multiple measurements" approach it includes the contributions of the scale error and of the probing error of size. The surface roughness effect was treated as a separate uncertainty contribution in this work, to better underline its impact on the uncertainty determination with the substitution approach. Figure 4 shows, as an example, the expanded uncertainty values (95 % confidence interval) obtained in the cases described above for the CT measurement of the lattice structure circle diameters (for horizontal cylindrical features). If the roughness and form error contributions are not taken into account, the "multiple measurements" approach leads to higher uncertainties than the substitution approach, especially in the case of uncorrected bias. The difference is far smaller when the bias is corrected instead of being included in the uncertainty. However, differently from the substitution approach, the "multiple measurements" approach is not based on the measurement of a calibrated reference object, hence it does not require adding the effect of form errors and surface roughness as a separate additional contribution. For this reason, in cases where the surface roughness and the form errors are particularly high, the substitution approach can lead to an overestimation of the uncertainty. Similar results were obtained for the other measurands, so the same considerations hold for them as well.
Besides the comparison between the two approaches, the "multiple measurements" approach was also applied to the reference object in order to assess the metrological compatibility between CT measurements and reference measurements, by computation of the normalized error E_n (Eq. 1) [19]:
E_n = |y_CT - y_ref| / sqrt(U_CT^2 + U_ref^2)    (1)
where y_CT and y_ref are the CT result and the reference result, and U_CT and U_ref are the corresponding expanded uncertainties. When E_n is below 1, a good agreement exists between the two compared results, while if E_n is above 1 the results are not in good agreement.
In the case of uncorrected bias, the E_n was found to be below 1 for each measurand. However, in the case of corrected bias, the E_n was below 1 in all cases except one. Moreover, the E_n values were observed to increase slightly after the bias correction; thus, the error correction might be non-optimal. Another open issue is related to the choice of the multiple orientations, because the relationship between the sample orientation and the impact of CT errors and artefacts on the measurement uncertainty has to be studied in more depth. Moreover, the chosen orientations must be varied depending on the geometry and dimensions of the object to be scanned, and this limits the possibility of defining a generalized approach with standard orientations. For example, in the case of high-aspect-ratio samples, a 90° rotation from one orientation to another might be impossible or inadequate, as the maximum thickness to be penetrated by X-rays could become too large. Another relevant aspect that emerged from the results obtained in this work is that the "multiple measurements" approach might overestimate or underestimate the uncertainty, depending on the specific measurement case. Consequently, future work is needed to refine the approach and to study how it can be adapted to CT measurements.
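As a concrete illustration of the compatibility check, the following sketch computes the normalized error for a list of measurands; it assumes the E_n definition given above (expanded uncertainties in the denominator) and uses invented numbers purely as an example.

```python
def normalized_error(y_ct, U_ct, y_ref, U_ref):
    """Normalized error E_n between a CT result and a calibrated reference result."""
    return abs(y_ct - y_ref) / (U_ct**2 + U_ref**2) ** 0.5

# Example: pin diameters measured by CT vs. CMM calibration (made-up values, mm)
cases = [
    ("pin 1 diameter", 0.4047, 0.0042, 0.4021, 0.0015),
    ("pin 2 diameter", 0.4012, 0.0042, 0.4030, 0.0015),
]
for name, y_ct, U_ct, y_ref, U_ref in cases:
    En = normalized_error(y_ct, U_ct, y_ref, U_ref)
    verdict = "compatible" if En < 1 else "not compatible"
    print(f"{name}: E_n = {En:.2f} -> {verdict}")
```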
Conclusions
This work describes an experimental investigation of two approaches to determine the uncertainty of CT dimensional measurements performed on complex AM lattice structures. Both experimental procedures delivered comparable uncertainty statements. Advantages and limitations of both approaches were pointed out. In particular, the main advantage of the "multiple measurements" approach is that it does not require the use of calibrated artefacts similar to the objects that are typically measured. This is especially interesting for AM components, which are typically characterized by very complex geometries, including non-accessible geometries and features that are difficult or even impossible to calibrate using conventional measuring techniques. Indeed, the fabrication of task-specific reference objects fulfilling the similarity requirements of the substitution approach may be difficult for complex structures and also expensive, due to the costs related to design, fabrication and calibration.
In addition, in the "multiple measurements" approach the effect of form errors and surface roughness (which are typically very high in AM parts) on the comparison between CT measurements and calibration measurements does not have to be taken into account as an additional separate contribution. In principle, this is an advantage with respect to the substitution approach, but further investigations are needed to better understand whether the "multiple measurements" approach gives sufficient weight to the effect of form errors and surface roughness.
The "multiple measurement" approach was applied also to the reference object developed in this work, to enable the evaluation of metrological compatibility through the computation of normalized errors.The normalized errors were found to be below 1 in almost all cases.The open issues of the "multiple measurement" approach were also discussed, including the possible nonoptimal error correction and the difficulty related to the choice of multiple orientations (which are difficult to standardize because should vary depending on the object geometry and dimensions).Future work is needed to improve the understanding of how the method should be applied to ensure reliable uncertainty determination and correction of systematic errors.In addition, since the experiments conducted in this work were limited to a specific geometry, further investigations are needed to extend the research to other case studies.
Figure 1 :
Figure 1: CT reconstructed volume of a Ti6Al4V LPBF lattice structure (a) and examples of cylindrical features (vertical = blue circle; horizontal = red circle) extracted from the CT model, where three circles were fit to compute their diameter (b).
Figure 4 :
Figure 4: Expanded uncertainty (95 % confidence interval) determined with the different approaches for the CT measurement of lattice structure circle diameters (nominally equal to 0.4 mm) in the case of horizontal cylindrical features.
Table 1 :
CT scanning parameters | 4,033.4 | 2020-02-01T00:00:00.000 | [
"Materials Science"
] |
Multiple Kernel Based Region Importance Learning for Neural Classification of Gait States from EEG Signals
With the development of Brain Machine Interface (BMI) systems, people with motor disabilities are able to control external devices to help them restore movement abilities. Longitudinal validation of these systems is critical not only to assess long-term performance reliability but also to investigate adaptations in electrocortical patterns due to learning to use the BMI system. In this paper, we decode the patterns of user's intended gait states (e.g., stop, walk, turn left, and turn right) from scalp electroencephalography (EEG) signals and simultaneously learn the relative importance of different brain areas by using the multiple kernel learning (MKL) algorithm. The region of importance (ROI) is identified during training the MKL for classification. The efficacy of the proposed method is validated by classifying different movement intentions from two subjects—an able-bodied and a spinal cord injury (SCI) subject. The preliminary results demonstrate that frontal and fronto-central regions are the most important regions for the tested subjects performing gait movements, which is consistent with the brain regions hypothesized to be involved in the control of lower-limb movements. However, we observed some regional changes comparing the able-bodied and the SCI subject. Moreover, in the longitudinal experiments, our findings exhibit the cortical plasticity triggered by the BMI use, as the classification accuracy and the weights for important regions—in sensor space—generally increased, as the user learned to control the exoskeleton for movement over multiple sessions.
INTRODUCTION
Brain Machine Interface (BMI) systems have attracted extensive attention in the past decade, because of their potential in improving human life, especially for those who are affected by motor disabilities. Since gait deficits are commonly associated with spinal cord injuries (SCI), limb loss, and neurodegenerative diseases, there is a need to investigate innovative therapies to restore gait in such patients. Exoskeletons have become prominent tools for the rehabilitation of SCI and stroke patients (Sale et al., 2012;Venkatakrishnan et al., 2014). BMIs have been deployed to infer the user's intent from his/her brain activity to generate output signals to control powered exoskeletons for upper and lower limb rehabilitation (Noda et al., 2012;Contreras-Vidal and Grossman, 2013;Kilicarslan et al., 2013;French, 2014;Venkatakrishnan et al., 2014). In Presacco et al. (2011), Presacco et al. showed decoding of gait kinematics during treadmill walking from EEG of able-bodied subjects with accuracies comparable to that from a similar study in non-human primates with electrodes implanted in their brains (Fitzsimmons et al., 2009). Further, in Kilicarslan et al. (2013), a paraplegic subject's motion intentions were accurately decoded using Locality Fisher Discriminant Analysis and a Gaussian Mixture Model (LFDA-GMM) for two different gait tasks (i.e., repeated walking-turning right-turning left motions and sit-rest-stand motions). The model enabled the closed-loop EEG-based BMI system to control a robotic exoskeleton (NeuroREX) in real-time, resulting in independent walking for the paraplegic user.
To control a device via BMI, different brain activity patterns produced by a user need to be accurately identified by a neural interface system and translated into appropriate commands. Discrete decoding (neural classification) of intent from EEG signals can be considered as a pattern recognition problem, and advanced machine learning techniques are needed to accurately translate the brain electrical activities into meaningful control commands. Many machine learning methods [e.g., linear discriminant analysis (LDA), support vector machine (SVM), Bayesian classifiers] have been applied for classifying EEG signals in different BMI applications (Kilicarslan et al., 2013; Niazi et al., 2013; Leamy et al., 2014; Lew et al., 2014; Hortal et al., 2015; Jiang et al., 2015). However, most of them serve as a "black box" in that we do not know how the brain activity changes during long-term BMI use, nor how the brain regions contribute to the classification process while people perform different tasks. The human brain consists of over 100 billion cells, typically divided into regions by neuroanatomists. Different regions have their specific functionalities while coordinating together to accomplish everyday tasks. Moreover, the specific contributions of brain regions to classification may change due to learning a BMI. Therefore, it is important to identify and track these changes to increase our understanding of brain function, BMI learning and performance. In that context, the hypothesis of this research is that different brain regions contribute differentially to BMI learning and control of robot-assisted lower-limb movements; we are interested in learning the importance of these regions for neural classification of gait states.
Kernel learning methods have been effectively applied to many machine learning problems, including feature selection, data regression and classification of EEG signals (Garrett et al., 2003; Lal et al., 2004; Lotte et al., 2007). SVM is one of the most popular kernel methods for pattern recognition. However, a problem with using the standard SVM in BMI applications is that it provides no insight about the importance of distinct features, and thus offers little knowledge about the biophysical properties of the relevant features used in decoding/classification. Multiple kernel learning (MKL), which makes use of a combination of basis kernels to represent different types of features or data, has been shown to outperform traditional single-kernel machines in different aspects (Sonnenburg et al., 2006; Tian et al., 2012; Samek et al., 2013; Li et al., 2014). The main advantage of using MKL over SVM is that MKL can simultaneously learn the classifier and the optimal weights for the basis kernels. In this paper, we investigate and make use of this property to simultaneously decode gait states from multi-channel EEG signals and learn the relative importance of different scalp brain areas. In particular, we build a composite kernel based on a linear combination of basis kernels, in which each basis kernel is represented by a group of electrodes corresponding to selected regions of interest (ROIs), and consequently contributes unique biophysical information.
The primary goal of this research is to show the feasibility of simultaneously classifying the pattern of the user's internal gait states (e.g., stop, walk, turn left, turn right) from the EEG signals and learning the relative importance of different scalp brain areas. Previous studies have shown that low delta band (0.1-2 Hz) EEG contains intended movement-related information for decoding the kinematics of the lower limbs or gait states (Presacco et al., 2011, 2012; Jorquera et al., 2013; Kilicarslan et al., 2013; Bulea et al., 2014; Luu et al., 2016). For example, in Presacco et al. (2011), Presacco et al. (2012), and Luu et al. (2016), it was shown that delta band EEG contains information about gait movement kinematics that can be decoded using Wiener or Kalman filters. In Kilicarslan et al. (2013), Jorquera et al. (2013), and Bulea et al. (2014), it was shown that movement-type (e.g., "stop," "go," etc.) classifiers can be designed based on delta band EEG signals. Another study (Velu and de Sa, 2013) showed that features corresponding to frequencies less than 2 Hz were the most heavily weighted during single-trial classification of walking and pointing direction. Inspired by the above findings, in this study we utilize delta band (0.1-2 Hz) EEG to build our basis kernels (feature matrices) for neural classification of gait states.
The other goal of this research is to compare the brain regions employed for classifying movement intents from able-bodied subjects and individuals with spinal cord injury (SCI), given differences in neural activity across these populations. Studies have shown that SCI can cause widespread and sustained brain inflammation that leads to progressive loss of brain cells in key brain regions, with associated cognitive problems (Wu et al., 2014a,b). Cramer et al. found that in patients with complete SCI many features of normal motor system function are preserved; however, the volume and patterns of activation and the modulation of function with change in task are abnormal and absent, respectively (Cramer et al., 2005). In this preliminary study, we collected EEG data from an SCI volunteer over multiple sessions to compare the classification results with those of an able-bodied subject in terms of the important brain regions during learning.
The remainder of the paper is organized as follows. Section 2 introduces our methodology for region importance learning, including the experimental protocol, data acquisition, processing and analysis. In particular, we introduce the MKL algorithm and how we apply it to learn the importance of different brain regions. In Section 3, we validate the efficacy of the proposed work via experiments using four-class, single-session and two-class, longitudinal EEG data. Section 4 presents our discussion of the experimental results. Finally, concluding remarks are provided in Section 5.
Experimental Protocols and Tasks
The experimental protocols were approved by the Institutional Review Board of the University of Houston. After giving written informed consent, an able-bodied subject and an SCI subject (both male) were fitted with a wearable powered exoskeleton (REX, REX Bionics Ltd, New Zealand) and an EEG-based BMI (Kilicarslan et al., 2013). For data collection, users were asked to perform motor imagery of locomotive movements while following and completing a path marked on the ground, with the robot controlled remotely by an operator. This allowed synchronized motion and EEG data to be collected while securing user engagement. There were two tasks in this research. Task 1 was a four-class, single-session task in which the subjects performed different movements, i.e., walking forward, turning right, turning left and stopping, following the marked path on the ground. In Task 2, subjects only executed walking and stop motions according to audible beep instructions. Each trial contained at least 10 stop-to-walk or walk-to-stop transitions. The subjects were trained over multiple sessions within a 30-day period to control the exoskeleton to perform these motions.
Data Acquisition and Processing
Multichannel active-electrode EEG (64 channels) was recorded by combining two 32-channel amplifiers (actiCap system, Brain Products GmbH, Germany). The electrodes were placed and labeled in accordance with the extended 10-20 international system. A wireless interface (MOVE system, Brain Products GmbH, Germany) was used to transmit data (sampled at 100 Hz) to the host PC. Figure 1 shows a volunteer controlling NeuroRex via the EEG BMI system.
We took a careful approach with regard to the possibility of motion artifacts aiding decoding. First, we used good engineering measurement practices (Nathan and Contreras-Vidal, 2015), including careful EEG cap set-up and a medical-grade mesh to fixate the individual electrode wires that can induce motion artifacts; second, we deployed a wireless active-electrode EEG system to increase the signal-to-noise ratio (signals are amplified directly at the electrode location) and help mitigate motion artifacts; third, we have shown that the delta band EEG contains negligible motion artifacts at the gait speeds tested in the study (Nathan and Contreras-Vidal, 2015); fourth, we applied Artifact Subspace Reconstruction (ASR, an automated artifact rejection method; Mullen et al., 2013; Bulea et al., 2014) and compared classification accuracies with and without ASR to assess the potential effects of motion artifacts, but did not find significant changes in classification accuracies, suggesting that motion artifacts, if any, did not affect decoding. The acquired data were then filtered in the 0.1-2 Hz range using a second-order Butterworth filter and standardized (z-scored) in a data preprocessing step.
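For illustration, the band-pass filtering and standardization step described above could look roughly like the following sketch; it assumes a zero-phase application of the second-order Butterworth filter (the paper does not state whether filtering was causal or zero-phase) and uses NumPy/SciPy purely as an example implementation.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess_eeg(eeg, fs=100.0, band=(0.1, 2.0)):
    """Band-pass filter each channel to the delta band and z-score it.

    eeg : array of shape (n_channels, n_samples), sampled at fs Hz
    """
    b, a = butter(2, band, btype="bandpass", fs=fs)   # second-order Butterworth
    filtered = filtfilt(b, a, eeg, axis=1)            # zero-phase filtering (assumed)
    # Standardize (z-score) each channel
    mu = filtered.mean(axis=1, keepdims=True)
    sd = filtered.std(axis=1, keepdims=True)
    return (filtered - mu) / sd

# Example: 64 channels, 10 s of data at 100 Hz
eeg = np.random.randn(64, 1000)
delta = preprocess_eeg(eeg)
```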
Region Importance Learning Framework
We conducted separate experiments for the two tasks described above to interpret the use of kernel weights in MKL as an indicator of region importance in the classification of the user's movement intention from EEG signals. After the signals were pre-processed, the 64 channels were divided into 13 ROIs as described in Section 2.4.1. The features were then extracted by applying a 400 ms sliding window on each channel, with a one-sample shift (10 ms) each time, to acquire the amplitude modulations, and these were concatenated into a feature matrix. To better reflect real-world data processing, we divided the labeled samples into two halves for supervised learning. We randomly selected 500 samples from the first half of the labeled samples for training, and the remaining half was used for testing and evaluation. The testing process was repeated 10 times, and the metric for evaluating the classification results is the average overall accuracy (OA). The flowchart of the proposed framework is shown in Figure 2.
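A sliding-window feature extraction of this kind might be sketched as follows. The window length (400 ms = 40 samples at 100 Hz) and the 10 ms shift come from the description above; the ROI channel grouping and all names are illustrative placeholders, not the paper's actual montage.

```python
import numpy as np

def sliding_window_features(delta_eeg, roi_channels, win=40, step=1):
    """Build one feature matrix per ROI from delta-band EEG amplitudes.

    delta_eeg    : array (n_channels, n_samples), preprocessed delta-band EEG
    roi_channels : dict mapping ROI name -> list of channel indices
    win, step    : window length and shift in samples (40 and 1 at 100 Hz
                   correspond to 400 ms and 10 ms)
    Returns a dict ROI -> array (n_windows, win * n_roi_channels).
    """
    n_samples = delta_eeg.shape[1]
    starts = range(0, n_samples - win + 1, step)
    features = {}
    for roi, chans in roi_channels.items():
        rows = [delta_eeg[chans, s:s + win].reshape(-1) for s in starts]
        features[roi] = np.vstack(rows)
    return features

# Illustrative ROI definition (channel indices are placeholders)
rois = {"MFC": [4, 5, 6], "RFC": [7, 8, 9], "LFC": [1, 2, 3]}
feats = sliding_window_features(np.random.randn(64, 1000), rois)
```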
Brain Scalp Regions
The analysis and interpretation of EEG measurements depend upon the correspondence of electrode scalp coordinates to structural and functional regions of the brain (Giacometti et al., 2014;Gentili et al., 2015). For example, Giacometti et al. (2014) showed that EEG electrode proximity maps intersect with EEG sensitivity maps of the human brain, allowing the use of proximity maps to inform the cortical origin of scalp recordings. Furthermore, intersection of structural and functional regions of the brain with cortical proximity parcellations can be used to show the correspondences between scalp electrode coordinates and potential regions of interest in the human cortex (Giacometti et al., 2014).
Kernel-Based Learning Methods Foundation
Kernel-based learning methods have been widely applied to various machine learning tasks. The reason for their popularity is that they easily extend linear classifiers to nonlinear decision surfaces using the "kernel trick." All kernel methods use the "kernel trick" to map the data X = {x_1, x_2, ..., x_N} from the input space to a higher dimensional feature space H (i.e., a Reproducing Kernel Hilbert Space (RKHS)) via Φ: R^d → H, x ↦ Φ(x), so that data that are non-linear in the original space become linearly separable in the feature space. The kernel function is defined as K(x_i, x_j) = ⟨Φ(x_i), Φ(x_j)⟩, where ⟨·, ·⟩ is the inner product of two vectors.
SVM is one of the most popular kernel-based classifiers (Vapnik and Vapnik, 1998). The underlying principle of SVM is to simultaneously minimize the empirical classification error and maximize the geometric margin of the linear separation surface. The optimization problem for SVM classification, in its standard soft-margin form, is
min_{w,b,ξ} (1/2)‖w‖^2 + C Σ_i ξ_i, subject to y_i(⟨w, Φ(x_i)⟩ + b) ≥ 1 − ξ_i, ξ_i ≥ 0,
where C is a constant which controls the balance between the margin and the empirical loss, ξ_i are slack variables which measure the degree of misclassification, and ‖w‖^2 is inversely related to the margin of the separating hyperplane. In most kernel-based learning methods, performance is greatly affected by the choice of kernel function and the related kernel hyper-parameters. The standard SVM only utilizes a single kernel function with fixed parameters, which necessitates model selection for good classification performance. Besides, using a fixed kernel may be suboptimal, since different sources of data may have different representations of the phenomena of interest, and hence their similarity should not be measured via the same kernel function.
Multiple Kernel Learning (MKL)
In recent works, MKL has been shown to outperform traditional single-kernel SVMs in many cases, especially for classification and feature fusion problems (Sonnenburg et al., 2006; Tian et al., 2012; Samek et al., 2013; Li et al., 2014; Zhang et al., 2015). In this paper, we employ MKL to infer information about electrode relevance by observing the kernel weights learned while training the machine for classification. Each "group" of features is assigned a basis kernel, and the linear combination of all basis kernels is optimized through gradient descent on the SVM objective function. The optimization of multiple kernels works as a feature selector providing a weighted ranking of the importance of its components. We consider the above 13 ROIs as generating a 13-source input. For a specific source p, the combined kernel function K between two samples x_i^p and x_j^p is
K(x_i^p, x_j^p) = Σ_{m=1}^{M} d_m K_m(x_i^p, x_j^p), with d_m ≥ 0 and Σ_m d_m = 1,
where M is the number of candidate basis kernels representing different kernel parameters, K_m is the m-th basis kernel and d_m is its weight. Weights can be estimated through cross-validation, which is computationally demanding when the number of basis kernels (i.e., feature sets or data sources) is large. An alternative strategy, which we adopt in this work, is based on the SimpleMKL algorithm. It optimizes the weights automatically within the learning problem by using a gradient descent approach. Based on the SVM optimization problem, the SimpleMKL learning problem can be expressed as
min_d J(d), with J(d) = min_{w,b,ξ} (1/2) Σ_m (1/d_m)‖w_m‖^2 + C Σ_i ξ_i,
subject to y_i(Σ_m ⟨w_m, Φ_m(x_i^p)⟩ + b) ≥ 1 − ξ_i, ξ_i ≥ 0, d_m ≥ 0, Σ_m d_m = 1,
where Φ_m(x_i^p) is the kernel mapping function of x_i^p, w_m is the weight vector of the m-th decision hyperplane, C is the regularization parameter controlling the generalization capabilities of the classifier, and ξ_i is a positive slack variable.
The objective function is a constrained optimization problem, which can be transformed into a dual form L(α_i, α_j) using Lagrange multipliers α_i, α_j. The kernel weight d_m can then be optimized by updating it along the gradient descent direction of L(α_i, α_j) as d ← d + γD, where γ is the step length, D is the descent direction of L(α_i, α_j), and d = [d_1, d_2, ..., d_M]^T is the kernel weight vector. Following this optimization procedure, after several iterations SimpleMKL provides the optimal kernel weight for each basis kernel, which indicates the importance of a particular brain region in the classification of gait states.
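The core idea (one basis kernel per ROI and per RBF width, combined with non-negative weights that sum to one, where the learned weights are read as region importance) can be sketched as below. This is not the SimpleMKL optimizer itself, which alternates SVM training with a gradient step on d followed by projection onto the simplex; the sketch only shows how the combined kernel is assembled and how a single toy weight update might look. All function and variable names are placeholders, and the exact RBF form is an assumption.

```python
import numpy as np

def rbf_kernel(X, Y, sigma):
    """RBF basis kernel K(x, y) = exp(-||x - y||^2 / (2 sigma^2)) (form assumed)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def build_basis_kernels(roi_features, sigmas=(0.1, 0.5, 1.0, 1.5)):
    """One basis kernel per (ROI, sigma) pair, computed on the training samples."""
    return [rbf_kernel(F, F, s) for F in roi_features.values() for s in sigmas]

def combined_kernel(kernels, d):
    """Weighted sum of basis kernels with weights on the simplex."""
    return sum(w * K for w, K in zip(d, kernels))

def toy_weight_update(d, grad, step=0.1):
    """Illustrative update: gradient step, clip to non-negative, renormalize to sum 1."""
    d = np.clip(d - step * grad, 0.0, None)
    return d / d.sum()

# Illustrative use: start from uniform weights over all basis kernels
roi_features = {"MFC": np.random.randn(50, 120), "RFC": np.random.randn(50, 120)}
kernels = build_basis_kernels(roi_features)
d = np.full(len(kernels), 1.0 / len(kernels))
K = combined_kernel(kernels, d)   # would be passed to an SVM solver with a precomputed kernel
```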
Parameter Settings
In the experiments, RBF kernels of the form K(x_i, x_j) = exp(−‖x_i − x_j‖^2/(2σ^2)) were used, with relative width parameter σ. In the multiple kernel setting, we did not select a specific kernel parameter; instead, we defined a set of different values as candidate input parameters. Several basis kernels with different values of σ can be built for each source of input; however, the number of parameters should be kept small to reduce the computational complexity and memory requirements. In particular, four basis kernels with σ = [0.1, 0.5, 1, 1.5] were considered for all sources. This range of values was found to be reasonable after applying kernel alignment (Shawe-Taylor and Kandola, 2002) using cross-validation. The penalty parameter C was then selected by cross-validation in the range [2^−1, ..., 2^15]. For further information on the MKL experimental settings, we refer readers to Zhang et al. (2015). All experiments were implemented in Matlab R2014a using the SimpleMKL toolbox (Rakotomamonjy, 2008).
Four-Class, Single Session Classification Results
First, we compare the kernel weights optimized by SimpleMKL algorithm for defined ROIs from the able-bodied subject and the SCI subject in a four-class task. Four motion classes for classification are walking forward, turning left, turning right and stop. The boxplots and topoplots of optimized kernel weights for different ROIs are shown in Figure 4. The average classification accuracies were 74.5% and 68.4% for the able-bodied and the SCI subject, respectively.
From the results, it is observed that the fronto-central scalp regions (MFC, RFC) have the highest weights among all ROIs; these include scalp areas associated with motor planning and the lower-limb neural representation (Leeb et al., 2013). Interestingly, for the able-bodied subject the MFC ROI showed the highest relevance to gait decoding, with RFC being the closest area in importance. In contrast, for the SCI subject the order of importance was reversed, with RFC showing the highest relevance followed by MFC. We also note that for the SCI subject, ROIs LFC, LCP, MCP, and RCP also showed relatively higher weights than for the healthy control subject, while the remaining ROIs had low weights for both subjects. Clearly, the cortical representation of the gait movements was more compact and stronger for the able-bodied subject than for the SCI user. These results demonstrate that MKL can be efficiently used to infer the importance of different groups of features, and they suggest different roles in the representation of gait for different scalp brain areas.
Further, we give some insight into the class-wise results for the different movement intentions. We show the confusion matrices in terms of class-wise accuracies and misclassification rates in Figure 5. Generally, the stop intention is the most difficult to decode; it was misclassified as walking forward in many situations. Turning right always has a high accuracy for both the able-bodied and the SCI subject compared to the other classes. We note that all class-wise accuracies are above chance level, which is 25% for this problem.
Two-Class, Multiple Sessions Classification Results
Second, we conducted a longitudinal experiment for the two-class (i.e., walk and stop) classification problem. We quantified electrode relevance changes across sessions to examine neural signatures that may indicate the cortical plasticity triggered by BMI use. We first plot the weight changes along 9 sessions over a period of 30 days for the able-bodied subject (a different subject than in Task 1) and the SCI subject (the same subject as in Task 1) in Figures 6 and 7, respectively. As depicted in the scalp maps, the weights change dramatically in the first several daily sessions, while becoming more stable in the later sessions. Similar to the previous results, the frontal scalp regions get the highest weights among all ROIs after training the user to control the exoskeleton for several sessions. Specifically, for the SCI subject, RFC (ROI 4) has the highest weight, while LCP (ROI 5) also has a relatively high weight. For the able-bodied subject, the most important region is ultimately MFC (ROI 3). Thus, the SCI subject used different brain regions to operate the BMI system when compared with the able-bodied user.
Since ROIs 4 and 5 and ROI 3 were determined to be the most significant regions for classifying gait states for the SCI subject and the able-bodied subject, respectively, we further evaluated the overall accuracy and the kernel weights for these ROIs as a function of session. The linear fits of the relation between overall accuracy (or the weights for the selected ROIs) and daily BMI session are shown in Figures 8 and 9 for the SCI subject and the able-bodied subject, respectively. From the results, we can see that the classification accuracy generally increases as a function of session. At the same time, the weights for ROIs 4 and 5 (ROI 3) also showed an increasing trend across sessions over the 30-day period. We calculated the R² and p-value as indicators of how well the data fit the regression line and of the significance of the results with respect to the hypothesis, respectively.
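A session-wise trend analysis of this kind is a simple linear regression; the sketch below shows how the slope, R² and p-value could be obtained with SciPy, using invented accuracy values purely as an example (not the study's data).

```python
import numpy as np
from scipy.stats import linregress

sessions = np.arange(1, 10)            # 9 daily sessions over a ~30-day period
accuracy = np.array([0.71, 0.74, 0.78, 0.77, 0.81, 0.84, 0.83, 0.87, 0.90])  # made-up values

fit = linregress(sessions, accuracy)   # same analysis applies to the ROI kernel weights
print(f"slope = {fit.slope:.3f} per session, "
      f"R^2 = {fit.rvalue**2:.2f}, p = {fit.pvalue:.3g}")
```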
DISCUSSION
A previous study (Kilicarslan et al., 2013) has shown the feasibility of classifying user movement intentions using EEG signals in the delta band (0.1-2 Hz) based on a GMM classifier and achieved high offline evaluation accuracies for 3-class tasks. In this study, we extend the study of intended motions from three classes to four classes and design the research as a longitudinal study. The experimental results demonstrate that, by properly weighting the importance of the features, MKL can be used as an efficient decoder to predict the user's movement intentions. For the tested subjects, the overall accuracies reached above 90% for the two-class classification task and above 65% for the more complicated four-class classification task. Compared to some commonly used machine learning approaches (i.e., Bayesian classifiers, LDA, SVM) in BMI, MKL has the following advantages. (1) Unlike LDA and Bayesian classifiers, MKL does not need to make assumptions about the data distribution. MKL is a member of the kernel learning methods, which utilize a linear combination of kernels and transform the original data into an appropriate (kernel) feature space. Thus, all beneficial properties (e.g., optimality) of linear classifiers are maintained, while MKL is also efficient when the data are non-linear in the input space. (2) MKL is a robust learning method in high dimensional spaces (Bach, 2008). In Kilicarslan et al. (2013), a sliding window was used on all 64 channels to extract features, which resulted in a 1,280-dimensional space, and a dimensionality reduction technique was required to reduce the dimensionality of the data before decoding by GMM. Similar to the feature extraction step in Kilicarslan et al. (2013), we applied a 400 ms sliding window to extract the EEG delta band amplitude as input to the classifier. Differently, we divided the channels into different groups and extracted features from each group. Thus, the resulting dimension for each group of data is at most 240, which is much lower than 1,280, and a dimensionality reduction method is not necessary for classification by MKL.
FIGURE 6 | Scalp maps of weights along 9 sessions for the able-bodied subject in Task 2.
FIGURE 7 | Scalp maps of weights along 9 sessions for the SCI subject in Task 2.
(3) MKL can also be used to infer the importance of different groups of features, which is not feasible with the other machine learning methods mentioned above. The weight for each group is initialized uniformly at the beginning and optimized during the gradient descent in the MKL algorithm. MKL ranks the sets of features according to how meaningful they are for solving the classification problem, and the resulting weights indicate the importance of different scalp brain areas in the representation of movement.
Comparing the results from the SCI subject and the able-bodied subject, we observe the most important brain region changing from the midline fronto-central area to the right fronto-central area in both tasks. This could be due to the loss of brain cells and degraded cerebral cortex dynamics or to the lack of afferent input after spinal cord injury. For example, changes in movement-related cortical potentials have been noted after SCI and correlated with the severity of the injury (Boord et al., 2008; Gourab and Schmit, 2010). Moreover, altered spontaneous neuronal activity following SCI has been characterized by a shift in the dominant spectral power peak toward lower frequencies, including in primary and secondary somatosensory cortices (Tran et al., 2004; Sarnthein et al., 2006). Note that the number of subjects who participated in the study was limited, so there might be individual variation in the classification results. However, as the classification results were based on a standard cross-validation procedure, we believe the proposed approach and model can be generalized to other subjects.
In the longitudinal experiments, we found that the subjects were adapting to the BMI gait task in the first several sessions, so the brain regions used for neural classification were not stable, which was reflected in the moderate classification performance. After several sessions of training, as the subjects learned to control the exoskeleton for movement, we observed that the channels employed for movement classification converged to specific regions: the midline fronto-central areas for the able-bodied subject and the right fronto-central/left centro-parietal areas for the SCI user. In addition, the classification accuracy generally increased across sessions, and interestingly the weights for the important regions also increased. This demonstrates the cortical plasticity triggered by BMI use, as the user gradually learns to control the exoskeleton for movement.
CONCLUSION
In this paper, we presented the feasibility of simultaneously classifying the pattern of the user's internal gait states from EEG signals and learning the relative importance of different scalp brain areas based on the MKL algorithm. MKL has the advantage of learning the classifier and the optimal kernel weights simultaneously. We investigated this property and applied the MKL classifier to infer the relative importance of different groups of features (different sources of information) in a BMI application to classify the user's motion intention from EEG signals. The experimental results demonstrated that the frontal/fronto-central regions were the most important regions for classifying gait states of the tested subjects, which is consistent with the brain regions hypothesized to be involved in the control of lower-limb movements. By comparing the results from the SCI subject and the able-bodied subject, the important regions were observed to change, which could be due to the loss of brain cells and degraded cerebral cortex dynamics or to the lack of afferent input after spinal cord injury. In the longitudinal experiment, as the user learned to control the exoskeleton for movement over multiple sessions, the classification accuracy increased and the weights for the important regions stabilized. These findings suggest cortical plasticity triggered by BMI use, which will be investigated further in future studies.
AUTHOR CONTRIBUTIONS
YZ developed the learning method, processed and analyzed the data, and wrote the manuscript. SP supervised the development of the work and helped with data interpretation and manuscript editing. AK helped to acquire and interpret the data. JC supervised the development of the work and helped with data interpretation, manuscript editing and evaluation.
FUNDING
This work was supported in part by the NIH Award R01 NS075889, Mission Connect-A TIRR Foundation. | 6,476.4 | 2017-04-03T00:00:00.000 | [
"Computer Science"
] |
Ultrafast photoinduced electron transfer in coumarin 343 sensitized TiO2 colloidal solution
Photoinduced electron transfer from organic dye molecules to semiconductor nanoparticles is the first and most important reaction step in the mechanism of the so-called "wet solar cells" [1]. The time scale between the photoexcitation of the dye and the electron injection into the conduction band of the semiconductor colloid varies from a few tens of femtoseconds to nanoseconds, depending on the specific electron transfer parameters of the system, e.g., the electronic coupling or the free energy values of donor and acceptor molecules [2-10]. We show that visible pump/white light probe spectroscopy is a very efficient tool to investigate the electron injection reaction, allowing simultaneous observation of the relaxation of the excited dye, the injection process of the electron, the cooling of the injected electron and the charge recombination reaction.
INTRODUCTION
Photoinduced interfacial electron transfer in dye sensitized colloidal semiconductors is a process of fundamental importance. For example, the operation of photo-electrochemical devices with high conversion efficiency is based on that principle [1]. A high quantum yield is only obtained if the reaction rate for charge separation is much higher than the rates of all other competing reactions (e.g., radiative and non-radiative deactivation processes). Therefore, it is of crucial importance that charge injection occurs on an ultrafast timescale. Electron transfer reactions on the femtosecond timescale for dye/TiO2 systems have been observed recently by ultrafast laser spectroscopy [2-4,7-11]. Near/mid infrared absorption and fluorescence upconversion measurements have been performed by several groups on dye/TiO2 systems [4,8]. Here we report femtosecond time resolved transient absorption spectra over a wide spectral range covering the near UV and visible region, with high temporal resolution, using a white light continuum as probing light in an excite-and-probe experiment. In contrast to fluorescence upconversion techniques, where only the population density of the photoexcited state of the dye can be probed, transient absorption measurements allow one to distinguish between electron transfer and competing quenching mechanisms, e.g., intermolecular energy transfer between sensitizer molecules. They also provide necessary additional information compared to near- or mid-IR investigations, where only the electron in the conduction band of the semiconductor can be probed. In this study we present data for the laser dye coumarin 343 adsorbed on the surface of TiO2 nanocrystals (Figure 1, inset). We are able to assign the various spectral features to ground state bleaching, cation formation and absorption of the electron in the conduction band, and we report fast interfacial electron transfer on a time scale faster than 100 fs. The experimental setup used for our investigations is described in [12,13].
EXPERIMENTAL METHODS
Coumarin 343 was used as obtained from Kodak. Solutions of colloidal TiO2 were prepared as described in [5]. The absorption spectrum of the dye was only slightly affected upon adsorption on the TiO2 surface (Figure 1). The sensitized colloidal solution was circulated through a cuvette during the measurement to prevent accumulation of photoproducts.
For the transient absorption measurements we use a home-built 20 Hz Ti:sapphire laser system with a pulse width of about 100 fs (FWHM), a pulse energy of 600 µJ and a central wavelength of 870 nm. The main part of the energy is used for second harmonic generation to provide pulses at a wavelength of 435 nm. These pulses were directly used to excite the coumarin and the coumarin/TiO2 samples.
Recently, spectrally tunable excitation pulses could be obtained by pumping a nonlinear optical parametric amplifier (NOPA) with the 435 nm pulses, which allows investigation of dye molecules over a broad spectral range (e.g., alizarin/TiO2 with a maximum absorption at 495 nm). Furthermore, the pulses generated this way can be compressed down to less than 20 fs and provide the experimental basis for directly observing the fast electron injection processes [14].
A white light continuum for the probing branch of the setup is generated by focussing the 870 nm or the 435 nm light pulses into a 2 mm sapphire plate. The continuum generated with 435 nm light pulses covers a spectral range from 320 nm to 580 nm; the one generated at 870 nm extends from 450 nm to 1100 nm. A spectral interval of about 10 nm around the generating wavelength of the white light cannot be used due to temporal energy fluctuations. In order to prevent pulse lengthening by increasing chirp, only mirror optics are used.
The white light is split into two parts: one is used to monitor the reference energy distribution of the white light, the other to probe the sample. The two parts are detected by two independent spectrometers, each with a 42-channel array of photodiodes. The arrays are read out by a computer working in single-shot detection mode. The absorption is defined by the ratio of sample and reference intensity. To calculate the absolute value of the absorption change, every fifth pump pulse is blocked by a shutter and the absorption of the non-excited sample is recorded. The continuous control of the absorption without excitation allows the detection of long-term drift effects of the laser or degradation of the sample. This setup provides a high signal-to-noise ratio. Throughout the entire investigated spectral range, the width (FWHM) of the cross correlation function was less than 110 fs for the 435 nm pump pulses and less than 85 fs for the excitation pulses around 500 nm generated with the NOPA. The polarizations of the pump and probe pulses were set to the magic angle.
RESULTS AND DISCUSSION
In the femtosecond time-resolved experiments shown in Figure 2, the absorption of the excited state can be observed at wavelengths < 400 nm, while the bleaching of the ground state reflects the signature of the main absorption band (λ_max = 445 nm) (Figure 1). The characteristic absorption changes persist throughout the entire investigated temporal range (> 3 ns).
In contrast to the dye in solution, the coupled coumarin 343/TiO2 system shows an additional multiexponential decay of the signal, with a dominating fast component of 350 fs present in all recorded transients (Figures 2, 3, 4). A signal remains at long delay times, which resembles the characteristics of the difference spectrum of the charge-separated system and is stable over many orders of magnitude in time (Figures 3, 4). Figure 3 also shows the assignment of the various features of the difference spectrum: cation absorption (λ < 400 nm), ground state bleaching (400 nm < λ < 520 nm) and a weak, spectrally broad contribution of the electron in the conduction band of TiO2 (λ > 520 nm). Due to the similarity of the excited state absorption (derived from experiments on the dye in solution) and the cation band of coumarin/TiO2, it is not possible to determine the dynamics of the electron injection process in this spectral region unambiguously.
Therefore, two possible mechanisms can explain the recorded kinetics: (i) The electron is injected with a time constant of 350 fs, as indicated by the decay of the excited state and the formation of the coumarin cation radical signal. The signal decrease with a time constant of τ = 350 fs at λ_probe = 480 nm, superimposed on the instantaneous bleaching of the ground state band, is then caused by a contribution of photoinduced D+ molecules.
(ii) The electron is injected on a much shorter time scale (τ < 100 fs), but the detection of this fast component is hindered by the spectral overlap of the educt and product states. In this case the observed dynamics reflects charge recombination leading to a partial decay of the cation and the ground state features. This mechanism is supported by the lack of spectral evolution in the transient spectra at different delay times (Figure 4) and by measurements of ultrafast injection times in the range from 20 fs to 180 fs reported for this system by other groups [4,8].
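Whichever mechanism applies, the quantitative argument rests on extracting a sub-picosecond time constant from transients whose width is limited by the roughly 100 fs cross-correlation. A common way to do this is to fit the transient with an exponential decay convolved with a Gaussian instrument response; the analysis procedure is not detailed in the text, so the following is only an illustrative sketch with invented data and parameter values.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def gauss_conv_decay(t, a, tau, offset, t0=0.0, fwhm=0.110):
    """Single-exponential decay convolved with a Gaussian IRF (analytical form).

    t is the delay in ps; fwhm ~ 0.110 ps matches the stated cross-correlation width.
    """
    s = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    arg = (t - t0) / tau
    conv = 0.5 * np.exp(-(arg - s**2 / (2 * tau**2))) \
           * (1 + erf((t - t0 - s**2 / tau) / (s * np.sqrt(2.0))))
    return a * conv + offset

# Synthetic transient: a 350 fs decay on top of a long-lived offset, plus noise (made-up data)
t = np.linspace(-0.5, 3.0, 350)
data = gauss_conv_decay(t, a=-8e-3, tau=0.35, offset=-4e-3) + 3e-4 * np.random.randn(t.size)

popt, _ = curve_fit(gauss_conv_decay, t, data, p0=(-5e-3, 0.2, -3e-3))
print(f"fitted time constant: {popt[1]*1e3:.0f} fs")
```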
Measurements in the long wavelength region (λ > 520 nm) will allow us to overcome this ambiguity, since a direct observation of the injected electron is possible there. An improved time resolution of the setup using NOPAs is also necessary for a reliable detection of the electron transfer reaction. These experiments are currently underway.
CONCLUSIONS
Ultrafast interfacial electron injection on the femtosecond time scale is observed from an electron donating dye molecule attached to the TiO2 surface. These results allow further systematic analysis of important electron transfer parameters. The influence of the surface, the solvent or the donor-acceptor distance, and effects like the redistribution of vibrational excess energy or hot electron injection can now be addressed.
Figure 1 .
Figure 1. Absorption spectra of coumarin 343 in solution (methanol) and adsorbed to the surface of TiO2 nanoparticles. The coupling process induces only a small shift of about 5 nm in the main absorption band of the two samples. The excitation wavelength is indicated by the arrow. The strong increase in the absorption of the coumarin/TiO2 sample for wavelengths λ < 350 nm reflects interband transitions from the valence to the conduction band of the colloid. The arrangement of the coupled coumarin is shown in the inset.
Figure 2. Transient absorption changes of coumarin 343 in solution compared with coumarin 343 adsorbed on the TiO2 surface. The transient absorption changes of coumarin 343 in solution are dominated by long-lived excited state absorption and ground state bleaching.
"Physics",
"Chemistry"
] |
Enhancement of Two-Dimensional Electron-Gas Properties by Zn Polar ZnMgO / MgO / ZnO Structure Grown by Radical-Source Laser Molecular Beam Epitaxy
1Key Laboratory of Photonics Technology for Information, Xi’an Jiaotong University, Xi’an, Shaanxi 710049, China 2Key Laboratory for Physical Electronics and Devices under Ministry of Education, Xi’an Jiaotong University, Xi’an, Shaanxi 710049, China 3Joint Laboratory of Functional Materials and Devices for Informatics, Xi’an Jiaotong University and Institute of Semiconductors, CAS, Xi’an, Shaanxi 710049, China
Introduction
ZnO and its heterostructures, which have several advantages including a high saturation velocity [1], a large conduction band offset for ZnMgO/ZnO heterostructures [2], and the possibility to form a high-density two-dimensional electron gas (2DEG) [3], have great potential for high-frequency and high-power device applications. So far, the formation of a 2DEG at the Zn polar Zn1-xMgxO/ZnO interface has been observed by a few groups using molecular beam epitaxy (MBE) [4-8], pulsed laser deposition (PLD) [9], metal-organic vapor phase epitaxy (MOVPE) [10], and RF sputtering [11]. However, in low Mg composition Zn1-xMgxO/ZnO heterostructures (x < 0.1) high electron mobility can be observed, but with a very low 2DEG sheet density (< 10^12 cm^-2) [12], while in high Mg composition Zn1-xMgxO/ZnO heterostructures (x > 0.1) the 2DEG sheet density reaches a considerable value (10^12-10^13 cm^-2) but the electron mobility is still strongly affected by alloy disorder scattering, especially at low temperatures. In addition, the mobilities reported in previous papers were lower than 250 cm^2/Vs at room temperature [13]. It has been reported that modified AlGaN/AlN/GaN structures, which employ a thin AlN interfacial layer between the AlGaN and GaN layers, show better 2DEG properties than conventional AlGaN/GaN structures. This is reported to be a result of the reduction of alloy disorder scattering due to the suppression of carrier penetration from the GaN channel into the AlGaN layer [14-17]. However, the insertion of MgO into the ZnMgO/ZnO structure has never been reported. In this work, we report a Zn polar ZnMgO/MgO/ZnO structure that enhances the two-dimensional electron gas properties, and we discuss the dependence of the 2DEG carrier sheet density on the ZnMgO layer thickness, which was calculated theoretically; the theoretical prediction and the experimental results agreed well.
Materials and Methods
Zn-polar ZnMgO/ZnO and ZnMgO/MgO/ZnO heterostructures were all grown on (11-20) sapphire substrates by a radical-source laser molecular beam epitaxy (RS-LMBE) system (Shenyang Scientific Instrument Co., Ltd., Chinese Academy of Sciences (SKY)). A 5N-purity ZnO target was vaporized by a KrF excimer laser (Lambda Physik, COMPex 102, 248 nm, 1-20 Hz, 100 mJ). First, the substrates were treated with nitrogen plasma, ionized by a radio-frequency (rf) plasma source (Oxford Applied Research, HD-25), at 700 °C for 1 h to obtain a nitrogen-polarity surface and to control the growth of a single-domain Zn-polar ZnO film [18]. The growth was conducted at an oxygen pressure of 10−3 Pa. A 20 nm thick low-temperature (LT) ZnO buffer layer was deposited at 250 °C. Secondly, a 200 nm undoped ZnO layer was deposited at 700 °C. Finally, an undoped ZnMgO layer was grown at 400 °C. For the ZnMgO/MgO/ZnO heterostructures (Figure 1(a)), the MgO was deposited before the growth of the ZnMgO layer under the same conditions as the ZnMgO layer. The crystalline quality of the thin films was studied with a Philips X'Pert PW3040 high-resolution X-ray diffraction (XRD) system using Cu Kα radiation (λ = 0.15406 nm). The growth evolution of the MgO layer and the choice of its thickness were investigated from the streaky patterns of the reflection high-energy electron diffraction (RHEED). The Mg composition (x) was determined from the reflectance measurement of the exciton band gap energy of Zn1−xMgxO using the equation Eg(x) = Eg(0) + 2.145x [19]. The crystal polarity was determined from the difference in etching rate between Zn-polar and O-polar samples; chemical wet etching was carried out using a 0.01 M hydrochloric acid solution at room temperature for the etching-rate measurements [20]. The capacitance-voltage (C-V) measurements were performed using mercury contacts. The electrical properties were examined with a Lake Shore 7707A Hall mobility system in a van der Pauw configuration with a magnetic field of 1000 G.
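The composition estimate above amounts to a one-line inversion of the band-gap relation. The minimal sketch below assumes a room-temperature ZnO exciton band gap of about 3.37 eV for Eg(0) (a typical literature value, not stated in this text) and inverts Eg(x) = Eg(0) + 2.145x.

```python
# Minimal sketch: estimate the Mg fraction x from a measured exciton band gap.
# Eg(0) = 3.37 eV is an assumed room-temperature ZnO value, not taken from the text.
def mg_composition(eg_measured_eV, eg_zno_eV=3.37, slope_eV=2.145):
    """Invert Eg(x) = Eg(0) + 2.145*x [19]."""
    return (eg_measured_eV - eg_zno_eV) / slope_eV

print(round(mg_composition(3.477), 3))  # ~0.05, the barrier composition used below
```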
The Thickness of the MgO Insert Layer.
To optimize the MgO thickness and to make sure that the MgO grows in the wurtzite structure, the growth evolution of the MgO layer was investigated by RHEED, as shown in Figure 2. Initially, the MgO layer grew two-dimensionally on c-ZnO at thicknesses of 0.5 and 1 nm (Figures 2(a) and 2(b)). Then the growth mode of the MgO layer changes from two-dimensional to three-dimensional: when the MgO thickness was 1.5 nm, RHEED spots appeared (Figure 2(c)), which suggests that the crystal structure of the MgO layer changes with increasing layer thickness. Therefore, a 1 nm MgO layer was chosen in this paper. The structure was characterized by a cross-sectional TEM image (JEM-2100F). A typical image recorded near the ZnMgO/MgO/ZnO interface is shown in the inset of Figure 3. The interface could not be clearly distinguished in the TEM micrograph, which suggests that the structure has a high degree of crystalline quality. The MgO shows no phase transition from the wurtzite structure to the rock-salt structure. High electron mobility will benefit from the enhanced structure and crystal quality. It can also be observed that the MgO interfacial layer is grown with a thickness of approximately 1 nm, which corresponds to the designed thickness. C-V measurement: a 2DEG was not observed by C-V in the Zn0.95Mg0.05O/ZnO structure with a 20 nm thick Zn0.95Mg0.05O layer. Owing to the low Mg content and the small thickness of the ZnMgO barrier layer, the conduction band offset was small and not enough electrons were provided to form a 2DEG.
Electrical Property.
Table 1 shows the typical values of the Hall mobility (μ) and 2DEG density (n_s) measured at 300 K and 10 K for ZnMgO/ZnO structures with and without MgO interfacial layers. From Table 1, it is clearly seen that the Hall mobility was increased by the insertion of the MgO interfacial layer. In particular, the Zn0.95Mg0.05O (20 nm)/MgO/ZnO heterostructure showed a very high Hall mobility of 332 cm2/Vs at RT and 3090 cm2/Vs at 10 K.
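For a rough feel of what these numbers mean for devices, the sketch below estimates the 2DEG sheet resistance Rs = 1/(q·ns·μ) from the quoted mobilities, assuming the sheet density stays near the 1.1 × 10^13 cm−2 value reported later (an assumption; the density is in fact nearly temperature independent according to Figure 5(b)).

```python
# Back-of-the-envelope 2DEG sheet resistance from the Hall data quoted above.
q = 1.602e-19              # elementary charge, C
ns = 1.1e13 * 1e4          # sheet density: cm^-2 -> m^-2 (assumed temperature independent)
for T, mu_cm2 in [(300, 332.0), (10, 3090.0)]:
    mu = mu_cm2 * 1e-4     # cm^2/Vs -> m^2/Vs
    Rs = 1.0 / (q * ns * mu)
    print(f"T = {T:3d} K: Rs ~ {Rs:5.0f} ohm/sq")
```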
The Calculation and Experimental Results of the Dependence of Carrier Sheet Density of 2DEG on ZnMgO Layer
Thickness. It is noted that the presence of strong polarization-induced fields in both the MgO and ZnMgO cap layers leads to a very interesting dependence of the 2DEG sheet density in ZnMgO/MgO/ZnO structures on the ZnMgO cap thickness. The density of the 2DEG decreases with increasing ZnMgO thickness.
To discuss how the 2DEG density changes with the thickness of the ZnMgO layer in ZnMgO/MgO/ZnO structures, a simple electrostatic analysis of the Zn1−xMgxO/MgO/ZnO heterostructure yields the expression for the 2DEG sheet density given in Eq. (1) [11,15,28-30], with the band diagram as shown in Figure 1(b); the surface potential is assumed to be pinned at a level 0.8 eV below the ZnO conduction band edge. The conduction band offset ΔEc is taken as 0.9 × [Eg(MgO) − Eg(ZnO)] [31]. We approximate the Fermi level by that of an infinite triangular quantum well [23], where the dielectric constant is given as (8.75 + 1.08x) for the very thin wurtzite MgO (x = 1) [10] and the effective mass is taken to be m* ≈ 0.26 m0. The polarization-induced charge density σ is determined by the vector sum of the spontaneous polarization (P_SP) and the strain-induced piezoelectric polarization (P_PE) in the absence of an external field. We assume that the thin MgO layer is fully strained on ZnO and that the polarization constants vary linearly with composition. Thus, the total polarization-induced charge in the MgO layer can be expressed as σ = 0.029 C/m2 (x = 1) [12]. Taking the tunneling of electrons from the MgO/ZnO channel into the Zn1−xMgxO layer into account, σ is modified by a coefficient ν. For ν = 0.1, the calculated 2DEG density and the experimental results are shown in Figure 6. The 2DEG density of the structure with a layer thickness of 20 nm is 1.1 × 10^13 cm−2, and it becomes 2.4 × 10^12 cm−2 when the thickness is 120 nm. The theoretical prediction and the experimental results agree well below 80 nm, which confirms that the ZnMgO layers in the ZnMgO/MgO/ZnO structure, to a certain extent, behave similarly to a ZnO cap layer. When the thickness becomes larger than 80 nm, the 2DEG density falls below the calculated line; this is because the stress in the strained ZnMgO layer increases with thickness, which degrades the crystal quality. There might be other reasons, which need to be studied in the future. We also note a discrepancy between the sheet carrier concentration values obtained by C-V and Hall measurements: since the Hall data include the contribution of bulk carriers, those values are higher than the true value.
Conclusion
In summary, the formation of a 2DEG was confirmed for Zn-polar ZnMgO/ZnO heterostructures with low Mg composition (x = 0.05). The 2DEG concentration and mobility were clearly enhanced in Zn1−xMgxO/MgO/ZnO structures with low Mg composition by inserting a thin (1 nm) MgO layer. The sample shows a high Hall mobility of 3090 cm2/Vs at 10 K and 332 cm2/Vs at RT, and the carrier concentration reaches a value as high as 1.1 × 10^13 cm−2. The study of higher-Mg-content Zn1−xMgxO/MgO/ZnO structures is left for future work. The results demonstrate a well-defined heterostructure and the possibility of fabricating ZnMgO/MgO/ZnO HEMT devices.
Figure 1 :
Figure 1: Layer structure (a) and band diagram (b) for a Zn-polar ZnMgO/MgO/ZnO heterostructure. The growth direction is <0001> for Zn polarity.
Figure 2 :
Figure 2: The evolution of the RHEED pattern during growth of the MgO layer. The direction of MgO is <11-20>. (a) The thickness of the MgO buffer layer is 0.5 nm; (b) thickness of MgO = 1 nm; and (c) thickness of MgO = 1.5 nm.
Figure 3:
Figure 3: The XRD spectra for the grown ZnMgO/MgO/ZnO structure; the inset is the cross-sectional TEM image of a ZnMgO/MgO/ZnO film.
Figure 4 :
Figure 4: C-V depth profiling of the 2DEG and net donor concentration in the Zn 0.95 Mg 0.05 O/ZnO and Zn 0.95 Mg 0.05 O/MgO/ZnO structures.
Figure 5
shows the results of temperature-dependent Hall measurements. The electron mobility of the Zn0.95Mg0.05O (100 nm)/ZnO and Zn0.95Mg0.05O (100 nm and 20 nm)/MgO/ZnO structures increases with decreasing temperature, as shown in Figure 5(a). This trend is nearly identical to that reported for AlGaAs/GaAs [21-23], AlGaN/GaN [24,25], and ZnMgO/ZnO heterostructures [5,13], which is consistent with the existence of a 2DEG at the heterointerface. Compared with reported values for both Zn-polar and O-polar ZnMg(Mn)O/ZnO heterostructures, the high mobility is obvious [11,13,26,27]. The mobility of the Zn0.95Mg0.05O (20 nm)/ZnO structure changed with temperature similarly to a single ZnO thin film, which indicates that no 2DEG was formed because of the thin barrier layer. By the insertion of the MgO layer, a 2DEG was observed. These results agree well with the C-V measurements. The high electron mobility of the ZnMgO/MgO/ZnO heterostructure is mainly attributed to the reduction of alloy disorder scattering. In Figure 5(b), the sheet carrier concentration of the Zn0.95Mg0.05O (100 nm)/ZnO and Zn0.95Mg0.05O (100 nm and 20 nm)/MgO/ZnO structures changes little with increasing temperature, indicating good confinement of the channel electrons. The insertion of a thin (1 nm) MgO layer between ZnMgO and ZnO enhanced the sheet carrier concentration by almost one order of magnitude. This also confirms that the thin MgO layer enhances the confinement effectively.
The solid line is the calculated plot extracted from Eq. (1). The sheet carrier concentration decreases rapidly as the thickness of the ZnMgO layer increases.
Figure 6 :
Figure 6: Sheet carrier concentration as a function of ZnMgO layer thickness. | 2,766.4 | 2015-01-01T00:00:00.000 | [
"Materials Science",
"Physics",
"Engineering"
] |
A Variant of the Round-Robin Scheduling Problem
We consider the following variant of the round-robin scheduling problem: 2n people play a number of rounds in which two opposing teams of n players are reassembled in each round. Each two players should play at least once in the same team, and each two players should play at least once in opposing teams. We provide an explicit formula for the minimal number of rounds needed to satisfy both conditions. Moreover, we also show how one can construct the corresponding playing schedules.
Description of the Problem
Scheduling problems for sports tournaments are a broad and extensively researched field in combinatorics, operations research and combinatorial optimization. A prominent example is the round-robin tournament problem with applications, e.g., in computer science (see, e.g., [1-7] to mention just some of the most recent work on the subject). Another one is the traveling tournament problem (see, e.g., [8-10]). In this article we will study a variant where not fixed teams play against each other but the teams are shuffled in each round of a tournament.
We deal with the following combinatorial problem: suppose 2n people are playing a sports tournament which consists of different rounds. In each round two new teams of n people are assembled to play against each other. We would like to find a playing schedule such that the following conditions are satisfied: (s) each two players have been at least once in the same team, and (o) each two players have played at least once in opposing teams.
The general question is: what is the minimal number f s,o (n) of rounds of the tournament so that the conditions (s) and (o) are satisfied, and what is an optimal playing schedule? We shall give a complete answer to both questions. The main goal of this paper is to prove the corresponding formula for f s,o (n) (Theorem 1). An intriguing feature of the problem is that the sequence is not monotone. The paper is organized as follows. We start by considering the conditions (s) in Section 2 and (o) in Section 3 separately. The combined problem of finding an optimal playing schedule satisfying both conditions (s) and (o) is treated in Section 4. In particular, this section will contain the proof of the main Theorem 1. But first, we fix some notation and notions that we will use in the sequel.
Notions and Notation
We write S k×m to denote an array of k rows and m columns with entries s i,j ∈ {0, 1} in row i and column j. If the dimension of the array is clear from the context, we just write S. The columns of S k×m are denoted by C j , j = 1, . . ., m. Two columns are called complementary if their componentwise sum is 1 modulo 2. We write C̄ for the complementary column of C.
An array S k×2n is called a playing schedule of length k for 2n players. If each row of S contains n 0's and n 1's, it is called a valid playing schedule.
The interpretation is as follows: the entry s i,j ∈ {0, 1} means that in game i player j plays in team s i,j . Observe that S satisfies (s) if it does not contain complementary columns, and S satisfies (o) if all columns are different. A valid playing schedule S is called admissible if it satisfies both conditions, (s) and (o).
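To make the column characterisation concrete, here is a small Python sketch that checks validity and the conditions (s) and (o) directly on the 0/1 array. The function names and the example schedule for 4 players are illustrative choices (the explicit array of Example 4 is not reproduced in this text), not the paper's notation.

```python
def columns(S):
    """Columns of a schedule given as a list of rows."""
    return [tuple(row[j] for row in S) for j in range(len(S[0]))]

def complement(col):
    return tuple(1 - c for c in col)

def is_valid(S):
    return all(sum(row) == len(row) // 2 for row in S)    # each round splits the players evenly

def satisfies_s(S):
    cols = set(columns(S))
    return all(complement(c) not in cols for c in cols)   # no complementary columns

def satisfies_o(S):
    cols = columns(S)
    return len(set(cols)) == len(cols)                    # all columns pairwise different

def is_admissible(S):
    return is_valid(S) and satisfies_s(S) and satisfies_o(S)

# One admissible length-3 schedule for 2n = 4 players (cf. Example 4 below):
S4 = [[0, 0, 1, 1],
      [0, 1, 0, 1],
      [0, 1, 1, 0]]
print(is_admissible(S4))  # True, so f_{s,o}(2) <= 3
```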
The concept of a playing schedule satisfying the condition (o) is related to the notion of separating systems, which goes back to Rényi [11] (see also Katona [12]). Definition 1. A separating system on a finite set M is a collection {A 1 , A 2 , . . ., A k } of subsets of M such that for every pair of distinct elements x, y ∈ M, there exists i ∈ {1, 2, . . ., k} with either x ∈ A i , y ∉ A i or x ∉ A i , y ∈ A i . Moreover, a covering separating system is a separating system such that M is the union of the sets A i . More specifically, we call a covering separating system an (n, k)-covering separating system if |M| = n and every set A i satisfies the prescribed cardinality condition. We can interpret a playing schedule S of length k for 2n players as a set M = {1, 2, . . ., 2n} and a collection {A 1 , A 2 , . . ., A k } of subsets of M with |A i | = n, such that j ∈ A i if and only if s i,j = 1. Then we clearly have that S satisfies the condition (o) if and only if {A 1 , A 2 , . . ., A k } is a separating system on M. Definition 2. Two columns C i , C j of a playing schedule S k×2n are called equivalent if they are equal or complementary; the characteristic of a column is the number of columns in its equivalence class. For example, if in a playing schedule the columns C 1 , C 3 and C 4 are equivalent, then the characteristic of these columns is 3, while a column that is equivalent to no other column has characteristic 1.
is an admissible extension of S of length 2, and the playing schedule S ′ = S ⊕ E is valid and sufficient, and hence admissible.
Optimal Playing Schedule for Condition (s)
In this section we neglect condition (o) and consider only condition (s). We will denote by f s (n) the minimal number of rounds needed to satisfy condition (s). If n is even, we find a playing schedule satisfying condition (s) in which each of the four subarrays of S contains n/2 columns. If n is odd, a possible playing schedule has a form in which each of the four big subarrays contained in the middle of S consists of (n−1)/2 columns. Each entry marked with * can be independently replaced by either 0 or 1, as long as the last row of S contains the same number of 0's and 1's.
It remains to show that no shorter playing schedule exists. To see this, notice first that the columns (and rows) of a playing schedule S which satisfies condition (s) can be permuted and the result is still a playing schedule satisfying condition (s). In particular, we may assume that the first row of S has the form 0 0 . . . 0 1 1 . . . 1.
We now state condition (s) in the following form: for each pair (k, l) with 1 ≤ k < l ≤ 2n there is a j such that s j,k = s j,l . These are C(2n, 2) conditions. The first row saturates 2·C(n, 2) such conditions, so the remaining rows must satisfy the remaining C(2n, 2) − 2·C(n, 2) = n^2 conditions. Suppose the second row consists of n 1 zeros and n − n 1 ones in the first n columns, and hence n − n 1 zeros and n 1 ones in the last n columns. This adds 2n 1 (n − n 1 ) < n^2 conditions to the conditions which are already satisfied by the first row. This implies that S cannot satisfy condition (s) if it has only two rows.
Similarly for the third row of S: it consists of n 2 zeros and n − n 2 ones in the first n columns, and hence n − n 2 zeros and n 2 ones in the last n columns. This provides at most 2n 2 (n − n 2 ) additional conditions to the conditions we already have. Observe that 2n 1 (n − n 1 ) + 2n 2 (n − n 2 ) ≤ n^2 − 1 < n^2 when n is odd. Hence, if n is odd, S cannot satisfy condition (s) if it has only three rows. This completes the proof. □
Optimal Playing Schedule for Condition (o)
In this section we neglect condition (s) and consider only condition (o). We denote by f o (n) the minimal number of rounds required to satisfy condition (o). In the language of separating systems we need a separating system A 1 , A 2 , . . ., A m with minimal m on the set {1, 2, . . ., 2n} such that every set A i has cardinality n. We find the following.
For this it is enough to construct a playing schedule of length m = ⌈log 2 (n)⌉ + 1 = ⌈log 2 (2n)⌉ for 2n players. Since there exist at least 2n different columns of length m with entries in {0, 1} (note that 2n ≤ 2^⌈log 2 (2n)⌉), we can find n different columns having 0 as last entry. Then we define S to be the array which contains all these columns and their complements. Since all columns in S are different, (o) is satisfied. Moreover, S is valid by construction (see Example 3).
On the other hand, a minimal separating system on a set of 2n elements has cardinality at least ⌈log 2 (n)⌉ + 1 (see [11]). Indeed, to show the inequality f o (n) ≥ ⌈log 2 (n)⌉ + 1 we argue as follows: a playing schedule of length strictly smaller than ⌈log 2 (n)⌉ + 1 for 2n players contains at least one pair of identical columns by the pigeonhole principle. Hence, no such playing schedule satisfies (o). □ Example 3. We illustrate the construction of an optimal playing schedule S for n = 11. The gray subarray is our choice of columns of length 5 (the columns correspond to the binary representations of the numbers 0, 1, 2, . . ., 10) whose last entry is 0, and the white subarray is the complement of the gray one. Then S is a playing schedule of minimal length satisfying (o).
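The construction of Example 3 generalises directly. The following sketch (with illustrative helper names; it implements the construction described above, not the paper's own code) builds, for any n, the schedule consisting of n distinct columns ending in 0 together with their complements, and verifies (o) and validity for n = 11.

```python
from math import ceil, log2

def schedule_for_o(n):
    """Length-ceil(log2(2n)) schedule for 2n players satisfying condition (o)."""
    m = ceil(log2(2 * n))
    # n distinct columns: binary representation of k on m-1 bits, followed by a 0
    cols = [[(k >> i) & 1 for i in range(m - 1)] + [0] for k in range(n)]
    cols += [[1 - b for b in col] for col in cols]          # their complements
    return [[col[i] for col in cols] for i in range(m)]     # transpose back to rows

S = schedule_for_o(11)
col_tuples = list(map(tuple, zip(*S)))
print(len(S), len(S[0]))                                    # 5 rounds, 22 players
print(len(set(col_tuples)) == len(col_tuples))              # all columns distinct -> (o)
print(all(sum(row) == len(row) // 2 for row in S))          # every round balanced -> valid
```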
Optimal Playing Schedule for Condition (s) and (o)
To find an optimal playing schedule for 2n players that satisfies both conditions (s) and (o) is considerably harder than for just one of the two conditions.We start with lower and upper bounds for the minimal length f s,o (n) for particular values of n.These bounds are improved and extended gradually, and finally lead to a proof of the main Theorem 1.
Bounds for
Here, χ even (n) = 1 if n is even, and 0 if n is odd. In this section we would like to improve these bounds.
By using covering separating systems we can estimate k := f s,o (n) from below. Assume that we have a minimal valid playing schedule S of length k for 2n players. Without loss of generality, we can assume that player 2n is always in team 1, i.e., the last column of S consists of 1's, and that the first n players play against the second n players in the first game. Hence, the first row of S consists of n 0's followed by n 1's. Consider now the corresponding collection of subsets; it must form a covering separating system, as otherwise S would violate the conditions (s) and/or (o). Hence, by Phanalasy et al. [13, Lemma 5] we get that k ≥ ⌊log 2 (n)⌋ + 2. If n = 2^m for m ∈ N \ {0}, then this estimate coincides with the next result. In every other case we will find a better lower bound for k.
Proof. Let k := f s,o (n). Then there exists a playing schedule S with k rows and 2n columns. Since S is admissible, the columns of S are all different, and no two columns of S are complementary to each other. There exist 2^k different columns of length k. However, at most 2^(k−1) of these can be used without two of them being complementary, and this number must be larger than or equal to the number of columns in S. Hence, we get 2^(k−1) ≥ 2n, which proves the proposition. □ In this section we determine f s,o (n) if n is a power of 2. Moreover, we find an upper bound for general n. To prove this theorem, we need the following lemma. Proof. We show that a valid playing schedule S of length f s,o (2^(m−1)) for 2^m players can be extended to a valid playing schedule S + of length f s,o (2^(m−1)) + 1 for 2^(m+1) players. To do so, we define the columns of S + from the columns of S. Observe that all columns of S + are different from each other and not complementary to another column in S + . Hence, S + defines a valid and admissible playing schedule for 2^(m+1) players. □ With this preparation we are ready for the proof of Theorem 4.
Proof of Theorem 4. For m = 1 the statement is true by Example 4. Now we assume that the statement holds for an arbitrary m and show it for m + 1. By using Proposition 1, Lemma 1 and the induction hypothesis f s,o (2^m) = m + 2, we get the desired result. Example 5. Let S be a valid and admissible playing schedule for 4 players of length 3, as in Example 4.
Following the construction in the proof of Lemma 1, we obtain an optimal playing schedule for 8 players.
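The doubling step of Lemma 1 can be realised, for instance, by appending one extra row and replacing every column C of the smaller schedule by the two columns (C, 0) and (C, 1). Since the explicit column definition from the proof is not reproduced in this text, the sketch below should be read as one possible realisation rather than the paper's exact construction.

```python
def double_schedule(S):
    """From an admissible schedule for 2^m players, build one for 2^(m+1) players with one extra row."""
    cols = list(zip(*S))                                   # columns of S
    new_cols = [c + (b,) for c in cols for b in (0, 1)]    # (C,0) and (C,1) for every column C
    return [list(r) for r in zip(*new_cols)]               # transpose back to rows

S4 = [[0, 0, 1, 1],
      [0, 1, 0, 1],
      [0, 1, 1, 0]]            # admissible length-3 schedule for 4 players
S8 = double_schedule(S4)       # length-4 schedule for 8 players, matching f_{s,o}(4) = 4
cols = list(zip(*S8))
print(len(S8), len(S8[0]))                                            # 4 rounds, 8 players
print(len(set(cols)) == len(cols))                                    # condition (o)
print(all(tuple(1 - x for x in c) not in set(cols) for c in cols))    # condition (s)
print(all(sum(r) == len(r) // 2 for r in S8))                         # validity
```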
We will now introduce a helpful tool which we use to prove the upper bound on f s,o (n). Proof. The case n = 2 is verified by Example 4. Therefore let n ∈ N \ {0, 1, 2}, and let m ∈ N \ {0} be the unique value such that 2^m < n ≤ 2^(m+1). Let S be a valid and admissible playing schedule of length m + 2 for 2^(m+1) players (see Theorem 4). We construct a playing schedule S + of length ⌈log 2 (n)⌉ + 3 = m + 4 for 2n players as follows.
The columns of S + are built from the columns of S and from triplets of columns of S, extended by two additional rows. Observe that S + is admissible because S was admissible. It remains to show that S + is valid. The first m + 2 rows contain the same number of 0's and 1's by construction; observe that we used all columns of S and added pairs of columns and their complements. For the last two rows we choose the *'s in the C + i as follows. The number of triplets in S + and the number of the C + i are either both even or both odd. In the even case, the extensions in the C + i must be chosen such that the last two rows of the C + i contain the same number of 0's and 1's. In the odd case, there must be one more 0 than 1's in the second last row of the C + i , and three more 1's than 0's in the last row. The result is a valid and admissible playing schedule S + of length m + 4 for 2n players. □ Example 6. Consider the case n = 3. We start with a valid and admissible playing schedule S for 4 players of length 3, with columns C 1 , . . ., C 4 . The construction in the proof of Proposition 2 then yields S + , where the extensions of the triplet T(C 1 ) are chosen such that S + is valid.
The Case n Even
So far we know that for each n ∈ N \ {0, 1} there are only two possible values for f s,o (n). In this section we determine the exact value of f s,o (n) for even n.
Proof. We start with an admissible playing schedule for 4 players, for example the schedule with columns C 1 , . . ., C 4 from Example 4. Since n = 2m, an admissible and valid playing schedule for n involves 2n = 4m players. We define arrays with m columns for i = 1, . . ., 4. The idea is now to find a playing schedule of length ⌈log 2 (n)⌉ + 2 = ⌈log 2 (m)⌉ + 3 of the corresponding form, where E consists of m different column vectors of length ⌈log 2 (m)⌉. Observe that two columns from different arrays S i are neither equal nor complementary to each other. Hence, S + is a valid and admissible playing schedule for 2n players. This implies f s,o (n) ≤ ⌈log 2 (n)⌉ + 2 and the result follows by Corollary 1. □ If n is even, we can alternatively define a playing schedule in the following way: choose n different pairs of columns of length ⌈log 2 (n)⌉ + 1 such that the columns in each pair are complementary. Then define S as the array containing all columns of these pairs such that the columns of each pair belong either to the first n or to the last n columns of S (note that this only works if n is even). If we extend S by the row containing n 0's followed by n 1's, then S + = S ⊕ E is valid and there are no columns in S + which are equal or complementary to each other. Hence, S + satisfies (o) and (s), and its length is ⌈log 2 (n)⌉ + 2.
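The alternative construction in the remark above is easy to make concrete. The sketch below (helper names are illustrative) picks n distinct complementary pairs of columns of length ⌈log2(n)⌉ + 1, keeps each pair inside one half of the array, appends the row 0...01...1, and checks the result for n = 6; it is an illustration of the recipe, not the paper's own code.

```python
from math import ceil, log2

def schedule_even_n(n):
    """Admissible schedule of length ceil(log2(n)) + 2 for 2n players, n even."""
    assert n % 2 == 0 and n >= 2
    m = ceil(log2(n)) + 1
    # n distinct complementary pairs: C_k starts with 0, its complement starts with 1
    pairs = []
    for k in range(n):
        c = (0,) + tuple((k >> i) & 1 for i in range(m - 1))
        pairs.append((c, tuple(1 - b for b in c)))
    first_half = [col for p in pairs[: n // 2] for col in p]    # n/2 whole pairs per half
    last_half = [col for p in pairs[n // 2:] for col in p]
    cols = [c + (0,) for c in first_half] + [c + (1,) for c in last_half]  # extra row 0...01...1
    return [list(r) for r in zip(*cols)]

S = schedule_even_n(6)
cols = list(map(tuple, zip(*S)))
print(len(S))                                                         # ceil(log2(6)) + 2 = 5 rounds
print(len(set(cols)) == len(cols))                                    # condition (o)
print(all(tuple(1 - x for x in c) not in set(cols) for c in cols))    # condition (s)
print(all(sum(r) == len(r) // 2 for r in S))                          # validity
```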
The Case n Odd
The case n odd is more delicate than the case n even. We start with some general considerations. In the proof of Proposition 2 we had to extend the triplets by an extension of length 2. We now ask by how many rows a given playing schedule must be extended in order to obtain an admissible playing schedule. For this, the characteristic of the columns (see Definitions 2 and 3) plays the decisive role. More concretely, we find a lower bound for the length of an admissible extension as a function of its largest characteristic. Theorem 6. Let S be a playing schedule, and let k be the maximal characteristic of the columns in S. Then an admissible extension of S has length at least ⌈log 2 (k)⌉.
Proof. Let C u be a column of S with maximal characteristic k, and let U be the array built from the columns in the equivalence class of C u . We show that the minimal length t of an admissible extension E of U is at least ⌈log 2 (k)⌉.
Observe first that the maximal characteristic of the columns of E is ≤ 2. To see this, assume that this is not the case. Then there are three equivalent columns in E, say E j′ , E j′′ , E j′′′ , where C j′ ∼ C j′′ ∼ C j′′′ by assumption. Since E is supposed to be sufficient, any two of the vectors C ′ j′ , C ′ j′′ , C ′ j′′′ are pairwise distinct and pairwise nonequivalent. At least two of the vectors C j′ , C j′′ , C j′′′ are equal, say C j′ = C j′′ . It follows that E j′ = Ē j′′ . But then, in the case C j′′′ = C j′ as well as in the case C j′′′ = C̄ j′ , E j′′′ can neither be equal to E j′ nor to E j′′ . So, we have a contradiction.
We know now that the characteristic of the columns in E can be at most 2. On the other hand, if E is an extension of length t, then we find at most 2^(t−1) different equivalence classes among the columns of E. Hence, the number of columns of E, which equals k, must satisfy k ≤ 2 · 2^(t−1) = 2^t , and this completes the proof. □ Remark 1. Note that the last inequality becomes an equality if there are exactly 2^(t−1) equivalence classes in the columns of E, and each equivalence class consists of two columns. Also note that the number of equivalence classes of the columns of E must be at least k/2. Example 7. Let U be the 2 × 4 array with rows 0 0 1 1 and 0 0 1 1.
The maximal characteristic is k = 4, and hence the minimal length of a sufficient extension E of U is 2. For example, we can choose E to be the 2 × 4 array with rows 0 1 0 1 and 0 1 1 0. In the following we will analyse the structure of the equivalence classes more closely. We only need to discuss the case when M 1 and M 2 have the same odd cardinality, as we will see later. In order to prove Theorem 7 we will show that the columns of E must contain at least q + 2 equivalence classes. This is done in the next two lemmas. Lemma 2. For an admissible extension E of U, the columns of E must contain more than q equivalence classes.
Proof. We have seen in the proof of Theorem 6 that each equivalence class of columns of E contains at most 2 elements. Hence, the columns of E contain at least q equivalence classes. Assume now that there are exactly q equivalence classes and work towards a contradiction. In this situation every equivalence class contains two elements. These two elements are either equal or complementary to each other. Hence, we can choose representatives E 1 , E 2 , . . ., E q of the equivalence classes, and a natural number 0 ≤ c ≤ q such that, after reordering the columns if necessary, E has the corresponding form. Since the columns of U all belong to the same equivalence class, and U ′ is admissible by assumption, there exists a column C in U such that two consecutive * entries are either equal to C or to C̄. However, the numbers of consecutive pairs equal to C and to C̄ in U must be the same. Hence, the number of * in U must be divisible by 4, and so q − c is even and c is odd.
Moreover, since E is valid, the subarray formed by the complementary pairs must also be valid, and hence E ′′ := (E 1 , E 2 , . . ., E c ) is also valid. However, E ′′ cannot be valid because its number of columns is c and c is odd. Thus, we get a contradiction. □ Lemma 3. For an admissible extension E of U, the columns of E must contain more than q + 1 equivalence classes.
Proof. By Lemma 2 we already know that the columns of E contain at least q + 1 equivalence classes.
To obtain a contradiction, assume that they contain exactly q + 1 equivalence classes. Then there are exactly q − 1 equivalence classes with 2 elements, and 2 equivalence classes with 1 element. Hence, we can assume that there exists a natural number 0 ≤ c ≤ q − 1 such that, for a choice of representatives E 1 , . . ., E q+1 of the equivalence classes, E takes the corresponding form.
Define now E ′ to be the array E with the two columns E q and E q+1 removed.
Since E is an admissible extension of U, E ′ is not valid, because otherwise (E q , E q+1 ) would also be valid and so E q and E q+1 would be in the same equivalence class, which is not the case. Thus, E ′′ := (E 1 , . . ., E c ) cannot be valid. We now consider separately the cases c even and c odd. Let c be even. Since E ′′ is not valid, there exists a row such that the difference between the numbers of 0's and 1's in this row is at least 2. Hence, there is a row in E ′ such that the difference between the numbers of 0's and 1's is at least 4. Adding the missing two columns with characteristic 1 cannot make this playing schedule valid. Therefore we have a contradiction.
Let c be odd. Then the difference between the numbers of 0's and 1's in each row of E ′′ must be at least 1, and the difference between the numbers of 0's and 1's in each row of E ′ is at least 2. Thus, adding two columns to E ′ to make it valid means that the two added columns E q and E q+1 must be equal, and this is a contradiction because they do not belong to the same equivalence class by assumption. □ Theorem 7 now follows from the lemmas above.
Proof of Theorem 7. We showed that the columns of E contain at least q + 2 different equivalence classes. Then the result follows from the inequality 2^(t−1) ≥ q + 2.
□
Remark 2. For q = 1 it is not possible to find an admissible extension. However, we will now show that for q > 1 we can construct an admissible extension with q + 2 equivalence classes. This will be the last building block we need to prove Theorem 1.
Example 4 .
Consider a playing schedule S for 2n = 4 players. By the inequality 2^(k−1) ≥ 2n in the proof of Proposition 1, where k := f s,o (n), we have that k ≥ 3. In fact, we can find the following valid and admissible playing schedule for k = 3, and hence f s,o (2) = 3:
Definition 4. Proposition 2.
For a column C of a playing schedule, we call the array T(C) consisting of the three columns C, C, C̄ a triplet of C. For all n ∈ N \ {0, 1} we have f s,o (n) ≤ ⌈log 2 (n)⌉ + 3.
Theorem 7 .
Let S be a playing schedule, and let [C u ] be the equivalence class of a column of S with characteristic k > 1. Assume that the sets M 1 := {C j ∈ [C u ] : C j = C u } and M 2 := {C j ∈ [C u ] : C j = C̄ u } have odd cardinality q := |M 1 | = |M 2 | > 1. Let U be the array built from the columns in [C u ]. Then the minimal length t of an admissible extension E of U satisfies t ≥ ⌈log 2 (q + 2)⌉ + 1.
Proposition 3 .
Let S be a playing schedule, and let [C u ] be the equivalence class of a column of S with characteristic k > 1. Assume that the sets M 1 := {C j ∈ [C u ] : C j = C u } and M 2 := {C j ∈ [C u ] : C j = C̄ u } have odd cardinality q := |M 1 | = |M 2 | > 1. Let U be the array built from the columns in [C u ]. Then the minimal length t of an admissible extension E of U is t = ⌈log 2 (q + 2)⌉ + 1. Proof. By Lemma 3 it remains to show that an admissible extension E of U exists such that the columns of E contain exactly q + 2 equivalence classes. Without loss of generality (by renumbering players), we can assume that U has the form U = (C, . . ., C, C̄, . . ., C̄), with q copies of C followed by q copies of C̄. Definition 3. Let S k×2n and E t×2n be two playing schedules. Then we can build a new playing schedule S ′ = S ⊕ E with k + t rows by putting the array S on top of E (see Example 2). E is called an extension of S of length t. If all columns of S ′ have characteristic 1, then E is called sufficient. If E is valid and sufficient, then it is called an admissible extension.
E 1 , E 2 , . . ., E q+1 of all equivalence classes. As in Lemma 2, we find a column C in U such that U = (C, C, . . ., C, C | 6,230 | 2024-01-01T00:00:00.000 | [
"Mathematics"
] |
Hierarchical Onsager symmetries in adiabatically driven linear irreversible heat engines
In existing linear response theories for adiabatically driven cyclic heat engines, Onsager symmetry is identified only phenomenologically, and a relation between global and local Onsager coefficients, defined over one cycle and at any instant of a cycle, respectively, is not derived. To address this limitation, we develop a linear response theory for the speed of adiabatically changing parameters and temperature differences in generic Gaussian heat engines obeying Fokker–Planck dynamics. We establish a hierarchical relationship between the global linear response relations, defined over one cycle of the heat engines, and the local ones, defined at any instant of the cycle. This yields a detailed expression for the global Onsager coefficients in terms of the local Onsager coefficients. Moreover, we derive an efficiency bound, which is tighter than the Carnot bound, for adiabatically driven linear irreversible heat engines based on the detailed global Onsager coefficients. Finally, we demonstrate the application of the theory using the simplest stochastic Brownian heat engine model.
Linear irreversible thermodynamics is a universal framework that systematically describes the response of equilibrium systems under weak nonequilibrium perturbations [28,29]. Despite its importance, the application of linear irreversible thermodynamics to heat engines operating under small temperature differences has been limited, until recently [30-38]. This is because the identification of thermodynamic fluxes and forces is highly complex for heat engines undergoing cyclic changes. Nevertheless, such an identification is essential because the performance of heat engines depends on the response coefficients, that is, Onsager coefficients, in the linear response regime [6,11]. In particular, the linear irreversible thermodynamics for the temperature difference and the speed of adiabatically changing parameters of cyclic heat engines is limited to a few specific examples [30-32]. Cyclic heat engines can experience continuous equilibrium change along a cycle and be substantially perturbed from a reference equilibrium point. This makes the application of the linear response theory, which is usually defined for a response from a single equilibrium point, difficult and obscure. Notably, the identified Onsager symmetry for these models is derived only phenomenologically, by adopting intuitive global fluxes and forces per cycle, without deriving a relation to the local thermodynamic fluxes and forces defined at any instant of a cycle.
By contrast, in recent studies on quantum thermoelectrics, such a linear response for adiabatically changing parameters has been investigated as an effect of adiabatic ac driving applied to a system [39,40]. Remarkably, the Onsager coefficients defined globally for a one-cycle period of ac driving, which determine the overall performance of the thermoelectrics, are expressed in terms of locally defined Onsager coefficients at any instant during driving [39,40]. The key of this formulation is to apply the standard linear response theory to instantaneous equilibrium states instead of the usual equilibrium states by regarding that the adiabatically changing parameters have "frozen," fixed values. Considering the universal nature of linear irreversible thermodynamics, we are motivated to uncover a similar hierarchical structure for adiabatically driven linear irreversible heat engines. To this end, we focus on the simplest heat engine model. We establish a hierarchical relationship between global and local Onsager coefficients for a generic Gaussian heat engine model obeying Fokker-Planck dynamics. The adiabatic dynamics can be easily obtained based on the idea of time-scale separation [41], which is one of the advantages of this model. Moreover, based on the detailed structure of the Onsager coefficients, we derive an efficiency bound, tighter than the Carnot efficiency, under a given speed of adiabatic change.
Model-. The heat engine consists of a working substance (system) and a thermal bath. The state of the system x = (x 1 , · · · , x n ) at time t is specified by a probability distribution P(x, t). The system is periodically operated through p external parameters λ(t) = (λ 1 (t), · · · , λ p (t)) and the bath temperature T(t) with period τ cyc ; λ(t + τ cyc ) = λ(t) and T(t + τ cyc ) = T(t). The energy of the system is given by H(x, t), which is a function of λ(t). Specifically, the external parameters are expressed as λ(t) = λ 0 + g w (ǫt) using the time-independent part λ 0 and the time-dependent part g w . Here, ǫ ≡ 1/τ cyc denotes a small parameter corresponding to the speed of the process. Thus, a long period of time t = O(1/ǫ) is required for a finite increment of g w . The bath temperature is given by T(t) = T c + ∆T(t), where ∆T(t) ≡ γ q (ǫt)∆T, and ∆T ≡ T h − T c and γ q (ǫt) are the temperature difference and a periodic function satisfying 0 ≤ γ q (ǫt) ≤ 1, respectively [33]. We define the entropy production rate per cycle σ̇ for the system and thermal bath. Hereafter, we denote by the overdot a quantity per unit time or a quantity being time-differentiated. The energy change rate becomes Ė ≡ d⟨H(x, t)⟩/dt = (d/dt) ∫ dx^n H(x, t)P(x, t), where ⟨·⟩ refers to an ensemble average with respect to P(x, t). We decompose Ė into the sum of the heat and work fluxes Q̇ and Ẇ: Ė = ∫ dx^n H(x, t) ∂P(x, t)/∂t + ∫ dx^n [∂H(x, t)/∂t] P(x, t) ≡ Q̇ − Ẇ. Then, we can define σ̇ as in Eq. (1), where the prime symbol denotes the time derivative with respect to the slow time T ≡ ǫt and ġ w (ǫt) = dg w (ǫt)/dt = ǫ g ′ w (ǫt). The dot between symbols denotes an inner product. Here, we have defined the work and heat fluxes per cycle, Eqs. (2) and (3), as the thermodynamic fluxes, and the corresponding thermodynamic forces are defined accordingly. We assume the global linear response relations J = LF between J ≡ (J w , J q )^T and F ≡ (F w , F q )^T defined over one cycle of the heat engine in the limit of ǫ → 0 and ∆T → 0, Eqs. (5) and (6), where L corresponds to the global Onsager coefficients. Our goal is to find a detailed expression of L in terms of its local counterpart defined at any instant of the cycle, thereby establishing a hierarchical relationship between the two. Fokker-Planck dynamics-. For further calculation of J, we need to specify the dynamics of P(x, t). In what follows, we consider generic Gaussian heat engines described by multivariate Ornstein-Uhlenbeck processes as the simplest models. The energy of the system, which serves as a potential function, thus takes the quadratic form H(x, t) = (1/2) x · H(t)x, where H(t) is a positive-definite symmetric matrix. We assume that x consists of even variables under time reversal. The probability distribution of the system P(x, t) obeys the Fokker-Planck (FP) equation with the time-dependent drift matrix A and diffusion matrix B (i, j = 1, · · · , n) [42,43], where J i (x, t) is a probability current. A is a symmetric matrix and B is a positive-definite symmetric matrix; B is further assumed to be invertible. The probability distribution is assumed to be a zero-mean Gaussian distribution with symmetric covariance matrix Ξ(t). The equation to be solved is then replaced with the dynamic equations for Ξ(t) in Eq. (10). For the stationary distribution to agree with a Boltzmann distribution at temperature T c , the following detailed balance condition is usually imposed [43], which together with Eq. (11) yields Ξ 0 ^(−1) = H 0 /(k B T c ), with k B being the Boltzmann constant. Here, as a natural generalization of Eq.
(12), we impose the detailed balance condition including the time-dependent part, Eq. (13), whose validity will be clarified below. We decompose A(t), B(t), and Ξ(t) into time-independent and time-dependent parts, and we solve Eq. (14) perturbatively with respect to ǫ. Because a regular perturbation yields a secular term, we use a two-timing method based on time-scale separation [41]. As a result, we obtain Ξ(t) as Ξ(t) = Ξ ad (t) + δΞ nad (t) [44], where Ξ ad (t) and δΞ nad (t) are the adiabatic solution and the lowest non-adiabatic correction to it, respectively, given by Eqs. (16) and (17). From Eqs. (13) and (16), it follows that the probability distribution P(x, t) in the adiabatic limit ǫ → 0 agrees with an instantaneous equilibrium distribution with energy H(x, t) and temperature T(t), which validates the condition given by Eq. (13). Local and global linear response relations for speed and temperature differences-. We can now evaluate the thermodynamic fluxes in Eqs. (2) and (3) using Eqs. (15)-(17).
Note that we can rewrite Eq. (3) as J q = (1/τ cyc ) ∫ 0 ^τcyc dt γ q (ǫt) ∫ dx^n [∂H(x, t)/∂x i ] J i (x, t) using Eq. (8), and we can express Eqs. (2) and (3) as the time averages of local thermodynamic fluxes, Eqs. (19) and (20), respectively, where we define the response vector j as the local thermodynamic fluxes. We also introduce the conjugate local nonequilibrium perturbation vector f. The perturbations are the speed of the adiabatically changing parameters and the temperature difference, and the responses are the generalized pressure and the instantaneous heat flux. The linear relationship between the perturbations and responses can be written in a local flux-force form [39,40], namely j = j ad + Λf in the limit of ǫ → 0 and ∆T → 0, where j ad is an adiabatic response that remains in the limit of ǫ → 0, and Λ is the local Onsager matrix. We can expand j w and j q with respect to f [45] to linear order in |f|, and we thus identify j ad and Λ as Eqs. (24) and (25), respectively. We can confirm the Onsager symmetry Λ ww,mm′ = Λ ww,m′m and anti-symmetry Λ wq,m = −Λ qw,m (m, m′ = 1, · · · , p) at the local level. The former symmetry relates to the dissipation, while the latter anti-symmetry relates to the dissipationless cross-coupling between the heat flux and the work flux (heat engine-refrigerator symmetry).
Subsequently, we consider the global linear response relations J = LF in Eqs. (5) and (6). The global thermodynamic fluxes in Eqs. (19) and (20) can be rewritten as J w = ∫ 0 ^1 dT g ′ w (T) · j w and J q = ∫ 0 ^1 dT γ q (T) j q in terms of the slow time T = ǫt. We note that the contribution from j ad vanishes upon cycle averaging. Noting that F w = ǫ/T c and F q ≃ ∆T/T c ^2 in the linear response regime, and using Eqs. (22) and (23), we immediately arrive at the expression for the global Onsager matrix L given in Eq. (26). The local and global Onsager matrices in Eqs. (25) and (26) constitute the first main result of this study. The global Onsager coefficients L are given as the integration over one cycle of the local Onsager coefficients Λ in Eq. (25). This yields a hierarchical relationship between L and Λ, thereby relating the different levels of symmetries. In particular, L shows the Onsager anti-symmetry L wq = −L qw , reflecting the Onsager anti-symmetry Λ wq,m = −Λ qw,m of Λ.
In the linear response regime, the entropy production rate per cycle σ̇ = J w F w + J q F q in Eq. (1) takes the quadratic form σ̇ = L ww F w ^2 + (L wq + L qw )F w F q + L qq F q ^2 , where we have used Eqs. (5) and (6). The second law of thermodynamics σ̇ ≥ 0 imposes constraints on L. For the present system, we evaluate these constraints by using the explicit form of L in Eq. (26). Remarkably, we readily observe L ww ≥ 0, and thus σ̇ ≥ 0, from the positive-definite quadratic form of L ww in Eq. (26). The anti-symmetric coefficients do not contribute to σ̇ because they represent a reversible, adiabatic change in entropy. The vanishing L qq also reduces σ̇; this vanishing arises from the nonsimultaneous contact with the thermal baths at different temperatures. This property is essentially the same as that known as the tight-coupling condition [6]. Note that we have optional choices of the thermodynamic fluxes and forces. By switching the roles of J w and F w , that is, J̃ w = F w and F̃ w = J w , while maintaining J̃ q = J q and F̃ q = F q , we obtain another global Onsager matrix L̃, assuming that L ww is nonvanishing and using L qq = 0. Thus, we can confirm the symmetric non-diagonal elements and the vanishing determinant, where the latter corresponds to the tight-coupling condition. Such a choice of fluxes and forces was adopted to identify the Onsager coefficients of the finite-time Carnot cycle in [30-32]. As we will see below, the vanishing L qq , equivalently the tight-coupling condition, implies the attainability of the Carnot efficiency in the adiabatic limit ǫ → 0 [39]. Thermodynamic efficiency-. Using the global linear response relations in Eqs. (5) and (6) together with Eq. (26), we formulate the power P and efficiency η of our Gaussian heat engines, where η C ≡ ∆T/T h ≃ ∆T/T c is the Carnot efficiency. In the adiabatic limit F w → 0, we recover η = η C . For small ǫ, the power behaves as P = −L wq ∆T ǫ/T c ^2 + O(ǫ^2 ). It should agree with ∆T ∆S ǫ, where ∆S denotes the adiabatic entropy change of the system and ∆T ∆S is the adiabatic work per cycle. Thus, we identify L wq = −L qw = −T c ^2 ∆S, which clarifies the vanishing contribution of these anti-symmetric parts to the irreversible entropy production rate σ̇. The efficiency under a given F w , that is, a given speed ǫ, is bounded from above as in Eq. (32), where T c L^2 is the minimum value of L ww . Reparameterizing from T to θ (0 ≤ θ ≤ 1) and using the Cauchy-Schwarz inequality, we obtain L ww ≥ T c L^2 . Equation (32) constitutes our second main result. It yields a tighter bound than the Carnot efficiency imposed by the conventional second law of thermodynamics and is attained for an optimal protocol under a given cycle speed. Such a bound was obtained by virtue of the detailed structure of the global Onsager coefficients (Eq. (26)). L is equivalent to the thermodynamic length, which constrains the minimum dissipation along finite-time transformations close to equilibrium states [46-53]. An expression similar to Eq. (32), including the effect of the temperature-variation speed, was recently derived based on a geometric formulation of quantum heat engines [23]. Here, we derived a similar form in terms of the global linear response relations between the speed of adiabatically changing parameters and the temperature difference. Example: Brownian heat engine-. We demonstrate our results by using the simplest illustrative case of a one-dimensional stochastic Brownian heat engine model (n = m = 1) [8,16,46]. Let x 1 = x be the position of a Brownian particle immersed in a thermal bath.
The probability P(x, t) obeys the FP equation [54,55] with γ the viscous friction coefficient and H(x, t) = U(x, t) = λ(t)x^2 /2, with λ(t) = λ 0 + g w (ǫt), a harmonic potential. We identify A and B as A = A 11 = −λ(t)/γ and B = B 11 = 2k B T(t)/γ. Because the Boltzmann distribution with T c and λ 0 is p 0 (x) = sqrt(λ 0 /(2πk B T c )) exp(−λ 0 x^2 /(2k B T c )), the variance at equilibrium is Ξ 0,11 = k B T c /λ 0 , and the adiabatic solution is Ξ ad,11 (t) = k B T(t)/λ(t). The local linear response relations j = j ad + Λf are then obtained from Eqs. (24) and (25) up to O(|f|), which determines the local and global Onsager matrices Λ and L, Eqs. (35) and (36), respectively. We can confirm the Onsager anti-symmetry in Λ and L, as expected. For a Carnot-like cycle with γ q (T) = 1 for 0 ≤ T < T h (0 < T h < 1) and γ q (T) = 0 for T h ≤ T < 1 [33], we have L wq = −L qw = −T c ^2 ∆S = (k B T c ^2 /2) ln(λ 1 /λ 0 ), where λ 1 ≡ λ(T h ) and λ 0 = λ(0) = λ(1) are the minimum and maximum values of λ along the cycle, respectively. The bound can be evaluated using the optimal protocol λ*(T) for given λ 0 and λ 1 [46], and the efficiency bound in Eq. (32) for the present case thus becomes Eq. (37). A comparison of the bound in Eq. (37) with that for, for example, a linear protocol connecting λ 0 and λ 1 highlights the importance of protocol optimization as a design principle.
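To see these relations at work numerically, the following sketch integrates the variance equation dΞ/dt = −(2λ/γ)Ξ + 2k_B T/γ over a slow Carnot-like cycle and extracts the work output, the heat absorbed from the hot bath, and the efficiency. The protocol, the parameter values, and the unit choice (k_B = γ = 1) are illustrative assumptions; in particular, this is a simple piecewise-linear stiffness protocol, not the optimal protocol λ*(T) of Ref. [46].

```python
# Minimal numerical sketch of the Brownian heat engine above (illustrative parameters).
kB, gam = 1.0, 1.0
Tc, Th = 1.0, 1.1
lam0, lam1 = 2.0, 1.0          # maximum and minimum stiffness along the cycle
tau_cyc = 200.0                # slow cycle -> close to the adiabatic limit
dt = 0.01

def protocol(t):
    """Carnot-like cycle: hot expansion in the first half, cold compression in the second."""
    s = (t % tau_cyc) / tau_cyc
    if s < 0.5:
        return lam0 + (lam1 - lam0) * (2 * s), Th
    return lam1 + (lam0 - lam1) * (2 * s - 1), Tc

Xi = kB * Tc / lam0            # start from equilibrium variance
W = Qh = 0.0
n_steps = int(3 * tau_cyc / dt)
for k in range(n_steps):
    t = k * dt
    lam, T = protocol(t)
    lam_next, _ = protocol(t + dt)
    dXi = (-(2 * lam / gam) * Xi + 2 * kB * T / gam) * dt   # variance dynamics
    if t >= 2 * tau_cyc:       # accumulate over the last (periodic) cycle only
        W += -((lam_next - lam) / 2) * Xi                   # work output: -(dlam/2) * Xi
        if T == Th:
            Qh += (lam / 2) * dXi                           # heat absorbed from the hot bath
    Xi += dXi

print(f"W per cycle  = {W:.4f}")
print(f"Qh per cycle = {Qh:.4f}")
print(f"efficiency   = {W / Qh:.4f}  (Carnot: {1 - Tc / Th:.4f})")
```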
Concluding perspective-. We developed a linear response theory for generic Gaussian heat engines as the simplest model of adiabatically driven linear irreversible heat engines. We established the hierarchical relationship between the local and global Onsager coefficients. Further, we derived the efficiency bound under a given rate of adiabatic change; the derived bound is tighter than the Carnot efficiency imposed by the second law of thermodynamics. We expect that the present results will contribute to a deeper understanding of the physical principles and optimal control of nonequilibrium heat engines.
We note complementary approaches to the formulation of linear irreversible thermodynamics for periodically driven heat engines [33-38]. In these approaches, the other thermodynamic force (that is, in addition to the temperature difference) is the strength of the periodic forcing, not its speed as in the present approach. Interestingly, the Onsager coefficients in these cases were found to decompose into adiabatic and non-adiabatic contributions. The existence of different types of linear irreversible thermodynamics implies the rich and versatile structure of periodically driven heat engines, and this deserves further investigation.
"Physics"
] |
DC Grid for Domestic Electrification
Various statistics indicate that many parts of India, especially rural and island areas, have either partial or no access to electricity. The main reason for this is the immense distance of the power-producing stations and distribution hubs from these rural and remote areas. This emphasizes the significance of decentralized power generation by means of renewable energy resources. Although in the current energy production scenario electricity is supplied principally as AC, a large variety of everyday utility devices such as cell phone chargers, computers and laptop chargers work internally with DC power. The number of intermediate energy conversion steps is significantly reduced by providing DC power to these devices. The paper also cites other works that demonstrate the increase in overall system efficiency and the resulting cost reduction. With abundant solar power at our disposal and major advances in power electronic conversion devices, this article proposes a DC grid, fed by solar PV, that can be used to power the aforementioned devices in a household in a remote or rural area. A system was designed for a household that is not connected to the main grid and was successfully simulated for several loads totaling 250 W, with an isolated flyback converter at the front end and suitable power electronic converters at each load point. The proposed direct current system aims at maximum extraction of operational energy from renewable sources at the residential and commercial level.
Introduction
The World Energy Outlook 2015 states that nearly 17% of the world's population lacks access to electric power at home [1]. As per the reports of the International Energy Agency, India has over 237 million citizens in this category. According to the CEEW (Council on Energy, Environment and Water), over 50% of households in the states of West Bengal, Bihar, Madhya Pradesh, Uttar Pradesh, Jharkhand and Orissa have a shortage of electric power despite being grid connected. Regardless of the efforts put in over the years to electrify rural stretches, many households in five of these six states have no more than 8 h of supply, or no supply at all, and are regularly subjected to blackouts. The states may have slightly better conditions overall, but the same cannot be said for the unfortunate low-income strata of citizens lacking access to electricity. Most of these homes use kerosene for lighting, which gives poor illumination. It also emits unhealthy fumes, poses fire hazards, is ecologically unfavorable, and is too expensive when bought at market tariffs unless subsidized by the Government [2,3].
It may come as a surprise that not all houses in an Indian village are necessarily electrified even though the village is said to be grid connected. By the definition provided by the Government of India, "A village is considered electrified when 10% of the homes in the village are connected to the grid." As of May 2016, 18,452 villages remained to be electrified [4]. Nevertheless, in the recent past numerous individuals have gained access to electricity on the back of rapid economic growth, and along with numerous sponsored programs the country has made significant progress in its electrical infrastructure. In the energy production sector, fossil fuels and conventional methods of power generation are rapidly losing favor, since policymakers around the globe are stressing the effects of global warming and climate change. The world is advancing towards green energy to meet the ever-increasing power demand, thus requiring a mix of numerous resources, both conventional and non-conventional [5]. The unique energy situation in India is prompting the country's growth objectives to be revised and modified so that it strives to meet its current demands while generating energy that is clean, efficient and environmentally friendly [6]. This encourages the economy to take up more enterprises and initiatives to extract maximum energy from renewable resources.
The abundance of solar radiation, the decreasing prices of photovoltaic components, the ease of maintenance and the scalability are making solar PV generation increasingly popular among renewable energy sources [7]. Even though in most parts of the country the sun shines adequately for 10 to 12 h a day for most of the year, the huge potential of distributed solar power has not been fully exploited. In places receiving sunlight for more than 1400 equivalent peak hours annually, the power shortage gap can be bridged by using solar energy. The Government of India has become conscious of these facts and is enhancing the use of renewable power sources, specifically solar power, in meeting the demand-supply gap nationwide. Thus, pertinent strategic guidelines have been created for endorsing solar power usage across the country. It aims at achieving 100 GW of PV (photovoltaic) capacity, from the current figure of 20 GW, by 2022 with the assistance of the Jawaharlal Nehru National Solar Mission (JNNSM). The JNNSM intends to power rural areas that were deprived of electricity before. Encouraged by the progress made in 4 years, it now aims to achieve 100 GW by 2022, with solar rooftops contributing 40 GW of this. In the light of the launch of the JNNSM program, a few states launched their own solar policies. The Solar Energy Policy of Tamil Nadu, initiated in 2012, targets 5 GW of solar energy by 2023. To encourage solar rooftops, the Tamil Nadu government provides large incentives and almost 30% subsidies to buildings incorporating solar rooftops. This movement has encouraged studies and improvements on low-voltage DC arrangements, as they are suitable for residential applications and can be easily integrated with renewable energy sources and storage systems [8,9]. The rising demand for aggregating renewable energy resources is bringing DC back into the energy distribution frame, since it is easy to integrate renewable sources into the grid in such a case. Most loads at the utilization end these days are DC or non-sinusoidal. As a result, much research has been conducted on DC distribution systems and their prospective use in residential applications [10]. DC also enjoys numerous other advantages over the AC system, one of the most significant being the reduced number of converters at each power conversion leg and better efficiency in comparison with an AC grid. If the solar DC output voltage is fed directly to these devices, the conversion stages are reduced from the three-stage DC-AC-DC to the two-stage DC-DC when solar PV and fuel cells are interconnected with DC microgrids [11-15]. Additionally, the absence of reactive power decreases the current required to transfer the same amount of energy [16] and also mitigates the issues of skin effect, power factor and harmonics [17]. Studies conducted on DC distribution systems in residences at various locations and with different topologies in the United States showed that the estimated energy savings can be up to 5% for a non-storage system and up to 14% for a system using storage [18,19]. More optimistic researchers aim to achieve energy savings of 25-30% [20]. Exchanging the prevailing alternating current delivery grids for direct current is impracticable and economically non-viable. This is why DC can become the ideal choice when it comes to energizing remote areas which are not connected to the main power grid, also known as "island areas".
This type of areas can be made self-sustainable by harnessing non-conventional energy resources.
This article proposes a direct current microgrid which uses solar PV to enable a household in an isolated area to power itself. Microgrids are entities that can be self-controlled and operated in island or grid-connected mode when interconnected with the local distribution systems [21]. They mainly refer to small-scale power networks having voltage levels below 20 kV and power ratings up to 1 MW. Using this system, unwanted energy conversion steps, and the losses accompanying them, are eliminated. The major advantage of a DC microgrid is its ability to interface easily with DC loads and Distributed Energy Resources (DERs). For example, only a DC-DC conversion stage is required in a DC microgrid when it is connected to solar PV and battery storage, thus providing a simple and cost-cutting structure with a better control strategy. This boosts the overall system efficiency and makes the system less complicated. It also improves the performance and the life of components [22,23]. The lack of standardization, regulation and mature protection devices for DC-DC converters is one of the major problems that DC power systems must solve before being regarded as an appropriate option that supersedes AC power systems in rural and island areas [17]. Figure 1 illustrates the methodology outline of this research work. The work done and the outcome of this methodology are explained in the coming sections.
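As a rough quantitative illustration of the conversion-stage argument above, the sketch below multiplies assumed per-stage efficiencies for an AC-coupled path (PV DC to inverter AC to an appliance's internal AC-DC adapter) and a DC-coupled path (PV DC to a point-of-load DC-DC converter). The per-stage figures are assumptions for illustration, not measurements from the cited studies.

```python
# Cascaded efficiency: the overall efficiency is the product of the per-stage efficiencies.
def chain_efficiency(stage_efficiencies):
    eta = 1.0
    for e in stage_efficiencies:
        eta *= e
    return eta

ac_path = [0.95, 0.95, 0.90]   # MPPT DC-DC, DC-AC inverter, AC-DC adapter inside the appliance
dc_path = [0.95, 0.95]         # MPPT DC-DC, point-of-load DC-DC converter
print(f"AC-coupled path: {chain_efficiency(ac_path):.1%}")   # ~81%
print(f"DC-coupled path: {chain_efficiency(dc_path):.1%}")   # ~90%
```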
Existing Works in DC Microgrids
An analysis has been performed on a 48 V DC microgrid integrated with PV panels and highly efficient DC loads in a multi-storey building in India [24,25]. The findings show that the DC microgrid is far more efficient in bringing cost savings, thereby reducing electricity bills. A practical deployment of a low-power solar system designed to supply the basic power needs of a low-income family in India has been studied [9]. The system supplies a cumulative load of 125 W from a PV panel. This system may not be able to meet the power requirement of a fully electrified and digital household, but it demonstrates that when minimizing system cost is the priority, a low-voltage DC distribution system has no challengers.
Work on larger test beds, such as a 5 kW system with a high DC-link voltage of 380 V, has also been conducted to study the feasibility of DC as a distribution system [15]. A grid-connected solar hybrid system with a solar array feeding a 220 V DC link that powers an entire household is realized in [5]. An effective Maximum Power Point Tracking (MPPT) algorithm to obtain a constant DC voltage of 12 V or 24 V using a PI controller is studied in [6]. An Off-Grid Home (OGH), an inverter-less system powering lighting loads, is deployed in [8].
Green Office and Apartments (GOA) technology is a solution offered to ensure all-day power using an integration of the grid and batteries charged from solar PV [26]. A DC microgrid consisting of a 250 W solar panel and charge controllers to regulate battery charging has been proposed in [27]. Ref. [28] suggests a novel reconfigurable inverter topology which can perform DC-DC conversion, DC-AC conversion and grid connection at the same time. An experimental prototype of a power-balancing circuit to solve mismatch problems while connecting various renewables to a DC link is proposed in [29]. Ref. [30] elaborates the concepts of the DC house and Null Net Energy (NNE) buildings, which supply DC to residential buildings. Various Multiple-Input Multiple-Output Converters (MIMOCs) that can be used as front-end DC-DC converters for DC distribution in future homes are discussed in [31-42].
Apart from conventional microgrid works, some research has been done on more advanced aspects of microgrid implementation. Refs. [43-45] discuss consumers with distributed storage capacity; in this case, demand sharing and power quality improvement become much easier. Refs. [46,47] consider a renewable energy sharing mechanism among multiple consumers, rather than an individual renewable energy harvesting topology. Since this work discusses the implementation of a DC microgrid for a rural domestic area, these advanced techniques are left out of the initial phase. In addition, since solar energy is weather dependent, storage devices or weather-independent renewable energy sources such as fuel cells need to be integrated into the microgrid to ensure a regulated supply irrespective of the weather or time [48,49]. This part is also omitted from the simulation, as the outcome would be the same.
Selection of Bus Voltage
The DC grid distribution system has several practical challenges in delivering a regulated power supply [32]. A DC microgrid supplying low voltage and high currents requires heavy-gauge cables, which leads to an increase in overall losses [32,33]. Thus, in order to reduce the losses and save installation cost, the DC microgrid voltage must be sufficiently high. Conversely, if the link voltage is too high, it leads to the occurrence of sparks, arcing and electric shock. Much research has been done to reduce arcing and spark phenomena in order to optimize DC distribution systems. However, this paper deals with loads requiring no more than 240 V and 3.42 A. Hence the DC link voltage is taken as 72 V [34-37].
The majority of domestic electrical appliances internally need a DC voltage for their operation, which is conventionally obtained by stepping down a rectified AC supply [38]. Renewable energy resources can directly produce this low DC voltage [39]. Hence the rectification stage can be avoided if the load is powered with DC. No standard magnitude has been fixed for the DC grid voltage of a microgrid. The loads chosen for this research have rated voltages ranging from 5 V to 230 V. To ensure a coherent transition from the grid voltage to the rated load voltages, an optimum grid voltage of 72 V is chosen [40,41].
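To make the loss argument above concrete, the short sketch below compares the cable conduction loss of delivering the same power at a few candidate bus voltages. The 250 W load figure comes from the front-end converter rating described later; the cable resistance is a hypothetical value chosen only for illustration, so the absolute numbers are not taken from the paper.

```python
# Illustrative sketch: conduction (I^2 * R) loss in the distribution cable
# for the same delivered power at different DC bus voltages.
# The 0.5 ohm round-trip cable resistance is a hypothetical value.

P_LOAD_W = 250.0        # total load power (front-end converter rating)
R_CABLE_OHM = 0.5       # assumed round-trip cable resistance (hypothetical)

for v_bus in (24.0, 48.0, 72.0, 230.0):
    i_bus = P_LOAD_W / v_bus                 # bus current for the same power
    p_loss = i_bus ** 2 * R_CABLE_OHM        # conduction loss in the cable
    print(f"{v_bus:6.0f} V bus: {i_bus:5.2f} A, cable loss {p_loss:6.2f} W "
          f"({100 * p_loss / P_LOAD_W:4.1f}% of load power)")
```

With these assumptions, the loss drops from roughly 22% of the load power at 24 V to about 2% at 72 V, which is the trade-off that motivates a moderately high bus voltage while staying below levels where arcing becomes a concern.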
Front End Isolated DC to DC Converter
The input voltage Vdc of the DC microgrid is provided by a solar array whose output is expected to be 24 V. A 20 W, 12 V solar panel (54 × 46 cm) is used as the building block of the array; to provide the rated input to the grid, seven parallel strings of two series-connected panels are used. The distribution losses in the microgrid can be kept low by stepping up the input voltage to 72 V with a flyback converter. A flyback converter is chosen as the primary-side DC-DC converter of the proposed system because it provides galvanic isolation between the input and the DC microgrid. The specifications of the selected flyback converter are an input voltage of 24 V, an output voltage of 72 V and an output power of 250 W. The simplicity of its topology compared with other isolated SMPS topologies is an added advantage. It also has a lower component count and lower cost, making it popular. It can operate over a wide range of source voltages and can provide multiple isolated DC outputs.
L, C and R denote inductor, capacitor and resistor, respectively, and Lm denotes the magnetizing inductance. For an input voltage Vdc of 24 V and a grid voltage Vgrd of 72 V, the duty-cycle ratio of the flyback converter is 0.42. The front-end converter is designed to supply a cumulative device power of 250 W. An isolation transformer with a turns ratio of 1:4 is chosen for the proposed topology, with a magnetizing inductance Lm of 85 µH. The switching frequency is selected as 50 kHz, and for a 1% voltage ripple a capacitor C1 of 50 µF is used. A clamping circuit is also connected to the isolation transformer to absorb the energy stored in the inductance and provide a path for its dissipation, avoiding high surge voltages; a clamping capacitance Ccl of 1 µF is used. The microgrid voltage is fed to the various devices through Point-of-Load (POL) converters [42]. Depending upon the load specifications, the POL converters can be buck-boost, buck or boost. Table 1 shows the various loads supplied by the proposed system. Figure 2 shows the proposed topology of the DC microgrid, including the front-end converter, bus and loads.
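As a sanity check on the figures quoted above, the sketch below recomputes the flyback duty cycle, the magnetizing-current ripple and the output capacitance needed for a 1% voltage ripple from the stated specifications (24 V input, 72 V bus, 1:4 turns ratio, 85 µH, 50 kHz, 250 W). The continuous-conduction-mode relations and the unity-efficiency assumption are ours, not taken from the paper.

```python
# Flyback design check using the specifications quoted in the text.
# CCM relations and 100% efficiency are simplifying assumptions.

V_IN = 24.0         # solar-array input voltage (V)
V_GRID = 72.0       # DC bus voltage (V)
N = 4.0             # secondary-to-primary turns ratio (1:4)
L_M = 85e-6         # magnetizing inductance (H)
F_SW = 50e3         # switching frequency (Hz)
P_OUT = 250.0       # rated output power (W)
RIPPLE_SPEC = 0.01  # 1% output-voltage ripple target

# Duty cycle from Vout = Vin * N * D / (1 - D)
d = V_GRID / (V_GRID + N * V_IN)

# Magnetizing-current ripple relative to its average value (CCM, eta = 1)
i_in_avg = P_OUT / V_IN                 # average input current
i_lm_avg = i_in_avg / d                 # average magnetizing current during on-time
di_lm = V_IN * d / (F_SW * L_M)         # peak-to-peak magnetizing-current ripple
ripple_pct = 100 * di_lm / i_lm_avg

# Output capacitance for the 1% voltage-ripple target
i_out = P_OUT / V_GRID
c_min = i_out * d / (F_SW * RIPPLE_SPEC * V_GRID)

print(f"duty cycle D            = {d:.3f}")              # ~0.43, close to the quoted 0.42
print(f"magnetizing ripple      = {ripple_pct:.1f} %")    # ~10 %, close to the simulated 9.64 %
print(f"min output capacitance  = {c_min * 1e6:.1f} uF")  # ~41 uF, below the chosen 50 uF
```

Under these assumptions the 50 µF capacitor comfortably meets the 1% ripple target, and the magnetizing-current ripple comes out close to the 9.64% later reported from the simulation.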
The circuit topology of the complete system, including the front-end converter, high-voltage loads and low-voltage loads, is shown in Figure 3. Here M1, M2, M3 and M4 are the controlled switches.
Loads with 24 V to 240 V Rating
The voltage required for these loads is provided by a buck-boost converter. For the home illumination application, we consider five 9 W Syska B22 LED bulbs with a 240 V DC voltage rating. The buck-boost converter is designed for a 1% peak voltage ripple and a 10% current ripple relative to the rated voltage and current, respectively. The proposed arrangement of the 24-240 V loads is illustrated in Figure 4. The designed values of the inductor and capacitor are tabulated in Table 2.
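As an illustration of how values such as those in Table 2 can be arrived at, the sketch below sizes L and C for the five 9 W, 240 V LED bulbs fed from the 72 V bus, using the 1% voltage-ripple and 10% current-ripple targets stated above. The 50 kHz switching frequency is assumed to match the front-end converter, and the current-ripple target is interpreted relative to the average inductor current; both are our assumptions, so the resulting numbers need not coincide with Table 2.

```python
# Buck-boost sizing sketch for the 240 V LED branch (CCM relations).
# Switching frequency and the ripple interpretation are assumptions.

V_BUS = 72.0           # DC microgrid voltage (V)
V_LED = 240.0          # LED string rated voltage (V)
P_LED = 5 * 9.0        # five 9 W bulbs (W)
F_SW = 50e3            # assumed switching frequency (Hz)
DV_PCT = 0.01          # 1% peak output-voltage ripple
DI_PCT = 0.10          # 10% inductor-current ripple (relative to average, assumed)

d = V_LED / (V_BUS + V_LED)          # buck-boost duty cycle
i_out = P_LED / V_LED                # load current
i_l_avg = i_out / (1 - d)            # average inductor current in CCM
di_l = DI_PCT * i_l_avg              # allowed peak-to-peak current ripple
dv = DV_PCT * V_LED                  # allowed peak voltage ripple

l_min = V_BUS * d / (F_SW * di_l)    # inductance for the current-ripple target
c_min = i_out * d / (F_SW * dv)      # capacitance for the voltage-ripple target

print(f"duty cycle D = {d:.3f}")
print(f"L >= {l_min * 1e3:.1f} mH, C >= {c_min * 1e6:.2f} uF")
```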
Loads with <24 V Rating
Low-voltage loads such as a laptop and a DC fan are considered in this section. Figure 5 illustrates the designed connection diagram for the <24 V devices. These loads essentially need a ripple-free DC output voltage, which is usually obtained using high-efficiency DC conversion stages followed by a step-up PFC (power factor correction) circuit. This setup adds bulk to the system [41]. By replacing the above-mentioned circuitry with a step-down buck converter, the power quality of the grid can be maintained with minimal space consumption. This reduces the development cost and dimensions and enhances the lifespan of the device [39]. In rural areas, the application-side distribution transformer often delivers a voltage 20% to 25% below the nominal values. Operating conventional induction-motor-based devices such as household fans with such a voltage deviation from the nominal values may result in higher iron losses, which may lead to permanent damage of the motor [8]. The calculated values of the various converter parameters for energizing the <24 V devices are tabulated in Table 2. Here VR1, VR2 and VR3 represent the voltage drops across the <24 V loads R1, R2 and R3, respectively.
To mitigate these effects, modern brushless DC motors, which have a lower ripple percentage, can be used for DC fans instead of conventional fans. In addition to the reduction in losses, they offer advantages such as improved power density, enhanced torque, longer lifespan, easier control and reduced maintenance cost.
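A sizing sketch for one of the <24 V buck-converter branches described above is given below. Because the exact load ratings sit in Table 2, the 19.5 V / 60 W laptop figures used here are hypothetical placeholders, as are the 50 kHz switching frequency and the ripple interpretation; only the step-down (buck) topology follows from the text.

```python
# Step-down (buck) sizing sketch for a low-voltage branch of the 72 V bus.
# The 19.5 V, 60 W laptop load is a hypothetical example; the actual
# ratings are listed in Table 2 of the paper.

V_BUS = 72.0
V_LOAD = 19.5          # hypothetical laptop input voltage (V)
P_LOAD = 60.0          # hypothetical laptop power (W)
F_SW = 50e3            # assumed switching frequency (Hz)
DV_PCT = 0.01          # 1% voltage-ripple target
DI_PCT = 0.10          # 10% current-ripple target (relative to load current, assumed)

d = V_LOAD / V_BUS                              # buck duty cycle
i_out = P_LOAD / V_LOAD                         # load current
di_l = DI_PCT * i_out                           # allowed inductor-current ripple
dv = DV_PCT * V_LOAD                            # allowed output-voltage ripple

l_min = (V_BUS - V_LOAD) * d / (F_SW * di_l)    # inductor for the current-ripple target
c_min = di_l / (8 * F_SW * dv)                  # capacitor from the standard buck ripple relation

print(f"D = {d:.3f}, L >= {l_min * 1e6:.0f} uH, C >= {c_min * 1e6:.2f} uF")
```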
Numerical Simulation Results
PSIM Professional Version 9.1.1.400 (Vellore, Tamil Nadu, India) was used to simulate the proposed system. Equivalent models of the real-time loads, devices and sources are used for the simulation, and the simulated values of each load are tabulated in Table 3. In this initial phase of the research, the simulated results are used to formulate the conclusions. The flyback converter illustrated in Figure 3 is simulated to obtain its parameters; the system in total has four DC-DC converters. The 72 V DC bus voltage used to electrify the DC microgrid is obtained from the front-end converter. Figure 6a illustrates the magnetizing current of the flyback transformer; the ripple in the magnetizing current is 9.64%. Figure 6b shows the DC grid voltage waveform. As the grid is fed by the flyback converter, the grid voltage is 72 V as per the rating, with a voltage ripple of 0.9%, which is acceptable for a domestic power network. Figure 7a illustrates the LED input voltage waveform and Figure 7b the LED input current waveform; the voltage and current ripples are 0.83% and 1.06%, respectively. This low ripple indicates the high power quality of the microgrid. Figure 8a,b represents the laptop input voltage and current drawn from the grid; the waveforms are of high power quality, with voltage and current ripples of 0.7% and 0.58%, respectively. Similarly, Figure 9a,b illustrates the DC fan input voltage and input current, with voltage and current ripples of 2.37% and 0.337%, respectively. From the waveforms of these loads, it is clear that the power quality of the proposed microgrid topology is very high compared with other conventional topologies.
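The ripple percentages quoted above can be extracted from the simulated waveforms in a straightforward way; a minimal helper, assuming the steady-state samples are available as a plain sequence of numbers, is sketched below.

```python
# Minimal helper: peak-to-peak ripple as a percentage of the mean value,
# computed over steady-state samples of a simulated waveform.

def ripple_percent(samples):
    """Return peak-to-peak ripple in percent of the mean of `samples`."""
    mean = sum(samples) / len(samples)
    return 100.0 * (max(samples) - min(samples)) / mean

# Example with a synthetic 72 V bus trace carrying a small triangular ripple.
bus = [72.0 + 0.3 * ((i % 20) / 20.0 - 0.5) for i in range(200)]
print(f"bus ripple = {ripple_percent(bus):.2f} %")   # ~0.4 % for this synthetic trace
```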
Conclusions
This research presents the design and simulation of a DC microgrid that enables standalone powering of a rural household with less than 250 W of load from a solar PV array. The DC-DC POL conversion systems were effectively connected to a 72 V DC bus. The grid highlights the benefits that a DC grid arrangement has over traditional AC grids, the foremost being the reduced converter count at the device end. The DC grid voltage is designed to be 72 V, as no fixed standards are available for this topology, and the simulated grid voltage is obtained as 72 V. The conversion circuitry for each device was developed and the required voltage values were attained. From the simulation results, it can be inferred that the designed DC grid can supply the rated power required by each load. By realizing it on a larger scale, this project can be commercialized to power individual homes in island areas, thereby contributing to the goal of 0% unpowered villages.
Author Contributions: All authors contributed equally to the final dissemination of the research investigation as a full article.
Funding: This research activity received support from EEEIC International, Poland.
"Engineering"
] |
Charm-strange baryon strong decays in a chiral quark model
The strong decays of charm-strange baryons up to N=2 shell are studied in a chiral quark model. The theoretical predictions for the well determined charm-strange baryons, $\Xi_c^*(2645)$, $\Xi_c(2790)$ and $\Xi_c(2815)$, are in good agreement with the experimental data. This model is also extended to analyze the strong decays of the other newly observed charm-strange baryons $\Xi_c(2930)$, $\Xi_c(2980)$, $\Xi_c(3055)$, $\Xi_c(3080)$ and $\Xi_c(3123)$. Our predictions are given as follows. (i) $\Xi_c(2930)$ might be the first $P$-wave excitation of $\Xi_c'$ with $J^P=1/2^-$, favors the $|\Xi_c'\ ^2P_\lambda 1/2^->$ or $|\Xi_c'\ ^4P_\lambda 1/2^->$ state. (ii) $\Xi_c(2980)$ might correspond to two overlapping $P$-wave states $|\Xi_c'\ ^2P_\rho 1/2^->$ and $|\Xi_c'\ ^2P_\rho 3/2^->$, respectively. The $\Xi_c(2980)$ observed in the $\Lambda_c^+\bar{K}\pi$ final state is most likely to be the $|\Xi_c'\ ^2P_\rho 1/2^->$ state, while the narrower resonance with a mass $m\simeq 2.97$ GeV observed in the $\Xi_c^*(2645)\pi$ channel favors to be assigned to the $|\Xi_c'\ ^2P_\rho 3/2^->$ state. (iii) $\Xi_c(3080)$ favors to be classified as the $|\Xi_c\ S_{\rho\rho} 1/2^+>$ state, i.e., the first radial excitation (2S) of $\Xi_c$. (iv) $\Xi_c(3055)$ is most likely to be the first $D$-wave excitation of $\Xi_c$ with $J^P=3/2^+$, favors the $|\Xi_c\ ^2D_{\lambda\lambda} 3/2^+>$ state. (v) $\Xi_c(3123)$ might be assigned to the $|\Xi_c'\ ^4D_{\lambda\lambda} 3/2^+>$, $|\Xi_c'\ ^4D_{\lambda\lambda} 5/2^+>$, or $|\Xi_c\ ^2D_{\rho\rho} 5/2^+>$ state. As a by-product, we calculate the strong decays of the bottom baryons $\Sigma_b^{\pm}$, $\Sigma_b^{*\pm}$ and $\Xi_b^*$, which are in good agreement with the recent observations as well.
I. INTRODUCTION
In recent years, several new charm-strange baryons, $\Xi_c(2930)$, $\Xi_c(2980)$, $\Xi_c(3055)$, $\Xi_c(3080)$ and $\Xi_c(3123)$, have been observed. Their experimental information has been collected in Tab. I. $\Xi_c(2980)$ and $\Xi_c(3080)$ are relatively well established in experiments. Both of their isospin states were observed by the Belle Collaboration in the $\Lambda_c^+\bar{K}\pi$ channel [1] and confirmed by BaBar with high statistical significance [2]. Belle also observed a resonance structure around 2.97 GeV with a narrow width of $\sim 18$ MeV in the $\Xi_c^*(2645)\pi$ decay channel in a separate study [3], which is often considered to be the same resonance, $\Xi_c(2980)$, observed in the $\Lambda_c^+\bar{K}\pi$ channel. $\Xi_c(2930)$ was found by BaBar in the $\Lambda_c^+ K^-$ final state by analyzing the $B^-\to\Lambda_c^+\bar{\Lambda}_c^- K^-$ process [4]. However, this structure has not yet been confirmed by Belle. $\Xi_c(3055)^+$ and $\Xi_c(3123)^+$ were only observed by BaBar in the $\Lambda_c^+ K^-\pi^+$ final state with statistical significances of 6.4σ and 3.0σ, respectively [2]. No further evidence of them was found when BaBar searched the inclusive $\Lambda_c^+\bar{K}$ and $\Lambda_c^+\bar{K}\pi^+\pi^-$ invariant mass spectra for new narrow states. BaBar's observations show that $\Xi_c(3055)^+$ and $\Xi_c(3123)^+$ mostly decay through the intermediate resonant modes $\Sigma_c(2455)^{++}K^-$ and $\Sigma_c(2520)^{++}K^-$, respectively. A good review of the recent experimental results on charmed baryons can be found in [5].
Charmed baryon mass spectroscopy has been investigated in various models [7-17]. The masses of charm-strange baryons in the N ≤ 2 shell predicted within several quark models have been collected in Tabs. II and III. Comparing the experimental data with the quark model predictions, one finds that $\Xi_c(2930)$ could be a candidate for the 2S excitation of $\Xi_c$ with $J^P=1/2^+$, or the 1P excitation of $\Xi_c'$ with $J^P=1/2^-$, $3/2^-$ or $5/2^-$. $\Xi_c(2980)$ might be assigned to the 2S excitation of $\Xi_c$ or $\Xi_c'$ with $J^P=1/2^+$. $\Xi_c(3055)$ and $\Xi_c(3080)$ are most likely to be the 1D excitations of $\Xi_c$ with $J^P=3/2^+$ or $5/2^+$, or the 2S excitation of $\Xi_c'$ with $J^P=1/2^+$. $\Xi_c(3123)$ might be classified as a 1D excitation of $\Xi_c'$ with $J^P=3/2^+$, $5/2^+$ or $7/2^+$. Obviously, relying only on the mass analysis it is difficult to determine the quantum numbers of these newly observed charm-strange baryons. On the other hand, the strong decays of these newly observed charm-strange baryons have been studied in the frameworks of heavy hadron chiral perturbation theory [18] and the $^3P_0$ model [19,20], respectively. In [18], Cheng and Chua advocated that the $J^P$ numbers of $\Xi_c(2980)$ and $\Xi_c(3080)$ could be $1/2^+$ and $5/2^+$, respectively. They claimed that under this $J^P$ assignment it is easy to understand why $\Xi_c(2980)$ is broader than $\Xi_c(3080)$. In [19,20], Chen et al. analyzed the strong decays of the N = 2 shell excited charm-strange baryons in the $^3P_0$ model; they could only exclude some assignments according to the present experimental information. As a whole, although the new charm-strange baryons have been studied in several aspects, such as mass spectroscopy and strong decays, their quantum numbers are not clear so far. Thus, more investigations of these new heavy baryons are needed.
To further understand the nature of these newly observed charm-strange baryons, in this work we make a systematic study of their strong decays in a chiral quark model, which has been developed and successfully used to deal with the strong decays of charmed baryons and heavy-light mesons [21-24]. It should be pointed out that very recently some important progress in the observation of bottom baryons has been achieved in experiments as well: the CDF Collaboration first measured the natural widths of the bottom baryons $\Sigma_b^{\pm}$ and $\Sigma_b^{*\pm}$ and improved the measurements of their masses [25], and the CMS Collaboration observed a new neutral excited bottom baryon with a mass m = 5945.0 ± 0.7 ± 0.3 ± 2.7 MeV, which is most likely the $\Xi_b^{*0}$ [26]. As a by-product, in this work we also calculate the strong decays of these bottom baryons according to the new measurements. This work is organized as follows. In the subsequent section, the charm-strange baryon in the quark model is outlined. Then a brief review of the chiral quark model approach is given in Sec. III. The numerical results are presented and discussed in Sec. IV. Finally, a summary is given in Sec. V.
II. CHARM-STRANGE BARYON IN THE QUARK MODEL
The charmed baryon contains a heavy charm quark, which breaks SU(4) symmetry. However, the SU(3) symmetry between the other two light quarks (u, d, or s) is approximately kept. According to this symmetry, the charmed baryons can be classified into two different SU(3) flavor representations: the symmetric sextet 6 and the antisymmetric antitriplet $\bar{3}$. For the charm-strange baryon, the antisymmetric flavor wave function ($\Xi_c$-type) takes the standard form adopted in the quark model; the details of the spatial wave functions can be found in our previous work [21].
The spin-flavor and spatial wave functions of baryons must be symmetric since the color wave function is antisymmetric. The flavor wave functions of the $\Xi_c$-type charm-strange baryons, $\phi_{\Xi_c}$, are antisymmetric under the interchange of the u (d) and s quarks; thus, their spin-space wave functions must be symmetric. In contrast, the spin-spatial wave functions of the $\Xi_c'$-type charm-strange baryons are required to be antisymmetric due to their symmetric flavor wave functions under the interchange of the two light quarks. The notations, wave functions, and quantum numbers of the $\Xi_c$-type and $\Xi_c'$-type charm-strange baryons up to the N = 2 shell classified in the quark model are listed in Tabs. IV and V, respectively, together with their possible two-body strong decay channels; the $\Xi_c$-type charmed baryons are denoted by $|\Xi_c\ ^{2S+1}L_\sigma\ J^P\rangle$ as used in Ref. [27].
III. THE CHIRAL QUARK MODEL
In the chiral quark model, the effective low-energy quark-meson pseudoscalar coupling at tree level is given by the standard chiral interaction Lagrangian [29-33], where $\psi_j$ represents the j-th quark field in a baryon, $f_m$ is the meson decay constant, and $\phi_m$ stands for the pseudoscalar-meson octet. To match the non-relativistic harmonic-oscillator spatial wave functions $^N\Psi_{LL_z}$ used in the quark model, we adopt the non-relativistic form of the coupling in the calculations [29-33], in which $\sigma_j$ and $\mu_q$ correspond to the Pauli spin vector and the reduced mass of the j-th quark in the initial and final baryons, respectively. For emitting a meson we have $\varphi_m = e^{-i\mathbf{q}\cdot\mathbf{r}_j}$, and for absorbing a meson we have $\varphi_m = e^{+i\mathbf{q}\cdot\mathbf{r}_j}$. In the non-relativistic expansion, $\mathbf{p}'_j = \mathbf{p}_j - (m_j/M)\mathbf{P}_{\rm c.m.}$ is the internal momentum of the j-th quark in the baryon rest frame; $\omega_m$ and $\mathbf{q}$ are the energy and three-momentum of the meson, and $\mathbf{P}_i$ and $\mathbf{P}_f$ stand for the momenta of the initial and final baryons, respectively. The isospin operator $I_j$ in the coupling is built from the creation and annihilation operators $a^\dagger_j(u,d,s)$ and $a_j(u,d,s)$ for the u, d and s quarks, and $\phi_P$ is the mixing angle of the η meson in the flavor basis [6,34].
For light pseudoscalar-meson emission in baryon strong decays, the partial decay amplitudes can be worked out from the non-relativistic quark-meson coupling operator. The details of how to work out the decay amplitudes can be found in our previous work [21]. The quark-model-allowed two-body strong decay channels of each charm-strange baryon are listed in Tabs. IV and V as well. With the partial decay amplitudes derived from the chiral quark model, the strong decay width is obtained from the squared transition amplitudes $\mathcal{M}_{J_{iz},J_{fz}}$, where $J_{iz}$ and $J_{fz}$ stand for the third components of the total angular momenta of the initial and final baryons, respectively. The global parameter δ accounts for the strength of the quark-meson couplings. It has been determined in our previous studies of the strong decays of charmed baryons and heavy-light mesons [21,22]; here we fix its value to be the same as in Refs. [21,22], i.e., δ = 0.557.
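For reference, the decay width referred to above (the explicit equation is not reproduced in this extract) has, in the authors' earlier chiral quark model work [21,22], the schematic form below; the precise kinematic prefactor should be taken from those references, so this is a hedged reconstruction rather than a quotation.

```latex
% Schematic two-body strong decay width in the chiral quark model
% (prefactor conventions follow Refs. [21,22]; reconstruction, not a quotation).
\Gamma \;=\; \left(\frac{\delta}{f_m}\right)^{2}
\frac{(E_f + M_f)\,|\mathbf{q}|}{4\pi M_i}\,
\frac{1}{2J_i+1}
\sum_{J_{iz},\,J_{fz}} \bigl|\mathcal{M}_{J_{iz},J_{fz}}\bigr|^{2},
```

Here $E_f$ and $M_f$ are the energy and mass of the final baryon, $M_i$ and $J_i$ the mass and spin of the initial baryon, and $\mathbf{q}$ the momentum of the emitted meson.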
In the calculation, the standard quark model parameters are adopted. Namely, we set $m_u = m_d = 330$ MeV, $m_s = 450$ MeV, $m_c = 1700$ MeV and $m_b = 5000$ MeV for the constituent quark masses. The harmonic-oscillator parameter α in the wave function $^N\Psi_{LL_z}$ is taken as α = 0.40 GeV. The decay constants for the π, K and η mesons are taken as $f_\pi = 132$ MeV and $f_K = f_\eta = 160$ MeV, respectively. The masses of the mesons and baryons used in the calculations are adopted from the Particle Data Group [6]. With these parameters, the strong decay properties of the well-known heavy-light mesons and charmed baryons have been described reasonably well [21-24]. Our results are listed in Tab. VI, from which we find that our predictions are in good agreement with the experimental data [6] and compatible with other theoretical predictions [18,19,35-38].
Recently, the improved measurements of the masses and the first measurements of the natural widths of the bottom baryon states $\Sigma_b^{\pm}$ and $\Sigma_b^{*\pm}$ were reported by the CDF Collaboration [25], and a new neutral excited bottom-strange baryon with a mass m = 5945.0 ± 0.7 ± 0.3 ± 2.7 MeV was observed by the CMS Collaboration [26]. Given the measured mass and decay mode of the newly observed bottom-strange baryon, this state most likely corresponds to $\Xi_b^{*0}$ with $J^P = 3/2^+$. As a by-product, we have calculated the strong decays of these bottom baryons. Our results, together with other model predictions and experimental data, are listed in Tab. VII. From the table, it is seen that our predictions are in good agreement with the measurements [25,26] and with the other model predictions [19,39-42]. It should be pointed out that the strong decay properties of $\Xi_b^*$ were studied in [19,42], where a slightly larger mass m ≃ 5960 MeV was adopted; with the recently measured mass of $\Xi_b^{*0}$, the decay widths predicted in [19,42] should be a little smaller than their previous values. $\Xi_c(2790)$ and $\Xi_c(2815)$ are two relatively well-determined P-wave charm-strange baryons with quantum numbers $J^P = 1/2^-$ and $3/2^-$, respectively. They were observed in the $\Xi_c'\pi$ and $\Xi_c\pi\pi$ channels, respectively. The Particle Data Group suggests that they belong to the same SU(4) multiplets as $\Lambda_c(2593)$ and $\Lambda_c(2625)$, respectively [6]. According to our previous study, $\Lambda_c(2593)$ and $\Lambda_c(2625)$ can be well explained as $|\Lambda_c\ ^2P_\lambda\rangle$ states; accordingly, we calculate the strong decays of $\Xi_c(2790)$ and $\Xi_c(2815)$, which are listed in Tab. VI. Our predicted widths are in the range of the observations [6] and compatible with other theoretical predictions [18,19]. On the other hand, $\Xi_c(2790)$ as a dynamically generated resonance having $J^P = 1/2^-$ was also discussed in [43].
Finally, it should be pointed out that $\Xi_c(2790)$ and $\Xi_c(2815)$ cannot be the $P_\rho$-mode excited states $|\Xi_c\ ^2P_\rho\rangle$, because these excitations have large partial decay widths into the $\Xi_c\pi$ and $\Lambda_c^+\bar{K}$ channels (see Fig. 1), which disagrees with the observations. The strong decay properties of the $P_\rho$-mode excited states are shown in Fig. 1. We advise experimentalists to search for these missing P-wave states in the $\Xi_c\pi$, $\Lambda_c^+\bar{K}$ and $\Xi_c^*(2645)\pi$ invariant mass distributions around the energy region (2.8-2.9) GeV. According to the quark-model mass predictions (see Tab. III) [16,17], we have analyzed the strong decay properties of all the first P-wave excitations of $\Xi_c'$ and of the first radial (2S) excitations of $\Xi_c$, which are shown in Figs. 2 and 3.
Firstly, we can exclude the first radial (2S) excitations of $\Xi_c$ as assignments to $\Xi_c(2930)$, because the $\Lambda_c^+\bar{K}$ decay channel of these states is forbidden (see Fig. 3).
Among the first P-wave excitations of $\Xi_c'$, we note that the decay modes $\Lambda_c^+\bar{K}$ and $\Xi_c\pi$ of the $P_\rho$-mode excited states, $^2P_\rho(1/2^-)$ and $^2P_\rho(3/2^-)$, are forbidden; thus, these states should be excluded as assignments to $\Xi_c(2930)$. Furthermore, it is found that the strong decays of $^2P_\lambda(3/2^-)$ and $^4P_\lambda(5/2^-)$ are governed by the $\Xi_c\pi$ channel, and the $\Xi_c^*(2645)\pi$ decay mode dominates the decay of $^4P_\lambda(3/2^-)$; these states would be hard to observe by BaBar because of their small $\Lambda_c^+\bar{K}$ branching ratios. Given the decay modes and decay widths, the two $J^P = 1/2^-$ states, $^4P_\lambda(1/2^-)$ and $^2P_\lambda(1/2^-)$, appear to be the possible assignments for $\Xi_c(2930)$. Considering $\Xi_c(2930)$ as the $^2P_\lambda(1/2^-)$, we find from the figure that its decays are dominated by $\Lambda_c^+\bar{K}$ and $\Xi_c'\pi$, while the other partial decay widths are negligibly small; its total width and the partial-width ratio between $\Lambda_c^+\bar{K}$ and $\Xi_c'\pi$ follow from our calculation. Considering $\Xi_c(2930)$ as the $^4P_\lambda(1/2^-)$ instead, the $\Lambda_c^+\bar{K}$ channel governs the decays, and the other two decay channels $\Xi_c\pi$ and $\Xi_c'\pi$ have sizeable widths; the calculated total width and partial-width ratios likewise follow from our calculation. As a whole, $\Xi_c(2930)$ is most likely to be the first orbital (1P) excitation of $\Xi_c'$ with $J^P = 1/2^-$, favoring $|\Xi_c'\ ^4P_\lambda 1/2^-\rangle$ or $|\Xi_c'\ ^2P_\lambda 1/2^-\rangle$. To confirm $\Xi_c(2930)$ and finally classify it, further observations in the $\Xi_c'\pi$, $\Xi_c\pi$ and $\Lambda_c^+\bar{K}$ invariant mass distributions and measurements of these partial decay ratios are crucial in experiments.
D. Ξ c (2980)
$\Xi_c(2980)$, with a width of ∼40 MeV, was first found by the Belle Collaboration in the $\Lambda_c^+\bar{K}\pi$ channel, and then confirmed by BaBar with large significances in the intermediate-resonant $\Sigma_c(2455)\bar{K}$ and nonresonant $\Lambda_c^+\bar{K}\pi$ decay channels. Belle also observed a resonance structure around 2.97 GeV with a smaller width of ∼18 MeV in the $\Xi_c^*(2645)\pi$ decay channel in a separate study [3], which is often considered to be the same state as $\Xi_c(2980)$. It should be pointed out that BaBar and Belle have analyzed the $\Lambda_c^+\bar{K}$ and $\Xi_c\pi$ invariant mass distributions, respectively, but did not find any structures around 2.98 GeV, which indicates that these partial decay widths are too small to be observed or that these decay modes are forbidden. Although $\Xi_c(2980)$ is well established in experiments, its quantum numbers are still unknown. Recently, Ebert et al. calculated the mass spectra of heavy baryons in the heavy-quark-light-diquark picture in the framework of a QCD-motivated relativistic quark model; they suggested that $\Xi_c(2980)$ could be assigned to the first radial (2S) excitation of $\Xi_c'$ with $J^P = 1/2^+$ [16], which is also consistent with their earlier mass analysis [17]. Cheng et al. also discussed the possible classification of $\Xi_c(2980)$; they considered that $\Xi_c(2980)$ might be the first radial (2S) excitation of $\Xi_c$ with $J^P = 1/2^+$ [18].
We have analyzed the strong decay properties of the first radial (2S) excitations of both $\Xi_c$ and $\Xi_c'$, which are shown in Figs. 3 and 4, respectively. From the figures, it is seen that the 2S excitations of both $\Xi_c$ and $\Xi_c'$ have narrow decay widths (< 2 MeV), at least an order of magnitude smaller than that of $\Xi_c(2980)$. Furthermore, the decay modes of these first radial excitations are in disagreement with the observations of $\Xi_c(2980)$. Thus, the 2S excitations of both $\Xi_c$ and $\Xi_c'$ are excluded as assignments to $\Xi_c(2980)$ in the present work; our conclusion agrees with that of the $^3P_0$ calculations [19]. We have also analyzed the other N = 2 shell excitations of $\Xi_c$ with their masses fixed at 2980 MeV; their calculated partial decay widths and total widths are shown in Figs. 5 and 6. It is seen that the $P_A(1/2^+, 3/2^+, 5/2^+)$, $D_A(5/2^+, 7/2^+)$, $D_{\rho\rho}(3/2^+, 5/2^+)$ and $D_{\lambda\lambda}(3/2^+, 5/2^+)$ states have decay widths too narrow to be compared with the observations of $\Xi_c(2980)$. Furthermore, although the decay widths of $D_A(1/2^+, 3/2^+)$ are compatible with the measurement, their decay modes are dominated by $\Lambda_c^+\bar{K}$ and $\Xi_c\pi$, which disagrees with the observations as well. As a whole, none of the states shown in Figs. 5 and 6 is a good assignment for $\Xi_c(2980)$: either their decay widths are too narrow compared with the observations or their decay modes disagree with the observations.
The $P_\rho$-mode states, $^2P_\rho(1/2^-)$ and $^2P_\rho(3/2^-)$, among the first P-wave excitations of $\Xi_c'$ could be candidates for $\Xi_c(2980)$ (see Fig. 2). We note that excitation of the λ variable, unlike excitation in ρ, involves the excitation of the "odd" heavy quark. The $P_\rho$-mode excitation of a charm-strange baryon is ∼70 MeV heavier than the $P_\lambda$-mode [44,45]. According to our analysis in Sec. IV C, $\Xi_c(2930)$ might be assigned to a $P_\lambda$-mode excitation of $\Xi_c'$; thus, the expected mass of the $P_\rho$-mode excitation is ∼3.0 GeV, which is comparable with that of $\Xi_c(2980)$. The partial decay widths and total width of $\Xi_c(2980)$ as the $^2P_\rho(1/2^-)$ and $^2P_\rho(3/2^-)$ candidates, respectively, are listed in Tab. VIII.
If the resonance structure around 2.97 GeV in the $\Xi_c^*(2645)\pi$ decay channel is the same state, $\Xi_c(2980)$, observed in the $\Lambda_c^+\bar{K}\pi$ decay channel, then $\Xi_c(2980)$ is most likely to be the $J^P = 1/2^-$ excited state $^2P_\rho(1/2^-)$. The reasons are as follows. (i) The decay modes of $^2P_\rho(1/2^-)$ are in agreement with the observations: from Tab. VIII, we see that the strong decays of $^2P_\rho(1/2^-)$ are dominated by $\Sigma_c\bar{K}$, and the partial decay width of $\Xi_c^*(2645)\pi$ is sizeable as well; the $\Lambda_c^+\bar{K}\pi$ final state mainly comes from the intermediate process involving $\Sigma_c\bar{K}$. (ii) The predicted width is in good agreement with the data. (iii) The decay channels $\Xi_c\pi$, $\Lambda_c^+\bar{K}$ and $\Sigma_c^*(2520)\bar{K}$ of $^2P_\rho(1/2^-)$ are forbidden, which naturally explains why these decay channels were not observed by Belle and BaBar. It should be mentioned that the same $J^P$ quantum number (i.e., $J^P = 1/2^-$) for $\Xi_c(2980)$ is also suggested in [43], where $\Xi_c(2980)$ is considered a dynamically generated resonance.
We note that the total width of $\Xi_c(2980)$ measured by Belle and BaBar in the $\Lambda_c^+\bar{K}\pi$ channel is about two times larger than that measured by Belle in the $\Xi_c^*(2645)\pi$ decay channel in a separate study. Thus, the resonance with a mass m ≃ 2970 MeV [denoted by $\Xi_c(2970)$ in this work] observed in the $\Xi_c^*(2645)\pi$ decay channel might be a different resonance from the $\Xi_c(2980)$ observed in the $\Lambda_c^+\bar{K}\pi$ channel, although they have comparable masses. According to our analysis, the $\Xi_c(2970)$ observed in the $\Xi_c^*(2645)\pi$ channel and the $\Xi_c(2980)$ observed in the $\Lambda_c^+\bar{K}\pi$ channel might be assigned to the $^2P_\rho(3/2^-)$ and $^2P_\rho(1/2^-)$ excitations, respectively. If the $^2P_\rho(3/2^-)$ is considered as the $\Xi_c(2970)$ observed in the $\Xi_c^*(2645)\pi$ channel, its total decay width and dominant decay channel $\Xi_c^*(2645)\pi$ are in good agreement with the observations (see Tab. VIII). Furthermore, it is interesting to find that, when the $^2P_\rho(1/2^-)$ and $^2P_\rho(3/2^-)$ excitations are considered as the resonances observed in $\Lambda_c^+\bar{K}\pi$ and $\Xi_c^*(2645)\pi$, respectively, we can naturally explain why the width measured in the $\Xi_c^*(2645)\pi$ channel is about a factor of 2 smaller than that measured in the $\Lambda_c^+\bar{K}\pi$ channel. In brief, the $\Xi_c(2970)$ observed in the $\Xi_c^*(2645)\pi$ final state is most likely a different state from the $\Xi_c(2980)$ observed in the $\Lambda_c^+\bar{K}\pi$ final state. The $\Xi_c(2980)$ and $\Xi_c(2970)$, as two largely overlapping resonances, favor classification as $|\Xi_c'\ ^2P_\rho 1/2^-\rangle$ and $|\Xi_c'\ ^2P_\rho 3/2^-\rangle$, respectively. Of course, given the uncertainties of the data we cannot exclude that $\Xi_c(2970)$ and $\Xi_c(2980)$ are the same resonance, which would then favor the $|\Xi_c'\ ^2P_\rho 1/2^-\rangle$ assignment. To finally clarify whether the $\Xi_c(2970)$ observed in $\Xi_c^*(2645)\pi$ is the same resonance observed in the $\Lambda_c^+\bar{K}\pi$ channel or not, we expect the partial width ratio $\Gamma[\Xi_c^*(2645)\pi] : \Gamma(\Sigma_c\bar{K})$ to be measured; if there is only one resonance, assigned to $|\Xi_c'\ ^2P_\rho 1/2^-\rangle$, the ratio should agree with our prediction for that state.
E. $\Xi_c(3080)$
$\Xi_c(3080)^+$ and its isospin partner state $\Xi_c(3080)^0$ were first observed by Belle in the $\Lambda_c^+ K^-\pi^+$ and $\Lambda_c^+\bar{K}^0\pi^-$ final states, respectively. The existence of $\Xi_c(3080)^{+,0}$ has been confirmed by the BaBar Collaboration. Furthermore, BaBar's analysis shows that most of the decay of $\Xi_c(3080)^+$ proceeds through the intermediate resonant modes $\Sigma_c(2455)^{++}K^-$ and $\Sigma_c(2520)^{++}K^-$ with roughly equal branching fractions.
Although $\Xi_c(3080)$ has been established in experiments, its quantum numbers are still unclear. Recently, Ebert et al. suggested that $\Xi_c(3080)$ might be the second orbital (1D) excitation of $\Xi_c$ with $J^P = 3/2^+$ or $J^P = 5/2^+$. More possible assignments for $\Xi_c(3080)$ were suggested by Chen et al. in their $^3P_0$ strong decay analysis [19]. BaBar's observations provide two very important constraints on the assignment of $\Xi_c(3080)$: (i) its strong decay is governed by both $\Sigma_c(2455)\bar{K}$ and $\Sigma_c(2520)\bar{K}$, and (ii) the partial width ratio $\Gamma(\Sigma_c(2455)\bar{K})/\Gamma(\Sigma_c(2520)\bar{K}) \simeq 1$. We have analyzed the strong decay properties of all the N = 2 shell excitations of both $\Xi_c$ and $\Xi_c'$, which are shown in Figs. 3-8. From the figures we find that only the $|\Xi_c\ ^2S_{\rho\rho} 1/2^+\rangle$ (i.e., the first radial (2S) excitation of $\Xi_c$) satisfies both of BaBar's constraints at the same time: (i) at m ≃ 3.08 GeV the strong decays of $|\Xi_c\ ^2S_{\rho\rho} 1/2^+\rangle$ are dominated by $\Sigma_c(2455)\bar{K}$ and $\Sigma_c(2520)\bar{K}$, while the other two decay modes, $\Xi_c^*(2645)\pi$ and $\Xi_c'\pi$, contribute only a very small partial width, and (ii) the predicted partial width ratio between $\Sigma_c(2455)\bar{K}$ and $\Sigma_c(2520)\bar{K}$ is close to unity. Furthermore, if the $|\Xi_c\ ^2S_{\rho\rho} 1/2^+\rangle$ is considered as an assignment for $\Xi_c(3080)$, the predicted total width is also in good agreement with the measurements. Finally, it should be pointed out that, as a candidate for $\Xi_c(3080)$, the mass of $|\Xi_c\ ^2S_{\rho\rho} 1/2^+\rangle$ is consistent with the quark model expectations as well. According to our analysis in the previous subsections, the $\Xi_c(2980)$ (observed in the $\Lambda_c^+\bar{K}\pi$ final state) and $\Xi_c(2930)$ could be assigned to $P_\rho$- and $P_\lambda$-mode excitations of $\Xi_c'$, respectively, so the mass splitting between the $P_\rho$- and $P_\lambda$-mode excitations in the N = 1 shell is estimated to be about 50 MeV. With the above relation, we can estimate the mass splitting between the $S_{\rho\rho}$ and $S_{\lambda\lambda}$ excitations in the N = 2 shell, which is about 100 MeV. In most of the quark models, the predicted masses for the $S_{\lambda\lambda}$ excitation of $\Xi_c$ are in the range (2.92-2.99) GeV (see Tab. II); thus, the mass of the $\Xi_c\ S_{\rho\rho}$ excitation should be in the range (3.02-3.09) GeV, which is comparable with the mass of $\Xi_c(3080)$. As a whole, the mass, decay modes, partial width ratio $\Gamma(\Sigma_c(2455)\bar{K}) : \Gamma(\Sigma_c(2520)\bar{K})$ and total decay width of $|\Xi_c\ ^2S_{\rho\rho} 1/2^+\rangle$ strongly support its assignment as $\Xi_c(3080)$.
F. Ξ c (3055) +
The $\Xi_c(3055)^+$ was found by BaBar as a new structure in the $\Lambda_c^+\bar{K}\pi$ mass distribution with a statistical significance of 6.4σ. It decays through the intermediate resonant mode $\Sigma_c(2455)^{++}K^-$. BaBar also searched the inclusive $\Lambda_c^+\bar{K}$ and $\Lambda_c^+\bar{K}\pi\pi$ invariant mass spectra for evidence of $\Xi_c(3055)^+$, but no significant structure was found. This state has not yet been confirmed by Belle. According to the calculations of the charm-strange baryon spectrum in various quark models, $\Xi_c(3055)$ might be assigned to the second orbital (1D) excitation of $\Xi_c$ (see Tab. II).
We have analyzed the strong decay properties of the second orbital excitations of Ξ c , which have been shown in Figs. 5 and 6. From Fig. 5, we find that the P A (1/2 + , 3/2, 5/2 + ) excitations can be firstly excluded as the candidates of Ξ c (3055) + for neither their decay modes nor their decay widths consist with the observations. Furthermore, from Fig. 6 it is seen that the Λ + cK is one of the main decay modes of 4 D A (1/2 + , 3/2, 7/2 + ) and 2 D A (3/2 + , 5/2 + ), if the Σ c (2455) ++ K − decay mode for these states is observed in experiments, the Λ + cK decay mode should be observed as well, which disagrees with the observations of BaBar. Thus, these states as assignments to Ξ c (3055) + should be excluded. The strong decays of 4 D A (5/2 + ), 2 D λλ (5/2 + ) and 2 D ρρ (5/2 + ) are dominated by Ξ * c (2645)π and Σ c (2520)K, the partial width of Σ c (2455)K is negligibly small, thus, these states can not be considered as candidates of Ξ c (3055) + as well. Finally, we find that only two J P = 3/2 + states |Ξ c 2 D λλ 3/2 + and |Ξ c 2 D ρρ 3/2 + , might be candidates of the Ξ c (3055). The partial decay widths and total width of Ξ c (3055) as the |Ξ c 2 D λλ 3/2 + and |Ξ c 2 D ρρ 3/2 + candidates have been listed in Tab. IX, respectively. From the table it is seen that the total widths of both states are compatible with the observations of Ξ c (3055) within its uncertainties. The strong decays of both states are dominated by Σ c (2455)K and the partial width of Σ c (2520)K is negligibly small, which can explain why BaBar only observed the intermediate resonant decay mode Σ c (2455) ++ K − for Ξ c (3055). The Λ + cK decay mode is forbidden for both |Ξ c 2 D λλ 3/2 + and |Ξ c 2 D ρρ 3/2 + , which agrees with the observation that no structures were found around M(Λ + cK ) ≃ 3.05 GeV. As a whole, Ξ c (3055) could be assigned to the second orbital (1D) excitations of Ξ c with J P = 3/2 + , our conclusion is in agreement with that of Ebert et al. according to their mass analysis. However, it is difficult to determine which one can be assigned to Ξ c (3055) + in the |Ξ c 2 D λλ 3/2 + and |Ξ c 2 D ρρ 3/2 + candidates only according to the strong decay properties. We have noted that Ξ c (3080) is most likely to be the Ξ c S ρρ assignment. According to various quark model predictions, the mass of the second orbital excitation Ξ c D ρρ should be larger than that of the first radial excitation Ξ c S ρρ , which indicates that the mass of Ξ c D ρρ might be larger than 3.08 GeV. From this point of view, the |Ξ c 2 D ρρ 3/2 + as an assignments to Ξ c (3055) + should be excluded. Thus, the Ξ c (3055) is most likely to be classified as the |Ξ c 2 D λλ 3/2 + excitation.
BaBar also searched for $\Xi_c(3123)^+$ in the $\Lambda_c^+\bar{K}$ and $\Lambda_c^+\bar{K}\pi\pi$ final states, but did not find any evidence in these channels. $\Xi_c(3123)^+$ has not yet been confirmed by Belle. From Tab. II, it is seen that the masses of the second orbital (1D) excitations of $\Xi_c'$ predicted in various quark models are (3.12-3.17) GeV. Thus, the 1D excitations of $\Xi_c'$ might be candidates for $\Xi_c(3123)^+$. We have analyzed the strong decay properties of these excitations, which are shown in Fig. 8.
According to our analysis above, $\Xi_c(3055)$ is most likely to be the $|\Xi_c\ ^2D_{\lambda\lambda} 3/2^+\rangle$ excitation. We note that the quark-model-predicted mass of the $\Xi_c'\ D_{\lambda\lambda}$ is typically ∼100 MeV heavier than that of the $\Xi_c\ D_{\lambda\lambda}$; thus, the $\Xi_c'\ D_{\lambda\lambda}$ excitations considered as assignments for $\Xi_c(3123)$ are expected to have masses around 3.1 GeV or higher. From Tab. X, it is seen that the partial decay width ratios $\Gamma(\Sigma_c\bar{K}) : \Gamma(\Sigma_c^*\bar{K})$, $\Gamma(\Xi_c^*(2645)\pi) : \Gamma(\Sigma_c^*\bar{K})$ and $\Gamma(\Xi_c\pi) : \Gamma(\Sigma_c^*\bar{K})$ for these possible assignments of $\Xi_c(3123)$ are very different; thus, measurements of these ratios are important to understand the nature of $\Xi_c(3123)$. In the chiral quark model framework, the strong decays of the charm-strange baryons have been studied; as a by-product we also calculate the strong decays of the S-wave bottom baryons $\Sigma_b^{\pm}$, $\Sigma_b^{*\pm}$, $\Xi_b'$ and $\Xi_b^*$. We obtain good descriptions of the strong decay properties of the well-determined charm-strange baryons $\Xi_c^*(2645)$, $\Xi_c(2790)$ and $\Xi_c(2815)$. Furthermore, the calculated strong decay widths of $\Sigma_b^{\pm}$, $\Sigma_b^{*\pm}$ and $\Xi_b^*$ are in good agreement with the recent measurements.
$\Xi_c(2930)$, if it can be confirmed in experiments, might be the first P-wave excitation of $\Xi_c'$ with $J^P = 1/2^-$; $|\Xi_c'\ ^2P_\lambda 1/2^-\rangle$ and $|\Xi_c'\ ^4P_\lambda 1/2^-\rangle$ could be candidates of $\Xi_c(2930)$ according to the present data. Further observations in the $\Xi_c'\pi$, $\Xi_c\pi$ and $\Lambda_c^+\bar{K}$ invariant mass distributions and measurements of the corresponding partial decay ratios are crucial to confirm $\Xi_c(2930)$ and classify it finally.
$\Xi_c(2980)$ might correspond to two different $P_\rho$-mode excitations of $\Xi_c'$: one resonance is the broader (Γ ≃ 44 MeV) excitation $|\Xi_c'\ ^2P_\rho 1/2^-\rangle$, which was observed in the $\Lambda_c^+\bar{K}\pi$ final state by BaBar and Belle, and the other is the narrower (Γ ≃ 16 MeV) excitation $|\Xi_c'\ ^2P_\rho 3/2^-\rangle$, which was observed in the $\Xi_c^*(2645)\pi$ channel by Belle in a separate study. If the structures observed in the $\Lambda_c^+\bar{K}\pi$ and $\Xi_c^*(2645)\pi$ final states correspond to the same state, $\Xi_c(2980)$, it could only be assigned to the $|\Xi_c'\ ^2P_\rho 1/2^-\rangle$ excitation. To finally clarify whether the $\Xi_c(2970)$ observed in $\Xi_c^*(2645)\pi$ is the same state observed in the $\Lambda_c^+\bar{K}\pi$ channel or not, we expect the partial width ratio $\Gamma[\Xi_c^*(2645)\pi] : \Gamma(\Sigma_c\bar{K})$ to be measured.
FIG. 9: The charm-strange baryon spectrum up to the N = 2 shell according to our predictions. In the 1P, 2S and 1D excitations, there are two lines for each $J^P$ value, which correspond to the masses of the excitations of the ρ variable (upper line) and the λ variable (lower line), respectively. The mass gap between the λ-variable excitation and the ρ-variable excitation is assumed to be 50 MeV for the 1P states, and 100 MeV for the 2S and 1D states. The thin lines stand for states unobserved in experiments. In the 1P (1D) excitations, the first two $J^P$ values are for the excitations of $\Xi_c$, while the last two are for the excitations of $\Xi_c'$. In the 2S excitations, the first $J^P$ value is for the excitation of $\Xi_c$, while the second is for the excitation of $\Xi_c'$.
Finally, according to our predictions we establish a spectroscopy for the observed charm-strange baryons, which is shown in Fig. 9. We also estimate the masses of the charm-strange baryons with the other variable (λ or ρ) excited relative to these newly observed states, which are given in Fig. 9 as well. These missing states might be found in future experiments. To provide helpful information for the search for the missing charm-strange baryons, our predictions of their strong decay properties are shown in Figs. 1-8.
"Physics"
] |
C9orf72 hexanucleotide repeat allele tagging SNPs: Associations with ALS risk and longevity
C9orf72 hexanucleotide repeat expansion is a common cause of amyotrophic lateral sclerosis (ALS) and frontotemporal dementia (FTD). The C9orf72 locus may harbor residual risk outside the hexanucleotide repeat expansion, but the evidence is conflicting. Here, we first compared 683 unrelated amyotrophic lateral sclerosis cases and 3,196 controls with Finnish ancestry to find the single nucleotide polymorphisms that best tag the C9orf72 hexanucleotide repeat expansion and intermediate-length alleles. Rs2814707 was the best tagging single nucleotide polymorphism for intermediate-length alleles with ≥7 repeats (p = 5 × 10−307) and rs139185008 for the hexanucleotide repeat expansion (p = 7 × 10−114) as well as alleles with ≥20 repeats. rs139185008*C associated with amyotrophic lateral sclerosis after removing cases with the hexanucleotide repeat expansion, especially in the subpopulation homozygous for rs2814707*T (p = 0.0002, OR = 5.06), which supports the concept of residual amyotrophic lateral sclerosis risk at C9orf72 haplotypes other than the hexanucleotide repeat expansion. We then leveraged Finnish biobank data to test the effects of rs2814707*T and rs139185008*C on longevity after removing individuals with amyotrophic lateral sclerosis / frontotemporal dementia diagnoses. In the discovery cohort (n = 230,006), the frequency of rs139185008*C heterozygotes decreased significantly with age in the comparisons between 50 and 80 years vs. >80 years (p = 0.0005) and <50 years vs. >80 years (p = 0.0001). The findings were similar but less significant in a smaller replication cohort (2-sided p = 0.037 for 50–80 years vs. >80 years and 0.061 for <50 years vs. >80 years). Analysis of the allele frequencies in 5-year bins demonstrated that the decrease of rs139185008*C started after the age of 70 years. The decreasing frequency of the hexanucleotide repeat expansion-tagging single nucleotide polymorphism with age suggests an association with age-related diseases, probably also outside amyotrophic lateral sclerosis / frontotemporal dementia.
The hexanucleotide repeat alleles can be broadly categorized into small (2-6 repeats), intermediate-length and expansion alleles. The exact threshold of an expansion has not been fully defined but the expansion usually consists of hundreds or thousands of repeats and exhibits somatic mosaicism (Beck et al., 2013).
In addition to the HRE, intermediate-length alleles have also been associated with various diseases, although inconsistently. These include both neurodegenerative (Ng and Tan, 2017) and immunological diseases (Fredi et al., 2019). Immunological disease could potentially develop through alterations in the expression of C9orf72; it has been shown that mice with C9orf72 knockdown develop a fatal autoimmune disease (Atanasio et al., 2016; Burberry et al., 2016). The intermediate-length alleles often occur on the same haplotype as the HRE, and there is evidence that DNA methylation and gene expression differ between intermediate-length alleles and small alleles (Gijselinck et al., 2016).
We have recently reported that, in the Finnish population, carriership of two intermediate-length alleles is a risk factor for ALS, especially when one of the alleles is ≥17 repeats. Similarly, we observed an increased risk for ALS [odds ratio (OR) 1.89, p = 0.018] in individuals homozygous for the intermediate-allele-tagging single-nucleotide polymorphism (SNP) rs3849942 after excluding carriers of the HRE (Kaivola et al., 2020). Similar findings have previously been reported in other populations, too. Van der Zee et al. (van der Zee et al., 2013) reported that homozygosity for a SNP (rs2814707) was associated with FTD in a Flanders-Belgian case-control study (OR 1.75, p = 0.04) after excluding expansion carriers. In Belgian ALS and FTD-ALS patients, a significantly increased risk was found for carriers of two copies of intermediate-length alleles (OR 2.08, p = 0.04) (Gijselinck et al., 2016). These findings suggest that there may be residual risk for ALS/FTD at the C9orf72 locus other than the HRE. This residual risk could play a role in other diseases, too.
Here, we have first analyzed the best tagging SNPs for the C9orf72 intermediate-length alleles and HRE in Finnish ALS cases and controls. Then, we studied whether the allele frequencies of these SNPs decrease with age in a large biobank dataset from Finland (FinnGen), after removing individuals with a diagnosis of ALS or FTD, to observe possible effects on longevity outside the ALS-FTD spectrum.
Study cohorts
ALS case-control cohort and genotyping
To identify the best tagging SNPs for C9orf72 intermediate-length alleles and the expansion, we used previously published cohorts (Kaivola et al., 2020) of 705 unrelated ALS cases with Finnish ancestry and 3,196 controls with genotype data available. All C9orf72 hexanucleotide repeat allele length assessments were done in the same laboratory. Repeat-primed PCR (RP-PCR) was used, and all samples with putative alleles of ≥20 repeats, including the HRE, were tested with over-the-repeat PCR. Samples that showed the typical sawtooth pattern in RP-PCR and did not produce a longer amplicon in over-the-repeat PCR were categorized as expansions. The longest non-expanded (amplifiable) discrete allele we could detect in controls was 45 repeats, and we used it as the expansion threshold (Kaivola et al., 2019).
Genome-wide genotyping was performed according to the manufacturer's instructions. All controls were genotyped with Illumina genotyping arrays (three cohorts with the Illumina Global Screening Array 24v2-3, one with the Illumina HumanCNV370 array, one with the Illumina 610k array), and ALS cases were genotyped with an Affymetrix Axiom custom SNP array. Samples genotyped with the same genotyping array were processed together. Genotyping data underwent standard per-sample and per-variant quality control steps (Supplementary Material) (Anderson et al., 2010). To analyze SNPs that were not covered by the genotyping arrays, we imputed SNPs using a Finnish reference panel (dx.doi.org/10.17504/protocols.io.xbgfijw). After imputation, in each batch we included variants with a minor allele count >3 and imputation INFO score ≥0.90. Then, all batches were merged, and variants with an overall genotyping rate >0.95 that were within ±6 Mb of the C9orf72 risk haplotype (chr9: 21547063-33546474, hg38) were included in subsequent analyses. Additionally, two SNPs (rs147211831 and rs117204439) identified in a previous European study to associate with FTD and intermediate allele length (Reus et al., 2021) were included in the study, although their imputation INFO scores were not ≥0.90 in all cohorts (≥0.70 in all cohorts). These SNPs were included to test possible population differences in the haplotype backgrounds.
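To make the filtering thresholds above concrete, here is a minimal sketch of how the per-variant filters could be applied to an imputed-variant summary table. This is illustrative only: the file name and column names are assumptions, and the actual pipeline relied on standard genotype QC tooling rather than this script.

```python
import pandas as pd

# Hypothetical per-variant summary table; file and column names are assumptions.
variants = pd.read_csv("variant_summary.tsv", sep="\t")

# Region of interest: C9orf72 risk haplotype +-6 Mb (chr9:21547063-33546474, hg38).
in_region = (variants["chrom"] == "chr9") & variants["pos"].between(21_547_063, 33_546_474)

kept = variants[
    (variants["minor_allele_count"] > 3)      # per-batch minor allele count > 3
    & (variants["info_score"] >= 0.90)        # imputation INFO score >= 0.90
    & (variants["genotyping_rate"] > 0.95)    # overall genotyping rate > 0.95 after merging
    & in_region
]
print(f"{len(kept)} variants retained for the C9orf72 region analyses")
```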
Biobank cohorts
To test the effect of C9orf72 intermediate-length alleles and HRE tagging SNPs on longevity, we used FinnGen (https://www. finngen.fi/en) (Kurki and Palta, 2022) data for building discovery and replication cohorts. Samples in FinnGen originate from prospective epidemiological cohorts, disease-based cohorts, and hospital biobank samples. In FinnGen, imputed genotype data is integrated with data from national registries such as hospital discharge records, cause of death registry and medicine reimbursement registry.
For the replication cohort, we used FinnGen release 10 data and selected 258,910 unrelated individuals with Finnish ancestry. Then, we excluded individuals with an ALS or FTD diagnosis (n = 413). Finally, we excluded individuals analyzed in the release 9 data (n = 177,455), leaving 81,042 individuals.
Statistical analyses
In our ALS case-control cohort, we used R v. 4.2.1 (RCT, 2022) and PLINK2 (Chang et al., 2015) to perform logistic regression analyses on ALS patients vs. controls and on intermediate-length allele (7-45 repeats) carriers vs. non-carriers after exclusion of expansion carriers. We also tested ALS cases with the expansion versus controls without the expansion. Since we wanted to test only the association between genetic variants and C9orf72 allele length, we did not include covariates in our regression model.
In the biobank analyses, individuals were divided into three age groups: under 50 years, 50-80 years, and over 80 years. The age thresholds were based on age quartiles (first quartile 48 years, third quartile 74 years) and on the rationale that ALS and FTD are relatively rare under the age of 50 years but almost all cases are diagnosed by 80 years (Chang et al., 2015). In Finland, the age-of-onset of ALS is under 50 years in ca. 20% of carriers of the C9orf72 HRE (Laaksovirta et al., 2022). We also performed an additional analysis across all ages in which we divided individuals into five-year bins between 20 and 95 years. We excluded bins <20 years and >95 years since they were small (n < 1000). We then estimated the allele frequencies with 95% confidence intervals in the age bins using the binom.test function in R. We then fitted a logistic regression model that explained the minor allele status (1/0) by the age of the corresponding individual and reported the p-value of the age effect in the discovery (N = 230,006) and replication cohort (N = 80,012). Age was defined as the age-of-death or age at the end of follow-up. In the discovery cohort, we performed six independent tests and set the threshold for statistical significance to 0.05/6 = 0.0083. In the replication cohort, the threshold for statistical significance was 0.05.
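As a rough illustration of the age analyses just described, the sketch below estimates the carrier frequency with 95% confidence intervals in 5-year bins and fits a logistic regression of minor-allele carrier status on age. It is a simplified Python analogue of the R binom.test/regression workflow used by the authors; the input file and column names are hypothetical.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.proportion import proportion_confint

# Hypothetical per-individual table: 'age' is age at death or at end of follow-up,
# 'carrier' is 1 if the individual carries the minor allele of the tagging SNP, else 0.
df = pd.read_csv("biobank_subset.csv")
df = df[(df["age"] >= 20) & (df["age"] <= 95)]          # drop the small <20 and >95 bins

# Carrier frequency with exact (Clopper-Pearson) 95% CI per 5-year bin,
# analogous to binom.test in R.
df["age_bin"] = (df["age"] // 5) * 5
for age_bin, grp in df.groupby("age_bin"):
    k, n = int(grp["carrier"].sum()), len(grp)
    lo, hi = proportion_confint(k, n, alpha=0.05, method="beta")
    print(f"{int(age_bin)}-{int(age_bin) + 4} y: {k / n:.4f} (95% CI {lo:.4f}-{hi:.4f})")

# Logistic regression: carrier status (1/0) explained by age.
fit = sm.Logit(df["carrier"], sm.add_constant(df["age"])).fit(disp=0)
print("age effect: beta =", fit.params["age"], ", p =", fit.pvalues["age"])
```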
Ethics
The ALS case-control study was approved by the Ethics Committee of the Helsinki University Hospital (diary numbers 401/13/03/01/09 and HUS/1720/2019). All individuals or their next-of-kin gave written informed consent.
The ethics declarations for FinnGen biobank data are provided in Supplementary Material.
Results
We used two different cohorts, a Finnish ALS case-control cohort and a Finnish biobank cohort. The purpose of the case-control analysis was to 1) identify the best SNPs tagging the C9orf72 HRE and intermediate-length alleles, 2) analyze the association of these SNPs with ALS risk after exclusion of cases with the HRE, and 3) analyze the association with ALS risk in Finland using the top SNPs identified in other European populations.
The Finnish biobank data was used to analyze the association of the tagging SNPs with age by comparing their frequencies in different age groups.
We next analyzed SNPs that associate with carriership of intermediate-length alleles (carriers of the HRE were excluded). We compared SNPs in carriers of 7-45 repeat alleles (n = 1,237) vs. non-carriers (n = 2,457) and identified rs2814707*T as the leading intermediate-length allele tagging variant (p = 5.44 × 10−307, OR = 130.76, 95% CI = 101.79-169.57). Rs2814707*T was found in 87% of the 7-45 repeat allele carriers and 93% of the 8-45 repeat allele carriers (Figure 1B). As seen in Figure 1B, this marker mainly tags alleles with ≥8 repeats and is present in 100% of the HRE carriers.
ALS case-control cohort: C9orf72 locus association with ALS after exclusion of carriers of C9orf72 HRE
We have previously reported in a largely overlapping data set that two copies of the C9orf72 intermediate-length alleles, especially when the longer allele is ≥17 repeats, and homozygosity for the minor allele of rs3849942 (in LD with rs2814707) associate with ALS risk after exclusion of HRE carriers (Kaivola et al., 2020). Here we extend these findings by analyzing rs2814707 and rs139185008 in non-carriers of the HRE to validate our previous observations based on direct C9orf72 repeat length assessments and to explore putative haplotype effects.
ALS case-control cohort: Comparative analysis of tagging SNPs discovered in other populations
In a previous case-control study from the Netherlands and United Kingdom, rs147211831 and rs117204439 associated with FTD, C9orf72 HRE and a subset of longer intermediate-length alleles with a median of 12 repeats (Reus et al., 2021). The location of these variants in relation to the HRE and other analyzed variants is shown in Figure 2.
In our Finnish data, these SNPs showed only a weak association with the HRE and intermediate-length alleles. Neither SNP consistently tagged longer intermediate alleles, as shown in Figure 1C for rs117204439, the SNP with the stronger association with intermediate-length alleles and the higher MAF among the HRE carriers.
Finnish biobank data: Association of C9orf72 HRE and intermediate-length allele tagging SNPs with age
The discovery cohort included 232,878 unrelated Finnish ancestry individuals without a diagnosis of ALS or FTD. Rs139185008*C tags the C9orf72 HRE and longer intermediate-length alleles, and rs2814707*T tags the HRE and intermediate-length alleles with ≥8 repeats.
As shown in Table 1 the frequency of rs139185008*C heterozygotes decreased significantly with age. The difference was statistically significant between the oldest and youngest group (p = 0.0001) as well as between the oldest and middle age group (p = 0.0005). Rs139185008*C homozygosity was too rare (6-37 individuals per group) for meaningful statistical comparisons.
Rs2814707 heterozygote frequency also decreased with age, and the difference between the oldest and youngest group was nominally significant (p = 0.014) but did not survive Bonferroni correction (Table 1). When we excluded rs139185008*C carriers from rs2814707*T carriers, the frequencies were 26.7%, 26.3% and 26.2% in individuals aged <50, 50-80 and >80 years, respectively (p = 0.11, OR = 0.97, 95% CI 0.94-1.01). This finding indicates that the modest age effect was driven by haplotypes containing rs139185008*C.
We also analyzed allele frequencies across ages 20-95 years in 5-year bins; age groups <20 years and >95 years were excluded due to the small number of subjects. The discovery cohort included 230,006 unrelated Finnish ancestry individuals aged between 20 and 95 years and without a diagnosis of ALS or FTD. We found that the rs139185008 allele frequency decreased significantly by age (p = 0.0014, beta = −0.22, standard error = 0.067). In contrast, the rs2814707 homozygosity frequency did not decrease with age (p = 0.83) (Figure 3). To compare our findings to a genetic variant with a known association with neurodegenerative diseases and aging, we performed a similar analysis with the frequencies of the APOE ε4 allele, which showed a highly significant decrease with age (p = 3 × 10−43).
FIGURE 3
The allele frequency of (A) rs139185008*C in the discovery cohort in 5-year bins, (B) rs139185008*C in the replication cohort in 5-year bins, (C) rs2814707*T in the discovery cohort, (D) rs2814707*T homozygote frequency in the discovery cohort, (E) APOE ε4 allele frequency in the discovery cohort, and (F) APOE ε4 allele frequency in the replication cohort. The red line shows the trend in allele frequency with age; the allele frequency estimate across all age groups is shown by the dashed line.
Replication cohort
We set out to replicate the decrease in rs139185008*C heterozygote frequency with aging using 80,012 non-overlapping individuals from FinnGen. The rs139185008*C heterozygote frequency decreased with age, and the difference was statistically significant between the oldest and middle age group (2-sided p = 0.037) but, likely due to the smaller number of individuals, only borderline significant in the oldest vs. youngest age group (2-sided p = 0.061) (Table 1).
We also analyzed allele frequencies across ages 20-95 years in 5-year bins in the replication cohort. Again, the rs139185008*C heterozygote frequency decreased with age (Figure 3) and showed an overlapping effect size with the results in the discovery cohort, but the decrease did not reach statistical significance (p = 0.38, beta = −0.10, standard error = 0.12). The APOE ε4 allele frequency remained significantly associated with age in the replication cohort (p = 2.28 × 10−7) (Figure 3).
Discussion
In this study, we first analyzed the best tagging SNPs for C9orf72 hexanucleotide repeat intermediate-length alleles and the HRE in a Finnish case-control study. Then, we analyzed the effect of these SNPs on longevity in the FinnGen biobank data.
Our results shed light on the haplotype structure of the C9orf72 HRE and the intermediate-length alleles. We confirmed in an independent dataset that rs139185008 is the best C9orf72 HRE tagging SNP in the Finnish population. In addition, we observed that rs139185008 tags longer intermediate-length alleles, especially those with ≥20 repeats. Thus, the longer intermediate repeat alleles and the HRE seem to share a relatively rare haplotype in Finland, with a carrier frequency of ca. 3% in controls [HRE carrier frequency estimated ca. 0.2% (Kaivola et al., 2019)]. This observation raises the question whether the longer intermediate-length alleles (≥20 repeats), which are more common in Finland than in other populations studied (Kaivola et al., 2019), are unstable and generate expansions in offspring, or mosaic expansions in carriers through somatic instability. It has been shown in cells transfected with varying C9orf72 hexanucleotide repeat lengths (11, 20, 22 and 41 repeats) that repeat instability increases with longer C9orf72 repeats and, interestingly, replication fork stalling is observed when there are ≥20 repeats (Thys and Wang, 2015). Rescue of stalled DNA replication is one proposed mechanism for repeat expansions (Mirkin and Mirkin, 2007). Mosaic expansions arising from normal-size alleles have been tested in ALS patients' spinal cord sections at an estimated detection level of ≥5% mosaicism (Ross et al., 2019). No mosaic expansions were detected in that study, but none of those patients carried intermediate-length alleles of ≥20 repeats (the longest allele was 11 repeats, personal communication by Jay Ross and Guy Rouleau). Testing gonadal and somatic mosaicism in carriers of ≥20 repeat alleles is an interesting avenue for future research.
Rs139185008*C was a less sensitive marker of the HRE (tagged 80% of HRE) than rs2814707*T (tagged 100% of HRE). As rs2814707 is located closer to the HRE (Figure 2), historical recombination events have most likely occurred between the HRE and rs139185008. The HRE-containing haplotypes seem to differ among European populations. It was previously shown that rs139185008 was not among the top SNPs associated with ALS in the UK Biobank (Rostalski et al., 2021). Here, we tested SNPs identified as HRE-tagging SNPs in a cohort from the Netherlands and United Kingdom. SNPs rs147211831 and rs117204439 tagged the HRE and intermediate-length alleles with a median of 12 repeats and associated with FTD (Reus et al., 2021). In our Finnish data set, these two SNPs showed only a weak association with the expansion and did not consistently tag longer intermediate-length alleles (Figure 1C). These two SNPs are located at a longer distance from the HRE than our tagging SNPs and encompass almost 200 kb of DNA (Figure 2); it therefore seems that the extended haplotype structures differ within Europe. However, the core haplotype (<50 kb) has not yet been studied at high resolution; this is becoming possible using, e.g., long-read sequencing technologies (Ebbert et al., 2018).
We have previously reported that carrying two copies of the intermediate-length alleles is a risk factor for ALS in Finland, especially when one of the alleles is ≥17 repeats (Kaivola et al., 2020). Here, we analyzed this phenomenon using tagging SNPs after exclusion of individuals with the HRE. We found that homozygosity for rs2814707*T was a modest risk factor for ALS (OR 1.84, p = 0.012), and carriership of rs139185008*C increased the risk among those homozygous for rs2814707*T (OR = 5.06, p = 0.0002). The majority of these subjects had the intermediate-length allele genotype ≥8/≥20 (Figure 1). However, when carriers of rs139185008*C were removed from this analysis, the risk conferred by rs2814707*T homozygosity was lost (OR 1.50, p = 0.14). This result can be partially due to limited statistical power but indicates that a major part of the ALS risk depends on the rs139185008*C haplotype structure, which includes the longer intermediate-length alleles. It is of note that our originally reported threshold (≥17 repeats) may not be accurate; the threshold of ≥20 repeats may be more generalizable (de Boer et al., 2020; Kaivola and Tienari, 2022). The caveat of hidden non-genotyped HREs (Rollinson et al., 2015) may play a role in our finding of the rs139185008*C heterozygous association with ALS (OR 2.15), since the rs139185008*C heterozygotes included 15 subjects (all ALS cases) without intermediate-length alleles. SNP imputation errors may also contribute to this finding. However, hidden HREs should not have a major influence on the results when the subjects are heterozygous for two intermediate-length alleles. SNP and hexanucleotide repeat allele analyses complement each other, and a summary of these results is shown in Supplementary Table S1. A Finnish FTD cohort will be important to further replicate our findings, and C9orf72 haplotypes should be analyzed in more detail to uncover the putative HRE-independent effect of this ALS/FTD locus.
In the FinnGen discovery cohort, we observed that the rs139185008*C allele frequency decreased with age when ALS and FTD diagnoses were excluded. The rs139185008*C allele frequency decreased with age also in the replication cohort, but the association was not statistically significant in all tests, which is probably due to the ca. 3-fold smaller cohort size and reduced statistical power. The direction of effect and the effect sizes did not differ much between the discovery and replication cohorts. The rs139185008 minor allele frequency started to decrease after 70 years of age in both the discovery and replication cohorts (Figure 3). This observation suggests that the rs139185008*C haplotype may play a role in survival outside ALS/FTD, possibly by increasing the risk for other neurodegenerative diseases. As the estimated prevalence of the HRE is ca. 0.2% (Kaivola et al., 2019) and the magnitude of the decrease by age was 0.4%-0.5%, it is possible that age-related disease risk is conferred partially by the HRE and partially by haplotypes containing rs139185008*C and longer intermediate-length alleles. We did not observe a decrease in the frequency of rs2814707*T homozygotes. This lack of association with survival can be due to the fact that the vast majority of rs2814707*T homozygotes have intermediate-length alleles with 7-16 repeats, for which the increase in ALS risk was not statistically significant in our previous study (Kaivola et al., 2020). The rs139185008*C haplotypes contributed to the results, since the small (non-significant) effect on survival observed in rs2814707*T homozygotes was lost after exclusion of rs139185008*C carriers. Another possibility for the lack of a survival effect is that rs2814707*T homozygosity may be a more specific risk factor for ALS/FTD rather than for other age-related (>80 years) diseases.
Our study has limitations. We have studied exclusively Finnish individuals, and our results may not be generalizable to other populations, not even to other European populations, as the C9orf72 haplotypes seem to differ to some extent. As previously discussed regarding the Finnish ALS case-control cohort (Kaivola et al., 2020), determining C9orf72 repeat lengths is not always straightforward; genotyping errors are possible, and hidden HRE carriers are possible especially in ALS patients carrying rs139185008*C but no intermediate-length alleles. However, the genotyping of the longer intermediate alleles should be reliable, because we performed over-the-repeat PCR and visualized on a gel all samples with ≥20 repeats or an expansion, to reduce the possibility of mis-genotyping longer intermediate alleles as expansions and vice versa. Furthermore, we observed high concordance between RP-PCR based genotypes and AmplideX C9orf72 determined genotypes (Supplementary Table S3). In the biobank study, disease status was derived from national registries, and especially FTD cases could have been misdiagnosed as other dementias or psychiatric conditions. Furthermore, even though the rs139185008 and rs2814707 imputation INFO scores were good (>0.90), some degree of contamination with wrong genotypes is probable. This would create noise that would most likely cause regression to the mean and decrease, rather than increase, the differences between groups. The imputation quality is especially important when analyzing rarer variants or variant combinations, since in a small cohort each sample and genotype has more impact on the analysis results than in a big cohort. Small sample sizes in the FinnGen cohorts were avoided for this reason.
In conclusion, we observed that rs139185008*C tags C9orf72 HRE and intermediate-length alleles with ≥20 repeats in Finland. Moreover, rs139185008*C frequency decreased with age in a biobank cohort with ALS and FTD diagnoses excluded, indicating population-wide effects in late-onset neurodegenerative diseases as well. In the future, the rs139185008*C haplotypes and risk haplotypes in other populations should be characterized in detail to assess what part(s) of these haplotypes cause increased disease risk.
Ethics statement
The studies involving human participants were reviewed and approved by the Helsinki University Hospital Ethics Committee. The patients/participants provided their written informed consent to participate in this study. The FinnGen ethics statement is provided in the Supplementary Material.
Funding
This study was funded by the Finnish Cultural Foundation, Päivikki and Sakari Sohlberg Foundation, Paulo Foundation, The Finnish Brain Foundation, the Sigrid Juselius Foundation, Helsinki University Hospital grants, ALS tuttu ry and the Finnish Academy (318868). | 5,458.6 | 2023-03-01T00:00:00.000 | [
"Medicine",
"Biology"
] |
The Predictive Value of Government Accounting Information and the Secondary Brazilian Bond Market
The international literature highlights evidence on the predictive ability of government accounting information in relation to bond markets, especially for sub-national governments' bonds. However, there is little evidence in the literature about the role of accounting information from national governments. Having observed this gap, we aimed to identify how strongly government accounting information affects the pricing of the government bonds issued by the Brazilian Federal Government and traded in the secondary market. In this research, we analyzed the transactions carried out without the direct participation of the federal government. The predictive ability of the accounting information of the Brazilian federal government was verified for the period from 2003 to 2012 on a monthly basis. Following the value relevance approach, we developed price and return models for the National Treasury Bills (single series) bond. After analyzing the presence of unit roots in the price and return series, we estimated regressions using the ordinary least squares method. We showed that the accounting information of the Brazilian federal government has predictive ability regarding the pricing of bonds traded in the secondary market. However, this does not mean that the government accounting information is fully and directly used by investors, but rather that such information serves as a proxy for the information reviewed by investors when negotiating such bonds, these investors being regarded as agents of limited rationality.
Introduction
At the end of fiscal year 2012, to approach the target for the fiscal surplus, the Brazilian federal government undertook a series of transactions that was widely publicized by the government and the media. Using resources of the Brazilian Sovereign Fund (Fundo Soberano, in Portuguese) and state-owned enterprises, the Federal Government increased the fiscal surplus by around R$19 billion (D'Amorim & Schreiber, 2013a).
The operations carried out by the federal government were not welcomed by banks and consulting firms, which adjusted the released government numbers in their reports to investors (D'Amorim & Schreiber, 2013b). The adjustments indicated a reduction in the credibility of the government's implemented policies and the disclosed information. Internationally, too, the end-of-year news coverage reflected the reduced credibility of the information disclosed by the Brazilian federal government (Mance, 2012).
The situation described above indicates that government financial statements, among other information released by the government, are considered useful for some user groups. The criticized procedures adopted by the federal government show that the market is probably concerned with the process leading to information and the ability to reflect the economic events taking place in the country.
Over the years, regardless of the entity's industry, the academic literature has demonstrated that accounting information plays an important role in decision-making processes for a range of users. Among the probable users of financial information are the following: (a) investors, (b) analysts, (c) creditors, (d) regulators, (e) auditors, (f) workers, and (g) the government.
To analyze the relevance of accounting information to certain decision-making processes, it is necessary to identify the group of users of such information. The main group of users chosen both by researchers and by regulators is investors (Barth, Beaver, & Landsman, 2001). In relation to the accounting information generated and evidenced by private entities, studies on the relevance of such information have mainly followed the value relevance approach. From the perspective of this approach, Barth, Beaver, and Landsman (2001) described accounting information as being relevant to a market when it has information content that is able to influence that market's prices.
In private sector entities, the tests on the relevance of accounting information for investors can be performed considering, in addition to accounting information, the data obtained in the capital markets and bond markets. In the capital markets, primarily equity securities are traded that allow entities' participation in the capital. In bond markets, securities are traded in which two or more economic agents combine the exchange of financial flows, on agreed terms and conditions between the parties involved in the transaction.
With the exception of state-owned enterprises, public authorities cannot trade equity securities, because, due to the risk of losing sovereignty, public equity is indivisible. Thus, most public bodies can only issue bonds. Studies on the relevance of accounting information in the public sector, using data from government bond markets, can also be performed from the perspective of value relevance.
From the perspective of value relevance, studies have investigated the behavior of municipal bond markets using models with the price or the return of bonds as the dependent variable and have concentrated on North American markets (Apostolou, Reeve, & Giroux, 1984; Soybel, 1992; Marquette & Wilson, 1992; Kim, 2001; Summers, 2003; Reck & Wilson, 2006; Plummer, Hutchison, & Patton, 2007).
In the Brazilian context, the main issuer of bonds is the federal government. With the permission of the Brazilian Senate, subnational governments (states, the federal district, and municipalities) can only issue bonds to refinance the principal (Senado Federal, 2001). This limitation remains in force until the end of 2020 under the legislation currently in force, which aims to reduce governments' public debt; in addition, subnational governments are forbidden to issue bonds while they owe the federal government.
In September 2012, the operations involving Brazilian federal government bonds in the secondary market moved approximately R$618 billion (Secretaria do Tesouro Nacional, 2013). We can observe that the secondary market for government bonds accounts for a significant volume of transactions. Economic agents operating in this market rely on information to support choices about investments or divestments.
In the Brazilian context, it is possible that government accounting information is considered helpful by economic agents operating in the secondary market for bonds issued by the federal government. However, this relationship has not yet been studied either in the Brazilian literature or in the foreign literature. In Brazil, the relevance of accounting information to the bond market is so far unknown.
Resulting from the detection of this gap, the main objective of this research is to investigate the following question: To what extent does government accounting information affect the pricing of bonds of the Brazilian federal government traded in the secondary market?
Brazilian Federal Government Bonds
The debt of a public entity comprises its obligations set with third parties. Public debt has several origins, being mainly related to the financing of capital expenditures and the occurrence of budget deficits.
According to the Central Bank of Brazil (Banco Central do Brasil, 2008), the federal debt comprises only some of the bonds issued by the central government. The securities coming from the privatization certificates, overdue and renegotiated debt, and agrarian debt are classified as securitized debt and do not enter into the calculation of the federal debt.
In the table below, the main characteristics of the bonds issued by the Brazilian federal government and traded on the domestic market over the past ten years are summarized. Only government securities relating to the federal debt are included. Note: (a) It is also possible to choose the restatement; (b) The bond is sold at a discount rate; (c) It became negotiable on 05/01/1997; (d) There are two periods with minimum terms each.
To control the government bond market operations undertaken by the federal government in the domestic market, the Central Bank uses the Sistema Especial de Liquidação e de Custódia (SELIC), an information system in which those bonds are registered and settled. According to the Central Bank of Brazil (Banco Central do Brasil), the implementation of the SELIC took place in 1979, and today the system has about 500 participants and 10,000 individual customers. In the system, 450 types of securities are recorded, which represent approximately 99% of the portfolio of government bonds. In the structure of the Brazilian federal government, two organs are responsible for the bonds: (a) the National Treasury (STN) and (b) the Central Bank. Currently, only the STN is responsible for issuing bonds, leaving the control and monitoring to both organs. Legislation enacted in 2000 (Brasil, 2000) withdrew the ability to issue bonds from the Central Bank and limited the operations involving its securities.
The STN is responsible for issuing the following bonds: (a) National Treasury Bonuses (BTNs), (b) National Treasury Bills (LTNs), (c) Treasury Financial Bills (LFTs), and (d) National Treasury Notes (NTNs). BTNs and LTNs are intended to cover the budget deficit and the anticipated budget revenues from operations. LFTs are meant to fulfill the contract assumption by the Union of Responsibility for Debts of the States and the Federal District and the reduction of the presence of state public entities in the financial sector. NTNs were used in the restructuring of the Brazilian foreign debt (Brasil, 1989, 2001). The bonds that were issued by the Central Bank had short-term maturity and could be traded in the secondary market. According to the data analyzed in the study, the last security issued by the Central Bank of Brazil and traded in the SELIC was an NBC - Special Series issued in 2000 and traded on November 1, 2006, fifteen days before its maturity. Amante, Araujo, and Jeanneau (2007) reported that the domestic market for government bonds has expanded very rapidly since the mid-1990s to become the largest market in Latin America. The authors also emphasized that the Brazilian federal government changed the debt profile from short-term debt and variable rates to long-term debt and fixed rates. Silva, Garrido, and Carvalho (2009) We can observe that the market for bonds issued by the federal government is complex and involves a diversity of bonds. In the current context, the federal government essentially seeks the balance of the cash flow by maintaining the budget balance and the renegotiation of the Brazilian public debt.
Studies Focusing on Government Bond Markets and the Role of Government Accounting Information
The first studies that sought to investigate the efficiency of financial markets, according to Sewell (2011), dated from the nineteenth century and studied the behavior of prices in the capital markets. Research on the behavior of bond markets emerged only in the twentieth century, for example the works published by Fisher (1959) and Robson (1960). Ingram, Brooks, and Copeland (1983) analyzed whether the changes in the credit risk rating made by the rating agencies affected the bond prices of 127 US cities. The classification changes made by Standard & Poor's were analyzed in the period between August 1976 and February 1979. The authors found that the change in the classification affected the yield of a municipal bond during the month in which the change occurred.
Adopting as their theme the influence of accounting regulation on municipal debt costs, Benson, Marks, and Raman (1984) showed that stricter accounting regulation is associated with lower costs of government debt. According to Benson, Marks, and Raman (1984), economic agents operating in a bond market interpret tighter regulation as generating higher-quality accounting information. Allen (1994) initially divided five hundred and thirteen municipalities of the United States into two groups: (a) the first was composed of the municipalities that were audited by one of the eight largest firms and (b) the second contained the other municipalities. Then, the author examined whether the municipal accounting information could be used to predict the credit risk ratings issued by Moody's during the period between January 1978 and March 1986.
According to the author, only the financial statements audited by the eight largest firms demonstrated predictive power in relation to risk ratings. Allen (1994) stated that the economic agents possibly understand that the work of the eight largest audit firms assigns quality to the government financial information of the respective municipalities. Gore (2004) studied the effects of the incentives caused by the regulation for municipal accounting disclosure, as well as the effects of the disclosure in relation to the bond market. The author analyzed the financial statements of 88 municipalities located in Michigan (for which there was a specific regulation) and 87 municipalities in Pennsylvania (unregulated) for the year 1995. Controlling for other incentives for disclosure, the results indicated that the accounting regulation induced voluntary disclosure in municipalities with low levels of debt; however, they did not show the same effect in those with high levels of debt (Gore, 2004). Gore (2004) reached two conclusions: (a) in all the municipalities, the disclosure level was considered significant in relation to the prices of bonds, and (b) in municipalities with specific regulations, only those that had a low level of debt had a high level of disclosure.
Schuknecht, Hagen, and Wolswijk (2009) analyzed the risk perceived by market participants in relation to public entity issuers of bonds. The authors evaluated 283 bonds issued by national governments and 272 bonds issued by subnational governments in the European Community and Canada during the period between 1991 and 2005. The survey results indicated that, for the national governments, the increased risk was positively associated with increasing debt and the occurrence of deficits. In the case of sub-national governments, which are subject to the same conditions as national governments, the increased risk is also associated with increased financial assistance received from their national governments.
Aside from Schuknecht, Hagen, and Wolswijk's paper, other studies reviewed bonds issued by subnational governments. These studies showed that the public bond markets reflect the information available and thus are effective to some degree. However, these studies did not analyze the behavior of the variables price of the bonds and return from the bonds. The level of regulation and the risk were the variables analyzed in this research.
According to these works, it is apparent that government accounting information has the capacity to influence agents' decisions. In relation to national governments, though, there is little evidence on the influence of accounting information on prices and returns. In this research, we seek to evaluate the role of government accounting information in the Brazilian federal government bond market.
In the following text, we discuss the value relevance approach, considering the context of the public sector.
The Value Relevance Approach Applied in Studies Involving the Public Sector
During the literature review, we identified eight studies involving the public sector that used the value relevance approach. All the analyzed studies dealt with accounting information from North American municipalities (or municipal school districts). They sought to analyze essentially the risk associated with such bonds. Aside from Marquette and Wilson's and Reck and Wilson's papers, the studies used models of government bonds returns.
The earlier works found no evidence that financial information exerts an influence on the pricing of bonds (Copeland & Ingram, 1983; Apostolou et al., 1984; Soybel, 1992). A common finding of these three studies was the low quality of government financial statements due to the lack of or scarce existing regulation in the United States in the 1970s and 1980s.
However, the more recent papers (Marquette & Wilson, 1992; Kim, 2001; Summers, 2003; Reck & Wilson, 2006; Plummer et al., 2007) identified the influence of government accounting information on the prices of government bonds. This finding is related to the evolution of the regulation of governmental accounting in the United States. Roybark, Coffman, and Previts (2012) emphasized that the Governmental Accounting Standards Board (GASB), the regulatory body of governmental accounting in the USA, was created only in April 1984. The regulatory bodies prior to the GASB were not independent and had financial constraints. Copeland and Ingram (1983) analyzed the relevance of accounting information from the municipal pension funds to bonds. Due to the absence of an influence of accounting information, the authors stated that deficiencies in the practices used reduced the reliability of the information generated at the end of the 1970s in the United States. Financial information was shown to have low predictive power with respect to the classification of risks and returns. The authors added that the low level of reliability affected the level of relevance of accounting information in 62 municipalities in the United States during the year 1977.
Using a sample of 531 bonds issued by municipalities in Minnesota, Apostolou et al. (1984) tested the association between the result (surplus/deficit) of those municipalities and the risks of their underlying securities. Considering the period between July 1977 and June 1980 and bonds with a minimum maturity of five years, the authors showed that the result was not correlated with the bonds' risk. Apostolou et al. (1984) also attributed such evidence to the low quality of municipal financial information. Soybel (1992) investigated the relationship between the government financial statements and the returns of the securities issued by New York City during the period between 1961 and 1975. The author identified that the originally highlighted information had no association with the returns on bonds and mentioned that the accounting practices adopted at the time by New York City did not incorporate information about: (a) revenues transferred by the state and federal governments, (b) taxes on property and improvements, (c) the advance in the recognition of revenue, (d) stabilization reserve revenue, (e) capitalization of current expenditure, (f) deferral of current expenses, (g) pensions, and (h) long-term debt. After adjusting the original information through the implementation of the practices cited above, Soybel (1992) found that the information set showed an association with the returns of bonds issued by New York City. Marquette and Wilson (1992) analyzed the relationship between the bonds' price and the accounting information for a sample of 358 bonds during the period 1961 to 1975. The authors found that the analyzed market could be considered efficient in the semi-strong form in incorporating the publicly available information, including that on government accounting. A critical feature of the sample analyzed by Marquette and Wilson (1992) refers to the regulation exerted by the Securities and Exchange Commission (SEC). As the authors wrote, the SEC had no authority to regulate subnational governments; however, it imposed standards on financial intermediaries (dealers, underwriters, and others). The SEC rules improved the quality of government disclosures, which would explain their behavior in relation to the government bonds.
In their study, Kim (2001) aimed to determine whether market factors, municipality-specific factors, and accounting information would be able to explain the variations in bond returns issued by 103 US cities. The study period comprised the years 1983-1992. Kim (2001) showed that the following items significantly affected the return on bonds according to the survey results: (a) the risk of the public securities market (market factor), (b) the public entity risk (specific factor), and (c) the total revenue, tax revenue, transfer revenue, total expenditure, and long-term debt (accounting information).
Summers (2003) and Plummer et al. (2007) studied the securities issued by school districts in the periods 1995 to 1999 and 1995 to 2002, respectively. Summers (2003) sought to understand whether the quality of government accounting information would affect the return on bonds, while Plummer et al. (2007) aimed to assess how the financial information generated by different accounting regimes affects the risks of government bonds. Using a sample containing 209 school districts, Summers (2003) found little influence of the quality of financial reporting on the return on government bonds.
The sample analyzed by Plummer et al. (2007) contained 530 school districts. Whereas the standard GASB 34 required the use of modified accrual basis accounting relative to government funds and the accrual basis for consolidated information, the authors showed that the information derived from the modified accrual basis was more significant in explaining the risk of default of the analyzed districts. Reck and Wilson (2006) worked with three samples composed of US municipalities considering the following periods: (a) from 1996 to 1998 for the first two samples and (b) from 1978 to 1989 for the third sample. The authors sought to examine the relationship between the bond prices and the municipal accounting information.
Methodology
Transactions in the secondary market for bonds issued by the Brazilian federal government are controlled by the Central Bank in the SELIC system. Data on these transactions are available from the Central Bank's website (www.bcb.gov.br) and report the operations carried out from January 2003 onward.
Among the bonds issued by the Brazilian federal government and traded in the secondary market between January 2003 and December 2012, the National Treasury Bills (LTNs) were the bonds chosen for analysis, by virtue of being those with the highest trading volume.
The dependent variables in the study are related to the pricing of government bonds issued by the federal government. We selected two variables: (a) price and (b) return. The price refers to the value traded in the secondary market, considering each transaction. The average monthly price was used. The unit of measurement of the average price is the Brazilian currency, the real. The return is the percentage change between the prices of a certain bond in two different periods.
To calculate the return, we employed the average monthly prices. The unit of measurement of the return is the percentage.
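In other words, the return for month t is the percentage change between consecutive average monthly prices, r_t = (P_t − P_{t−1}) / P_{t−1} × 100. A minimal sketch of this computation follows; the input file and column names are hypothetical.

```python
import pandas as pd

# Hypothetical monthly series of average LTN prices in reals, indexed by month.
prices = pd.read_csv("ltn_monthly_avg_prices.csv", index_col="month")["avg_price"]

# Percentage change between consecutive monthly average prices.
returns = prices.pct_change() * 100
```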
The financial information of the Brazilian federal government provided the explanatory variables used in the research. This information is disclosed by the STN on its website (www.stn.gov.br) on a monthly basis and covers the period between December 2002 and December 2012. The numbers are shown in thousands of reals.
From the accounting information obtained from the Brazilian federal government, indicators were calculated for use as explanatory variables. The indicators can be segregated into three types: (a) revenue generation capacity, (b) payment capacity, and (c) debt level.
The indicators relating to the revenue generation capacity focused on demonstrating the public entity's ability to obtain adequate revenue to meet its obligations. We used three indicators.
The indicators relating to the payment capacity represent the public entity's current ability to honor the commitments already made. We used two indicators:
pag 1 = personnel expenses / total expenditure (4)
pag 2 = total expenditure / total revenue (5)
The debt-level indicators seek to demonstrate the amount to be paid by the public entity for its current debts. We used four indicators:
end 1 = bond debt / total revenue (6)
end 2 = internal debt / total revenue (7)
end 3 = external debt / total revenue (8)
The variables arising from the above indicators are in percentages.
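To make the indicator definitions concrete, the sketch below computes the payment-capacity and debt-level indicators from a monthly table of federal accounting aggregates. The column names are assumptions, and the three revenue-generation indicators (equations 1-3) are omitted because their formulas are not reproduced above.

```python
import pandas as pd

# Hypothetical monthly federal accounting aggregates, in thousands of reals.
acct = pd.read_csv("federal_accounts_monthly.csv", index_col="month")

ind = pd.DataFrame(index=acct.index)
ind["pag1"] = acct["personnel_expenses"] / acct["total_expenditure"] * 100  # eq. (4)
ind["pag2"] = acct["total_expenditure"] / acct["total_revenue"] * 100       # eq. (5)
ind["end1"] = acct["bond_debt"] / acct["total_revenue"] * 100               # eq. (6)
ind["end2"] = acct["internal_debt"] / acct["total_revenue"] * 100           # eq. (7)
ind["end3"] = acct["external_debt"] / acct["total_revenue"] * 100           # eq. (8)
```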
In an attempt to reduce the interference from other variables that may affect the pricing of the Brazilian federal government bonds with the aim of capturing the likely influence of government accounting information more accurately, we used five control variables: (a) maturity; (b) country risk; (c) overnight SELIC interest rate; (d) exchange rate, and (e) inflation.
Whereas the dependent variables are in a time-series format, intervention variables were chosen as a way to deal with outlying observations. They isolate effects that are not related to the explanatory and control variables. Observations whose standardized residuals were greater than two standard deviations were considered outliers.
After collecting and organizing the information mentioned above, tests were performed to detect the stationarity of the resulting time series. To check whether the series in question were stationary, the following tests were performed: (a) for the identification of two unit roots, the Dickey-Pantula test, and (b) for the identification of a single unit root, the Dickey-Fuller Generalized Least Squares (DF-GLS) and Kwiatkowski-Phillips-Schmidt-Shin (KPSS) tests.
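The sketch below illustrates stationarity testing on a monthly series with the routines available in statsmodels (ADF and KPSS). The authors used the Dickey-Pantula and DF-GLS procedures, which this simplified example only approximates; the input file and column names are hypothetical.

```python
import pandas as pd
from statsmodels.tsa.stattools import adfuller, kpss

def stationarity_report(series: pd.Series, name: str) -> None:
    # ADF: null hypothesis is a unit root; KPSS: null hypothesis is stationarity.
    adf_stat, adf_p, *_ = adfuller(series.dropna(), autolag="AIC")
    kpss_stat, kpss_p, *_ = kpss(series.dropna(), regression="c", nlags="auto")
    print(f"{name}: ADF p = {adf_p:.3f}, KPSS p = {kpss_p:.3f}")

# Hypothetical monthly LTN price series (see the earlier return sketch).
prices = pd.read_csv("ltn_monthly_avg_prices.csv", index_col="month")["avg_price"]
stationarity_report(prices, "LTN price")
stationarity_report(prices.pct_change() * 100, "LTN return")
```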
The technique chosen depends on the stationarity of the series of the dependent variables. If the series are stationary, it is possible to use classical econometric techniques; here, the technique chosen was multiple linear regression with the ordinary least squares estimator. Otherwise, a technique specific to time series would be selected, such as the autoregressive integrated moving average (ARIMA) and autoregressive conditionally heteroscedastic (ARCH) models.
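Since the dependent series turned out to be stationary, the price and return models could be estimated by ordinary least squares. The sketch below shows such an estimation with statsmodels, together with the two-standard-deviation rule mentioned above for flagging outlying months that would receive intervention dummies. The regressor set and column names are purely illustrative, not the authors' exact specification.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical merged monthly dataset with dependent, accounting, and control variables.
data = pd.read_csv("ltn_model_data.csv")

# Illustrative price model: one debt-level indicator plus two control variables.
price_model = smf.ols("price ~ end1 + selic_rate + maturity", data=data).fit()
print(price_model.summary())

# Months with |standardized residual| > 2 would receive intervention (dummy) variables.
std_resid = price_model.get_influence().resid_studentized_internal
outlier_months = data.loc[abs(std_resid) > 2, "month"]
```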
Analysis of the Results
The next table presents the results of the Dickey and Pantula tests. As stated in the description given in the methodology, such tests are designed to identify the presence of two unit roots. Considering the statistics of the individual and joint tests, it was found that the model without deterministic terms was feasible for the variables. As a complement, the correlogram of the residuals was analyzed for the determination of the augmented (lag) terms, and it indicated that the residuals were autocorrelated.
For both variables, the test statistics were lower than the critical values calculated by MacKinnon (1996) for series with 100 observations at the 5% significance level. These results led to the conclusion that none of the variables had two unit roots. The fact that the variables do not have two unit roots minimizes the errors in the DF-GLS and KPSS tests, the results of which are shown next, with critical values from Kwiatkowski, Phillips, Schmidt, and Shin (1992) considering the significance level of 5%.
By analyzing the results of the DF-GLS and KPSS tests together, it was found that the two variables were considered stationary in both models (with a constant, and with a constant and trend).
In the following table, the results of the analysis of the predictive value of accounting information for the variable price of LTNs are displayed. Notes: *** Significant at 1%. ** Significant at 5%. * Significant at 10%.
Model A01 was composed of all the accounting indicators used as explanatory variables, in addition to the control and intervention variables. However, this model showed multicollinearity problems. Aside from the A09 model, in which the variable end 3 was present, the other models presented no multicollinearity problems. Thus, models A01 and A09 were excluded from the analysis.
The other regression models were normally distributed, homoscedastic, and not autocorrelated. According to the Akaike criterion, the best-adjusted models were A07 and A10.
With regard to the indicators relating to the ability to generate revenues, only the parameter of the rec 1 variable was not significant; the other parameters were negative and significant at the 5% level. For the indicators of the ability to pay, only the parameter of the variable pag 1 was significant, being positive at the 10% level. The model containing the variable end 3 presented multicollinearity problems; the remaining debt-level indicators were positive and significant at the 1% level.
From these results, we could note that accounting information has predictive value in relation to the pricing of LTN bonds. The debt-level indicators were those with greater predictive power in relation to the others. The higher the level of debt and the lower the capacity to generate income, the higher the price assigned to bonds traded in the secondary market. Such situations may arise from investors' increased perception of risk.
Regarding the control variables, with the exception of the country risk variable, they were significant in most of the other models. The parameters of the maturity and the SELIC rate were negative and significant at the 1% level. The parameter of the exchange rate was significant and positive at the levels of 1% (A08 and A10 models) and 5% (other models). Aside from the A07 and A08 models, the estimated coefficient of inflation was negative and significant at the levels of 10% (A10 model) and 5% (other models).
It appears that the lower the maturity, the SELIC rate, or inflation and the higher the exchange rate, the higher the price of LTN bonds, according to the estimated models. The autoregressive behavior of the p LTN variable was also confirmed. In the next table, we present the regression results for the verification of the use of the predictive value of accounting information relating to LTN returns. www.ccsenet.org/ibr International Business Research Vol. 9, No. 4;2016 Notes: *** Significant at 1%. ** Significant at 5%. * Significant at 10%.
Six of the regression models in which the dependent variable was the return of LTN bonds, used to evaluate the predictive value of the financial information, presented problems and were therefore excluded from the analyses. They are: (a) models B01 and B09, with problems of serial autocorrelation and multicollinearity, and (b) models B02, B03, B04, and B08, with autocorrelation problems. The other models were normal, homoscedastic, and had no autocorrelated residuals.
The autoregressive behavior of the dependent variable was confirmed. Considering the Akaike criterion, the B05 and B07 models were the best adjusted.
The relationship between the personnel expenditure and the total expenditure, an indicator of payment capacity, had a positive and significant coefficient at the 1% level. The relationship between the bond debt and the total revenue, an indicator of the debt level, exhibited a positive and significant coefficient at the 5% level. This means that the higher the values of the variables pag 1 and end 1 , the greater the LTN returns. These indicators were the ones that showed evidence of the predictive value of governmental financial information.
Inflation was the only significant control variable. It showed a negative parameter, indicating that an increase in inflation is associated with a reduction in the LTN returns. The inflation parameter was significant at the 5% level in the B07 model and at the 1% level in the other models.
For models A01 to A10, four intervention variables were required. It is noteworthy that the intervention variables related to the months of the years 2003 and 2011 are directly related to moments of contraction of the Brazilian economy, according to the analysis of the variation of the monthly gross domestic product estimated by the Central Bank of Brazil. The other intervention variables probably resulted from specific movements of the secondary bond market; however, they were not clearly identified.
In all the regression models from A01 to B10, the residuals were stationary according to the DF-GLS and KPSS tests.
Conclusions
According to the evidence found during the research, we can state that the accounting information of the Brazilian federal government has predictive value in relation to the price and return of LTNs in the secondary market. However, this is not to say that the behavior of government financial information was constant in relation to the price and return. Some evidence was found that some of the indicators related to the ability to generate revenue were significant only for the price of LTNs. Regarding other indicators, there is significant evidence for both price and return.
Regarding the control variables, the country risk was the only one with non-significant parameters in the regressions in which the dependent variable was the price. In turn, only inflation was found to have significant parameters in relation to the regressions in which the dependent variable was the return.
Concerning the return, Copeland and Ingram (1977), Summers (2003), and Plummer et al. (2007) demonstrated that government accounting information had little or reasonable predictive power. In this research, it was evident that the accounting information had little predictive power in relation to the returns on debt securities issued by the Brazilian federal government.
We showed that there is an association between the government accounting information and the prices and returns of LTNs in the period between 2003 and 2012. According to the evidence, accounting information can be considered relevant following the assumptions of the value relevance approach. This association can be explained from two viewpoints. From the first, more restricted view, the accounting information is part of the information set used by investors. Thus, the financial statements are able to explain a portion of the variations in the prices and returns of a governmental bond. The second view comes from relaxing the assumptions made in the first view. It would be reasonable to think that specific factors of the securities (maturity, value adjustment rates, existence of periodic coupons, etc.), specific factors of the issuer (financial and legal restrictions, default risk, geographical location, etc.), and macroeconomic factors (inflation, exchange rate, similar investment returns, etc.) could be used by investors. The accounting information represents proxies of some of the information used by investors.
"Economics",
"Business"
] |
Mechanisms of Angiogenesis Process after Pancreatic Islet Cell Transplantation: Role of Intra-islet Endothelial Cells
Angiogenic sprouting is a complex, multi-step process involving highly integrated cell behaviours, initial interaction with the environment and signalling pathways. Endothelial cells (ECs) are central to the angiogenic process, with recent insights establishing how these cells communicate with each other and with their microenvironment to form branched vascular networks. Using pancreatic islets as a model for vascularized tissue, this review will present a general overview of EC behaviour dynamics in sprouting angiogenesis, particularly focusing on the interplay between VEGF and Notch pathways. A better understanding of molecular mechanisms associated with intra-islet EC cross-talk and its micro-environment may present exciting new perspectives on islet graft to host revascularization and in supporting islet graft survival.
Introduction
Pancreatic islets are highly vascularized and receive 10% of the pancreatic blood flow despite comprising only 1-2% of the overall tissue mass [1]. Islets represent endocrine "island" clusters, embedded and scattered within large amounts of exocrine acinar tissue [2]. Most islets are irregularly shaped spheroids with a size distribution ranging from 50-200 μm, composed of 800-3,000 cells. In the context of islet studies and transplantation, 1 islet equivalent (IEQ) is often considered as an islet of 150 μm in size, consisting of an average of 2,500 cells. The cellular components of the islet include β-cells, with the remainder of the islet comprised of other endocrine cells (including glucagon-secreting α-cells, somatostatin-secreting δ-cells, pancreatic polypeptide-secreting γ-cells, and ghrelin-producing ε-cells), as well as ECs and support cells such as pericytes [3][4][5][6][7][8][9][10][11][12]. Species heterogeneity exists with respect to the cellular composition of islets: rodent islets are primarily composed of β-cells located in the center with other cell types in the periphery, whereas human islets exhibit interconnected α- and β-cells [3][4][5][6][7][8][9][10][11][12][13][14]. The β-cell, the central regulator of glucose homeostasis, is the largest cellular component of islets in most species [12,13]. Vascular endothelial cells represent a major cell type present in islets, and these cells are organized into a highly regulated and morphologically unique microcirculation. Studies using vascular corrosion casts have shown that 1-3 arterioles feed larger islets [15]. The capillary network within islets is about five times denser than that of the exocrine tissue [16]. The capillary wall is composed of a permeable layer of ECs and contains ten times more fenestrae than ECs present in the exocrine pancreas [17,18]. Rapid and adequate revascularization is critical for the survival and function of transplanted islets [19][20][21]. Unlike whole organ transplantation, where revascularization occurs through surgical anastomosis of vessels, the revascularization of islets requires the formation of vessel patencies either through inosculation of host and recipient microvessels or through neo-vessel penetration into the islet. The return of islet function depends on the re-establishment of new vessels within islet grafts to derive blood flow from the host vascular system [22,23]. Transplanted islet grafts initially have a significantly reduced vascular supply and low oxygen tension in comparison to normal islets [24][25][26]. The human islet isolation technique completely severs the islet vasculature [20,27], with the enzymatic digestion step contributing to the partial disruption of intra-islet ECs [22,28,29]. Revascularization is therefore an important process for adequate engraftment of islets. Prevascularizing islets prior to transplantation could potentially improve islet survivability and function by aiding islet-to-host inosculation [30,42]. Unpublished data from our lab demonstrate that fresh islets, immediately after isolation, are capable of forming peri-islet vessels in a 3D-gel construct (Figures 1 and 2). The initial molecular events by which intra-islet ECs give rise to such vessels have not yet been explored. This review will focus on the VEGF-Notch signalling pathways and their associated molecular regulation, which have been well characterized and shown to play key roles in the endothelial crosstalk critical to proper vessel sprouting.
Regulation of angiogenesis
VEGF family: critical regulators of angiogenesis
The family of VEGF (vascular endothelial growth factor) ligands and their receptors are major regulators of sprouting angiogenesis [43][44][45][46]. VEGFs are critical, as they regulate vessel formation during embryonic development, play a major role in wound healing, and maintain vessel homeostasis in adult organisms. In addition, impaired vessel function resulting from defects in VEGF ligands or receptors is the cause of many diseases. VEGF was originally described as vascular permeability factor (VPF), an activity released by tumor cells that promotes vascular leakage [43,[47][48][49][50][51][52][53][54][55][56]. VEGF secretion is stimulated by tumors, hypoxia, low pH and many other factors. VEGF binds to its receptors (VEGFRs) located on blood vessel ECs. Upon activation, ECs produce enzymes and other molecules required for EC growth and proliferation. Other effects include mobilization of endothelial progenitor cells from the bone marrow, increased vascular permeability, and tissue factor induction. The VEGF family comprises seven secreted glycoproteins designated VEGF-A, VEGF-B, VEGF-C, VEGF-D, VEGF-E, placental growth factor (PlGF) and VEGF-F [57-59]. VEGF-A, the most well studied factor within the VEGF family, is expressed in the extra-embryonic endoderm and mesoderm as blood islands, and within the intra-embryonic endoderm at E8.5 [60] (Table 1).
Table 1. Roles of VEGF family members in regulating/modulating ECs (with references):
- VEGF-A: the most potent pro-angiogenic protein described to date, implicated in both vasculogenesis and angiogenesis; induces proliferation, sprouting and tube formation of ECs. It is a potent survival factor for ECs and has been shown to induce the expression of anti-apoptotic proteins in these cells. It causes vasodilation by inducing endothelial nitric oxide synthase and so increasing nitric oxide production. VEGF-A binds many receptors on hematopoietic stem cells (HSCs), monocytes, osteoblasts and neurons, inducing HSC mobilization from the bone marrow, monocyte chemo-attraction and osteoblast-mediated bone formation. Many cytokines, including platelet-derived growth factor, basic fibroblast growth factor, epidermal growth factor and transforming growth factors, induce VEGF-A expression.
- VEGF-B: shown to play a central role in cardiac development [66].
- VEGF-C: the mature form of VEGF-C induces mitogenesis, migration and survival of ECs; VEGF-C mRNA transcription is induced in ECs in response to pro-inflammatory cytokines (IL-1β) [67,68].
- VEGF-E: a potent angiogenic factor; the data strongly indicate that activation of VEGFR-2 alone can stimulate angiogenesis efficiently [75,76].
- PlGF: originally identified in the placenta; occurs at low levels in the embryo and adult and has primarily been studied in pathological conditions, where it is thought to stimulate angiogenesis in coordination with VEGF-A [77,78].

VEGF family members interact with three main receptors, VEGFR-1 (Flt-1), VEGFR-2 (KDR in humans and Flk-1 in mouse) and VEGFR-3 (Flt4), all tyrosine kinase receptors and members of the PDGF receptor family. VEGF receptors possess an extracellular domain consisting of immunoglobulin repeats responsible for VEGF binding and intracellular tyrosine kinase domains. VEGF binding to its receptor leads to receptor dimerization and activation of the intracellular receptor tyrosine kinases by autophosphorylation, which triggers a cascade of downstream signalling proteins and several biologic effects on endothelial cells. VEGFR-2 appears to be the main receptor responsible for mediating the pro-angiogenic effects of VEGF-A [57,79,80]. VEGF-A and its receptors VEGFR-1 and VEGFR-2 are expressed early in embryonic development (Table 2).
Table 2. Roles of VEGF receptors in regulating/modulating endothelial cells (ECs) (with references):
- VEGFR-1: expressed in ECs as well as osteoblasts, monocytes/macrophages, placental trophoblasts, renal mesangial cells, and also in some hematopoietic stem cells (HSCs).
- VEGFR-2: Y1175 and Y1214 are the two major VEGF-A-dependent autophosphorylation sites in VEGFR-2; however, only autophosphorylation of Y1175 is imperative for VEGF-dependent EC proliferation. In addition to ECs, VEGFR-2 is also expressed on neuronal cells, osteoblasts, megakaryocytes and HSCs. It is down-regulated in blood vascular ECs and is up-regulated again in angiogenic blood vessels; sequestration of VEGF-A results in down-regulation of VEGFR-2 and in apoptotic death of some capillary endothelial cells in vivo. It is an early marker of endothelial and hematopoietic precursor cells in blood islands [87,88].
- VEGFR-3: up-regulated on blood vascular ECs in pathologic conditions, such as in vascular tumors and in the periphery of solid tumors; widely distributed in vascular tumors and can be considered a marker of endothelial cell differentiation of vascular neoplasms. VEGFR-3 is down-regulated in vivo at sites of endothelial cell-pericyte/smooth muscle cell contacts, suggesting that VEGFR-3 signaling is important in nascent blood vessels and becomes redundant as the vessels mature. In humans, VEGFR-3 expression was up-regulated in blood vessel endothelium in chronic inflammatory wounds [89,93,94,95].
Notch signaling
In addition to the VEGF receptor tyrosine kinases and their ligands, several recent studies demonstrate the importance of Notch signalling components such as the ligands Dll4 (Delta-like ligand 4) and Jagged-1 and the receptor Notch1 in EC specification during formation of a functional vascular network [96][97][98][99]. In mammals there are 5 DSL (Delta/Serrate/Lag-2) ligands: Delta-like 1 (Dll1), Delta-like 3 (Dll3), Delta-like 4 (Dll4), Jagged-1 (Jag1) and Jagged-2 (Jag2). These ligands are type 1 cell-surface proteins with multiple tandem epidermal growth factor (EGF) repeats in their extracellular domains (ECDs). DSL ligands bind to Notch receptors, which are large, single-pass, type 1 transmembrane receptors. There are 4 known Notch receptors, Notch1 to Notch4. Binding of a DSL ligand to the ECD of the Notch receptor (NECD) triggers a series of proteolytic cleavages of Notch, first by a member of the ADAM (a disintegrin and metalloprotease) family within the juxtamembrane region, followed by γ-secretase within the transmembrane domain (Table 3). The Notch receptors, ligands, and several signaling pathway components have been identified in endothelial cells in vitro and in vivo, during development and tumor angiogenesis [100][101][102]. Functional studies using gene targeting in mice, mutagenesis and knockdown in zebrafish, and biochemical analysis in cultured endothelial cells have demonstrated that Notch signaling plays a fundamental role in many aspects of endothelial cell biology during angiogenesis [113] (Table 4).
EC phenotypes: Interplay between VEGF and Notch signaling in regulating EC sprouting
An exciting breakthrough within angiogenic research in the past decade has been the identification of different EC phenotypes with distinct cellular fate specifications that are key in forming a vessel branch [122]. Leading the trail are 'tip cells', which sense and respond to guidance cues. 'Stalk cells' follow behind the tip cells and elongate the stalk of the sprout by proliferating, forming junctions, modulating the extracellular matrix and forming a lumen. 'Phalanx cells', the most quiescent of the ECs, line vessels once new vessel branches have formed. These cells form a monolayer, are covered by pericytes, attached via tight junctions, and strongly held by a robust basement membrane. Phalanx cells are engaged in optimizing blood flow, tissue perfusion and oxygenation [123][124][125].
Specification of ECs into tip and stalk cells bearing different morphologies and functional properties is central to sprouting initiation [113,126]. Vessel networks, while expanding, require ECs to undergo frequent cycles of sprouting and branching, which results in dynamic transitions between the two cell phenotypes [113,126]. Tip cells express high levels of Dll4, platelet-derived growth factor-b (PDGF-b), unc-5 homolog b (UNC5b) and VEGFR 2/3, and have low levels of Notch signalling activity [98,99,103,127,128]. Stalk cells produce fewer filopodia, are more proliferative, form tubes, branches and a vascular lumen, establish junctions with neighbouring cells, and synthesise basement membrane components [113,129]. Tip cell migration depends on a VEGF gradient extending outward from the parent vessel, whereas stalk cell proliferation is regulated by VEGF concentration [127,130]. VEGF stimulates tip cell induction and filopodia formation via VEGFR2 (abundant on filopodia), whereas VEGFR2 blockade is associated with sprouting defects [113]. VEGFR1 expression is induced by Notch signalling to reduce VEGF ligand availability, preventing tip cell outward migration. VEGFR1 is predominantly expressed in stalk cells and is involved in guidance and in limiting tip cell formation. Loss of VEGFR1 results in increased sprouting and vascularization [131,132]. Notch appears to act as a negative feedback mechanism to regulate VEGF signaling. This regulation may explain the observation that decreased VEGFR-2 allows for local differentiation of endothelial tip cells prior to sprout initiation, with VEGF action on tip cells leading to increased Dll4 expression and activation of Notch signaling, which in turn downregulates VEGFR-2 in neighboring stalk cells [46]. Tip cells with higher VEGFR-2 expression will therefore readily respond to VEGF, while stalk cells with fewer receptors will be less responsive. Interestingly, tip cells do not proliferate in response to VEGF, but rather form filopodia and migrate in the direction of the VEGF gradient; it is the stalk endothelial cells of the growing capillary branch that proliferate [127].
In mouse and zebrafish angiogenesis, VEGFR3 is strongly expressed in the leading tip cell and is downregulated by Notch signalling in the stalk cell [98,133]. Notch1 and Notch4 and the three Notch ligands JAG-1, Dll1 and Dll4 are expressed in ECs for the induction of arterial cell fate and for the selection of endothelial tip and stalk cells during sprouting angiogenesis [134]. Activation of Notch signalling reduces sprouting, while its loss induces it. Notch-1-deficient ECs adopt tip cell characteristics [97,98,129], whereas in stalk cells, activation of Notch by Dll4 leads to downregulation of VEGFR-2 and -3 [101,135]. Cells dynamically compete for the tip position utilizing differential VEGFR levels, as cells with higher VEGFR signalling produce more Dll4 and therefore inhibit their neighbouring cells. VEGF has been shown to induce the expression of Dll4 and Notch signaling [136]. Elevated Dll4 and VEGFR-2 expression was detected in tip cells compared to neighboring stalk cells [96]. Blockade of VEGF in animal models caused a decrease of Dll4 in vessels and inhibited sprouting [99], whereas administration of VEGF induced Dll4 expression [115].
Notch signaling also influences VEGF receptor expression, leading to the downregulation of VEGFR-2, as evidenced by decreased VEGFR-2 levels after Notch activation in ECs and in Dll4-deficient mice [99,109]. Endothelial Notch activation regulates the expression of the different VEGFRs (VEGFR1, 2, and 3) as well as the co-receptor Nrp1 [46,93,97,98,103,114,115,137]. Dll4 activates Notch in adjacent cells, which suppresses the expression of VEGF receptors and thereby restrains endothelial sprouting and proliferation [98,99,113,138]. Notch activation in HUVECs leads to VEGFR1 mRNA induction [120,139]. In contrast, VEGFR2 and Nrp1 mRNAs are markedly reduced by Notch activation in HUVECs [137,140,141], indicating that Notch signaling is able to regulate how ECs respond to VEGF. Notch and VEGF signaling thus appear to be intimately associated in angiogenesis. It has been shown that Notch signalling acts downstream of the VEGF pathway during physiological and pathological angiogenesis [115,140,[142][143][144], suggesting that the VEGF pathway controls the expression of different Notch components (Table 5).
Conclusions and Future Perspectives
Significant progress has been made in our understanding of the importance of angiogenesis in health and disease, but our knowledge of the coordinated events that result in vessel branching and inosculation remains incomplete. We are just beginning to appreciate the interplay of other signalling pathways, such as Wnt and BMP, in regulating vessel sprouting. Angiogenesis is a complex, multi-step process. Key to this process are ECs, which are pivotal to sprouting angiogenesis and have been implicated in many diseases [60,[161][162][163].
In the last two decades, much of the focus has been on the study of human pancreatic islets, their isolation techniques, and on improving islet yield and function because of their critical involvement in debilitating diseases such as Type-1 diabetes and chronic pancreatitis. The dense vasculature within the pancreas is an important determinant of islet physiology and disease. The pancreatic islet is an ideal model 'tissue' in which to learn more about the microvasculature, and in this context the study of ECs within islets has potential benefits. The islet EC model represents an excellent platform for better understanding the molecular mechanisms associated with vessel sprouts, an important but greatly understudied area within islet research. Crosstalk of ECs with other islet cells, such as β-cells, has been evaluated [171][172][173][174][175], particularly with regard to increasing β-cell mass and thereby insulin production. Moreover, a number of factors which may potentially improve islet transplantation involve ECs. Vascular ECs of the embryonic aorta have been shown to induce the development of endocrine cells from the pancreatic epithelium in mouse [176,177], and overexpression of VEGF-A in transplanted mouse islets was shown to improve insulin secretion and blood glucose regulation in recipient mice [165,178]. Utilizing intra-islet ECs as a model to better understand the mechanisms associated with sprouting angiogenesis is likely to generate exciting new hypotheses and offer new insights into how transplanted islets can re-establish their vasculature more efficiently and successfully. | 3,795.4 | 2017-02-01T00:00:00.000 | [
"Biology"
] |
Towards Ultrafast Gyroscopes Employing Real-time Intensity and Spectral Domain Measurements of Ultrashort Pulses
Active ring laser gyroscopes (RLG) operating on the principle of the optical Sagnac effect are preferred instruments for a range of applications, such as inertial guidance systems, seismology, and geodesy, that require both high bias stability and high angular velocity resolutions. Operating at such accuracy levels demands special precautions like dithering or multi-mode operation to eliminate frequency lock-in or similar effects introduced due to synchronisation of counter-propagating channels. Recently proposed bidirectional ultrafast fibre lasers can circumvent the limitations of continuous wave RLGs. However, their performance is limited due to the nature of the highly-averaged interrogation of the Sagnac effect. In general, the performance of current optical gyroscopes relies on the available measurement methods used for extracting the signal. Here, by changing the paradigm of traditional measurement and applying spatio-temporal intensity processing, we demonstrate that the bidirectional ultrafast laser can be transformed to an ultrafast gyroscope with acquisition rates of the order of the laser repetition rate, making them at least two orders of magnitude faster than commercially deployed versions. We also show the proof-of-principle for dead-band-free round trip time-resolved spectral domain measurements using the Dispersive Fourier Transform, further enhancing the gyroscopic sensitivity. Our results reveal the high potential of application of novel methods of signal measurements in mid-sized ultrafast fibre laser gyroscopes to achieve performances that are currently available only with large-scale RLGs.
Improving the accuracy of relative positioning or rotational sensing is important both for fundamental science and for various practical engineering applications. High-precision optical measurements have the potential to unlock new breakthrough methods and approaches in this field. With progress in the general understanding of optical phenomena in a laser cavity and the development of advanced laser configurations came the ability to measure ultraslow angular velocities. Thus, optical gyroscopes employing the Sagnac effect make it possible to detect rotations of the ground with 10 −11 rad·s −1 sensitivity, with an integration time of several hundreds of seconds 1,2 , which enables observation of the Chandler and Annual wobbles 3 . Alternatively, for applications requiring fast data acquisition rates, such as when a gyroscope is a key part of an inertial measurement unit for self-navigation 4,5 , their substantially lower sensitivity is compensated for by high acquisition rates of up to several kilohertz 6 .
Large-ring laser gyroscope technology is capable of providing highly sensitive inertial rotation measurements. Among the impressive recent applications, one can mention the direct observation of rotational microseismic noise 7 and the detection of very long period geodetic effects on the Earth's rotation vector 3 . However, their application has both practical and fundamental limits and restrictions caused by their size, elaborate fabrication and maintenance, and, more importantly, by the impact of the frequency lock-in effect. Backscattered light enhances the coupling between counter-propagating beams, causing carrier frequency synchronisation 8,9 . As a result, the beat note signal disappears for a range of small angular velocities 10 . Numerous approaches have been suggested to mitigate this limitation by decreasing backscattering, including the application of highly reflective dielectric coatings on cavity mirrors (∼99.998%), improvement of the laser cavity geometry, or dithering the resonance frequency of the cavity with reference to an external laser beam 9,10 . Owing to the lock-in effect, maintenance-free all-fibre configurations, which are typically considered beneficial, become disadvantageous for laser gyroscopes, as they suffer from Rayleigh scattering 11 .
Alternative attempts at eliminating the frequency lock-in effect have exploited ultrashort pulses instead of continuous wave radiation in laser gyroscopes 12,13 , since the counter-propagating ultrashort pulses interact at only two points in the cavity [14][15][16] . In such configurations, the differential phase shift of counter-propagating pulses due to the gyroscopic effect is generally evaluated by RF measurements of the beat-note frequency shift, that is, the change in the interference pattern of the two frequency combs corresponding to the counter-propagating ultrashort pulses at each round trip 17 .
To date, the interrogation of the Sagnac effect has relied on the study of its cumulative effect over time scales much larger than the typical cavity lifetime. Thus, existing methodologies impose a bottleneck on the gyroscope bandwidth, limiting it to a few kilohertz. The availability of ultrafast detectors and high-resolution real-time oscilloscopes has ushered in a multitude of novel, real-time methodologies for studying fibre laser dynamics in both the intensity and spectral domains. For instance, the method of spatio-temporal dynamics 18,19 makes it possible to identify specific features of interest in the laser output and to observe their evolution with round trip time resolution over several hundreds or even thousands of round trips [20][21][22][23] . The Dispersive Fourier Transform (DFT) 24 is a real-time spectral method that exploits the principle of Fraunhofer diffraction in the temporal domain and can be used to obtain the time-resolved spectra of successive mode-locked pulses [25][26][27] . Such methods used to study the fast dynamics of fibre lasers also open up perspectives for interrogating the Sagnac effect in real time, enhancing the functionality of gyroscopes.
In this paper, we introduce a new concept of gyroscopic effect evaluation by analysing the dynamics of a pair of counter-propagating ultrashort soliton pulses, applying three real-time measurement techniques in a proof-of-principle demonstration with a bidirectional ultrafast fibre laser. Firstly, we show how the real-time spatio-temporal dynamics of the counter-propagating pulses can be used for direct observation and analysis of the temporal drift between solitons induced by the Sagnac effect accumulated over several round trips, with a resolution of 10 −2 deg·s −1 . Secondly, we show how such real-time measurements can be used to monitor angular velocities at a rate equivalent to the round trip time of the laser, resulting in effective gyroscope bandwidths that surpass currently available modalities by at least two orders of magnitude. Thirdly, we show how the synchronised regimes can be utilised for rotation sensing by employing real-time DFT measurements. All these techniques improve the sensitivity by at least an order of magnitude compared to earlier demonstrations using mode-locked lasers 16 , reaching 10 −3 -10 −4 deg·s −1 . A simple theory based on a linear-regime approximation is presented, which helps to estimate the order of magnitude of sensitivities and errors and to ascertain the appropriate laser parameters needed to achieve the requisite resolution.
Results
Our experiments employ the hybrid mode-locked erbium-doped fibre laser setup (see Methods section), placed on a rotating circular platform with a diameter of 0.62 m (Fig. 1a). The angular velocity of the platform carrying the ultrafast laser can be varied from 0 to 0.3 deg·s −1 . The stability of the ultrafast generation is ensured by a hybrid mechanism of passive mode-locking realised via a single-walled carbon nanotube (SWNT) polymer saturable absorber and nonlinear polarisation evolution (NPE). NPE relies on a section of polarising fibre with bow-tie geometry and allows tuning of the nonlinear transfer function, predefining the nature of the interactions between counter-propagating pulses. The laser cavity comprises a 3-dB output coupler, which rotates with the entire interferometer cavity, creating the difference in the optical paths for counter-propagating pulses as reported for other simple Sagnac interferometers 28 . Afterwards, the counter-propagating pulses are combined via a 3-dB coupler, the output of which is used for the real-time measurements.
The experiment showed that two stable bidirectional regimes are possible: one in which the repetition rates of the counter-propagating pulses differ by tens of hertz (Fig. 1b), and one in which the pulses in both channels are generated at the same repetition rate of 14.78 MHz (Fig. 1c). In the latter case, the lengths of the ports of the coupler combining the counter-propagating pulses were adjusted to achieve a pulse separation of ∼100 ps. We use different techniques for each of these two laser generation regimes to evaluate the gyroscopic effect. The case of channels with different repetition rates has been analysed by using spatio-temporal dynamics 18 .
Time-domain analysis
The methodology of spatio-temporal dynamics -that is, a two-dimensional representation of laser intensity evolution over round trips -makes it possible to observe round trip time-resolved dynamics of laser pulses (see Supplementary note S1). Figure 1b shows the spatio-temporal dynamics of the combined laser output over 20 000 consecutive round trips, for the platform at rest. When combined, the separation between the pulses changes in the course of the round trips; the pulses overlap in the interim. The evolution traces for both pulses are straight lines, indicating that there are no attractive or repulsive forces between the counter-propagating pulses. Figure 2a shows the relative temporal separation between the CW and CCW pulses after 10 4 round trips at different rotation velocities of the platform, showing that the rotation of the stage influences the cavity conditions for the pulses. The separation between the pulses obtained at rest is the analogue of the conventional gyroscopic bias offset, and its effect can be removed from the actual measured values to obtain the drift introduced solely by the Sagnac effect (Fig. 2b). This clearly shows the existence of a linear relation between the angular velocity and the corresponding relative pulse temporal drift. We attribute the deviations from the linear approximation to imperfections of the rotation stage, mainly owing to slippage between the platform and the motor. The pulse separation is seen to decrease over round trips, owing to the direction of rotation of the stage. With the stage rotating in the opposite direction, the pulse separation would increase with round trips. In other words, the bidirectional laser operating in the non-synchronised regime can be used to ascertain not only the magnitude of the angular velocity but also the direction of rotation.
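As an illustration of how such a map is constructed, the sketch below folds a one-dimensional photodetector record into a round-trip-resolved matrix; the sampling rate, repetition rate and array names are assumptions for illustration rather than the exact parameters of the experiment.

```python
import numpy as np

# Assumed acquisition parameters (illustrative, not the paper's exact values)
sample_rate = 40e9                      # oscilloscope sampling rate, Sa/s (25 ps resolution)
rep_rate = 14.78e6                      # pulse repetition rate, Hz
samples_per_rt = int(round(sample_rate / rep_rate))

# trace: 1-D intensity record covering many round trips (placeholder data here)
trace = np.random.rand(20_000 * samples_per_rt)

n_rt = trace.size // samples_per_rt
spatio_temporal = trace[: n_rt * samples_per_rt].reshape(n_rt, samples_per_rt)
# Each row is one cavity round trip; plotting this matrix (round-trip number vs.
# intra-cavity time) gives the kind of map shown in Fig. 1b,c. In practice the
# round-trip period is not an integer number of samples, so the fold length is
# fine-tuned from the measured repetition rate.
```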
The sensitivity S of the gyroscope for this particular measurement configuration can be obtained directly from the slope of the linear fit in Fig. 2b, and is the analogue of the conventionally used gyroscopic scale factor. Here, it is estimated to be [-1.13 ns/(deg·s −1 )] −1 , or 0.885 deg·s −1 /ns in magnitude. The primary uncertainty of measurement in this configuration can be attributed to the finite temporal resolution δt res (here, 25 ps) and can be estimated as S · δt res , giving 22.12 mdeg·s −1 (384 µrad·s −1 ). A theoretical estimate of the scale factor can be obtained from well-known expressions for the Sagnac effect 28 (see Eq. 2 in Methods), as 539 µdeg·s −1 (9.4 µrad·s −1 ), which is much smaller than the experimentally obtained value. The difference can be attributed to the fact that the analytical expression Eq. 2 in Methods only takes into account the linear effects brought about by the Sagnac effect, not the nonlinear effects introduced across the laser cavity, similar to those observed in nonlinear loop mirrors 30,31 . Yet, the experimental results show there is a clear linear relationship between the angular velocity of the stage and the relative pulse separation, thus allowing us to use the current configuration for high accuracy angular velocity determination. Indeed, based on this understanding, and the nature of the real-time measurements made, it can be shown that the variation of the value of the scale factor, i.e. the bias stability, is of the order of 1.8E-10 ppm (see Supplementary note 4). The resolution limit can be countered for a given laser configuration simply by increasing the number of round trips N RT . Here, since the repetition period of the pulses in the counter-propagating channels is ∼66 ns and the relative pulse shift is 0.295 ps within one round trip, one can consider at least 1.98×10 5 round trips after the pulses coincide (for the platform at rest). Thus, analysis of the entire recorded dataset would make it possible to increase the resolution of the gyroscopic measurements by more than one order of magnitude.
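The calibration step described above can be summarised in a few lines of code; the angular velocities and drift values below are illustrative placeholders chosen to be roughly consistent with the quoted slope, not the raw experimental data.

```python
import numpy as np

# Illustrative calibration data: stage angular velocity (deg/s) and the CW-CCW
# temporal drift accumulated over 10^4 round trips (ns), bias already removed.
omega = np.array([0.00, 0.05, 0.10, 0.15, 0.20, 0.25])          # deg/s
drift_ns = np.array([0.00, -0.06, -0.11, -0.17, -0.22, -0.28])  # ns

slope, intercept = np.polyfit(omega, drift_ns, 1)   # ~ -1.13 ns per (deg/s)
scale_factor = 1.0 / abs(slope)                     # (deg/s) per ns of drift

dt_res_ns = 0.025                                   # 25 ps oscilloscope resolution
resolution = scale_factor * dt_res_ns               # smallest resolvable angular velocity
print(f"S ≈ {scale_factor:.3f} (deg/s)/ns, resolution ≈ {1e3 * resolution:.1f} mdeg/s")
```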
Ultrafast gyroscope
In the above, the cumulative Sagnac time shift accrued after 10 4 round trips was used to estimate the average angular velocity of the stage over the equivalent time period. One can also obtain a measure of the angular velocity when N RT = 1 (see Eq. 2 in Methods). While this method decreases the gyroscope sensitivity by a factor of N RT , it has the potential to increase the effective gyroscope bandwidth to the order of the cavity repetition frequency, making it at least three orders of magnitude faster than commercially available gyroscopes and, to the best of our knowledge, one of the highest-bandwidth gyroscopes, even surpassing MEMS technologies. Figure 3a shows the round trip resolved temporal separation δT_NRT introduced between the CW and CCW pulses. The gyroscopic bias offset has been removed, leaving behind only the Sagnac-effect-induced temporal drift between the pulses. The plot indicates a linearly increasing separation between the pulses (Fig. 3a). As in Fig. 2b, the pulse separation is seen to decrease over round trips owing to the direction of rotation of the stage, which agrees with the definition in Eq. 2 in Methods. Thus, the average Sagnac temporal drift δt Sagnac over each round trip can be obtained from the slope of the linear fit function (Fig. 3a, dotted line). Deviations about this linear fit, ε_NRT , can result from instantaneous drifts of the stage away from the mean angular velocity. Figure 3b shows the residuals of the linear fit, which are not ε_NRT itself but its cumulative sum up to round trip N RT , that is, Σ_{i=1}^{N_RT} ε_i . ε_NRT can thus be obtained as a simple first-order difference of the residuals of the linear fit (Fig. 3c, also see Supplementary note 6). Thus, with knowledge of S and the instantaneous temporal drifts, the round trip time-resolved angular velocity measurements shown in Fig. 3c (blue curve) can be obtained directly from the time-domain spatio-temporal dynamics via Eq. 1 in Methods. For the current laser, in principle, the acquisition rate or gyroscope bandwidth can be as high as 418 · 10 3 samples per second (or 418 kHz), where the scale factor S is 94 mrad·s −1 , calculated using Eq. 2 in Methods. This value is estimated under the consideration of the finite temporal resolution of the real-time oscilloscope and using the value of the scale factor obtained with the linear approximation (see Supplementary note 5). However, as the scale factor in the experiment is drastically reduced owing to the nonlinearities, the bandwidth obtained in the experiment is closer to 19 kHz, which is an order of magnitude higher than current state-of-the-art and commercially available instruments 32,33 . The finite bandwidth effect has been taken into account in the above instantaneous angular velocity dynamics by incorporating a moving-window smoothing operation (orange curve, Fig. 3c). We would like to stress that an increase of the gyroscopic bandwidth always comes at the expense of a decrease of the gyroscope sensitivity by the same factor. Defining the final application of the gyroscope will allow identification of a suitable accuracy versus bandwidth trade-off. While the angular velocity of our stage is currently limited to about 0.2 deg·s −1 , the above methodology is not limited by the angular velocity, and actually offers better gyroscope bandwidths with increasing angular velocities.
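The readout procedure just described can be sketched as follows; since Eq. 1 is not reproduced in this excerpt, the conversion from per-round-trip drift to angular velocity is written here as one plausible reading of the text (drift divided by the per-round-trip scale factor), and all variable names are illustrative assumptions.

```python
import numpy as np

def roundtrip_resolved_omega(separation_ns, scale_per_rt, smooth=64):
    """Round-trip-resolved angular velocity from the CW-CCW pulse separation
    (gyroscopic bias already removed). `scale_per_rt` is the assumed scale
    factor in (deg/s) per ns of Sagnac drift accrued in a single round trip."""
    n = np.arange(separation_ns.size)
    slope, intercept = np.polyfit(n, separation_ns, 1)     # mean Sagnac drift per round trip, ns
    residuals = separation_ns - (slope * n + intercept)    # cumulative sum of instantaneous deviations
    eps = np.diff(residuals, prepend=residuals[0])         # instantaneous deviation per round trip, ns
    omega = scale_per_rt * (slope + eps)                   # deg/s, one value per round trip
    kernel = np.ones(smooth) / smooth                      # moving-window smoothing against
    return omega, np.convolve(omega, kernel, mode="same")  # sampling artefacts (25 ps resolution)
```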
Figure 1(c) shows the second regime of operation of the bidirectional laser, where the CW and CCW pulses have equal repetition rates, resulting in their observed parallel trajectories in the spatio-temporal dynamics. The lengths of the output coupler fibre ports were chosen to ensure a small separation between the counter-propagating pulses when combined. As can be seen in Fig. 1(c) or Fig. S3, the pulses preserve a separation of ∼100 ps. The locking of the pulse repetition rates of the counter-propagating pulse trains is caused by passive synchronisation owing to cross-phase modulation 30 and the SWNT dynamics [34][35][36] . The transition from the non-synchronised mode described above to the pulse-separation-locked regime does not modify the pulse parameters within experimental accuracy (see Fig. 5).
(Figure caption, displaced in extraction: Round trip time-resolved gyroscopic measurements. a, Temporal separation between CW and CCW pulses introduced by the Sagnac effect, for the stage at rest (blue curve) and in motion (orange curve); the drift due to the gyroscopic bias has been compensated, and the black dashed line is a linear fit indicative of constant angular velocity. b, Residual of the linear fit function (blue curve), which is the cumulative sum Σε_NRT of the instantaneous deviations of the pulse separation over each round trip; the jagged appearance is a sampling artefact, removed by a smoothing operation to give the orange curve. c, Round trip resolved angular velocity obtained using Eq. 1, where ε_NRT is obtained from a first-order difference of the smoothed curve in b; the orange curve is again obtained by a smoothing operation that removes the sampling artefacts.)
When such pulse synchronisation occurs, the influence of the Sagnac effect cannot be directly observed using intensity-domain spatio-temporal approaches. To prove that the pulses exhibit stationary behaviour regardless of gyroscope rotation, the dynamics of the autocorrelation function over round trips were investigated (see Supplementary note S2) for different platform rotation velocities. To analyse the gyroscopic effect in this regime, we shift to the spectral domain and apply the DFT technique (see Supplementary note S2). The single-shot spectrum of the combined pulses is highly modulated due to interference between the stretched combined pulses (Fig. 4a and its inset, blue plot). It is known that the frequency of modulation of the interference pattern is inversely related to the temporal separation t sep of the pulses, ∆ν = 1/t sep 37 . The averaged single-shot spectrum (red curve in Fig. 4a) is in good agreement with the time-averaged spectrum recorded with the optical spectrum analyser (OSA) for the combined pulses (navy curve); the position of the characteristic Kelly sidebands correlates with that in the individual spectra of the counter-propagating pulses. The wavelength axis is obtained by mapping the obtained spectrum onto the averaged one measured using the OSA. In the previous scenario, the gyroscopic effect introduces a change in the temporal separation, which should cause a change in the modulation frequency. Here, however, the temporal separation between the pulses does not change appreciably with the changing angular velocity of the table (see Fig. S3), rendering the change in the modulation frequency unresolvable.
DFT analysis
Here, we can utilise the additional time-scale available to us -that is, the evolution of the spectrum over round trips -to reveal the presence of the gyroscopic effect. Figure 4b shows the recorded single-shot spectra over 5 000 consecutive round trips. The gyroscopic effect in this regime of operation of the laser manifests itself as a change in the tilt of the modulation of the DFT spectrum (see inset in Fig. 4b), ascertained by measuring a change in the frequency of the spectrally resolved intensity variation over round trips at ∼1556.9 nm (white dashed line, Fig. 4b). The magnitude of the effect can be obtained by calculating the FFT of the spectrally resolved intensity dynamics. Figure 4c shows how the change in the tilt is converted to a change in the frequency of modulation f mod as a function of the angular velocity of the table. Here, the FFT is computed over the dynamics measured across 5 000 round trips. The inset in Fig. 4c shows the round trip spectral intensity evolution for two angular velocities, measured along the spectral maximum (dashed line in Fig. 4b).
The inset in Fig. 4d shows a one-to-one correspondence between the trends of the actual angular velocity and the magnitude of the gyroscopic shift as revealed by the DFT modulation tilt, justifying the use of the linear fit (red line in Fig. 4d). The values obtained using the DFT-based measurements were individually confirmed via beat note measurements (see Supplementary note S3). The tilt in the spectral modulation when the platform is at rest appears due to initial differences in the carrier frequencies of the counter-propagating pulses, producing a carrier-envelope offset (CEO). The slope of the linear fit then determines the gyroscopic sensitivity S DFT , here 17.2 mdeg·s −1 /kHz, with an error of (3 kHz) −1 (as set by the FFT resolution). The use of the DFT methodology allows a resolution of the angular velocity of 7.2 mdeg·s −1 (125 µrad·s −1 ), while the theoretically predicted resolution according to Eq. 3 (see Methods) when N RT = 5000 is 10 µdeg·s −1 . Here, the trade-off is a loss in temporal resolution of the angular velocity drift. The resolution can be enhanced via moving-window FFTs or even higher-order windowing operations like the Wigner-Ville distribution 29,38 . The sensitivity can be enhanced by observing the intensity evolution over longer periods, here up to 3.96×10 5 round trips. Therefore, the sensitivity is in principle limited by the oscilloscope memory.
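A minimal sketch of this spectral-domain readout follows: it extracts the modulation frequency along the round-trip axis of a stack of single-shot DFT spectra. The array layout, function name and numerical values are assumptions for illustration and do not reproduce the processing code used in the experiment.

```python
import numpy as np

def dft_modulation_frequency(dft_spectra, rep_period_s, wl_index):
    """Frequency (Hz) of the intensity modulation along the round-trip axis of a
    stack of single-shot DFT spectra (rows = consecutive round trips, columns =
    wavelength bins), evaluated at one wavelength bin. A shift of this frequency
    with angular velocity is the gyroscopic readout described in the text."""
    signal = dft_spectra[:, wl_index] - dft_spectra[:, wl_index].mean()
    amplitude = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, d=rep_period_s)   # bin spacing = 1/(N_RT * T_rep)
    return freqs[np.argmax(amplitude[1:]) + 1]             # skip the DC bin

# With 5 000 round trips and a repetition period of ~67.7 ns, the FFT bin spacing
# is ~3 kHz, consistent with the resolution quoted above.
```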
Discussion and conclusion
Ours is the first demonstration of the application of real-time intensity and spectral domain approaches to measuring the gyroscopic effect in a bidirectional ring ultrafast fibre laser. A direct time-domain measurement of the Sagnac effect was previously considered a low-accuracy gyroscopic signal processing method. However, the advent of high-bandwidth detectors and oscilloscopes allows us to interrogate this effect directly by using the recently emerged methodologies of spatio-temporal dynamics and DFT. We have shown in proof-of-principle experiments how the spatio-temporal dynamics approach can increase gyroscope readout rates up to two orders of magnitude beyond commercially available options. The measurement configuration is also highly simplified, requiring only the use of a coupler for combining the counter-propagating pulses; accurate contemporisation of the pulses becomes less of an issue. For experimental evaluation of the gyroscopic effect, the real-time spatio-temporal dynamics turns the relative motion of the pulses into an advantage, while the DFT approach accommodates a tolerable, essentially fixed pulse separation. This technique eliminates the bias frequency drift caused by carrier-envelope offset frequency noise 39 and, hence, does not require additional stabilising elements to be introduced into the laser cavity. For conventional active configurations offering comparable resolutions, the lasers used typically require active stabilisation against environmental effects. Here, however, no form of stabilisation or even thermal isolation was applied. This can be attributed in part to the relatively short time-scales investigated and to the high stability of the ultrafast laser (jitter < 0.8 ps). The combination of the ultrashort pulse fibre laser design with the demonstrated signal processing approaches presents significantly new and highly promising techniques for interrogating the Sagnac effect. The capabilities of these techniques can be enhanced further, approaching the specifications of large ring laser gyroscopes, by improvement of the direct-detection electronic systems used for signal analysis. The performance of modern optical gyroscopes is often limited not by intrinsic physical effects, but rather by the available measurement methodology. Recently emerged techniques for the characterisation of optical fields, and of laser radiation in particular [18][19][20][21][22][23][24][25][26][27] , have the potential to revolutionise the field of optical gyroscopes. The current laser configuration provides a rotation resolution of 384 µrad·s −1 in the spatio-temporal approach for N RT = 10 4 round trips and 125 µrad·s −1 for DFT analysis over 5 000 round trips. Within the available range of platform rotation velocities, none of the presented techniques has demonstrated a dead-band in the gyroscopic measurements. While more advanced and large-scale active laser configurations still offer better stability and accuracy, the results obtained using the relatively simple and compact bidirectional ultrafast laser hold the possibility of further improvement by more than an order of magnitude within the current laser design by increasing the number of round trips, or alternatively, by using a resonator of larger scale. Extension of the laser cavity can lead to highly non-trivial pulse dynamics, as indicated by the observed deviation of the behaviour of the studied laser from the linear Sagnac approximation. This was also confirmed in a different context of achieving ultralong cavity operation 40,41 .
The general methods proposed here can be applied in various applications beyond gyroscopes, helping to reveal the underlying physics of bidirectional ultrafast lasers. Currently, no existing theoretical model is capable of completely describing the dynamics of the interaction between counter-propagating solitons inside a bidirectional mode-locked laser, most critically their interaction in the saturable absorber or their non-local interaction in the gain medium. Our further plans include the development of such a numerical model to analyse pulse dynamics in a rotating cavity, which will be published elsewhere.
In this work, we used a bidirectional mode-locked laser as a convenient and straightforward platform to generate and study the dynamics of ultrashort pulses during cavity rotation. However, recent works on seismology and gyroscopy using interferometers based on telecommunication optical fibre cables 42 , passive ultrafast FOGs 43 or microresonator gyroscopes 44 clearly demonstrate the high potential for enhancing their performance further by using the novel approaches and measurement methodology demonstrated here. Although it remains a subject for further investigation, we anticipate that the availability of a new generation of measurement techniques will lead to the development of new technology solutions for real-time gyroscopes. | 5,745.6 | 2019-03-07T00:00:00.000 | [
"Physics",
"Engineering"
] |
Redesigning fruit and vegetable distribution network in Tehran using a city logistics model
Department of Industrial Engineering, Iran University of Science and Technology
Article history: Received November 18, 2017; Received in revised format April 28, 2018; Accepted May 4, 2018; Available online May 5, 2018
Abstract: Tehran, as one of the most populated capital cities worldwide, is categorized in the group of highly polluted cities because of its geographical location as well as the increased number of industries, vehicles, domestic fuel consumption, intra-city trips and manufacturing units, and, in general, the excessive increase in the consumption of fossil energies. City logistics models can be effectively helpful for solving the complicated problems of this city. In the present study, a queuing theory-based bi-objective mathematical model is presented, which aims to optimize the environmental and economic costs in city logistics operations. It also tries to reduce the response time in the network. The first objective is associated with all beneficiaries and the second one is applicable for perishable and necessary goods. The proposed model makes decisions on the urban distribution centers location problem. Subsequently, as a case study, the fruit and vegetable distribution network of Tehran city is investigated and redesigned via the proposed modelling. The results of the implementation of the model through the traditional and augmented ε-constraint methods indicate the efficiency of the proposed model in redesigning the given network.
Introduction
Meeting citizens' public needs, especially for food, is one of the most important and perhaps the most fundamental elements of urban services. Besides, providing welfare and comfort for citizens entails proper deployment, optimal distribution, comprehensiveness and completeness of applications and usages, as well as diversity of supplied products in markets and shopping centers. This is because proper deployment of supply centers has a significant impact on reducing intra-city trips and traffic jams as well as on energy and cost savings. It is impossible to accomplish proper deployment of supply centers without considering the geographical factors of population, location, and space as well as other factors such as transportation infrastructures, land, fair access, adaptability and adjacency, population density, capability and capacity, environmental considerations, and parking space. In this regard, it is essential to develop models that take into account and apply these factors in urban designs to the extent possible (Yang et al., 2016).
Based on research conducted by the United Nations (UN), it is estimated that more than 60% of the entire world's population will be residing in urban areas by 2030 and above 70% by 2050. The high density of population in urban areas has caused various problems, including high energy consumption, air pollution, and traffic congestion. The advancement of logistics systems, such as on-time and smart retailing, inclines suppliers to keep their inventories at a low level and to make savings in storage costs. These factors have resulted in an increased frequency of demands for commodities and services with simultaneously reduced volumes per demand, followed thereby by increased traffic of freight vehicles and, consequently, increased emission of pollutants (Taniguchi et al., 2001). City logistics models can be effective for solving such complicated problems. In this regard, several policy measures have been implemented and assessed using various models in a number of cities around the world.
In the present study, a three-level network is investigated in order to optimize city logistics distribution operations and simultaneously reduce the economic and environmental costs. Meanwhile, it is attempted to minimize the response time in the network. In the given network, the first level represents the logistics centers in suburban areas, the second level represents the distribution centers inside the city, and the third level represents the sales terminals as demand points across the city. It is supposed to select some fixed sites for constructing urban distribution centers. Besides, it is necessary to make decisions on the capacity of the distribution centers as well as on the manner of allocating these distribution centers to the logistics centers and the sales terminals to the distribution centers. The demand for commodities is considered probabilistic, and the network is modelled based on queuing theory. In the provided model, the policy of putting a tax on carbon and deploying low-carbon emission resources at the urban distribution centers is used. Afterwards, the mathematical model presented in this work is applied as a case study in order to design a fruit and vegetable distribution network in Tehran. Initially, the fruit distribution status in this city is described. Then, using the data and information gathered from the sources and organizations affiliated to Tehran Municipality, it is attempted to adjust the required parameters of the problem to the extent possible. Finally, the results derived from solving the mathematical model via the traditional and augmented ε-constraint methods in this case study are presented. Results of the present study indicate the high efficiency of the proposed model in achieving its objectives and the preference for the augmented approach in comparison with the traditional one. At the end, the conclusion as well as some suggestions for future studies are provided.
Review of literature
City logistics was introduced for the first time by Taniguchi in 2001. Since then, many researchers have presented papers and studies with a focus on this area. Notwithstanding these works, mathematical modelling of city logistics requires further attempts as well as the development of relevant models. In this regard, numerous terms and definitions have been proposed to date in order to express the concept of city logistics. Among them, it would be better to adopt the most comprehensive definition (Wolpert & Reuter, 2012). Some of the definitions proposed in this regard are as follows:
a. Freight transportation in urban areas (Barceló et al., 2005)
b. Routing and displacing commodities and associated activities such as warehousing (Qiu & Yang, 2005)
c. Optimizing urban freight transportation systems (Crainic et al., 2009)
d. Providing various services for the optimal management of the displacement of commodities in cities (Dablanc, 2007)
e. Optimization process of logistics and transportation activities in urban areas considering all beneficiaries (Taniguchi et al., 2001)
The last definition, by Taniguchi et al. (2001), seems to be the most comprehensive.
Objectives of city logistics can be defined from two perspectives. In the first perspective, these objectives can be categorized as economic, environmental, and social, while the second perspective deals with mobility, sustainability, viability, and flexibility.
So far, numerous studies have been conducted in order to investigate and identify the modellings of city logistics presented by various researchers (Anand et al., 2012;Anand et al., 2015;Muñuzuri & Pablo, 2012;Wolpert & Reuter, 2012).
According to these studies, most of the modellings have been performed with a focus on the economic and environmental objectives and some others have addressed the problems of crisis and disaster as well as the issue of emergency logistics in cities (He et al., 2013). Optimizing the location of logistic facilities in metropolitan areas at any time, either crisis or normal conditions, is considered of great importance due to its considerable effect on traffic congestion and air pollution (Duren & Miller, 2012).
The majority of the modellings have been performed from the viewpoint of city's authorities and managers. However, the sustainable and green objectives have been highly regarded by the authors in recent years (Teimoury et al., 2017). Among such research projects, Yang et al. (2016) and Moutaoukil et al. (2015) can be mentioned.
Several innovative projects have aimed to reduce the emission of CO2 and greenhouse gases in urban areas, which has been pursued mainly in three ways: stabilizing the flow of commodities, applying low-emission vehicles, and setting regulations for access control to urban centers. Stabilization of the flow of commodities, which is mainly based on the use of a single distribution center, seems to be a suitable solution for optimizing the final delivery inside the city.
In addition to these works, it would be an interesting idea to apply the queuing theory in order to optimize the demand responding time in city logistics systems and, consequently, focus on increasing the customer satisfaction in addition to attempting to reduce logistic costs (Saeedi et al., 2018).
Problem presentation and mathematical modelling
Freight vehicles gather commodities and goods from logistics centers (LC) in the suburban areas and then transfer them to the intra-city distribution centers (DC) for further processing (including packaging, storage, combining, barcoding, etc.). Eventually, these commodities are distributed extensively among sales terminals (ST), also called demand points (Saeedi et al., 2018). In the present study, the objective of the problem was to select some fixed sites for constructing urban distribution centers. Due to the limitation of capital costs, only a small number of distribution centers could be constructed and, subsequently, only a certain number of these activated centers would receive governmental support to be equipped with low-carbon facilities (e.g. equipment that can consume liquefied natural gas as fuel, or more complex structures in designing distribution centers with an optimal carbon rate). Furthermore, regarding the carbon tax policies adopted by the government and city managers, the costs of carbon emissions resulting from the processing of commodities in distribution centers as well as from transportation operations by vehicles within the network should be taken into consideration. The ultimate objective was to minimize the total operational costs as well as to minimize the response time. The first objective could be attractive for all beneficiaries and the second one is appropriately applicable for perishable and necessary commodities.
In this network, the nodes and commodities played the roles of server and customer, respectively. At the network's nodes, operations such as production, storage, packaging, barcoding, cutting, mixing, combining, loading, discharging, sorting, processing, and delivery were performed. The governing conditions of the problem were associated with uncertainty. Thus, under such conditions, the demand for commodities and the service-providing time were considered as probabilistic.
3.1. Assumptions
The assumptions of the model are as follows:
- Each sales terminal can supply the demand for a certain commodity only from a single distribution center, but there is no limitation on supplying the sales terminals' demands from several distribution centers.
- Each node of the network is considered as an M/M/1 queuing system (a small numerical illustration of the resulting response-time quantities is given after the parameter list below).
- Service time at the network's nodes is probabilistic and is considered to follow an exponential distribution.
- Entry of demand into the sales terminals is considered probabilistic with an exponential distribution, and the quantity of each demand is considered to follow a uniform distribution.

Symbols and parameters
- Total fixed cost at the DCs
- Cost unit of processing at distribution center j for r-type commodity
- Cost unit of transport of r-type commodity from distribution center j to each sales terminal
- Cost unit of transport of r-type commodity from logistics center i to each distribution center
- pej: Carbon emission unit from all processing stages at distribution center j
- tej: Carbon emission unit of vehicles from distribution center j to each sales terminal
- ei: Carbon emission unit of vehicles from logistics center i to each distribution center
- Ui: Commodity supply capacity of logistics center i
- W: Number of DCs planned to be constructed
- V: Number of resources with low carbon emissions that should be allocated to the distribution centers
- a: Carbon tax rate
- b: Carbon emission reduction percentage at each distribution center where the low-carbon resources have been considered
- Demand entry rate at the network's nodes
- Service-providing rate at the network's nodes
- Parameter of the negative exponential distribution
- c: Lower bound of the uniformly distributed random variable that indicates the quantity of commodity in a demand
- d: Upper bound of the uniformly distributed random variable that indicates the quantity of commodity in a demand
- Response time of the system for commodity type r from node i to node k, going through the DC located at node j
- Sojourn time of commodity type r in the system
- Waiting time of commodity type r in the queue
- Average number of commodities in the system
- Average number of commodities in the queue
- Transportation time for commodity type r from node i to node j
- Transportation time for commodity type r from node j to node k
- Transportation speed for commodity type r from node i to node j
- Transportation speed for commodity type r from node j to node k
Decision variables
Amount of r-type commodity carried from logistics center i to distribution center j
Zj: 1 if DC j is set up; 0 otherwise
Cj: processing capacity designed at distribution center j
Pj: 1 if low-carbon resources are allocated to distribution center j; 0 otherwise
1 if commodity type r is delivered from LC i to DC j; 0 otherwise
1 if commodity type r is delivered from DC j to ST k; 0 otherwise
Mathematical model
The first objective function minimizes the total operational cost, which comprises two parts: the first is the total operational cost excluding the carbon tax, and the second is the carbon tax cost imposed as a result of implementing the carbon tax policy. The first part includes four items: the fixed cost of constructing the distribution centers, the total variable cost of processing at the distribution centers, the total cost of delivery from distribution centers to sales terminals, and the total cost of transportation from logistics centers to distribution centers. The second part consists of three items: the carbon cost resulting from the processing stages at the distribution centers, the carbon cost of delivery from distribution centers to sales terminals, and the carbon cost of transportation from logistics centers to distribution centers. The second objective function minimizes the total response time in the network.
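Because the model's equations (1)–(29) are not reproduced in this extract, the following is only a schematic sketch of how the first objective described above is typically assembled; the grouped cost and emission terms, and the use of symbols f_j, Z_j, P_j, a, and b from the notation list, are illustrative assumptions rather than the paper's exact formulation:

$$
\min Z_1 \;=\; \Big(\sum_j f_j Z_j + C_{\text{proc}} + C_{\text{DC}\to\text{ST}} + C_{\text{LC}\to\text{DC}}\Big) \;+\; a\Big(E_{\text{proc}} + E_{\text{DC}\to\text{ST}} + E_{\text{LC}\to\text{DC}}\Big),
$$

where the processing-emission term $E_{\text{proc}}$ would be scaled by $(1 - b\,P_j)$ at distribution centers allocated low-carbon resources, consistent with the definitions of b and Pj above.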
Constraints (3) and (4) state that a distribution center cannot join the distribution activities as long as it is not constructed. Constraint (5) states that some amount of a certain commodity will be delivered from a certain logistics center to a certain distribution center only when the relationship between that logistics and distribution center has been established. Constraint (6) states that only the activated distribution centers will be equipped with low-carbon resources and equipment. Constraint (7) states that only the activated distribution centers will have capacity. Constraint (8) expresses that the capacity of each distribution center should be larger than or equal to its total output flow. Constraint (9) states that the sum of capacities of all distribution centers should be larger than or equal to the sum of demands of the sales terminals. Constraint (10) states that the capacity of each distribution center should be larger than or equal to its total input flow. Constraint (11) states that the sum of capacities of all distribution centers should be larger than or equal to the total flow that is transferred from all logistics centers to all distribution centers, which means that the distribution centers should be large enough for storing all the commodities carried from logistics centers. Constraints (12) and (13) refer to the logistics center's ability to supply commodities. Constraint (12) indicates that the total amount of the commodity that is transferred from each logistics center to the distribution centers should be less than the capacity of that logistics center. Constraint (13) also shows that the total amount of the commodity that is transferred from logistics centers to distribution centers should be less than total capacity of the logistics centers. Constraint (14) states that the sum of flows of the r-type commodity entering from all distribution centers into each sales terminal should meet the demand for the r-type commodity of that sales terminal. Constraint (15) shows that for each commodity, the input flow to each distribution center should be larger than or equal to its output flow. Constraint (16) states that the total input flow to a distribution center should be larger than or equal to its total output flow. Constraint (17) states that for each commodity, sum of the output flows coming out of all logistics centers should be larger than or equal to the sum of demands of all distribution centers. Constraint (18) shows that the total output flow coming out of the logistics centers should meet the total demand of the sales terminals. Constraint (19) states that the sum of fixed costs of the activated distribution centers should not be larger than the value of the available budget. Constraint (20) shows that W distribution centers should be constructed. Constraint (21) states that the resources with low carbon emission should be allocated only to V activated distribution centers. Constraint (22) ensures that all logistics centers will join the distribution activities. Constraint (23) states that none of the activated distribution centers should be without relationship. In fact, the purpose of considering Constraints (22) and (23) is to utilize the potential of all logistical centers and activated distribution centers for supplying the commodities. 
In cases where the capacities of the intended centers are so tight that it would be difficult even to supply the sales terminals' demands per unit of time, there is no need to include these two constraints; the model itself will try to utilize the capacity of all of these centers in order to supply the given demand. Constraint (24) shows that each distribution center can obtain its demand for the r-type commodity from at most a single logistics center. Similarly, Constraint (25) shows that each sales terminal can obtain its demand for the r-type commodity from at most a single distribution center. Constraint (26) shows that the system's response time for the r-type commodity carried from logistics center i to sales terminal k through distribution center j equals that commodity's time of presence at the first to third levels of the network plus its transportation time between the network's levels. It should be noted that the total time of presence in the system is equal to the sum of the times of presence at the first to third levels of the network. Constraint (27) states that the binary decision variables can take only the values 0 and 1. Constraints (28) and (29) state that the remaining decision variables are non-negative and, where applicable, integer.
Queuing model
The studied queuing network is a series-parallel network consisting of three levels. Each service-providing node in the network is assumed to be an M/M/1 queue, with exponentially distributed service times with parameter μ, and the system follows the FIFO (first-in/first-out) discipline. It is assumed that the demand for commodities at demand point k follows an exponential distribution with parameter λk. Since each distribution center serves a group of demand points, the demand rate of each distribution center equals the total demand of its downstream service receivers:
λj = Σ λk over all sales terminals k assigned to DC j, for every DC j (30)
Moreover, the commodity demand at the logistics centers also follows an exponential distribution, whose parameter is obtained as
λi = Σ λj over all DCs j assigned to LC i, for every LC i (31)
A schematic figure of the network illustrates this three-level structure. In this model, the demand for a commodity is expressed by two indices: demand occurrence and demand size at each occurrence. The occurrence of demand for each commodity is a random variable U with an exponential distribution and density function fU(u) = λ e^(−λu), u ≥ 0, and the demand size at each occurrence is a random variable V with a uniform distribution and density function fV(v) = 1/(d − c), c ≤ v ≤ d.
Since these two random variables are independent of each other, their joint density is the product of the two marginal densities, i.e. f(u, v) = fU(u) fV(v) for u ≥ 0 and c ≤ v ≤ d, and 0 otherwise. Therefore, in the multi-commodity model of the present study, the r-type commodity demand of the sales terminals (demand points) follows an exponential distribution with a commodity-specific rate parameter. Also, ρ = λ/μ is the utilization (traffic intensity) of each node, and the standard M/M/1 relations W = 1/(μ − λ), Wq = λ/[μ(μ − λ)], L = λ/(μ − λ) and Lq = λ²/[μ(μ − λ)] hold at each node. Accordingly, Eq. (44) gives the average sojourn time in the system across the three stages of the network, and Eqs. (45)-(47) give, respectively, the average waiting time in the queue, the average number of commodities in the system, and the average number of commodities waiting in the queue.
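As a minimal illustration of how these M/M/1 quantities can be computed for each node, assuming Poisson demand aggregated from downstream nodes (Eqs. (30)-(31)) and exponential service, the short Python sketch below uses illustrative arrival rates (not the case-study values) together with the service rates 12, 8, and 4 reported later for the three network levels:

```python
def mm1_metrics(lam, mu):
    """Standard M/M/1 results for arrival rate lam and service rate mu (requires lam < mu)."""
    if lam >= mu:
        raise ValueError("Queue is unstable: arrival rate must be below service rate.")
    rho = lam / mu                  # utilization (traffic intensity)
    W = 1.0 / (mu - lam)            # mean sojourn time in the system
    Wq = rho / (mu - lam)           # mean waiting time in the queue
    L = lam * W                     # mean number in the system (Little's law)
    Lq = lam * Wq                   # mean number in the queue
    return {"rho": rho, "W": W, "Wq": Wq, "L": L, "Lq": Lq}

# A DC's demand is the sum of the arrival rates of the sales terminals it serves,
# and an LC aggregates the demand of its assigned DCs (Eqs. (30)-(31)).
st_rates = {"ST1": 0.8, "ST2": 1.1, "ST3": 0.6}   # illustrative arrival rates
dc_rate = sum(st_rates.values())                   # lambda of the DC serving ST1-ST3

print(mm1_metrics(st_rates["ST1"], mu=4.0))        # a sales terminal (third level)
print(mm1_metrics(dc_rate, mu=8.0))                # the distribution center (second level)
print(mm1_metrics(dc_rate, mu=12.0))               # the logistics center serving this DC (first level)
```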
Based on the above points, the second objective function of the problem (Eq. (2)), which equals the total response time, is built up from five parts: the sojourn times at the three levels of the network and the two transportation times between levels. By substituting the M/M/1 expressions above, the explicit form of the second objective function is obtained.
Linearization of the first objective function
In the above model, item (1) in both the first and the second parts of the first objective function includes the multiplication of two decision variables. To simplify the solution, these products can be linearized as described below.
For the first item in the first part, a standard logical reformulation can be used: an auxiliary variable is introduced that equals the continuous variable when the associated binary variable equals 1 and equals 0 otherwise. By this definition, the product of the two variables is replaced by a single variable, and the corresponding logical equations are represented in the constraints of the model. However, because of constraint (7), the binary variable can be dropped entirely from this term of the objective function without defining an auxiliary variable such as ACj.
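The logical equations referred to above are not shown in this extract; the following is the standard big-M linearization for replacing a product w = z·C of a binary variable z and a continuous variable C bounded above by M, which is presumably the kind of reformulation intended here:

$$
w \le M\,z, \qquad w \le C, \qquad w \ge C - M(1-z), \qquad w \ge 0 .
$$

If z = 0 these constraints force w = 0, and if z = 1 they force w = C, so the bilinear term can be replaced by w in the objective without changing the optimal solution.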
For item (1) in the second part, the optimal solution can be obtained under analogous conditions: the triple-sum flow term involved in the product is replaced by a non-negative auxiliary variable MC, and the MC-based logical equations are added to the model's set of constraints. With these substitutions, the first objective function of the problem can be written in linear form as Eq. (53).
Fruit distribution network in Tehran
Tehran, one of the most populous capital cities in the world, is also one of the most polluted cities worldwide owing to its growing population, industries, number of vehicles, and ever-increasing consumption of fossil fuels. Air pollution is among the most important environmental issues challenging the people living in this city. A considerable portion of urban air pollution is created by motor vehicles and other mobile sources.
Fig. 3. Location of Tehran Central Market of Fruit and Vegetable
Several years ago, the Municipality constructed a number of markets and centers in order to eliminate the role of dealers, which led to the emergence of the "Tehran Municipality Management of Fruit and Vegetable Organization". The purpose of this organization has been to establish the facilities required for supplying and distributing fruits, vegetables, and other agricultural products. Since then, other duties and responsibilities have gradually been delegated to the organization, including constructing the Central Market of Fruit and Vegetable to provide facilities for fruit and vegetable transactions, supplying the daily markets, reducing the traffic load and thereby air pollution, controlling distribution, and breaking up exclusivity by removing dealers from the market. In addition, the qualitative and quantitative development of the local markets, in line with policies against overcharging, was included in Tehran Municipality's programs. Accordingly, there are currently about 219 markets, local markets, and fruit and vegetable markets across Tehran, which together constitute Iran's largest food and agricultural products supply network, capable of meeting the needs of hundreds of thousands of people every day. The Central Market of Fruit and Vegetable of Tehran, a logistic center with an area of 270 ha located in the southern part of the city, plays a significant role in managing the supply of food to these markets.
The daily visits of hundreds of thousands of people to these markets represent a considerable number of city trips made to meet citizens' daily needs. According to surveys conducted by Tehran Municipality's General Office of Social and Cultural Studies, the number of citizens who walk to these markets has increased considerably in recent years. According to the same reports, the statistics also indicate a reduced number of trips by personal car to these markets. Moreover, one-stop shopping for daily needs at the markets, together with their improved accessibility, has reduced the time and distance covered by those who do use personal cars. Hence, by constructing and expanding the fruit and vegetable markets across the city of Tehran, the volume of intra-city trips by personal cars has been reduced considerably, which in turn has led to a considerable reduction in traffic congestion, air pollution, and fuel consumption. This is all the more notable given that Tehran's population keeps growing and its vehicle fleet increases by several thousand every year. Fig. 4 shows the locations of the fruit and vegetable markets across Tehran.
Necessity of redesigning fruit distribution network in Tehran
Given the above points, the creation and development of fruit and vegetable markets across Tehran and the construction of the central market of fruit and vegetable as a large logistic center appear to have largely achieved the objectives intended by the relevant authorities and decision-makers. Easy and fair access to food and agricultural products, elimination of dealers and monopolists, control and management of product distribution, and reduction of the intra-city traffic load and air pollution are among the objectives successfully accomplished. However, in order to achieve these objectives more fully, reduce the freight-vehicle traffic load that has so far been disregarded in distribution operations, and reduce the pollutants resulting from freight-vehicle operations in Tehran, the fruit and vegetable distribution network in Tehran needs to be redesigned. Transportation by freight vehicles between more than 200 markets and the central market, as the main supplier, entails long distances and, as a result, considerable environmental and economic costs. Such costs can be reduced by adding another level to the fruit distribution network in Tehran and constructing intra-city distribution centers. The mathematical model presented in this study can be used as a useful pattern and tool in redesigning the fruit distribution network in Tehran.
Data and values of parameters
The next stage aimed to redesign the fruit and vegetable distribution network in Tehran by applying the proposed model to the available information. In this study, the 22 districts of the city of Tehran were considered as candidate points for constructing the urban distribution centers. Information on 171 major sales markets (out of the total of 219 markets mentioned earlier) was obtained from the relevant organization. Furthermore, the direct distances between the centers of these 22 districts and the Tehran Fruit and Vegetable Central Market, as well as the distances from the district centers to the sales markets, were calculated in kilometers. Fig. 5 shows the 22 districts of Tehran.
Fig. 5. Twenty-two districts of Tehran
On this basis, the network was composed of 1 logistic center (the central market) with unlimited capacity, 22 candidate locations for constructing urban distribution centers, and 171 sales terminals as demand points. Moreover, since the demand for all fruit was supplied by the bulk-sales markets in a single unit of weight (e.g. tons), the number of commodity types was set equal to 1. The budget constraint on constructing the distribution centers was not taken into account. A total of 5 distribution centers would be constructed, 3 of which would be equipped with low-carbon equipment.
The parameters related to the sales markets' demand were estimated using the statistics published by Tehran Municipality in 2015, which are retrievable from the organization's website, with the population of the 22 districts taken as the basis for estimation. Thus, the sales markets have different demands depending on the district in which they are located: the larger a district's population, the higher the upper and lower limits of demand for that district. The demand occurrence rate for districts with very low populations was set lower than that of districts with high populations. Table 1 reports the values of these parameters. Owing to the unavailability of further information, the other parameters of the model were initialized as explained below.
Fixed costs for establishing distribution centers are uniformly distributed between 5,000 and 6,000 monetary units, and the variable processing cost at the distribution centers ranges uniformly between 10 and 20 monetary units. The carbon emission reduction percentage at any distribution center equipped with low-carbon resources was set to 50%, and the carbon tax rate was assumed to be 30%. The service-providing rates at the first to third levels of the network were set to 12, 8, and 4, respectively. The average speeds of the vehicles at the first and second levels of the network were 30 and 20 km/h, respectively; the transportation time between network levels is therefore the distance between the network points divided by the mean speed. The other parameters are initialized in Table 2:
Table 2 Values of other parameters
Uniformly between 10 and 20 monetary units per unit of commodity
Uniformly between 4 and 5 monetary units per unit of commodity and unit of distance
Uniformly between 2 and 4 monetary units per unit of commodity and unit of distance
pej: uniformly between 100 and 600 units of carbon emission per unit of commodity
tej: uniformly between 0.5 and 3 units of carbon emission per unit of commodity and unit of distance
ei: uniformly between 1 and 2 units of carbon emission per unit of commodity and unit of distance
Ui: uniformly between 1000 and 2000 units of commodity
Solution and results
Considering the above values, we now describe the results obtained by solving the model with the epsilon-constraint method using the BARON solver in GAMS. The epsilon-constraint method converts a multi-objective optimization problem into a single-objective one. It is one of the best-known approaches for multi-objective optimization and works by transferring all objective functions except one into the constraints at each stage. In other words, one of the objectives is optimized as the main objective while the others are treated as constraints bounded by epsilon values (Ehrgott, 2005; Bérubé et al., 2009). The method was first developed by Haimes et al. (1971), and its details were later described by Chankong and Haimes (1983).
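A minimal sketch of the ε-constraint loop is given below. It is illustrated on a small, made-up linear bi-objective problem solved with scipy.optimize.linprog, not the actual mixed-integer model solved with BARON in GAMS; the objective vectors, constraints, and grid of ε values are purely illustrative of the mechanics described above (keep one objective, move the other into a bounded constraint, and tighten the bound step by step):

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative bi-objective LP: min f1 = x1 + 2*x2 and min f2 = 3*x1 + x2
# subject to x1 + x2 >= 4, 0 <= x1, x2 <= 5.
c1, c2 = np.array([1.0, 2.0]), np.array([3.0, 1.0])
A_base, b_base = np.array([[-1.0, -1.0]]), np.array([-4.0])
bounds = [(0, 5), (0, 5)]

def solve_min(c, extra_A=None, extra_b=None):
    """Minimize c @ x subject to the base constraints plus optional extra rows."""
    A = A_base if extra_A is None else np.vstack([A_base, extra_A])
    b = b_base if extra_b is None else np.concatenate([b_base, extra_b])
    res = linprog(c, A_ub=A, b_ub=b, bounds=bounds, method="highs")
    return res.x

# Payoff table: range of f2 over the individual optima of f1 and f2.
x_f1 = solve_min(c1)
x_f2 = solve_min(c2)
f2_worst, f2_best = c2 @ x_f1, c2 @ x_f2

pareto = []
for eps in np.linspace(f2_worst, f2_best, 8):          # 8 grid points, analogous to Delta steps
    x = solve_min(c1, extra_A=c2.reshape(1, -1), extra_b=np.array([eps]))
    if x is not None:                                   # skip infeasible epsilon levels
        pareto.append((c1 @ x, c2 @ x))
print(pareto)
```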
In the proposed problem, the first objective, i.e. the total operational cost (TC), was treated as the main objective and the second objective, i.e. the response time (RT), as the secondary objective, and the objectives were formulated accordingly for the epsilon-constraint method. In order to determine the Pareto points, each objective function was first solved separately; the results are reported in Table 3.
Table 3 Single-objective optima
Min TC: low-carbon DCs (Pj = 1) in districts 9, 15, 17; activated DCs (Zj = 1) in districts 6, 9, 15, 17, 18; RT = 17.180; TC = 5,997,060.690
Min RT: low-carbon DCs (Pj = 1) in districts 16, 17, 19; activated DCs (Zj = 1) in districts 5, 7, 16, 17, 19; RT = 14.126; TC = 6,347,208.016
The ideal value for the first objective function and the worst value for the second one were 5,997,060 and 17.180, respectively, and the problem did not have multiple optimal solutions; thus, no solution dominated the above optimum. Subsequently, based on the ε-constraint method with Δ = 0.3, the optimal points of the problem were generated, and the best obtained solutions, i.e. the non-dominated Pareto solutions for the objective functions, were identified. The consecutive iterations of the ε-constraint method yielded 8 solutions for the problem, the characteristics of which are provided in Table 4. The values of the objective functions in the first and last rows of that table represent the ideal and nadir values of the two objective functions.
Fig. 6. Dominated and non-dominated points
In most of the obtained Pareto solutions, District (19) and its adjacent districts were activated as distribution centers. Since the logistic center, i.e. the central market of fruit and vegetable, is located in District (19), it can simultaneously play the role of a distribution center and cover the demand of District (19) and its adjacent districts. Hence, on the premise that no distribution center is needed in the districts of southern Tehran, the focus should be placed on locating the other 4 distribution centers in the remaining districts. Therefore, the demands of District (15) and the other districts adjacent to the logistic center would be covered by it directly, and the other 4 distribution centers would be located among the remaining 16 districts.
The results of solving the model with this approach, which was more appropriate than the previous setting, are provided in Table 5. In this setting, the ideal value for the first objective function and the worst value for the second one were 4,760,693 and 15.25, respectively. Subsequently, based on the ε-constraint method with Δ = 0.5, the optimal points of the problem were generated and the best obtained solutions, i.e. the non-dominated Pareto solutions for the objective functions, were identified. The consecutive iterations of the ε-constraint method yielded 10 solutions for the problem, the characteristics of which are provided in Table 6. The values of the objective functions in the first and last rows of this table represent the ideal and nadir values of the two objective functions. Fig. 7 shows all the efficient points. Therefore, the number of Pareto solutions obtained for this problem with Δ = 0.5 was 10. The choice of which efficient point to implement depends on the priorities of the decision-makers and beneficiaries. An important property of the set of efficient points obtained from the ε-constraint method is its exactness and completeness: if other methods for solving multi-objective problems were applied to the mathematical model, their final solution would be one of the efficient solutions obtained via the ε-constraint method.
Augmented ε-constraint method
Since Miettinen (1999) found that the solutions obtained by the ε-constraint method may be only weakly Pareto optimal, an improved method, namely the augmented ε-constraint method, is applied in order to generate better Pareto solutions. According to Mavrotas (2009), the results can be improved by applying some modifications to the method. These modifications, for a problem with p objective functions, introduce slack variables for the constrained objectives together with a small value eps, taken between 10^-3 and 10^-6 (Mavrotas, 2009). With these modifications, the solutions obtained by the augmented ε-constraint method are guaranteed to be efficient, and the generation of weakly efficient solutions is avoided (Mavrotas, 2009).
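The modified single-objective problem itself is not reproduced in this extract; one common statement of the augmented ε-constraint formulation for a minimization problem, following Mavrotas (2009) and presumably the form used here, is

$$
\min\; f_1(x) \;-\; \mathrm{eps}\times\Big(\frac{s_2}{r_2}+\frac{s_3}{r_3}+\dots+\frac{s_p}{r_p}\Big)
\quad \text{s.t.}\quad f_k(x) + s_k = e_k,\;\; s_k \ge 0,\;\; k = 2,\dots,p,\;\; x\in S,
$$

where the $s_k$ are the slack variables of the constrained objectives, $e_k$ are the right-hand-side (epsilon) levels, and $r_k$ is the range of objective $k$ obtained from the payoff table.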
In order to compare the ε-constraint and augmented ε-constraint methods, the augmented ε-constraint method was applied with 10 breakpoints and Δ = 0.5; the obtained results are shown in Table 7. As Table 7 makes clear, the results obtained with the augmented approach are better in terms of both objective functions; in other words, they dominate all the Pareto solutions obtained by the traditional ε-constraint method. The solutions also differ in their Zj values at some breakpoints. Fig. 8 illustrates the Pareto front obtained by the augmented approach (Objective 1, TC in millions, versus Objective 2, RT); as is evident, it forms a different front from the traditional one depicted in Fig. 9. Therefore, the solutions obtained by the augmented approach are proposed as the optimal solutions.
Conclusion
Reducing costs and pollutant emissions within city transportation and logistics distribution networks is crucial, especially in densely populated metropolitan areas. At the same time, for essential and perishable commodities, demand must be met as rapidly as possible. In the present paper, a bi-objective mathematical model was proposed for designing the city logistics distribution network based on queuing theory. In this model, the first objective was to minimize the environmental and economic costs at the network level, and the second was to minimize the response time in the network. The major innovations of this study were, on the one hand, the application of queuing theory to improve the precision, flexibility, and applicability of city logistics modelling and, on the other hand, the application of a carbon tax policy together with an incentive policy of allocating low-carbon resources to distribution centers in order to reduce pollutant emissions. Subsequently, as a practical case study, the fruit distribution network in Tehran was investigated and redesigned. On the whole, the findings obtained with the ε-constraint methods indicate the good performance of the proposed model in redesigning the given network.
Integrating the problem presented in this study with the vehicle routing problems, assuming the distances between the intra-city centers as orthogonal, and applying better methods for problem solving are some of the suggestions that can be presented for future studies. | 8,639.4 | 2019-01-01T00:00:00.000 | [
"Engineering",
"Environmental Science",
"Business"
] |
Uncertainty quantification of flood damage estimation for urban drainage risk management
This paper presents a method of quantifying the uncertainty associated with inundation damage data for an urban catchment when undertaking stormwater drainage design and management. Usually flood damage is estimated by multiplying the inundated asset value by the damage rate corresponding to the inundation depth. The uncertainty of the asset value and the damage rate is described by probability distributions estimated from an analysis of actual flood damage data from a national government survey. With the inclusion of uncertainty in the damage rate and asset value, the damage potential curve defining the damage-frequency relationship is no longer a deterministic single-value curve. Through Monte Carlo simulations, which incorporate the uncertainty of the inundation damage from the damage rate and asset value, a probabilistic damage potential relation can be established, which can be expressed in terms of a series of curves with different percentile levels. The method is demonstrated through the establishment of probabilistic damage potential curves for a typical urban catchment, the Zenpukuji river basin in Tokyo Metropolis, under two scenarios, namely, with and without a planned flood control reservoir.
INTRODUCTION
Flood damage estimation and flood inundation calculation are two essential processes in the risk-based design and management of flooding in urban areas. In these estimation and calculation processes, uncertainty of various kinds exists (NRC ; Pappenberger & Beven ; Apel et al. ). Yen & Ang () classified uncertainty into two types: objective and subjective. The former is associated with any random process that can be deduced from statistical samples; the latter is attributed to the lack of available quantitative information about the phenomena and processes. In flood damage estimation, reliable flood damage statistics should be used, if available, to carry out the flood risk assessment for a more defensible design and to inform decision making.
Flood inundation damage is generally calculated using stage-damage curves and loss functions (Smith ). Procedures for flood damage evaluation have been provided (FLOODsite ). The various parameters used in damage estimation, such as asset value and damage rate, are based on flood damage statistics and often possess greater uncertainty than the hydraulic inundation calculations. The uncertainty with regard to the hydraulic parameters affecting the depth of flooding is widely discussed (Pappenberger et al. ). However, data and models representing the exposure and damage parameters have generally gone unquestioned (Chatterton et al. ).
Several studies have been undertaken to analyze the uncertainty in flood inundation calculations and flood damage estimations (De Moel & Aerts ; Bates et al. ). However, very few studies deal with uncertainty quantification of flood damage estimates based on actual flood damage statistics. Thus, it is the objective of this study to focus on the uncertainty quantification of flood damage estimates based on a national survey of flood inundation damage in Japan (PWRI ).
FLOOD DAMAGE ESTIMATION METHOD
The method used to estimate flood inundation damage in this study follows the procedure developed by Morita (). Figure 1 outlines the estimation procedure, in which storm hyetographs are transformed into inundation depths prior to estimating the corresponding flood inundation damage in monetary terms using a flood damage prediction model. In the study, a set of design storm hyetographs of different return periods or rainfall intensity-duration-frequency (IDF) curves are used as the input storm data. Finally, the flood damage prediction model produces a damage potential curve that defines the relationship between design storm return period and the monetary value of the inundation damage.
In the study, a geographic information system (GIS)based flood damage prediction model is used. The model consists of two modules: Module-1 for calculating inundation depths from rainfall hyetographs and Module-2 for estimating flood inundation damage corresponding to the inundation depths provided by Module-1.
Flood damage prediction model
In Module-1, this study used the XP-SWMM 1D/2D model to calculate the inundation depths over an urban catchment under a given hyetograph. XP-SWMM is a stormwater modelling software that is widely used to deal with complex hydrologic, hydraulic and water quality problems in urban catchments (Phillips et al. ). Any other flood inundation simulation models with capabilities that are comparable to XP-SWMM can also be used. The numerical inundation simulation in the study adopted a 50 m × 50 m grid to match the mesh system in the Tokyo Metropolitan Government asset database. The modelling accuracy can be improved by adopting a much smaller grid size, which is now computationally feasible, to capture more accurately the local features that may have impacts on inundation depth and flood damage.
For flood damage prediction, flow velocity could be another factor which influences flood damage. However, in urban areas, the influence of inundation depth on flood damage estimation is more significant than that of flow velocity (Kreibich et al. ).
The monetary value of inundation damage is estimated by Module-2 as a function of the inundation depth (Morita ). Flood damage can be tangible and intangible. Tangible damage is evaluated on a monetary basis, and can be categorized into direct damage and indirect damage (Penning-Rowsell et al. ). Direct damage refers to physical damage to houses, household articles, offices, retail outlets, corporate assets and other items due to direct contact with floodwater. In this study, direct damage due to inundation was divided into 11 damage types, of which four types relate to building structures and seven types relate to movable items (see shaded cells in Figure 2). Indirect damage is due to business losses caused by direct damage to shops leading to service disruption and by damage to public infrastructure and utilities. Limited consideration of indirect damage was given in this study and is described later. Floods can also cause intangible damage, which cannot be expressed in monetary terms; examples include psychological trauma, loss of life, social unrest, public health issues, etc. In this study, the focus is on direct tangible damage to buildings and their contents, with limited consideration given to indirect damage. Flood induced intangible damage is not considered in the current study.
Flood damage statistics
In this study, flood inundation damage in an urban catchment mainly focuses on direct damage caused by physical contact with floodwater for 11 damage types (see Figure 2), including houses, household articles, business buildings (factories, offices, retail outlets), corporate assets, and others. The direct damage for household articles is calculated by multiplying the estimated asset value of an article by a damage rate, which is a function of the inundation depth computable from Module-1. The damage rate herein is the percentage of the value of household articles damaged by floodwater. Module-2 for inundation damage estimation follows, in general, the manual published by the River Bureau of the Ministry of Land, Infrastructure, Transport and Tourism ().
For flood damage estimation, data on damage rates and asset values were obtained from a nationwide flood damage survey in Japan conducted by the Public Works Research Institute of the Construction Ministry from 1993 to 1995 (PWRI ). The inundation damage survey covered asset value, monetary damage, damage rate, inundation depth, sedimentation depth, inundation duration, building structure type, and number of floors for each exposure. Hundreds of flood damage samples in the Tokyo Metropolitan area were used as a database for the flood damage estimation.
Flood damage estimation
To calculate the value of direct structural damage, the asset value is multiplied by the damage rate determined by the inundation depth. The same method was adopted for the damage to movable items in buildings. The direct damage for structures, DS, movables in domestic households, DM1, and movables in business buildings, DM2, are estimated for each 50 m × 50 m grid cell using Equations (1)-(3), where DM1(i) = direct movable damage for households in grid cell i; VM1 = valuation of domestic articles per unit floor area.
where DM2(i) = direct movable damage for business buildings in grid cell i; NM2k = number of employees of a business building per unit floor area for damage type k; VM2k = asset valuation of a business building per employee of type k.
In Equations (1)-(3), the variables α(i,j), A(i,j), and damage type k (k = 1-11) are obtained from the GIS database of the Tokyo Metropolitan Government. The other variables, VSk, VM1, NM2k, and VM2k, are extracted from the statistics of the Tax Bureau of the Tokyo Metropolitan Government. The total damage of each grid cell i is obtained by summing the damage to all buildings within the grid cell.
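Equations (1)-(3) themselves are not reproduced in this extract; the sketch below only illustrates the general pattern they describe, namely asset value multiplied by a depth-dependent damage rate and summed over the buildings in a grid cell. The data structure, coefficient values, and damage-type names are made-up assumptions for illustration, not the paper's exact per-type formulation:

```python
from math import exp

def damage_rate(depth_m, a, b):
    """Logistic damage rate (0-100 %) as a function of inundation depth (cf. Eq. (4))."""
    return 100.0 / (1.0 + exp(a + b * depth_m))

def cell_direct_damage(buildings, depth_m, coeffs):
    """Direct damage in one 50 m x 50 m cell: sum of asset value x damage rate over buildings.

    buildings: list of dicts with 'type' (damage type), 'floor_area' (m^2) and
    'unit_value' (asset value per m^2) -- a hypothetical structure for illustration.
    coeffs: damage-type -> (a, b) logistic coefficients.
    """
    total = 0.0
    for bld in buildings:
        a, b = coeffs[bld["type"]]
        rate = damage_rate(depth_m, a, b) / 100.0     # convert percentage to a fraction
        total += bld["floor_area"] * bld["unit_value"] * rate
    return total

# Illustrative use with made-up numbers
coeffs = {"house_structure": (2.0, -1.5), "household_articles": (1.5, -2.0)}
cell = [{"type": "house_structure", "floor_area": 120, "unit_value": 1500},
        {"type": "household_articles", "floor_area": 120, "unit_value": 300}]
print(cell_direct_damage(cell, depth_m=0.8, coeffs=coeffs))
```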
In order to estimate the monetary value of inundation damage using Module-2, GIS data of the private and corporate assets within the study catchment are utilized in the flood damage calculations by overlaying the asset data and the calculated inundation depth for each building. In the study, the GIS asset data of the Tokyo Metropolitan Government were used. Figure 3 shows a map superposing the calculated inundation depths on the asset data. In the damage calculations, the relationships between damage rate and inundation depth for damage type k in grid cell i are described by Rk(hi), which defines the damage rate as a function of inundation depth. The curves were obtained for the 11 direct damage types (see Figure 2) according to the inundation damage statistics (PWRI ). As an example, Figure 4 shows the survey data of damage rate versus inundation depth for the household articles in a private home. The two curves in Figure 4 are defined by the logistic functions obtained by means of the unconstrained least squares (ULS) method and the constrained least squares (CLS) method. Note that the curve obtained by the ULS method (the dashed line) produces a damage rate-inundation depth (y vs. x) relation that is a poor representation of the data, especially when the inundation depth x is very shallow or very deep. The logistic curve obtained by the CLS method (the solid line) is a better representation of the data, although considerable scatter of the data about the curve remains. The logistic function was adopted for its amenability to describing the damage rate-inundation depth relation because: (1) the value of the damage rate is bounded between 0 and 100; and (2) the damage rate at a given household location would, in general, increase monotonically from near zero at shallow inundation up to a certain depth beyond which full damage occurs. The use of the logistic model restricts the estimated damage rate to the interval [0, 100]. The methods using ULS and CLS to establish the damage rate-inundation depth relationship are detailed in Appendix A (available with the online version of this paper).
Indirect damage was calculated using a relation between the inundation depth and the number of business interruption days for each business entity. The indirect damage for business interruption was obtained by multiplying the number of days of interruption by the employees' added value per day of each business entity.
Although the proposed framework integrating GISbased flood inundation simulation and flood damage estimation is applied to flood risk management in an urban setting, it can equally be implemented, with proper modification, to rural areas. Modifications for Module-2 include inundation damage information about farmhouses and agricultural crops. As for Module-1, any numerical inundation simulation models suitable for overland flow on farmlands would be suitable.
Damage potential curve
The monetary inundation damage costs are calculated by the flood damage prediction model under design storms of various return periods. The resulting damage-frequency relation is described as a damage potential curve. The damage potential curve shows the relationship between design storm return period and flood damage. By only considering inherent randomness of rainfall without accounting for uncertainty in other factors, such as asset values and damage rates, such damage potential curves in general will show only a one-to-one relationship between damage and return period that does not account for the underlying scatter in the damage data (see Figure 4).
ASSESSING UNCERTAINTY IN FLOOD DAMAGE ESTIMATION
Flood inundation damage can be estimated by multiplying the asset value by the damage rate, which is a function of the inundation depth obtainable from Module-1. The damage rate-inundation depth relation defined by the solid line in Figure 4 can be used for deterministic flood damage estimation without having regard to the scatter of the data. However, it is clearly revealed in Figure 4 that the data variability is too significant to be ignored. Therefore, it is warranted to develop a probabilistic flood damage estimation model to capture the intrinsic variability of the damage rate and asset value.
Assessing uncertainty in the damage rate
To quantify uncertainty in the damage rate as shown in Figure 4, the scatter of the data around the estimated damage rate-inundation depth relation can be represented by a suitable probability distribution. As mentioned above, the estimated value of the damage rate should be bounded within [0, 100]. A logistic model for the damage rate is therefore a suitable choice for fitting damage rate-inundation depth data such as those in Figure 4, that is,
y = 100 / (1 + exp(a + bx)) (4)
where y = damage rate; x = inundation depth; and a and b = model coefficients. Equation (4) can be linearized as
ln[(100 − y)/y] = a + bx (5)
The best-fit model coefficients can be determined by linear regression using a least squares fit. Both the ULS and CLS methods (described in Appendix A, available with the online version of this paper) are used to determine the best-fit model coefficients. The logistic curve (the solid line) shown in Figure 4 was derived by the CLS method. The constrained method allows the incorporation of damage rate-inundation depth characteristics for the study site, giving a more sensible relation (see Appendix A for a comparison of the results produced by the two least squares methods).
Based on Equation (5), the residuals for Y = ln[(100 − y)/y] in the linearized model can be defined as the differences between the observed and fitted values of Y, with e_i being the residual corresponding to the i-th observed data point. According to linear regression theory (Kutner et al. ), the solid straight line in the linearized domain (see Figure 5) defines the mean response of Y conditioned on a specified inundation depth. To quantify the uncertainty in the damage rate, a probability distribution for the residuals around the solid straight line in Figure 5 is sought. The validation of assumed distributions was carried out by the chi-square test for four theoretical distributions: normal, lognormal, triangular, and uniform. The normal distribution was adopted herein to describe the dispersion, e ∼ N(μ = 0, σ = 1.788), as shown in Figure 6. The solid line and the data (x, Y) in Figure 5 were retransformed into the logistic curve in the x-y domain in Figure 7. The straight lines Y, Y + z0.95σ, and Y − z0.95σ in Figure 5 correspond to the mean and the upper and lower bounds of the 95% confidence interval of the logistic curves in Figure 7, respectively.
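A minimal sketch of the unconstrained version of this fit (ordinary least squares on the linearized model; the constrained variant described in Appendix A is not reproduced here) is given below, using synthetic data in place of the survey samples; the coefficient values and noise level are illustrative assumptions:

```python
import numpy as np

# Synthetic damage-rate data (%): rate rises with inundation depth (illustrative only).
rng = np.random.default_rng(0)
depth = rng.uniform(0.1, 2.5, 200)                         # inundation depth x (m)
true_a, true_b = 2.0, -2.5
noise = rng.normal(0.0, 1.0, depth.size)
y = 100.0 / (1.0 + np.exp(true_a + true_b * depth + noise))
y = np.clip(y, 0.5, 99.5)                                   # keep the logit transform finite

# Linearize: Y = ln((100 - y)/y) = a + b*x, then fit by ordinary least squares (ULS).
Y = np.log((100.0 - y) / y)
X = np.column_stack([np.ones_like(depth), depth])
(a_hat, b_hat), *_ = np.linalg.lstsq(X, Y, rcond=None)

# Residual scatter around the fitted line, used later as the damage-rate uncertainty.
resid = Y - X @ np.array([a_hat, b_hat])
sigma = resid.std(ddof=2)
print(f"a = {a_hat:.3f}, b = {b_hat:.3f}, residual sigma = {sigma:.3f}")
```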
Assessing uncertainty in the asset value
Not only the uncertainty of damage rates but also the scatter of asset value data should be assessed to quantify their uncertainty with a probability distribution. The log-normal distribution was found to properly describe uncertainty in the asset value data residuals in the same way as presented in Figure 6. The uncertainties in both the asset value and the damage rate were thus incorporated in the inundation damage calculation. The monetary inundation damage can then be calculated by multiplying the probabilistic asset values by the probabilistic damage rates. In this study, a Monte Carlo simulation was applied to quantify the uncertainty of direct damage for the 11 damage types for households and businesses.
RESULTS AND DISCUSSION
Application of the flood damage prediction model to an urban drainage area
The flood damage prediction model was applied to a typical urban catchment located in the Zenpukuji River basin in Tokyo Metropolis. The catchment has an area of 18.3 km2 and is densely populated, with a high concentration of private houses and business buildings. In the catchment, flood control reservoirs were constructed in the 1980s and 1990s to reduce flood inundation damage. At the present time, a new reservoir with a storage capacity of 200,000 m3 (denoted by the dark spot in Figure 3) is being planned and is expected to work effectively for flood control. In this study, two probabilistic damage potential curves were developed, with and without the proposed reservoir. Inundation calculations were carried out for every 50 m × 50 m grid cell within the study catchment for each design hyetograph. As an example, Figure 3 shows the flood inundation map for a 30-year storm under the present catchment condition. The calculated results were superimposed on GIS data developed by the Tokyo Metropolitan Government. The database includes asset data organized on a 50 m × 50 m grid, identical to the calculation grid used in this study.
Quantification of uncertainty in flood damage estimates
To quantify the uncertainty in flood damage estimates, a Monte Carlo simulation was undertaken using the Crystal Ball software. For each design storm of a chosen return period, 1,000 Monte Carlo repetitions were carried out in which the asset values and the damage rates were randomly generated; the random damage rate and asset value were treated as statistically independent. The flood damage was estimated for 24-hour rainstorms with 14 different return periods, i.e. 1.2, 2, 3, 4, 5, 7.5, 10, 15, 20, 30, 50, 100, 150, and 200 years. The input hyetographs were created from the rainfall IDF curves published by the Tokyo Metropolitan Government for the 14 return periods considered, using the alternating block method (Chow et al. ). The total flood damage in the Zenpukuji River basin was described as a relationship between flood damage percentile and return period (see Figure 8 for the case without the planned flood control reservoir and Figure 9 for the case with it).
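A condensed sketch of one Monte Carlo pass of this procedure (one return period, one damage type, one grid cell) is shown below. It assumes the linearized-logistic damage-rate model with normally distributed residuals (σ = 1.788 as reported above) and a log-normally distributed asset value; the coefficient values, asset-value parameters, and inundation depth are illustrative, and Crystal Ball is replaced here by NumPy:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1000                                    # Monte Carlo repetitions per design storm

# Fitted linearized logistic model Y = a + b*x with normal residuals (cf. Figures 5 and 6).
a_hat, b_hat, sigma = 2.0, -2.5, 1.788       # a, b illustrative; sigma from the paper
depth = 0.9                                  # inundation depth (m) for one cell and storm

# Probabilistic damage rate: sample Y, transform back to y in [0, 100], express as a fraction.
Y = a_hat + b_hat * depth + rng.normal(0.0, sigma, N)
rate = 100.0 / (1.0 + np.exp(Y)) / 100.0

# Probabilistic asset value (log-normal); median and dispersion are illustrative.
asset = rng.lognormal(mean=np.log(5.0e6), sigma=0.4, size=N)

damage = asset * rate                        # rate and asset value treated as independent
pct = np.percentile(damage, [10, 50, 90])
print(f"10th/50th/90th percentile damage: {pct}")
```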
In Figure 8, the probabilistic damage potential relation is represented by a series of curves, each associated with a different percentile level. The damage potential curve associated with the p-th percentile indicates the non-exceedance probability level of the flood damage amount. The heavy solid curve in the middle of Figure 8 denotes the median value (50th percentile) of flood damage for different return periods. The uncertainty of flood damage potential can be expressed as a confidence band, such as 20%, 40%, 60%, or 80%, centred around the median curve. The width of the confidence band increases with the confidence level. Figure 8 also shows that the width of the confidence band is narrower for small return periods and becomes broader as the return period increases. This can be attributed largely to the smaller scatter of damage rates for the lower inundation depth as shown in Figure 4.
When the planned new reservoir is in service, the damage potential curves for the river basin are expected to shift downward, as shown in Figure 9. This is because the flood control reservoir will reduce the inundation depth downstream of the reservoir and hence reduce flood damage. The comparison between the two sets of damage potential curves enables the effectiveness of the planned reservoir to be assessed while taking into account the uncertainty of the flood damage estimates. Multiplication of the damage potential curve and the storm probability curve yields a risk density curve, and the integration of the risk density curve produces the risk cost or annual expected damage (AED) (Morita , ). The AED, along with the uncertainty of the construction cost, can then be used in risk-based decision making for flood control planning and management (Morita ). When the damage potential relation is a single curve, without considering uncertainty in the damage rate and asset value, the determination of the risk density curve and risk cost is quite straightforward. However, the process for determining the risk cost becomes more cumbersome when the damage potential curve is probabilistic, as shown in Figures 8 and 9.
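As a small numerical illustration of the AED computation described here—integrating damage over the annual exceedance probability p = 1/T with the trapezoidal rule—the sketch below uses made-up damage values for a single (e.g., median) damage potential curve; the return periods and damage amounts are not the study's results:

```python
import numpy as np

# Illustrative damage potential curve: return period (years) vs. damage (million yen).
T = np.array([2, 5, 10, 30, 50, 100, 200], dtype=float)
damage = np.array([50, 180, 320, 620, 780, 950, 1100], dtype=float)

# Annual exceedance probability p = 1/T; integrate damage over p (the risk density curve).
p = 1.0 / T
order = np.argsort(p)                                   # integrate from small to large probability
d_sorted, p_sorted = damage[order], p[order]
aed = float(np.sum(0.5 * (d_sorted[1:] + d_sorted[:-1]) * np.diff(p_sorted)))
print(f"AED ~ {aed:.1f} million yen/year")
```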
CONCLUSIONS
A framework which integrates a GIS-based flood damage prediction model and flood damage estimation uncertainty is presented in this paper. A constrained logistic regression analysis was implemented to establish a probabilistic damage rate-inundation depth model for various damage types to houses, household articles and business buildings from actual surveyed flood inundation damage data for Metropolitan Tokyo. Based on the probabilistic damage rate-inundation depth relationships and probabilistic relation for the asset value, Monte Carlo simulations were undertaken to develop probabilistic damage potential curves. With the inclusion of uncertainty in the damage rate and asset value, the damage potential is no longer a single-value curve, but is subject to uncertainty. In a numerical example for an actual urban watershed in Metropolitan Tokyo, its probabilistic flood damage potential can be expressed in terms of a series of curves, each corresponding to different percentile levels. The probabilistic flood damage potential curve can be transformed into the flood damage area chart with stipulated reliability values for flood risk management in urban areas. Because probabilistic damage potential curves are not unique, additional treatment is needed to obtain the AED for evaluating the effectiveness of a flood control project when including uncertainty in the flood damage estimates. | 4,922.4 | 2018-09-01T00:00:00.000 | [
"Environmental Science",
"Engineering"
] |
Electromagnetic Detection System with Magnetic Dipole Source for Near-Surface Detection
This paper proposes a nondestructive, separate transmitter-receiver (TX-RX) electromagnetic measurement system for near-surface detection. Unlike the traditional dual-coil integrated design, the proposed transient electromagnetic (TEM) system performs shallow subsurface detection using an independent TX coil and movable RX coils. This configuration requires a large primary field so that the far-away secondary field can generate reliable induced voltages. To achieve this goal, a bipolar current-pulsed power supply (BCPPS) with a late resonant charging strategy is designed to produce a sufficiently large magnetic moment in the exciting coil with low source interference. The magnetic dipole source (MDS), which accounts for a large proportion of the system's weight, is separated from the field observation device and does not need to be dragged or transported during the detection process. This setup lowers the weight of the scanning device to 3 kg and greatly improves measurement efficiency. The results of the laboratory test verify the effectiveness of the separate MDS and RX module system. Field experimental detection further demonstrates that the proposed system can realize highly efficient shallow-surface detection within a 200 m range of the MDS device.
Introduction
Nondestructive detection using electromagnetic wave propagation characteristics for near-surface anomalies has received a lot of attention in different applications such as geoarchaeological mapping, municipal maintenance, substation grounding grid detection, and unexploded bomb detection [1][2][3][4]. The time-domain transient electromagnetic method (TEM) typically employs two different multi-turn coils: the TX coil in the transmitter and the RX coil in the receiver. The former transmits the time-varying current to generate the primary electromagnetic field and the latter receives the secondary field-induced voltage generated from the eddy currents of subsurface conductors [5,6]. TEM systems are very effective in geological detection [7,8].
Conventional TEM detection devices configure the TX coils and RX coils in the same place [9]. The weak-coupling design of the towed TEM system [10] improves detection sensitivity and investigation depth; however, the calibration demands huge computational power. Moreover, it is difficult to accurately locate the relative position of the TX loop and RX coil. The opposing-coils TEM system [11,12] can reduce the effect of the inherent mutual induction between the TX loop and RX coil at the expense of an increase in weight due to the complex coil configuration; the detection result is also sensitive to the vertical position of the RX coil. The towed TEM system uses the dual-transmitter-moment measurement technique to obtain geoelectric information on different subsurface layers [13]. The overly long offset (>9 m) between the TX loop and RX coil makes it difficult to operate the all-terrain vehicle and obtain valid data consistently. In frequency-domain detection, heavy or large-volume mobile scanning devices also cause implementation challenges, which greatly reduce detection efficiency and increase the cost [14,15].
Recently, airborne TEM (ATEM) systems have been developed for fast and convenient detection [16]. ATEM requires a large TX magnetic moment. Thus, TEM systems carried by helicopters utilize a gigantic coil with a radius of more than 20 m for deep geological surveys [17,18]. A bucking coil is connected on one side of the TX coil loop to achieve primary field compensation [19]. A transmitter with more electronic components and energy-storage devices is also needed to ensure that the current pulses in the coil can generate a strong primary field [20]. These TX-RX-integrated systems are not used for near-surface exploration due to their high cost and difficulty in operation.
For low-cost and large-region exploration, a semi-ATEM (SATEM) system, which uses a grounded-wire source on the surface to generate an electromagnetic field and a receiver hanging from an unmanned aerial vehicle to acquire the secondary field in the air, has been developed [21,22]. This requires not only a solid electrical connection with the soil using electrodes or steel rods but also wires several kilometers long to send current pulses into the ground. The power supply also needs to provide tens of kW of power to ensure a sufficiently large current to excite the magnetic and electric fields, owing to the high earth resistance and long grounded wires [23,24]. Moreover, the combined effects of the transmitter wire and grounding points challenge the simplicity of the secondary response [25].
In this paper, a fast electromagnetic measurement system for near-surface detection is proposed. Unlike current commercial TEM instruments, the fixed-position TX module and the mobile RX device are completely separated. Moreover, the system has a detection mode similar to SATEM but uses a smaller magnetic source and a lower power grade. The transmit coil in the proposed system employs a magnetic dipole source designed to circumvent the challenges associated with SATEM, which requires laying extensive transmit loops over several kilometers. Additionally, the system integrates a drone-mounted receiving system weighing a mere 3 kg, thereby sidestepping the logistical constraints of take-off sites and construction environments faced by manned helicopters. Concurrently, this drone-mounted approach significantly enhances detection efficiency compared with traditional ground-based TEM systems. A bipolar current pulse power supply (BCPPS) has been designed to overcome the magnetic field attenuation caused by the long offset between the transmitter and receiver. The BCPPS employs resonant charging in the late turn-off period to attain a large TX magnetic moment for a relatively small coil. With the BCPPS and a high-precision sampling design, the proposed system can realize rapid scanning and real-time detection of the near surface within 200 m of the magnetic dipole source (MDS). In addition, the weight of the scanning unit is lowered to 3 kg, which greatly improves construction efficiency and cuts the vehicle cost. The working principles, system design, and performance evaluation in the field experiment are elaborated in detail in the following sections.
Detection Principles
2.1. Magnetic Dipole Source for Rapid TEM Detection
The TX coil and RX coil of TEM detection systems are usually integrated and arranged around the same central point [10,11,13,14]. Figure 1a depicts such a configuration. The offset between the RX coil and TX coil is either small or the two are arranged at the same central point. In this configuration, a transmitter with a small magnetic moment output can ensure the magnitude of the induced voltages; however, the transmitter, receiver, and dual-coil combination need to be moved as a whole along the survey line. The weight and volume of such mobile devices are very large, requiring high labor and transportation costs, and it is difficult to complete construction in some uneven areas. To overcome the above defects, a nondestructive TEM system with variable offsets is proposed, whose instrument design differs from traditional equipment. Figure 1b shows the schematic of the proposed TEM system with an MDS for near-surface detection. The TX submodule includes the BCPPS and the TX coil, which generates large trapezoidal current pulses and diffuses a strong primary field into the shallow subsurface. The corresponding RX submodule includes a receiver and the RX coils, which are carried on a portable vehicle for rapid detection. Both the BCPPS and the receiver are operated and monitored remotely using wireless communication modules and a control system with a standard software interface. The TX time period of the current pulses and the sampling time of the induced voltages are synchronized through the world time of the global positioning system (GPS). Meanwhile, the real-time kinematic (RTK) module records the position information of the two separated units and uploads it to the control software for real-time mapping.
Response Analysis

When the TX coil produces a fixed primary field and the RX module movably scans the near surface in a certain range, underground conductors with different resistivities induce voltages in the RX coils through the eddy current effect after the exciting current pulses are turned off. The three-component inductive coils are arranged in orthogonal order to reflect the variation of the magnetic fields in different directions. The anomaly locations can then be quickly found through data processing and imaging using the system software.

In light of the findings presented in the literature [6] and depicted in Figure 2, the magnetic field components along the four directions in the dipole field coordinate system can be expressed as Equation (1). When ϕ = 0 and the observation point is in the plane of the loop (z = 0), the z-component of the primary field can be simplified as Equation (2). Equation (2) shows that regular magnetic fields are built up in the surrounding space when the time-varying current flows through the exciting coil placed horizontally on the ground. Increasing the TX magnetic moment M enhances the primary field outside the MDS.
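Assuming the standard static field of a vertical magnetic dipole, the simplified z-component in the plane of the loop would take the form below (a sketch consistent with the description of Equation (2), not necessarily its exact expression):

$$ |H_z(r)| = \frac{M}{4\pi r^3}, \qquad M = N I \pi a^2, $$

where N is the number of turns, I the TX current, a the coil radius, and r the horizontal distance from the MDS. The 1/r³ decay is what motivates the large TX magnetic moment pursued in the remainder of the paper.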
To ascertain the viability of the proposed detection system, COMSOL (V5.4) simulations were conducted to model the system's response signal. The geoelectric model is shown in Figure 3a, in which a metal cube with a side length of 0.3 m is buried 200 m away from the MDS on the y-axis. The comparison results of the finite element simulation with different MDS parameters are shown in Figure 3b,c. The parameters in Figure 3b are set according to the scheme in the literature [26], and the parameters in Figure 3c are set according to the target parameters of this paper. It can be seen that the larger and steeper current pulses in the MDS excite a stronger secondary field response outside the loop, so that the magnetic field above the low-resistivity anomaly shows a greater relative difference from other locations.
Finally, the transient response of a nonmagnetic half-space needs to be further discussed. To simplify the analysis, consider the induced electromotive force (EMF) uz in the Z direction. It is shown below that uz is critical in the subsequent data acquisition and position mapping. This induced EMF generated by the RX coil at a distance r from the MDS can be expressed as Equation (3) [7].
where α is the transition coefficient, Sr is the effective area of the RX coil, and tf is the falling time of the current. J0 denotes the zero-order Bessel function [7]. G(λ, t) is a function of the variable λ and the observation time t; moreover, it is also related to the resistivity and thickness of the formation and can be obtained using the inverse Laplace transform [25].

It can be seen from Equation (3) that a larger peak value I and a shorter falling time tf of the TX current contribute to a higher response amplitude. The resistivity difference of the underground medium changes the magnetic field above it; this is further reflected by the induced voltage in the RX coils.
BCPPS and New Charging Strategy
According to the physical model established in Section 3, an increase in distance between the RX module and the MDS rapidly weakens the magnetic intensity. As a result, when the observation device is far from the TX coil, the voltage induced by the secondary field may decay to a level less than that of the ambient electromagnetic noise. This makes it difficult to extract a valid signal for further processing. A solution may be to increase the intensity of the primary field generated using the MDS to strengthen the secondary field at the observation point a distance away.
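To make this attenuation concrete, the short sketch below evaluates the static dipole estimate Hz ≈ M/(4πr³) at several offsets; the 3.75 × 10⁴ A·m² moment is the value quoted in the Conclusions, while the 1 × 10³ A·m² comparison figure is an assumed stand-in for a small integrated coil:

```python
import math

def dipole_hz(moment_am2: float, r_m: float) -> float:
    """Magnitude of the z-component of a vertical magnetic dipole field
    in the plane of the loop (A/m), using the static dipole estimate."""
    return moment_am2 / (4 * math.pi * r_m ** 3)

M_PROPOSED = 3.75e4   # A*m^2, peak moment quoted in the Conclusions
M_SMALL = 1.0e3       # A*m^2, assumed moment of a small integrated coil

for r in (50, 100, 200):
    print(f"r = {r:>3} m: proposed {dipole_hz(M_PROPOSED, r):.2e} A/m, "
          f"small coil {dipole_hz(M_SMALL, r):.2e} A/m")
```

At 200 m the primary field is three orders of magnitude weaker than at 20 m, which is why the design raises the TX moment rather than relying on receiver sensitivity alone.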
To achieve this goal, a BCPPS that can generate a large current pulse in a highly inductive TX coil is required [19]. In addition, a flat top, a sharply rising edge, and a highly linear falling edge are three distinct features of the current pulse that have positive effects on the measured response [27].
Two conventional BCPPS topologies for ground TEM detection have been proposed for this purpose. One is the clamping scheme using a TVS [28], and the other is the steep pulse current source scheme [26]. The respective circuit diagrams are shown in Figure 4. The former exhibits a slowly rising edge because the high-voltage clamping only occurs during the falling of the pulse [28]. This limits the repetition frequency of the current pulses and the detection efficiency. Moreover, the heating of the TVS during clamping increases the failure risk with operating time. The steep pulse current source scheme employs a boost module to supplement the energy gap and improve the edge steepness of the pulse. However, low charging efficiency makes it difficult to increase the peak value and edge steepness [26]. The multi-pulse charging process also causes considerable noise during the data measurement process [29,30].
Proposed BCPPS
This paper develops a new BCPPS for the TEM detection system using an MDS. Figure 5 depicts the circuit diagram. The source part includes a conventional low-voltage DC power supply E1 and a high-gain boost converter E2, which supplies the voltage kU1. The converter is paralleled with a capacitor bank CH to form the constant voltage source. The most critical part of the proposed BCPPS is the resonant charging module, as shown by the red dashed block in Figure 5. It consists of two switches, S5 and S6, and an energy storage capacitor, Cb. The resonant charging module performs the energy storage, clamping control, and power switching. It fully charges Cb in the late turn-off period to realize the high-current pulse and enable low-interference output for the MDS. The proposed BCPPS also contains an H-bridge inverter, which ensures the alternating output of current pulses. The TX coil is modeled using the inductance Lo, the resistance Ro, and the stray capacitance Co. A damping resistor Rd1 is connected in parallel at the terminals of the coil to overcome the underdamped oscillation of the current pulse caused by Co. The selection of Rd1 has been described in [28]. For simplicity, only Lo and Ro of the coil are considered in the analysis. Since Rd1 is very large, around several hundred ohms, neglecting Co does not affect the analysis result in the steady state.

Figure 6 illustrates the waveforms of the voltages and currents of the proposed BCPPS. The positive and negative current pulse cycles are shown. The analysis below describes only the positive half cycle because the operation principle of the negative cycle is the same as that of the positive one. Current io denotes the current pulse generated by the TX coil.

The variables used in Figure 6 are described as follows. To denotes the period of the bipolar current pulse. The duty cycles of the switches S1 to S4 are R1/2, the duty cycle of S5 is R2, and that of S6 is r. The time frame of half the pulse cycle is partitioned into the intervals [0, t1], [t1, t2], [t2, t3], and [t3, t7]. The first interval is the pulse rising period, the second is the flat-top period of the pulse, the third is the pulse falling period, and the fourth is the pulse turn-off period. Critical parameters include the pulse rise time tr = t1, the pulse falling time tf = t3 − t2, the flat-top duration tFT = t2 − t1, the current turn-off duration toff = t7 − t3, and the turn-on duration of the switch S6, tb = t6 − t5. These parameters can be expressed in terms of To and the duty cycles.
Operational Principles
Four operation modes of BCPPS are considered.Mode 1 is the pulse rising period t r .Mode 2 is the pulse flat-top period t FT .Mode 3 is the pulse falling period t f = t 3 − t 2 .Mode 4 is the pulse turn-off period, where the turn-off time is t off .It includes two parts.The first part appears at the early stage of this period, [t 3 , t 4 ].It is called the measuring window and is marked by the red shape in Figure 6.The second part appears in the late stage, [t 5 , t 6 ].It is called the charging window and is marked in blue in Figure 6.
The corresponding circuit configurations of these four operation modes are shown in Figure 7a-d.Detailed operation principles are explained below.
Mode 1
During the tr period of the positive current pulse, switches S1, S4, and S5 are turned on, while S2, S3, and S6 are off. Note that both the rise time and the steepness of the current pulse depend on the turn-on interval of S5. It can be seen from Figure 7a that Cb, S5, S1, Lo, Ro, and S4 form a resonant loop, and the current in the TX coil rises rapidly to its peak value Ip. The dynamics of the current pulse are governed by Equation (4), with the electrical elements selected so that the loop is underdamped. Suppose that the initial voltage of the capacitor Cb is U0. Solving Equation (4) yields the maximum instantaneous current Ip = io(t1), expressed in terms of the parameters μ1 and δ1.
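A minimal sketch of these relations, assuming the loop Cb–S5–S1–Lo–Ro–S4 behaves as an ideal underdamped series RLC circuit discharged from U0 (these are standard RLC results, not copied from the paper):

$$ L_o \frac{di_o}{dt} + R_o i_o + \frac{1}{C_b}\int_0^t i_o\,d\tau = U_0, \qquad \frac{1}{L_o C_b} > \left(\frac{R_o}{2L_o}\right)^2, $$

$$ i_o(t) = \frac{U_0}{\mu_1 L_o}\, e^{-\delta_1 t}\sin(\mu_1 t), \qquad \delta_1 = \frac{R_o}{2L_o}, \quad \mu_1 = \sqrt{\frac{1}{L_o C_b} - \delta_1^2}, $$

so that $I_p = i_o(t_1)$.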
Mode 2
This is the flat-top stage of the current pulse. It can be seen from Figure 7b that switches S1 and S4 remain on while S5 is turned off. The supply channel between Cb and the H-bridge inverter is cut off, so diode D5 conducts. The TX coil is now powered by E1, and S1 and S4 are the only switches conducting. Neglecting the voltage drops of the diodes and switches, the current follows a first-order loop equation, from which the instantaneous current If at the end of this stage can be obtained. In fact, the flat-top current is not perfectly flat; it falls approximately linearly. The pulse is of a trapezoidal type (flat top) only when U1 = RoIp. Thus, in actual detection applications, the proper selection of U1 and tFT is critical to obtain an accurate secondary field signal.
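A minimal sketch of the flat-top dynamics, assuming an ideal first-order loop supplied by E1 with switch and diode drops neglected; the slope expression is inferred from the stated flatness condition U1 = RoIp rather than taken from the paper:

$$ L_o \frac{di_o}{dt} + R_o i_o = U_1, \qquad \left.\frac{di_o}{dt}\right|_{i_o \approx I_p} \approx \frac{U_1 - R_o I_p}{L_o}, $$

which vanishes exactly when $U_1 = R_o I_p$, giving the trapezoidal (flat-top) pulse.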
Mode 3
This is the falling edge of the current pulse. In this period, switches S1 and S4 are turned off while their opposing freewheeling diodes D2 and D3 are forward-biased. When the current in the TX coil decreases to zero, all remaining energy is transferred to the energy-storage capacitor. Assuming that the voltage of Cb at the end of this stage is U2, the second-order differential Equation (4) in Mode 1 also applies to this stage.
Mode 4
This is the turn-off period of the current pulse; during this period, io = 0 and iAB = 0. This mode performs two critical operations. One is to record the induced voltage. This operation occurs at the early stage of this period; it is called the measuring window and is marked in red in Figure 6. The other is to charge the high-voltage capacitor Cb for the generation of the next current pulse. This operation occurs in the later part of this period; it is called the charging window and is marked in blue in Figure 6.
In the measuring window, switches S 1 to S 6 are all turned off and the high-gain boost converter is in the no-load state.The switching noise of BCPPS is the lowest.This creates a low-interference period for the receiver to sample the induced voltages.On the other hand, the early secondary field signal is, relatively, more sensitive to the underground anomalies because of the separated MDS, particularly when the distance between the TX loop and RX coils is tens of meters away.In this case, the BCPPS is quiet during the measuring time.This is good for obtaining accurate detection results.
In the charging window, S6 is turned on and the other switches remain off. It can be seen from Figure 7d that the high-gain boost converter, E2, Lb, and Cb form a resonant loop to charge the capacitor. The equivalent circuit of the resonant charging loop is shown in Figure 8.
Req represents the equivalent resistance of switch S6, inductor Lb, and capacitor Cb. The capacitor voltage is governed by the second-order resonant loop equation for t ∈ [t5, t6], from which the current through the inductor can be obtained, provided the parameters satisfy the underdamped resonance condition. The capacitor voltage uCb and the charging current ib are then expressed in terms of the parameters μ2, δ2, and θ2. It follows from Equation (16) that the effective charging time is tc = π/μ2 and that, at t = t5 + θ2/μ2, the charging current ib reaches its peak value Ib,max. The maximum instantaneous output power of the high-gain boost converter E2 must exceed kU1·Ib,max. Moreover, the saturation current of the inductor Lb must be larger than Ib,max. Since the resonant process needs to be completed within the on-time tb of S6, the value of Lb must be selected accordingly. Another advantage of this design is that the charging current drops to zero before S6 is turned off; thus, there is almost no turn-off loss in the switch. When the resonant charging is completed, the capacitor voltage uCb reaches its maximum value. The above derivation is the basis for the capacitance and voltage selection of Cb. It is also known that the clamping voltage between the terminals of the TX loop is determined by the gain of the converter, the parameters of the passive devices, and the conducting time of the switches in the proposed BCPPS.
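A hedged sketch of the charging-window relations, assuming a lightly damped series Lb–Cb loop driven by the constant source kU1 from an initial capacitor voltage U2 (standard lossless resonant-charging approximations, offered only as a consistency check against the quantities named above):

$$ \mu_2 \approx \frac{1}{\sqrt{L_b C_b}}, \qquad t_c = \frac{\pi}{\mu_2} \approx \pi\sqrt{L_b C_b}, $$

$$ I_{b,\max} \approx (kU_1 - U_2)\sqrt{\frac{C_b}{L_b}}, \qquad u_{C_b}(t_5 + t_c) \approx 2kU_1 - U_2, $$

and the requirement $t_c \le t_b$ translates into $L_b \le t_b^{\,2}/(\pi^2 C_b)$.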
Considering that the energy storage capacitor can still rely on the energy feedback of the load inductor to raise its voltage in the case of non-resonant charging, Lb can only participate in the replenishment process when the charging voltage is higher than the upper limit of the clamp voltage under single-wave charging; this sets the minimum design value of the converter gain k. Theoretically, the larger the gain, the more energy is transferred from the power supply to the capacitor Cb; however, the current output capability of the boost converter and the output parallel capacitor must also be considered, which sets the maximum design value of k, where IM is the peak output current of E2 and CH. Compared with other charging schemes, such as a current-limiting resistor, the proposed scheme not only achieves high-efficiency, low-loss charging but also greatly reduces the source's interference in the sampling interval of the RX signal.
System Design
This section describes the design considerations of the proposed MDS-TEM measurement system.The main units include the MDS, data acquisition and processing, and system integration.Figure 9 shows the hardware of these components.The numbers in the parentheses of the components correspond to those in Figure 1b.
Design of MDS
The MDS module consists of the TX coil and the new BCPPS. The BCPPS is designed to supply high-current pulses to the TX coil with a peak magnitude exceeding 300 A. The circuit diagram is shown in Figure 5 and the hardware is shown in Figure 9. Switches S1 to S5 are IGBTs, as needed for the high-current pulse, and S6 is a MOSFET, which has low conduction loss and is more efficient for the capacitor charging process. A boost converter with an output-voltage gain adjustable from 10 to 40 is used to charge the capacitor in the late turn-off period of the current pulse. The magnitude of a current pulse can be adjusted by varying the boost gain. In field detection, the gain k can be selected according to the power provided by E1 and the size of the area to be detected.

The MDS module can be set at one location and does not need to move around during each TEM detection process. The TX coil is wound with 40 turns of metal wire, and its diameter is about two meters. The MDS (coil plus BCPPS) weighs around 30 kg.
Design for Data Receiving
In the conventionally integrated TX-RX configuration, the coupling effect is one of the design considerations [10,11,13].This effect is no longer a concern in the proposed MDS-TEM system because RX coils and the TX loop are separate and far away from each other.The scanning platform only contains the RX module, which includes RX coils and the controller for sampling and data transmission.This greatly simplifies the design process.This structure facilitates the voltage induction from three orthogonal planes because the magnetic fields above the detection area are oriented in multiple directions.Figure 10 shows the diagram of RX signal acquisition and the processing unit.
In Figure 10, σ(t) is the induced EMF, Rd2 is the damping resistance, and Ri, Li, and Ci are the internal resistance, inductance, and distributed capacitance of the RX coil, respectively. The distributed capacitance Ci is lowered using a piecewise winding structure and fillers in the multi-layer winding, which expands the −3 dB bandwidth of the coil. The transfer function from σ(t) to u(t) can then be derived from this equivalent circuit. The amplified analog signal is processed using a band-pass filter and a high-precision AD converter. When the digitized signals are transmitted to the FPGA, the induced voltages during the early turn-off period of the current pulse are extracted and uploaded to the control platform via wireless communication.
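Assuming the usual arrangement in which Rd2 and the distributed capacitance Ci appear in parallel across the output of the Ri–Li coil branch, the transfer function mentioned above would take the second-order form below (a sketch, not necessarily the exact expression derived in the paper):

$$ \frac{U(s)}{\Sigma(s)} = \frac{R_{d2}}{L_i C_i R_{d2}\, s^2 + (L_i + R_i R_{d2} C_i)\, s + R_i + R_{d2}}, $$

a damped low-pass response whose bandwidth grows as Ci is reduced, consistent with the remark about the −3 dB bandwidth.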
System Integration
The proposed MDS-TEM also contains a remote control platform.One controller is designed specifically for BCPPS.Another is to sample the induced voltage in the RX module.Both controllers are embedded in the real-time kinematic (RTK) module.The functions are used to record the positions of the MDS and RX for data normalization and to upload to the control software (V1.1.2) for real-time mapping.RTK also generates the GPS pulses for the synchronization of these two controllers.The time difference between these two modules is limited to 100 ns by a cycle correction algorithm to contain the instability of the crystal oscillator in FPGA.
A 2.4G wireless communication module is developed to receive and send data between controllers and a PC so that BCPPS can be controlled by a PC from a distance (hundreds of meters away).The module also monitors the charging voltage and current pulse.The induced voltage of the RX coils can be uploaded remotely and used for real-time mapping.This paves the way for small drones or unmanned vehicles to be deployed to scan the target areas and collect data.
Experimental Results
This section presents the experimental results to verify the effectiveness of the proposed MDS-TEM system.Three aspects are evaluated.First, current pulses generated by the proposed BCPPS are compared with those generated using conventional BCPPS circuits to show the benefits of the proposed BCPPS circuit.Second, the induced voltages of the RX module are presented to indicate that the detection of an anomaly is possible.Third, field test results are shown to demonstrate the detection effectiveness of the proposed MDS-TEM system.
Performance of the BCPPS
Table 1 lists the main parameters of the MDS and BCPPS. Figure 11 shows the current pulses generated using the three BCPPS circuits.
Figure 11a shows the current pulse generated using the clamping scheme; it exhibits a steep falling edge and a very slowly rising edge, and its magnitude is less than 60 A. The current pulse shown in Figure 11b is generated using the boost topology; it displays a flat top, linear and fast rising and falling edges, and a magnitude of up to 80 A. Figure 11c shows the current pulses generated using the proposed method with adjustable voltage gains. The pulses exhibit linear and steep rising and falling edges with magnitudes ranging from 160 A to 270 A. For example, with a charging voltage of 920 V, the peak current reaches 270 A in 0.4 ms. This provides a large TX magnetic moment for a magnetic dipole coil of about 2 m in diameter with 40 turns. Note that the flat top of the current pulse is not strictly flat, especially when the charging voltage gain is large. The slope of the flat top can be improved by modifying the circuit parameters according to Equation (11).
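As a quick consistency check (a sketch only; the 1 m radius is assumed from the roughly 2 m coil diameter stated in the System Design section), the TX magnetic moment M = N·I·π·a² evaluates as follows:

```python
import math

N_TURNS = 40      # turns of the TX coil
RADIUS_M = 1.0    # m, assumed from the ~2 m coil diameter stated earlier

def tx_moment(peak_current_a: float) -> float:
    """TX magnetic moment M = N * I * pi * a**2 in A*m^2."""
    return N_TURNS * peak_current_a * math.pi * RADIUS_M ** 2

for i_peak in (270, 300):
    print(f"I = {i_peak} A -> M = {tx_moment(i_peak):.2e} A*m^2")
# 300 A gives about 3.77e4 A*m^2, close to the 3.75e4 A*m^2 quoted in the Conclusions.
```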
Field Test
A designed experiment was carried out in an agricultural area with less electromagnetic interference to evaluate the performance of the proposed system.Figure 12a shows the test arrangement.The field scanned using the RX module was a 30 m × 20 m rectangular area that was partitioned by dashed lines into small squares with a size of 2.5 m × 2.5 m.The intersection points of the dashed lines are the measuring points at which the RX scanning is taken.The scanned area is at least 160 m away from the center of the TX coil.
A metal bucket was buried 1.5 m underground, at the intersection point marked by the red dot in Figure 12a, as the anomalous object. The diameter of the cylindrical bucket is 0.25 m, and its height is 0.3 m. The peak current of the pulse was 270 A after adjusting the converter's gain. The region was scanned continuously using the RX module. The measured waveforms of the induced voltages at all measuring points were extracted together with their RTK coordinates. These waveforms were used to locate anomalous objects.
Figure 12b shows the induced voltages recorded at Z-coil during the early stage of the turn-off process.The red curve is the voltage measured directly above the anomalous body, as shown by the red dot in Figure 12a.The black curves are voltages recorded at other measuring points.The red curve of the voltage in Figure 12b shows a smaller amplitude than the black curves, and it changes the polarity from negative to positive much earlier compared with the black curves.This may be because the eddy current generated by the underground metal body slows down the attenuation of the magnetic field.This phenomenon is consistent with the theoretical analysis presented in Section 2. Additionally, Figure 12c displays the simulated results of an induced voltage with (red curve) and without (black curve) a metallic body.The zero-crossing characteristics when a metal body is present align with the experiment data, with the zero-crossing points occurring earlier than in the scenarios without a metal body.
Figure 13 shows the contour map of the induced voltage scan. It represents the induced voltage signal at 452 µs following the commencement of the transmitting current turn-off. The horizontal and vertical axis coordinates correspond to the scanned area of Figure 12. In Figure 13, the areas in red denote the positively induced voltages, and the remaining areas denote the negative voltages. The polarity change of the induced voltage is mainly due to the stronger eddy current effect of the underground, low-resistance metal body compared with that of the soil. This polarity change can be used to locate shallow, underground, low-resistance anomalous objects. Thus, it is concluded that the anomalous body is buried below the area in red. Figure 13 can be obtained immediately after the scanning of the target region because the induced voltage is extracted directly. This result shows that the proposed MDS-TEM system provides an alternative means for near-surface detection in addition to the conventional RX-TX-integrated systems [10,13].

Table 2 shows the operation parameters of the systems in [10,13] and those of the proposed system. The proposed system offers a much higher TX peak magnetic moment and a smaller RX size and weight than the other two TEM systems. The length of the mobile acquisition device is less than 1.0 m, and the weight is less than 3.0 kg. Separate and small RX modules allow for more flexibility in operation and offer much larger surveying areas than the TX-RX-integrated systems.
Conclusions
This paper presents an MDS-TEM system with separate TX and RX configurations. The system has a fixed excitation source, a magnetic dipole source, and a mobile RX module that is small, lightweight, and remotely controlled. A new BCPPS design is also proposed to address the challenge that the primary field may not be large enough to produce a strong secondary field at a distance from the TX module from which valid induced voltages can be obtained. The critical unit of this BCPPS is the resonant charging module, which performs energy storage, clamping control, and power switching. It provides resonant charging in the late turn-off period to realize the high-current pulse and low-interference output for the MDS. Detailed operation principles of the BCPPS have been described, and performance comparisons with conventional BCPPS circuits were presented. Compared with other TEM systems, the proposed system, because it uses separate transmitting and receiving coils, faces the limitation that the primary field generated by the transmitting coil travels a greater distance to the anomaly targets. This limits the system's maximum detection depth and range. We address this issue by designing a resonant dual-wave supplementary energy circuit to increase the magnetic moment, thereby compensating for deficiencies in detection depth and range. Additionally, due to structural differences in the system, the correction and inversion of the signals received by the coils present new challenges, which we are actively working to resolve. A field experiment has been carried out to show that the proposed TEM detection system provides satisfactory near-surface detection results while the RX module is positioned at a horizontal distance of 200 m from the TX module. The new BCPPS can provide a stable TX peak current of up to 300 A and a peak magnetic moment of 3.75 × 10⁴ A·m². Mobile RX scanning greatly reduces shallow detection costs and improves survey efficiency.
Figure 1. Schematics of the different TEM detection systems: (a) traditional TX-RX-integrated detection system; (b) proposed detection system with magnetic dipole source.
Figure 2. Magnetic field components in a dipole field coordinate system for a typical vertical MDS.
Figure 3. Geoelectric model and finite element simulation results of MDS detection when ts = 200 µs: (a) Three-dimensional geoelectric model and simulation parameters. (b) Field distribution Hz (A/m) with Ip = 20 A and tf = 200 µs. (c) Field distribution Hz (A/m) with Ip = 300 A and tf = 400 µs.
Figure 4. Conventional BCPPS for ground TEM detection: (a) Clamping topology with TVS. (b) Steep pulse current source with boost topology.
Figure 5. Proposed BCPPS for MDS in the ground TEM detection.
Figure 6. Waveforms of critical elements for the steady-state operation of the proposed BCPPS circuit in Figure 5. Red shade: data measuring window. Blue shade: capacitor Cb charging window.
Figure 8. Equivalent circuit of the resonant charging loop.
Figure 9. Composition of the proposed electromagnetic measurement system with separated MDS: (a) Functional block diagram of the system. (b) Physical photos of the system.
Figure 10. RX signal acquisition and preliminary processing.
Figure 11. Current pulse waveforms generated using various BCPPS circuits: (a) Clamping scheme with TVS. (b) Steep pulse current source with boost module. (c) Proposed topology for MDS with different converter gains.
Figure 12. Measured induced voltages of the survey line. The measuring points are located at the intersections of the dotted lines: (a) Experimental layout of the detection process. (b) Z-coil data during the early turn-off process at measuring points. (c) The simulation results for induced voltage in scenarios with and without a metal body.
Figure 13. Position imaging of the induced voltage scanning.
Table 1. Parameter values of MDS.
Table 2. Comparison of three TEM systems for near-surface detection. | 13,567.4 | 2023-12-01T00:00:00.000 | [
"Engineering",
"Physics"
] |
Mode I Crack Propagation Experimental Analysis of Adhesive Bonded Joints Comprising Glass Fibre Composite Material under Impact and Constant Amplitude Fatigue Loading
The T-90 Calima is a low-wing monoplane aircraft. Its structure is mainly composed of different components of composite materials, which are mainly bonded by using adhesive joints of different thicknesses. The T-90 Calima is a trainer aircraft; thus, adverse operating conditions such as hard landings, which cause impact loads, may affect the structural integrity of the aircraft. As a result, in this study, the mode I crack propagation rate of a typical adhesive joint of the aircraft is estimated under impact and constant amplitude fatigue loading. To this end, the effects of adhesive thickness on the mechanical performance of the joint under quasistatic loading conditions, impact and constant amplitude fatigue in double cantilever beam (DCB) specimens are experimentally investigated. Cyclic impact is induced using a drop-weight impact testing machine to obtain the crack propagation rate (da/dN) versus maximum strain energy release rate (GImax) diagram; likewise, this diagram is also obtained under constant amplitude fatigue, and both diagrams are compared to determine the effect of each type of loading on the structural integrity of the joint. Results reveal that the crack propagation rate under impact fatigue is three orders of magnitude greater than that under constant amplitude fatigue.
Introduction
Composite materials are widely used in aircraft structures mainly due to their exceptional stiffness-to-weight ratio. This characteristic makes them more attractive than other materials for applications in which low-weight requirements and high-stiffness conditions are desirable, predominantly in cases where fuel or energy consumption is directly related to the structure weight [1].
Since the development of the aircraft industry, various methods have been employed to join aircraft components; however, the broad use of composites in the last two decades has addressed interests of engineers and researchers for employing adhesives as a potential joining technology [2].
The adhesive bonding process permits the joining of substrates with an adhesive between them. Compared to the conventional mechanical fastener method, this method provides several advantages, including lower structural weight, reduced stress concentrators and higher performance under fatigue conditions. All of these factors in turn lead to the increase in the structural integrity of the system [3]. Nevertheless, the adhesive bonding process exhibits some disadvantages. The reliability of the method considerably depends on the quality of the prepared surface. Moreover, adhesives are susceptible to environmental degradation and are perishable materials; thus, the mechanical strength of joints can be affected by the foregoing factors. Finally, defects in adhesives can be evaluated by nondestructive testing, but methods are not reliable for determining strength; hence, destructive tests should be employed instead of nondestructive ones [4].
Trainer aircraft may be exposed to adverse operating situations, such as hard landings, which are characterized by a ground approach with a vertical speed greater than 3 m/s [5]. During landing, the main landing gear first impacts the ground, and the force generated by the impact is transferred to the wings, as the main landing gear in low-wing monoplane aircraft is supported by the wing beam. Hence, the entire structure absorbs the energy induced during landing [6].
The aircraft is mainly manufactured using composite materials assembled by adhesive joints. The adhesive comprises a mixture of an epoxy resin, a hardener and a filler substance, which is a flocked cotton fibre. This filler makes the mixture consistent, thereby easing the application of the adhesive to the structure. The adhesive thicknesses of the main aircraft joints range from 3.18 mm to 12.70 mm; hence, it is important to analyse the dynamic response of adhesive joints with respect to changes in thickness. For this reason, the adhesive thicknesses of the DCB specimens used in this study lie within this interval, as required by the actual manufacturing process of the aircraft. Moreover, the most critical joint of the aircraft is located at the support joint between the fuselage and the wing beam; thus, it is crucial to understand the mechanical behaviour of a typical adhesive joint of the aircraft when a crack appears and starts to propagate inside it [7].
Fracture toughness of an adhesive joint depends on the bond line thickness during the propagation of the crack inside the adhesive [8]. This can be explained on the basis of the development of a plastic zone at the crack tip. When the bond line thickness is less than the diameter of the plastic zone, the development of the plastic zone at the crack tip is restricted, resulting in low fracture toughness [9]. With the increase in the bond line thickness, substrates exert a low constraint on the development of the plastic zone; hence, fracture toughness increases until the complete development of the plastic zone at the crack tip [10]. At this point, the distortion of the stress distribution at the crack tip can occur, modifying the shape of the plastic area and converting it into an ellipse; thus, the fracture toughness increases due to the distribution of the stress in a large area. At a considerably high bond line thickness, the diameter of the plastic zone at the crack tip is completely developed due to the decrease in the constraint as the presence of stiff adherends is not sufficient to restrict the growth of the plastic zone, thereby reducing the length of the plastic zone and resulting in low fracture toughness [11]. The crack growth rate da/dN, where a is the crack length, and N is the number of cycles, depends on the amount of plastic deformation; hence, in general, a higher adhesive thickness leads to lower constraint and subsequently a larger plastic deformation; hence, crack growth rates are lower. Additionally, a higher crack growth rate resistance is correlated with higher fracture toughness [12]. Impact fatigue is induced by low-energy, low-velocity impacts, which detrimentally affect the performance and reliability of components [13], and may cause internal damage in the form of delamination, crack formation and fibre breakage, rendering few visual signs on the external surface of the damaged material [14]. Notably, few studies investigated the effects of impact fatigue phenomenon on the structural integrity of adhesive joints [15]. However, important research regarding the understanding of the crack behaviour of adhesives and adhesive joint systems in composite materials has been reported.
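The bond-line-thickness effect described above is often rationalized with Irwin's classical estimate of the crack-tip plastic zone size; the relation below is given as general context and is not taken from the cited studies:

$$ 2 r_p \approx \frac{1}{\pi}\left(\frac{K_I}{\sigma_y}\right)^2 \quad \text{(plane stress)}, $$

where $K_I$ is the mode I stress intensity factor and $\sigma_y$ is the yield strength of the adhesive; when the bond line is thinner than $2r_p$, the stiff adherends constrain the plastic zone and the measured fracture toughness drops.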
Carbon-fibre-reinforced polymers (CFRPs) were tested under mixed mode impact and constant amplitude fatigue loading, and the results revealed that the crack propagation rate is highly dependent on the fracture mode; under repeated impact, the rate is greater than that under constant amplitude fatigue [16]. A crack growth law for impact fatigue was developed [17] on the basis of the relation between the maximum dynamic strain energy release rate and the impact fatigue crack growth rate in the epoxy adhesive FM73. Another study analysed the delamination failure mechanism in bonded composite joints, and the results revealed that propagation occurs through the interaction of several fracture mechanisms, which are significantly affected by the type of applied load; in summary, impact fatigue causes more damage than constant amplitude fatigue [18]. Similarly, the response of lap strap joints comprising CFRP bonded with a rubber-modified epoxy adhesive under constant amplitude sinusoidal loading, cyclic in-plane impact and their combination was investigated. Two main failure patterns were noted under the constant amplitude fatigue regime: initially, cohesive failure in the adhesive with low fatigue crack propagation rates, and then failure of both the adhesive and the CFRP, with an acceleration of the crack growth during the transition from one failure pattern to the other. The introduction of a small number of impacts between constant amplitude sinusoidal loading blocks increased the crack propagation rate [19].
Furthermore, DCB specimens were tested under impact fatigue and constant amplitude fatigue loading. The crack propagation rate was one order of magnitude lower under constant amplitude fatigue loading for the same range of the maximum strain energy release rate; hence, it is imperative to examine impact fatigue loading in the structural integrity analysis of an adhesive joint [20,21]. The fracture toughness also depends on the type of adhesive and the substrates of the bonded system. The fracture toughness of brittle adhesives exhibits low dependence on the strain rate; by contrast, the fracture toughness of ductile adhesives is more strongly affected by the loading condition than that of brittle ones [22]. The impact strength of a joint comprising composite substrates is highly dependent on substrate properties [23]. Previously [24], joints manufactured using carbon-fibre substrates with different fibre orientations were subjected to impact loads, and the results revealed that, irrespective of differences between the stacking sequences of the substrates, joint failure occurs due to delamination of the substrate. In addition, the rate dependency of the structural integrity of the joint was mostly caused by the composite resin matrix [25,26]. Impact tests of adhesively bonded joints have also been performed using dissimilar substrates. In [27], the dynamic strength of single lap joints with steel and aluminium substrates was evaluated, and the results revealed that the less stiff substrate determines the strength of the joint and its energy absorption; thus, the properties of the substrates play a key role in the structural integrity of adhesive joints [28].
The drop-weight impact test (DWIT) is one of the methods employed for impact testing. DWIT can characterise materials under low-velocity impacts of ≤10 m/s and strain rates of up to 10² s⁻¹ [29]. In this study, DWIT was employed to propagate cracks in DCB specimens under mode I loading to obtain diagrams of the crack propagation rate (da/dN) as a function of the maximum strain energy release rate (G_Imax).
In this paper, the effect of the adhesive bond thickness on the mechanical performance of DCB specimens under mode I quasistatic, cyclic impact and constant amplitude sinusoidal loading conditions was analysed. For this purpose, the fracture toughness and the crack growth rate under these dynamic loading conditions were measured using an adhesive and adherends from a typical joint system of the T-90 Calima aircraft. Details of the materials, specimen geometry and characterization techniques are given in the Materials and Methods section; crack growth and fracture toughness diagrams are presented in the Results, grouped according to the type of loading the specimens were subjected to; and, to conclude, the effect of the adhesive bond thickness on the crack propagation rate and the strain energy release rate is discussed.
Materials and Methods
The adherends of the specimens comprised a glass-fibre cloth as the dispersed phase and an epoxy laminating resin as the continuous phase or matrix. The laminates were stacked using a wet lay-up procedure with eight plies in a ±45° sequence and cured at 66 °C for 15 h. The nominal thickness of each fibre-reinforced polymer (FRP) panel was 3 mm. The adhesive comprised an epoxy resin and a filler substance, flocked cotton fibre, which increased its viscosity and enabled it to be applied to surfaces at specific thicknesses [7].
The dimensions of the DCB specimens and the test method were based on ASTM D5528 [30], which describes the determination of the opening mode I interlaminar fracture toughness, G_IC. The DCB specimens (Figure 1) had a width of 25.4 mm and a length of 190.5 mm, with a 30 mm precrack located in the middle of the adhesive layer and starting at one end of the specimen. Precracks were generated by means of 0.5 mm polymer sheets positioned within the adhesive using aluminium spacers according to the nominal thickness of the joint (H). DCB specimens were manufactured with three adhesive bond thicknesses, 3.18 mm, 6.35 mm and 12.70 mm, and aluminium spacers were also used in this step to separate the FRP panels and give the desired adhesive joint thickness. A curing time of seven days at 20 °C was applied before the specimens were removed from their mould and cut to final dimensions with a high-pressure water jet machine.
The opening mode I interlaminar fracture toughness was determined by the modified beam theory (MBT) method (Equation (1)). Large displacement effects were corrected by including the parameter F (Equation (2)) in the calculation of G_IC. The specimen was loaded at a constant crosshead rate of 1 mm/min in an Instron 3367 universal testing machine. The crack length was measured by two methods: visually, using graduated marks along each specimen, and continuously along the specimens using Kyowa crack propagation gauges.

G_IC = (3Pδ / (2b(a + |∆|))) · F    (1)

F = 1 − (3/10)(δ/a)² − (3/2)(δt/a²)    (2)

where P is the load, δ the load point displacement, b the specimen width, a the crack length, and t the distance between the attachment point of the piano hinge and a quarter of the total specimen thickness, including the adhesive and substrates. ∆ was determined experimentally by generating a least-squares plot of the cube root of compliance, C^(1/3), as a function of crack length; thus, ∆ is the distance between the origin and the intercept of the linear interpolation of C^(1/3) with the abscissa. The compliance C is the ratio of the load point displacement to the applied load, δ/P.
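As a concrete illustration of this data reduction, the following Python sketch computes G_IC per loading cycle with the MBT method, including the least-squares determination of ∆ from C^(1/3) vs. crack length and the large displacement correction F. All numerical values and array names are hypothetical placeholders, not measurements from this study.

```python
import numpy as np

def mbt_gic(P, delta, a, b, t):
    """Mode I fracture toughness by the modified beam theory (ASTM D5528 MBT).

    P     : peak load per loading cycle [N]
    delta : load point displacement at the peak load [mm]
    a     : crack length at the peak load [mm]
    b     : specimen width [mm]
    t     : distance from the piano-hinge pin to the arm mid-plane [mm]
    """
    C = delta / P                                   # compliance [mm/N]
    slope, intercept = np.polyfit(a, C**(1 / 3), 1)
    Delta = -intercept / slope                      # x-axis intercept of C^(1/3) vs a
    # Large displacement correction (Equation (2))
    F = 1 - (3 / 10) * (delta / a) ** 2 - (3 / 2) * (delta * t / a ** 2)
    # Modified beam theory (Equation (1)); N/mm -> J/m^2 conversion factor 1e3
    G_IC = 3 * P * delta * F / (2 * b * (a + abs(Delta))) * 1e3
    return G_IC, Delta

# Hypothetical readings from one specimen (placeholder values only)
P = np.array([210.0, 195.0, 182.0, 171.0])      # N
delta = np.array([3.2, 4.1, 5.0, 5.9])          # mm
a = np.array([35.0, 45.0, 55.0, 65.0])          # mm
gic, Delta = mbt_gic(P, delta, a, b=25.4, t=2.0)
print(gic.mean())                               # average G_IC for the specimen [J/m^2]
```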
Cyclic impact tests were performed using a drop-weight impact testing machine; details of the tests are described in [7]. The DCB specimens used for the cyclic impact tests had to be stiffened by adhering AISI 1020 steel plates with a thickness of 4.76 mm to the upper and lower surfaces of the specimen to reduce the load point displacement per cycle. A diagram of the crack propagation rate (da/dN) as a function of the mode I maximum strain energy release rate (G_Imax) was obtained by plotting the derivative of the crack length (a) as a function of the number of impacts. G_Imax was obtained by applying the MBT method explained previously. The crack length was measured by visual inspection using graduated marks along the specimen length. Crack propagation tests under constant amplitude fatigue were performed in a servo-hydraulic MTS 370 axial fatigue machine in displacement control mode, inducing a constant amplitude sinusoidal waveform using a peak-valley compensator. Figure 2 shows the experimental setup. The displacement ratio was 0.1, and the load point displacement was measured by adhering markers to the specimens and monitoring them with a high-resolution video camera. Crack propagation gauges were bonded to one side of the DCB specimens to measure the crack length per cycle. The opening load was applied through piano hinges adhered to both surfaces of the specimen with an epoxy adhesive.
The fracture surfaces were observed by scanning electron microscopy (SEM) with back-scattered electron imaging. SEM images were recorded on a JEOL scanning electron microscope (model JSM 6490-LV). Five images were recorded per specimen in different zones of the failure surface at 300× magnification, a voltage of 20 kV and a vacuum pressure of 20 Pa. The failure mechanism was classified according to ASTM D5573 [31].
Mode I Critical Strain Energy Release Rate Determination
An opening quasistatic load was applied to the 3.18-mm-, 6.35-mm- and 12.70-mm-thick DCB specimens to obtain the mode I critical strain energy release rate for each specimen, with five repetitions per adhesive thickness. In all of the examined specimens, the crack starts to propagate inside the adhesive from the induced precrack; nevertheless, after some growth it changes its direction of propagation until it reaches one of the substrates, delaminating it and growing between the substrate plies until failure (Figure 3).
Figure 4 shows the force vs. load point displacement plot for a 6.35-mm-thick DCB specimen, obtained by loading and unloading the same specimen repeatedly until one of the adherends debonded completely from the system. Each colour of the curve represents a different loading cycle. When a load drop-off occurred, the crack propagated along the composite; the specimen was then fully unloaded after the crack length was measured, and a new loading cycle started. A change in the slope of each loading cycle is observed owing to the stiffness degradation of the specimen as the crack evolves; thus, a lower peak force is required to propagate the crack as it grows.

Figure 5 shows the typical linear fit of the cube root of compliance, C^(1/3), as a function of crack length. The intercept of the linear interpolation of C^(1/3) with the abscissa affords ∆, which is used as a correction factor for the mode I critical strain energy release rate owing to the rotation that occurs at the delamination front of the specimen.
Figure 6 shows the G_IC values for each loading cycle shown in Figure 4 for a DCB specimen with an adhesive thickness of 3.18 mm, in terms of the crack length. The final fracture toughness G_IC for each specimen is calculated as the average of the G_IC values over all loading cycles. From Figure 7, the average fracture toughness G_IC increases with increasing adhesive thickness; nevertheless, owing to the scatter, a significant difference between the means is not observed.
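The statement that the means do not differ significantly can be checked with a one-way ANOVA across the three thickness groups. The sketch below uses scipy.stats.f_oneway on hypothetical G_IC values (five repetitions per thickness, matching the test plan), not the measured data.

```python
from scipy.stats import f_oneway

# Hypothetical per-specimen average G_IC values in J/m^2 (placeholders only)
gic_3_18_mm = [1150, 1320, 1080, 1290, 1175]
gic_6_35_mm = [1240, 1110, 1390, 1405, 1260]
gic_12_70_mm = [1330, 1260, 1450, 1180, 1490]

f_stat, p_value = f_oneway(gic_3_18_mm, gic_6_35_mm, gic_12_70_mm)
# A p-value above 0.05 would support the observation that, despite the upward
# trend with thickness, the difference between group means is not significant.
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```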
Crack Propagation Rate Test under Dynamic Loading Conditions
Mode I crack propagation rate (da/dN) vs. maximum strain energy release rate (G_Imax) diagrams were obtained under repeated impact and under constant amplitude fatigue to compare the influence of the load type on the crack propagation behaviour. G_Imax was computed from the peak force of each cycle for both types of dynamic loads. Nine specimens were tested for each loading type, three per nominal adhesive thickness. The crack growth path for all specimens is similar to that shown in Figure 3, which is explained by the fact that the mode I fracture toughness of the bulk adhesive is considerably greater than that of the glass-fibre composite laminate; hence, the laminate is the easiest route for crack growth.
Figures 8 and 9 show typical curves of the crack length (a) vs. the number of impacts (N), or vs. the number of cycles in the case of constant amplitude fatigue, exhibiting logarithmic and power-law growth under repeated impact and constant amplitude fatigue, respectively. This suggests that the mode I crack propagation rate (da/dN) is high during the first cycles and then decreases in a nonlinear manner until specimen failure, considering that da/dN was obtained by differentiating the function fitted to the crack length vs. number of cycles data. This behaviour was expected for both types of dynamic loads. Regarding constant amplitude fatigue, the displacement control mode initially produces a large crack extension, proportional to the applied constant load point displacement, but this constant opening then leads to a reduction in the crack growth per cycle. Moreover, it is important to note that the real measurement of the change in crack length corresponds to each step shown in Figure 9; the instantaneous crack length between steps is unknown because of the crack gauge: as the crack progresses, the grid lines of the gauge are broken one after another and the output value changes accordingly, giving a reading of the crack evolution only when each grid line is damaged. Remarkably, an increase in the load point displacement generates a larger deflection of the substrates, which absorbs an important fraction of the energy delivered by the impact, resulting in low crack propagation rates.
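As noted above, da/dN was obtained by differentiating the function fitted to the crack length vs. number of cycles data (logarithmic under repeated impact, power law under constant amplitude fatigue). A minimal sketch of that step is given below; the data arrays are hypothetical placeholders, not measurements from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical crack length readings under repeated impact (placeholder data)
N = np.array([1, 5, 10, 25, 50, 100, 200, 400], dtype=float)    # number of impacts
a = np.array([33.0, 41.0, 45.0, 51.0, 55.0, 60.0, 64.0, 68.0])  # crack length [mm]

def log_growth(N, p, q):
    # a(N) = p*ln(N) + q : logarithmic growth observed under repeated impact
    return p * np.log(N) + q

(p, q), _ = curve_fit(log_growth, N, a)

# Analytical derivative of the fitted curve gives the crack propagation rate
da_dN = p / N                      # d/dN[p*ln(N) + q] = p/N  [mm per impact]

# Under constant amplitude fatigue a power law a(N) = p*N**q would be fitted
# instead, giving da/dN = p*q*N**(q - 1).
```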
Figures 10 and 11 show the evolution of the maximum dynamic load (the maximum impact force and the maximum force per cycle, respectively) from which G_Imax was determined. A sudden load drop can be observed in the first cycles in both plots, followed by a levelling off to a constant value. This behaviour can be related to the fact that the crack initially lies in the middle zone of the adhesive; the force required to propagate the crack while its tip remains in the bulk adhesive is much higher than that required once the crack is growing between the substrate plies.
Finally, the da/dN vs. G_Imax diagrams obtained under repeated impact, shown in Figures 12-14, reveal two regions of crack behaviour. Region I lies beneath thresholds of 541 ± 17.8 J/m², 471 ± 4.9 J/m² and 490 ± 20.5 J/m² for adhesive thicknesses of 3.18 mm, 6.35 mm and 12.70 mm, respectively, below which the crack does not grow. In Region II, stable crack propagation is observed, where da/dN follows the power law shown in Equation (3), widely known as the modified Paris law:

da/dN = C (G_Imax)^m    (3)

where C and m are material constants. For each of the examined DCB specimens, the power law describing the crack propagation behaviour is shown in the upper left corner of Figures 12-14, obtained from a power-law fit of the points in Region II of those figures. Notably, the constant m of the modified Paris law increases with the adhesive thickness, indicating a slightly higher susceptibility of the propagation rate to small changes in G_Imax when the adhesive layer is thicker.
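The constants C and m of Equation (3) are obtained from a power-law fit of the Region II points, which is equivalent to a linear regression in log-log space. The sketch below illustrates this with hypothetical (G_Imax, da/dN) pairs; the fitted constants shown in Figures 12-14 come from the measured data, not from these placeholders.

```python
import numpy as np

# Hypothetical Region II data for one specimen (placeholder values only)
G_Imax = np.array([600.0, 700.0, 850.0, 1000.0, 1200.0])    # J/m^2
da_dN = np.array([2e-3, 6e-3, 2e-2, 5e-2, 1.5e-1])          # mm/cycle

# log10(da/dN) = m*log10(G_Imax) + log10(C)
m, logC = np.polyfit(np.log10(G_Imax), np.log10(da_dN), 1)
C = 10.0 ** logC
print(f"da/dN = {C:.3e} * (G_Imax)^{m:.2f}")
```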
In addition, the da/dN diagrams as a function of G_Imax for constant amplitude fatigue (Figures 15-17) reveal three regions of crack behaviour. Region I lies beneath thresholds of 156 ± 11.7 J/m², 200 ± 19.4 J/m² and 255 ± 3.7 J/m² for DCB specimens with nominal adhesive thicknesses of 3.18 mm, 6.35 mm and 12.70 mm, respectively. Region II exhibits stable crack propagation, where da/dN follows the modified Paris law as mentioned previously. Region III, where crack propagation starts to become unstable, is noted in the first cycles of Figures 15-17; this change of slope in the diagrams appears at approximately 40% below the critical strain energy release rate value.
Figures 18 and 19 show SEM images of the fracture surfaces of two DCB specimens tested under constant amplitude fatigue and impact fatigue, respectively; tearing and debonding of the fibres from the composite matrix are observed in both. According to ASTM D5573 [31], this type of failure occurs within the FRP substrate near the surface and is characterised by a thin layer of the FRP resin matrix remaining on the adhesive, with a few glass fibres transferred from the substrate to the adhesive; it is classified as fibre-tear failure. This result indicates that the adhesive bond is adequate, given the capability of the adhesive to delaminate the substrate.
Discussion
Diagrams of the mode I crack propagation rate da/dN as a function of the maximum strain energy release rate were obtained for DCB specimens under repeated impact and under constant amplitude fatigue. The modified Paris law equations for all of the examined adhesive layer thicknesses reveal no significant difference among thicknesses under either type of dynamic load induced on the specimens, i.e., repeated impact or constant amplitude fatigue. This result arises because the crack propagates between the substrate plies in a non-cohesive manner; hence, the mode I critical strain energy release rates are similar for the three examined adhesive layer thicknesses. The same failure mechanism is observed during the determination of the critical strain energy release rate; accordingly, no significant effect of the adhesive bond line thickness is observed on either the crack propagation rate or the critical strain energy release rate of the joint. This can be attributed to the plastic zone formed at the crack tip, which is fully developed over the interval of adhesive thicknesses tested, so the adherends do not sufficiently restrict the plastic deformation at the crack tip. Full development of the plastic zone occurs at bond line thicknesses of 0.25-0.5 mm, values much lower than those used in the aircraft [32,33]. Notably, the absence of a significant relation between the adhesive bond line thickness and the crack propagation parameters explained above is only valid for this specific joint configuration, considering the material type, manufacturing process, stacking sequence and substrate geometry.
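The 0.25-0.5 mm range quoted for full development of the plastic zone can be rationalised with an Irwin-type plane-stress estimate, r_p ≈ (1/2π)(K_IC/σ_y)², with K_IC ≈ √(E·G_IC). The short sketch below shows this arithmetic using assumed, purely illustrative properties for a toughened epoxy adhesive; the actual properties of the epoxy/cotton-flock adhesive were not reported in this excerpt.

```python
import math

# Assumed, illustrative adhesive properties (not measured values from this study)
E = 3.0e9         # Young's modulus [Pa]
G_IC = 1200.0     # critical strain energy release rate [J/m^2]
sigma_y = 45.0e6  # yield strength [Pa]

K_IC = math.sqrt(E * G_IC)                       # plane-stress toughness [Pa*m^0.5]
r_p = (K_IC / sigma_y) ** 2 / (2.0 * math.pi)    # Irwin plastic zone radius [m]
print(f"plastic zone radius ~ {r_p * 1e3:.2f} mm")  # ~0.3 mm for these inputs
```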
A fatigue sensitivity expression was proposed in [34], corresponding to the ratio of the maximum strain energy release rate threshold G_th to the critical strain energy release rate G_IC, i.e., G_th/G_IC. The lower the fatigue sensitivity, the more susceptible the crack is to propagation at low strain energy release rates; thus, this parameter provides an overview of the susceptibility of a crack to grow under a given loading condition. The fatigue sensitivity values obtained for specimens tested under repeated impact are 0.46, 0.37 and 0.35 for adhesive thicknesses of 3.18 mm, 6.35 mm and 12.70 mm, respectively; thus, cracks are more susceptible to growth in thick adhesives under low-energy repeated impact. On the other hand, the fatigue sensitivity values obtained for specimens tested under constant amplitude fatigue are 0.13, 0.17 and 0.16 for adhesive thicknesses of 3.18 mm, 6.35 mm and 12.70 mm, respectively, indicating no significant differences in fatigue sensitivity among the adhesive thicknesses under constant amplitude fatigue. Moreover, according to the fatigue sensitivity results, the crack is more susceptible to propagation under constant amplitude fatigue loading; nevertheless, over the same interval of the maximum strain energy release rate, the crack growth rate da/dN is three orders of magnitude higher for specimens tested under repeated impact (Figure 20). High loading rates in polymer-based adhesives lead to an increase in their mechanical strength [35]; this could be related to the size of the plastic zone at the crack tip. Large plastic deformation at the crack tip generates a large damage zone; thus, most of the energy applied under dynamic regimes is absorbed in plastic deformation. Under the impact fatigue regime, cracks propagate at high loading rates, with less development of the plastic zone; therefore, the material exhibits a more brittle behaviour, accounting for the accelerated crack growth under repeated impact [36]. A comparison of the constant m of the modified Paris law, which is only valid for Region II, between constant amplitude and impact fatigue testing reveals no significant variation; in addition, this result could be related to the fact that low loading rates decrease the maximum strain energy release rate threshold owing to the enlargement of the damage area, as explained previously.
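For completeness, the fatigue sensitivity is simply the ratio G_th/G_IC computed per adhesive thickness. The snippet below reproduces that arithmetic using the impact thresholds reported above together with illustrative G_IC values; the measured averages belong to Figure 7 and are not restated here.

```python
# Repeated-impact thresholds reported above [J/m^2]
G_th_impact = {3.18: 541.0, 6.35: 471.0, 12.70: 490.0}
# Illustrative quasistatic G_IC values per thickness [J/m^2] (placeholders)
G_IC = {3.18: 1180.0, 6.35: 1270.0, 12.70: 1400.0}

for H, g_th in G_th_impact.items():
    print(f"H = {H} mm: fatigue sensitivity G_th/G_IC = {g_th / G_IC[H]:.2f}")
```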
The impact fatigue crack propagation rate was measured previously [37] using lap strap joint specimens with CFRP substrates and an EA 9628 adhesive, and a fatigue sensitivity of 0.3 was reported; this value is similar to that obtained herein. Similar results have also been reported for the crack propagation rate under constant amplitude fatigue. In [38], DCB specimens comprising unidirectional composite adherends bonded with a structural adhesive were examined under a constant amplitude fatigue regime, and a fatigue sensitivity of ~0.1 was obtained. In another study [39], a mode I delamination test was performed on a carbon-epoxy laminate under constant amplitude fatigue, and a curve of the crack growth rate vs. the normalized strain energy release rate was plotted; a fatigue sensitivity of 0.2 was obtained. In addition, the strain energy release rate at the Paris limit was ~65% of the fracture toughness. The Paris limit for the strain energy release rate in the present study was ~70% of the critical strain energy release rate; this value is in agreement with that reported previously [37]. The change in the slope of the log-log diagram (da/dN vs. G_Imax) before G_IC could be related to dynamic parameters, such as the strain rate, load frequency and stress ratio, that affect the behaviour of the substrates, which are viscoelastic materials sensitive to loading rate; moreover, their thickness and configuration make them low-stiffness materials in comparison with steel, which could affect the Paris law limits [40].
Conclusions
In this study, the effects of the adhesive joint thickness of DCB specimens were analysed under quasistatic, cyclic impact and constant amplitude fatigue loading. Crack growth in the DCB specimens under the mode I quasistatic and dynamic loading regimes followed the same path: cracks propagated in a cohesive manner in the precrack zone and then continued growing between the substrate plies until failure, owing to the very thick adhesive layers and the fact that the fracture toughness of the FRP interfaces is lower than that of the bulk adhesive. Additionally, cracks are more susceptible to propagation under constant amplitude fatigue loading according to the fatigue sensitivity indicator; nevertheless, over the same interval of the maximum strain energy release rate, the crack growth rate da/dN is three orders of magnitude higher for specimens tested under repeated low-energy impacts. It is therefore imperative to consider this type of loading in the fatigue life design of structures that may be subjected to it.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
"Engineering"
] |